LAS VEGAS, Jan. 9, 2017 /PRNewswire/ -- The 50th International Consumer Electronics Show (CES) was successfully held in Las Vegas from January 5-8, 2017. 7Invensun unveiled two Virtual Reality and Artificial Intelligence products at the show: aGlass and aSeePro.
aGlass is the world's first VR eye-tracking module that is manually removable and supports visual correction with a user-friendly design. aSeePro is a remote eye tracker for eye movement data that adopts a gaze tracking algorithm based upon a 3D model of the human eye. The core of both products' algorithms is image feature extraction and gaze estimation based on deep learning.
Huang Tongbing, founder and CEO of 7Invensun, says: "Interaction is an important innovation in virtual reality. GPU performance is a main bottleneck for the new interactive system in VR content. Thanks to NVIDIA, whose deep learning technology and support helped us achieve a good application on September 13, 2016."
Deep learning has greatly advanced the visual, auditory and tactile abilities of robots, so that a robot's eyes are no longer mere ornaments. Robots can simulate human thinking and decision-making and achieve more intelligent, autonomous behavior.
Dr. Ren Dongchun, 7Invensun's chief scientist, pointed out what the company is pursuing with deep learning in AI:
In the future, a user's gaze point information will be estimated directly by deep learning.
aGlass features full FOV (Field of View) tracking, high precision and low latency, with a tracking speed ranging from 120 Hz to 380 Hz. Eye-tracking interaction, foveated rendering and eye-tracking data help humanize the VR industry.
aSeePro provides eye movement analysis and solutions, turning eye data into insight, and can be widely used in business and cognitive research. Adopting a gaze tracking algorithm based upon a 3D model of the human eye, it can accurately calculate fixation points within a range of 55 cm to 80 cm with free head movement.
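The press release does not disclose 7Invensun's actual algorithm, but the core geometric step of a 3D-eye-model tracker can be illustrated: once the 3D eyeball centre and pupil centre are estimated, the gaze ray through them is intersected with the screen plane. The function, coordinates and distances below are purely illustrative assumptions.

```python
# Illustrative sketch (not 7Invensun's algorithm): intersect the gaze
# ray defined by a 3D eyeball centre and pupil centre with the screen
# plane z = screen_z. Units are metres; values are made up.

def gaze_on_screen(eye_centre, pupil_centre, screen_z=0.0):
    """Intersect the ray eye_centre -> pupil_centre with the plane z = screen_z."""
    ex, ey, ez = eye_centre
    px, py, pz = pupil_centre
    dx, dy, dz = px - ex, py - ey, pz - ez
    if dz == 0:
        raise ValueError("gaze ray parallel to the screen plane")
    t = (screen_z - ez) / dz  # ray parameter where the ray meets the plane
    return (ex + t * dx, ey + t * dy)

# Eye about 65 cm from the screen (inside aSeePro's stated 55-80 cm
# working range), looking slightly right and down.
point = gaze_on_screen((0.0, 0.0, 0.65), (0.01, -0.005, 0.62))
print(point)
```

With free head movement, the eyeball centre simply shifts in 3D; the same ray-plane intersection still yields the fixation point, which is one reason 3D-model-based trackers tolerate head motion.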
The core of the algorithm also lies in image feature extraction and gaze estimation.
aSeePro first locates the human eye area against a complex background, then extracts the key features, and finally calculates the gaze point from those features.
For human eye area detection, the accuracy of traditional image processing algorithms is about 95%; when glasses cause reflections, it drops to about 70%. Based upon deep learning, the accuracy of human eye area detection is now above 99.5% in all circumstances.
Deep learning is also adopted for key feature extraction of the human eye. For gaze point estimation, a deep-learning-based approach is being attempted; calibration would then no longer be needed, improving the user experience.
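The three-stage pipeline described above (detect the eye region, extract key features, estimate the gaze point) can be sketched as follows. Every function here is a hypothetical stand-in, not 7Invensun's implementation: a real system would replace each stage with a trained network, which is precisely where the deep-learning accuracy gains cited above come from.

```python
# Hypothetical three-stage gaze pipeline mirroring the steps in the text.
# All "models" are toy stand-ins, not 7Invensun's actual algorithms.

def detect_eye_region(frame):
    """Return an (x, y, w, h) box assumed to contain the eye.

    Stand-in: a box centred on the brightest pixel. A real detector
    would be a trained CNN robust to reflections from glasses.
    """
    _, x, y = max((v, x, y) for y, row in enumerate(frame)
                            for x, v in enumerate(row))
    return (max(x - 16, 0), max(y - 16, 0), 32, 32)

def extract_features(box):
    """Summarize the eye region as a small feature vector.

    Stand-in: just the region centre. A real system would extract
    learned features (pupil centre, glints, ...) from the crop.
    """
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def estimate_gaze(features, screen=(640, 480)):
    """Map features to a normalized (gx, gy) gaze point in [0, 1].

    Stand-in linear mapping; a deep-learning regressor would replace
    this and, as the text notes, could remove the need for calibration.
    """
    cx, cy = features
    return (cx / screen[0], cy / screen[1])

# Synthetic 480x640 "frame" with one bright pixel standing in for the eye.
frame = [[0.0] * 640 for _ in range(480)]
frame[200][300] = 1.0
box = detect_eye_region(frame)
gaze = estimate_gaze(extract_features(box))
print(box, gaze)  # detected box and normalized gaze point
```

The staged design matters for the accuracy figures quoted above: an error in the detection stage propagates to every later stage, so raising detection accuracy from ~95% (or ~70% with glasses) to above 99.5% improves the whole pipeline.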
7Invensun is a high-tech corporation dedicated to machine vision and Artificial Intelligence (AI), backed by fully proprietary intellectual property. Since its founding, the company has focused on R&D and innovation in eye tracking technologies, aiming to upgrade human-machine interaction on all terminal devices.
Over the past 7 years, with its eye tracking communication assistant devices, 7Invensun has helped tens of thousands of patients with amyotrophic lateral sclerosis (ALS) or other disabilities that impair communication restore their ability to communicate and regain motivation to live.
7Invensun's mission is to grow into one of China's best enterprises in original technologies, to lead the ongoing development of frontier eye tracking technology, and to bring eye tracking to a wide range of applications in intelligent medicine, VR/AR, smartphones, advertising and media, smart cars, robots, aerospace, and more.
Connect the World with Your Eyes!
Find out more at http://www.7invensun.com.
Follow us on Facebook: https://www.facebook.com/7invensun/
+86-10-8646-2718 ext. 806