Whether in bright light or in a dark environment, the display must not compromise the screen's transparency and thereby obstruct the wearer's field of vision. This requires the transparent screen to adjust the intensity of its image according to the surroundings.
Boosting the display inevitably reduces the screen's transparency and narrows the wearer's view, while dimming the image degrades picture quality and the viewing experience.
This is an inherent trade-off, and solving it means adapting to the situation at hand: knowing in which scenarios to boost the display and in which to dim it. That calls not only for manual control but for a system that adjusts intelligently and automatically based on the wearing environment.
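To make this concrete, here is a minimal sketch of such an automatic adjustment loop; the sensor and display interfaces (`read_ambient_lux`, `set_brightness`) are hypothetical placeholders, not any particular product's API:

```python
# Sketch of automatic display-intensity adjustment for a transparent
# AR screen. All device interfaces here are illustrative assumptions.

def target_brightness(ambient_lux: float,
                      min_level: float = 0.15,
                      max_level: float = 1.0) -> float:
    """Map ambient illuminance (lux) to a display brightness level.

    Bright surroundings need a stronger image to stay legible; dark
    surroundings need a dimmer one to keep the screen transparent.
    The 10,000 lux ceiling (rough daylight) is an assumed calibration point.
    """
    ratio = min(ambient_lux / 10_000.0, 1.0)
    return min_level + (max_level - min_level) * ratio

def auto_adjust(sensor, display, smoothing: float = 0.2) -> None:
    """One control-loop step: read the light level, ease toward the target."""
    target = target_brightness(sensor.read_ambient_lux())
    current = display.brightness
    # Smooth the change so brightness does not visibly jump.
    display.set_brightness(current + smoothing * (target - current))
```

Smoothing the change matters as much as the mapping itself: abrupt brightness jumps are exactly the kind of distraction the adjustment is meant to avoid.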
Beyond the display itself, there is the question of data-processing capability, which again splits into hardware and software.
On the hardware side, AR glasses differ from VR headsets. Because their environments and use cases differ, AR glasses must be worn for long periods and adapt to a wide variety of settings, so their size and weight must be kept as low as possible.
Ideally they would be an ordinary pair of glasses, or not much bigger or heavier than one. Anything too large or too heavy hurts the wearing experience.
Equally paradoxical is how to pack in so many hardware components while staying light and small, which places extremely high demands on the integration of the whole device.
The common approach today is to build these components into the frame and temples on either side of the glasses, but even so, the result remains bulky and awkward to wear.
Size and weight constraints also cap how powerful the hardware can be, which in turn sharply limits the system's computing capacity. How to improve information- and data-processing capability within those limits is another problem R&D teams must solve.
With the spread of 5G, transmitting data at high speed is no longer the bottleneck; receiving and processing that flood of information in real time is the hard part.
A simple environment is manageable, but what about a complex one?
Imagine walking through a busy intersection where every surrounding building, billboard, and even street fixture broadcasts AR annotations. Your glasses must ingest a large volume of AR data at once and render it on screen simultaneously, which places enormous demands on the processor and the system.
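One common way to tame such a flood, sketched below under assumed data structures rather than any real AR framework, is to cull and prioritize annotations by range, viewing direction, and importance before anything reaches the renderer:

```python
import math
from dataclasses import dataclass

# Sketch of culling a flood of AR annotations before rendering.
# The Annotation fields and the rendering budget are illustrative
# assumptions, not any real AR SDK's data model.

@dataclass
class Annotation:
    label: str
    x: float       # metres east of the wearer
    y: float       # metres north of the wearer
    priority: int  # source-supplied importance, higher = more important

def visible_annotations(annotations, heading_deg: float,
                        fov_deg: float = 60.0, max_range: float = 50.0,
                        budget: int = 10):
    """Keep only annotations inside the field of view and range,
    then render the highest-priority, nearest ones first."""
    kept = []
    for a in annotations:
        dist = math.hypot(a.x, a.y)
        if dist > max_range:
            continue                     # too far away to matter
        # Compass bearing of the annotation (0 deg = north, 90 = east).
        bearing = math.degrees(math.atan2(a.x, a.y))
        # Smallest signed angle between bearing and the wearer's heading.
        off = (bearing - heading_deg + 180) % 360 - 180
        if abs(off) > fov_deg / 2:
            continue                     # outside the field of view
        kept.append((a.priority, dist, a))
    kept.sort(key=lambda t: (-t[0], t[1]))   # high priority, then nearby
    return [a for _, _, a in kept[:budget]]
```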
The last problem lies in the interaction system. VR can be controlled with wearable glove sensors or handheld controllers.
That will not work for AR: because AR has to adapt to so many environments and scenarios, it needs a simpler, more direct method.
There are currently three candidate approaches. The first is eye-tracking control.
An eye-tracking sensor captures eye rotation, blinks, and the point of gaze in real time and turns them into interactive controls. The technology is already deployed and performs well on many devices.
It is typically paired with head-motion sensors: look up and the on-screen content scrolls up, look down and it scrolls down, look left or right and the content slides accordingly.
Blinks handle actions such as confirming a selection: blink once to confirm, twice to cancel, and so on, much like the left and right mouse buttons.
The gaze point plays the role of the mouse cursor: wherever you look, the focus follows, just as flexibly as a cursor.
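Since this mapping from eye events to actions is essentially a dispatch table, a minimal sketch might look as follows; the event names and the `ui` interface are assumptions for illustration, not any shipping eye-tracking SDK:

```python
from enum import Enum, auto

# Sketch of mapping eye-tracking events to UI actions, following the
# scheme above: gaze = cursor, blinks = buttons, gaze direction = scrolling.

class EyeEvent(Enum):
    LOOK_UP = auto()
    LOOK_DOWN = auto()
    LOOK_LEFT = auto()
    LOOK_RIGHT = auto()
    BLINK_SINGLE = auto()
    BLINK_DOUBLE = auto()

def handle_eye_event(event: EyeEvent, gaze_xy, ui) -> None:
    """Dispatch one tracked eye event to a (hypothetical) UI layer."""
    ui.move_cursor(*gaze_xy)          # the gaze point acts as the cursor
    if event is EyeEvent.LOOK_UP:
        ui.scroll(dy=+1)
    elif event is EyeEvent.LOOK_DOWN:
        ui.scroll(dy=-1)
    elif event is EyeEvent.LOOK_LEFT:
        ui.scroll(dx=-1)
    elif event is EyeEvent.LOOK_RIGHT:
        ui.scroll(dx=+1)
    elif event is EyeEvent.BLINK_SINGLE:
        ui.confirm()                  # blink once = confirm (left click)
    elif event is EyeEvent.BLINK_DOUBLE:
        ui.cancel()                   # blink twice = cancel (right click)
```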
The second approach is gesture control, where sensors capture changes in hand and finger movements for interactive control.
For example, slide your hand up or down and the on-screen content scrolls with it, and likewise left and right. Finger drags can reposition the view or zoom it in and out; a finger tap confirms, a wave cancels, and so on.
Gesture recognition is advancing rapidly, but fast-moving gestures are still hard to track. The sensor must capture gestures accurately, and the processor must convert them into the corresponding commands quickly and correctly.
There is also the fact that gestures vary from person to person, and even the same person never performs a gesture exactly the same way twice; the same gesture shifts with time and environment.
This complicates capture and recognition, so the system needs good fault tolerance.
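One simple form of that fault tolerance is a confidence threshold, so that ambiguous gestures are ignored rather than misfired. A sketch, with the recognizer, gesture labels, and threshold all assumed for illustration:

```python
# Sketch of fault-tolerant gesture dispatch: a recognizer returns a
# (label, confidence) pair, and only confident matches trigger a command.
# Nothing here is a specific gesture-recognition library's API.

GESTURE_COMMANDS = {
    "swipe_up": "scroll_up",
    "swipe_down": "scroll_down",
    "swipe_left": "scroll_left",
    "swipe_right": "scroll_right",
    "pinch": "zoom",
    "tap": "confirm",
    "wave": "cancel",
}

def dispatch_gesture(recognizer, frames, threshold: float = 0.8):
    """Classify a window of hand-tracking frames and emit a command,
    or nothing if the match is too uncertain (fault tolerance)."""
    label, confidence = recognizer.classify(frames)
    if confidence < threshold:
        return None      # ambiguous gesture: ignore rather than misfire
    return GESTURE_COMMANDS.get(label)
```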
The third approach sounds more like science fiction: brain-computer control, which has drawn wide attention recently. Put simply, you control operations by thinking and imagining.
When we imagine different things, pictures, or objects, the brain emits distinguishable patterns of brain waves. Brain-computer control uses those differences to operate and interact with devices.
For example, when you imagine moving forward, your brain produces a characteristic wave pattern. The brain-computer system recognizes that pattern and converts it into the corresponding electrical control signal that drives the device forward.
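As a rough illustration of that recognition step, here is a heavily simplified sketch; the feature extraction, classifier, and command set are all assumptions, and real BCI pipelines involve far more signal processing:

```python
import numpy as np

# Sketch of decoding one window of EEG samples into a device command.
# The classifier object and command labels are illustrative assumptions.

COMMANDS = {"imagine_forward": "move_forward",
            "imagine_stop": "stop"}

def band_power_features(window: np.ndarray) -> np.ndarray:
    """Crude per-channel power features from an EEG window
    (channels x samples); real systems use band-specific filtering."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    return spectrum.mean(axis=1)

def decode_intent(window: np.ndarray, classifier):
    """Turn one EEG window into a device command, if any."""
    features = band_power_features(window)
    label = classifier.predict(features.reshape(1, -1))[0]
    return COMMANDS.get(label)
```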
The technology is already used in some fields, for instance brain-controlled wheelchairs for patients with severe paralysis, who can start and stop the chair's movement through thought alone.
Brain-computer control has also been applied to text input, reportedly reaching speeds of up to 70 words per minute, which is remarkably fast.
The technology is developing rapidly and is a hotly contested field among technology giants worldwide, yet the controversy around it has not subsided; if anything, it has intensified.
The core question under debate is whether the technology is safe. First, is it safe to use? Will wearing a brain-wave-capturing sensor for long stretches damage the brain, or affect intelligence, the nervous system, or health in general?
Second, if brain-computer equipment can read brain waves, it can presumably also write them. Internet security problems are already growing more serious; if hackers mastered the relevant techniques and used brain-computer technology to break into a human brain, couldn't they steal the data and secrets inside it?
Or, more seriously still, what if hackers used this channel to implant a virus in a human brain? Would we have to reboot the brain, or format it outright? Or install antivirus software and a firewall in our heads?
"Add bookmarks for easy reading"