

A new camera mimics the involuntary movements of the human eye to create clearer, more precise images for robots, smartphones and other image-capturing devices.
A team led by computer scientists at the University of Maryland (UMD) has invented a camera mechanism that could improve how robots observe and respond to the world around them. Details of the camera prototype and its testing, called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV), are described in an article published in the journal Science Robotics.
“Event cameras are a relatively new technology that can track moving objects better than traditional cameras, but when there is little motion, current event cameras struggle to capture clear, unblurred images,” Botao He, lead author of the paper and a UMD doctoral student in computer science, told Metro.
“This is a big problem because robots and many other technologies, such as self-driving cars, depend on accurate images. So we asked ourselves: how do humans and animals keep their vision focused on moving objects?” he added.
“When using robots, you have to replace your eyes with cameras and your brain with computers. Better cameras mean the robot has better perception and response.”
— Yiannis Aloimonos, co-author of the study and professor of computer science at the University of Maryland
For He’s team, the answer is microsaccades: small, rapid eye movements that occur involuntarily when a person tries to focus. Through these tiny but continuous movements, the human eye can accurately maintain attention on objects and their visual textures (e.g., color, depth, and shading) over time.
“We think that just as our eyes need those small movements to maintain focus, cameras can use similar principles to capture clear, precise images without the blur caused by motion,” he explained.
The team successfully reproduced microsaccades by inserting a rotating prism inside the AMI-EV to redirect the light beam captured by the lens. The prism’s continuous rotational motion simulates the movement that occurs naturally in the human eye, allowing the camera to stabilize the textures of recorded objects in the same way a human would.
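As a rough illustration of why this helps, here is a toy sketch (not the paper’s model) of an idealized event camera: events fire only where brightness changes between two instants, so a perfectly static scene produces no events at all (perceptual fading), while a one-pixel, microsaccade-like shift triggers events along every edge of the scene.

```python
import numpy as np

def events_from_motion(frame, dx, dy, threshold=0.1):
    """Toy event-camera model: an event fires wherever log-brightness
    changes by more than `threshold` between two instants."""
    shifted = np.roll(frame, shift=(dy, dx), axis=(0, 1))
    diff = np.log1p(shifted) - np.log1p(frame)
    return np.abs(diff) > threshold

# A static scene: a bright square on a dark background.
scene = np.zeros((64, 64))
scene[20:40, 20:40] = 1.0

# Without motion, nothing changes, so no events fire at all.
static_events = events_from_motion(scene, dx=0, dy=0)

# A small microsaccade-like shift triggers events along every edge.
shifted_events = events_from_motion(scene, dx=1, dy=1)

print(static_events.sum())   # 0 events for the static scene
print(shifted_events.sum())  # many events along the square's edges
```

The point of the sketch is the contrast between the two calls: the information in the scene is only visible to an event sensor when something, either the world or the camera itself, is moving.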
Useful features of the new camera
– Can be used on robots
– Improved self-driving car cameras
– Make scientific research more accurate
– Can be used with virtual reality glasses
5 questions…
Botao He, the article’s lead author and a University of Maryland computer science doctoral student.
Q: What inspired you to develop this camera?
– Human vision copes with perceptual fading through an active mechanism of small involuntary eye movements, the most prominent of which are microsaccades. By continually moving the eyes slightly during fixation, microsaccades keep the stability and persistence of visual textures.
Inspired by microsaccades, we designed an event-based perception system that can simultaneously maintain low reaction times and stable textures.
Q: Why are you inspired by the human eye?
– Event cameras and human eyes both suffer from perceptual fading, which means that images gradually fade in the absence of motion. Human vision copes with this problem through an active mechanism of tiny involuntary eye movements, of which microsaccades are the most prominent. By constantly moving the eyes slightly during fixation, microsaccades keep textures stable and persistent. Inspired by microsaccades, we designed an event-based perception system that maintains both low reaction times and stable textures.
Q: How does the camera work?
– In this design, a rotating wedge prism is mounted in front of the aperture of the event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows the added rotational motion to be compensated for algorithmically, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system we call the Artificial Microsaccade-Enhanced Event Camera (AMI-EV).
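The compensation idea can be sketched with thin-prism geometric optics: a wedge of refractive index n and wedge angle α deflects rays by roughly δ ≈ (n − 1)α, so rotating the prism sweeps the image around a small circle whose radius and phase are known in advance, and software can simply subtract that predictable offset from each event. The parameter values below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical parameters (illustrative only, not from the paper).
n = 1.5                    # refractive index of the prism glass
alpha = np.deg2rad(2.0)    # wedge angle of the prism
f_mm = 8.0                 # lens focal length in mm
omega = 2 * np.pi * 50     # prism rotation rate in rad/s

delta = (n - 1) * alpha    # thin-prism deflection angle (radians)
r = f_mm * np.tan(delta)   # radius of the circular image shift (mm)

def prism_offset(t):
    """Known, predictable image-plane offset caused by the rotating
    prism at time t (a point on a circle of radius r)."""
    phase = omega * t
    return r * np.cos(phase), r * np.sin(phase)

def compensate(x, y, t):
    """Subtract the predictable prism motion from an event's position,
    recovering coordinates in the stable (unshifted) image frame."""
    ox, oy = prism_offset(t)
    return x - ox, y - oy
```

Because the rotation is self-induced and fully known, the compensation is exact in this idealized model: a static scene point displaced by the prism maps back to a single stable location, which is what keeps the recorded texture steady.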
Q: How do you mimic the tiny involuntary movements that the eye uses to maintain clear, stable vision over the long term?
– Humans constantly move their eyes slightly to stimulate the visual neurons and keep our visual perception stable. The AMI-EV likewise has a rotating wedge prism mounted in front of the aperture of the event camera to redirect the incoming light and constantly trigger events (just as microsaccades stimulate visual neurons).
Q: What practical functions or uses does this camera have?
– First, it can be used directly as a high-speed camera. It can record and reconstruct video at 3,000 fps with good image quality. In addition, since it retains the advantages of standard event cameras, such as high dynamic range and fast response, it is well suited to autonomous driving or smartphones, especially in difficult lighting conditions.
More importantly, because it fundamentally solves the motion-dependency problem of event-based vision, it has the potential to improve the performance of most existing event-based vision tasks and may open the door to tasks that current cameras cannot handle, such as high-speed motion tracking without perceptual fading, as detailed in our paper.
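As a toy illustration of the high-speed-camera use, events carrying fine-grained timestamps can be binned into fixed-rate frames. The sketch below is a simple counting reconstruction under assumed inputs, not the paper’s reconstruction method: it accumulates an event stream of (x, y, timestamp) tuples into frames at 3,000 fps.

```python
import numpy as np

def events_to_frames(xs, ys, ts, shape, fps=3000):
    """Accumulate an event stream into fixed-rate frames by counting
    events per pixel within each 1/fps time window."""
    dt = 1.0 / fps
    n_frames = int(np.ceil(ts.max() / dt))
    frames = np.zeros((n_frames, *shape), dtype=np.int32)
    # Assign each event to its frame index, clamped to the last frame.
    idx = np.minimum((ts / dt).astype(int), n_frames - 1)
    np.add.at(frames, (idx, ys, xs), 1)  # unbuffered accumulation
    return frames

# Three synthetic events: two in the first 1/3000 s window, one later.
xs = np.array([1, 2, 3])
ys = np.array([0, 1, 2])
ts = np.array([0.0001, 0.0002, 0.0005])
frames = events_to_frames(xs, ys, ts, shape=(4, 4))
print(frames.shape)  # (2, 4, 4): two frames of a 4x4 sensor
```

Each event lands in exactly one frame, so the per-frame event counts partition the stream; real reconstructions add brightness integration on top, but the timing logic is the same.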