University of California San Diego researchers have created a new model that helps four-legged robots navigate complex terrain. The robots can now cross rocky ground, traverse gap-filled paths and climb stairs with ease. The model is set to be presented at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR) in Vancouver, Canada.
How the Model Works
The robot is equipped with a depth camera positioned at an angle so that it captures both the terrain beneath it and the scene in front of it. The model takes the camera's 2D images and translates them into 3D space. To do this, it processes a short video sequence, extracting pieces of 3D information from each 2D frame, including information about the robot's leg movements such as joint angle, joint velocity and distance from the ground. The model then compares the information from the previous frames with information from the current frame to estimate the 3D transformation between the past and the present.
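To make this concrete, the sketch below (in Python with NumPy) shows the kind of per-frame bookkeeping this description implies: each observation pairs a depth image with the proprioceptive readings, and the 3D transformation between two frames is recovered from their estimated camera poses. The FrameObservation fields and the idea of passing pose matrices in directly are illustrative assumptions, not the researchers' actual implementation, in which the model itself estimates the transformation.

```python
# A minimal sketch, assuming per-frame depth images, proprioceptive readings and
# estimated camera poses are available. Field names and shapes are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameObservation:
    depth: np.ndarray             # (H, W) depth image from the angled camera, meters
    joint_angles: np.ndarray      # per-joint angles of the legs
    joint_velocities: np.ndarray  # per-joint angular velocities
    foot_clearance: np.ndarray    # estimated distance of each foot from the ground
    T_world_cam: np.ndarray       # (4, 4) estimated camera pose (camera -> world)

def relative_transform(prev: FrameObservation, curr: FrameObservation) -> np.ndarray:
    """Rigid 3D transform taking points from the previous camera frame
    into the current camera frame: T = inv(T_world_curr) @ T_world_prev."""
    return np.linalg.inv(curr.T_world_cam) @ prev.T_world_cam
```

The relative transform produced here is exactly what the next step needs in order to synthesize a past frame from the current viewpoint.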
The model then fuses all of this information and uses it to synthesize the previous frames so they align with the current one. As the robot moves, the model checks the synthesized frames against the frames that the camera has actually captured. If they match, the model knows it has learned the correct representation of the 3D scene, and that representation is used to control the robot's movement. By synthesizing visual information from the past, the robot remembers what it has seen, as well as the actions its legs have taken before, and uses that memory to inform its next moves.
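The "synthesize, then check" step can be illustrated with a small NumPy sketch: the previous depth frame is reprojected into the current camera view using the estimated relative transform, and the synthesized image is compared against the frame the camera actually captured. The pinhole intrinsics, nearest-pixel splatting and mean-absolute-error score below are simplifying assumptions for illustration, not the researchers' actual synthesis and matching procedure.

```python
# A minimal sketch, assuming a pinhole camera model with known intrinsics K and
# an estimated rigid transform between the two camera frames.
import numpy as np

def warp_depth(prev_depth, K, T_prev_to_curr):
    """Reproject a previous depth frame into the current camera view.

    prev_depth:      (H, W) depth image from the earlier frame, in meters.
    K:               (3, 3) pinhole camera intrinsics.
    T_prev_to_curr:  (4, 4) rigid transform from the previous camera frame
                     into the current camera frame.
    Returns an (H, W) synthesized depth image (0 where nothing projects).
    """
    H, W = prev_depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # Unproject every valid pixel of the previous frame into 3D camera coordinates.
    v, u = np.mgrid[0:H, 0:W]
    z = prev_depth
    valid = z > 0
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    pts = np.stack([x[valid], y[valid], z[valid], np.ones(valid.sum())], axis=0)

    # Move the points into the current camera frame and project them back to pixels.
    pts_curr = T_prev_to_curr @ pts
    z_c = pts_curr[2]
    in_front = z_c > 1e-6
    u_c = np.round(fx * pts_curr[0, in_front] / z_c[in_front] + cx).astype(int)
    v_c = np.round(fy * pts_curr[1, in_front] / z_c[in_front] + cy).astype(int)
    z_c = z_c[in_front]

    # Keep points that land inside the image and splat them (nearest point wins).
    inside = (u_c >= 0) & (u_c < W) & (v_c >= 0) & (v_c < H)
    synth = np.zeros((H, W))
    order = np.argsort(-z_c[inside])   # write far points first, near points last
    synth[v_c[inside][order], u_c[inside][order]] = z_c[inside][order]
    return synth

def consistency_error(synth, curr_depth):
    """Mean absolute depth difference where both frames have valid values."""
    mask = (synth > 0) & (curr_depth > 0)
    return float(np.abs(synth - curr_depth)[mask].mean()) if mask.any() else np.inf
```

A real pipeline would handle occlusion, holes and sub-pixel interpolation far more carefully; the point here is only that a good 3D representation makes the synthesized and captured frames agree.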
Advantages and Limitations
The new model improves the robot's 3D perception and combines it with proprioception, the sense of movement, direction, speed, location and touch. This allows the robot to traverse more challenging terrain than before. However, the model does not guide the robot to a specific goal or destination: when deployed, the robot simply walks a straight path and, if it encounters an obstacle, sidesteps it and continues along another straight path. The researchers plan to incorporate more planning techniques and complete the navigation pipeline in future work.
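One way to picture how 3D perception and proprioception come together, and why the lack of a goal matters, is as a single observation vector handed to the controller. The sketch below is a hypothetical illustration: the plain concatenation and the argument names are assumptions, not the researchers' architecture, but note that nothing in the resulting vector encodes a destination.

```python
# A minimal sketch, assuming the 3D scene representation arrives as a feature
# vector and proprioception as raw joint/foot readings. Purely illustrative.
import numpy as np

def policy_input(scene_feature: np.ndarray,
                 joint_angles: np.ndarray,
                 joint_velocities: np.ndarray,
                 foot_clearance: np.ndarray) -> np.ndarray:
    """Fuse exteroceptive (3D scene) and proprioceptive terms into one observation.
    There is no goal or waypoint term, which is why the deployed robot simply
    walks straight and sidesteps obstacles rather than navigating to a target."""
    proprio = np.concatenate([joint_angles, joint_velocities, foot_clearance])
    return np.concatenate([scene_feature.ravel(), proprio])
```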
The new model developed by the University of California San Diego researchers allows four-legged robots to see their surroundings more clearly in 3D and traverse challenging terrain with ease. By pairing improved 3D perception with proprioception, the robot can draw on what it has seen and on the actions its legs have already taken to decide its next moves. Although the model has its limitations, the researchers plan to add more planning techniques in future work.