Computer vision, a field of artificial intelligence, enables computers to acquire, process, and analyze digital images and to make recommendations or decisions based on that analysis. The market for computer vision is growing rapidly, and researchers are looking for ways to improve the technology. USC Viterbi's Information Sciences Institute (ISI) and the Ming Hsieh Department of Electrical and Computer Engineering (ECE) recently completed Phases 1 and 2 of a DARPA (Defense Advanced Research Projects Agency) project to make advances in computer vision.

Challenges in Computer Vision

In the scenario of an autonomous car encountering a rabbit on the road, the car's sensors capture images of the rabbit, which are sent to a computer for processing. The processed data is then used to make a decision and adjust the car's controls to safely avoid the rabbit. However, in applications that require large amounts of data to move from the image sensor to back-end processors, the physical separation between sensing and processing hardware creates bottlenecks in throughput, bandwidth, and energy efficiency.

Researchers have traditionally approached the problem from a proximity standpoint, studying how to bring the back-end processing closer to the front-end image collection. However, this approach is not feasible for drones, which lack the computing power and battery capacity to process the data on board. To address this challenge, the USC Viterbi team proposed a new way of fusing sensing, memory, and computing within a single camera chip.

The Proposed Solution: RPIXELS

The USC Viterbi team proposed a novel in-pixel intelligent processing (IP2) paradigm, processing-in-pixel-in-memory (P2M), which enables the pixel array to perform a wider range of complex operations, including image processing. With P2M, the processing occurs right under the data on the pixel itself, and only the relevant information is extracted. This is possible thanks to advances in computer microchips, specifically CMOS (complementary metal–oxide–semiconductor) technology, which is widely used in image sensors.

The resulting proposed solution for the DARPA challenge is RPIXELS (Recurrent Neural Network Processing In-Pixel for Efficient Low-energy Heterogeneous Systems). RPIXELS combines front-end in-pixel processing with a back end that the ISI team optimized to support it. In testing the RPIXELS framework, the team saw promising results: a 13.5x reduction in both data size and bandwidth, exceeding the DARPA goal of a 10x reduction in both metrics.

RPIXELS reduces both latency (the time taken to process an image) and the required bandwidth by tightly coupling the first layers of a neural network directly into the pixel array. This allows faster decisions to be made based on what the sensor 'sees'. It also enables researchers to develop novel back-end object detection and tracking algorithms, continuing to innovate toward more accurate, higher-performance systems.
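The idea of computing the first network layers inside the pixel array can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative and is not the team's actual circuit or architecture; the frame size, channel count, and stride are hypothetical assumptions, chosen only to show where the bandwidth savings come from: a strided first convolution layer emits far fewer values than the raw frame it consumes.

```python
# Illustrative sketch only -- not the actual RPIXELS design.
# Moving the first neural-network layer "into" the pixel array means the
# sensor transmits compact feature maps instead of raw frames, so far less
# data crosses the sensor-to-processor link. All numbers are hypothetical.

def first_layer_output_values(h, w, out_ch, stride):
    """Values a strided first conv layer emits per frame (assuming
    'same'-style padding, so spatial dims shrink by ~stride)."""
    oh = -(-h // stride)   # ceiling division for output height
    ow = -(-w // stride)   # ceiling division for output width
    return oh * ow * out_ch

h, w, in_ch = 1080, 1920, 3          # hypothetical raw RGB frame
raw = h * w * in_ch                  # values sent with no in-pixel processing
reduced = first_layer_output_values(h, w, out_ch=8, stride=4)

print(f"raw values per frame:     {raw}")        # 6,220,800
print(f"feature values per frame: {reduced}")    # 1,036,800
print(f"reduction factor:         {raw / reduced:.1f}x")
```

With these made-up parameters the link carries 6x fewer values per frame; deeper in-pixel processing, quantization, or extracting only task-relevant features (as the article describes) would push the factor higher still.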

The USC Viterbi team’s proposed solution for the DARPA challenge is a promising advancement in computer vision. By reducing the amount of data that needs to be transmitted downstream to the AI processor, RPIXELS significantly reduces power consumption and bandwidth. The next step for the team is to create a physical chip by putting the circuit onto silicon and testing it in the real world. The proposed solution has the potential to save lives, including those of rabbits, by improving the safety and efficiency of autonomous vehicles, drone surveillance, and other applications of computer vision.
