Photonic processor could streamline 6G wireless signal processing

As more connected devices demand bandwidth for tasks like remote work and cloud computing, managing the finite wireless spectrum that all users share will become increasingly difficult.

Engineers are turning to artificial intelligence to manage the available wireless spectrum dynamically, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and cannot operate in real time.

Now, MIT researchers have created a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.

The photonic chip is roughly 100 times faster than the best digital alternative, while classifying wireless signals with about 95 percent accuracy. The hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. In addition, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.

The device could be especially useful in future 6G wireless applications, such as cognitive radios that boost data rates by adapting wireless modulation formats to the changing wireless environment.

By enabling an edge device to perform deep-learning computations in real time, this new hardware accelerator could deliver dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles react instantly to changes in their environment or enable smart pacemakers to continuously monitor the health of a patient’s heart.

“There are countless applications that could be facilitated by edge devices capable of analyzing wireless signals. What we have presented in our paper could unlock numerous opportunities for real-time and dependable AI inference. This work marks the inception of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of the Research Laboratory of Electronics (RLE), and senior author of the paper.

Joining him on the paper are lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research is published today in Science Advances.

Light-speed processing  

Cutting-edge digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computational intensity of deep neural networks makes it too slow for many time-sensitive applications.
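
To make the contrast concrete, here is a minimal Python sketch of that conventional digital pipeline: digitize a signal, turn it into a spectrogram “image,” and hand that image to a classifier. Every parameter here (sample rate, waveform, window sizes) is invented for illustration and is not taken from the paper:

```python
# Hypothetical sketch of the digital baseline: signal -> spectrogram -> CNN.
import numpy as np

fs = 1e6                          # assumed sample rate (Hz)
t = np.arange(4096) / fs

# Made-up received signal: a 2-FSK burst plus noise.
bits = np.repeat(np.random.randint(0, 2, 16), 256)
freq = np.where(bits == 0, 100e3, 150e3)
signal = np.cos(2 * np.pi * freq * t) + 0.1 * np.random.randn(t.size)

# Short-time Fourier transform -> spectrogram (the "image").
win, hop = 256, 128
frames = np.lib.stride_tricks.sliding_window_view(signal, win)[::hop]
spectrogram = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2

print(spectrogram.shape)          # (31, 129): time-by-frequency pixels
# A deep neural network would then classify this 2-D array. Accurate,
# but every stage (ADC sampling, FFTs, convolutions) costs time and energy.
```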

Optical systems can accelerate deep neural networks by using light to encode and process data, which is also far less energy-intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks for signal processing while keeping them scalable.

The researchers tackled this challenge head-on by designing an optical neural network architecture built specifically for signal processing, which they call the multiplicative analog frequency transform optical neural network (MAFT-ONN).

The MAFT-ONN sidesteps the scalability problem by encoding all signal data and performing all machine-learning operations in what is known as the frequency domain, before the wireless signals are ever digitized.

The researchers crafted their optical neural network to carry out both linear and nonlinear operations inline. Both types of operations are essential for deep learning.
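
As a rough illustration of why both kinds of operations matter, the toy NumPy sketch below (all dimensions and weights are arbitrary) shows that a network built from linear operations alone collapses into a single matrix, while an inline nonlinearity gives it genuine depth:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 8))      # arbitrary layer-1 weights
W2 = rng.normal(size=(4, 16))      # arbitrary layer-2 weights
x = rng.normal(size=8)             # arbitrary input

relu = lambda v: np.maximum(v, 0.0)

deep = W2 @ relu(W1 @ x)           # linear then nonlinear: a true 2-layer net
shallow = (W2 @ W1) @ x            # linear only: collapses to one matrix

# Without the nonlinearity, stacking layers adds no expressive power,
# which is why the hardware must perform both operation types inline.
```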

Thanks to this groundbreaking design, they require only one MAFT-ONN device per layer for the complete optical neural network, unlike other approaches that need one device for each individual computational unit, or “neuron.”

“We can accommodate 10,000 neurons on a single device and execute the necessary multiplications in one go,” remarks Davis.

The researchers achieve this through a method called photoelectric multiplication, which significantly enhances efficiency. It also allows them to develop an optical neural network that can be easily scaled with additional layers without incurring extra overhead.
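
The device itself is analog, but the principle behind photoelectric multiplication can be sketched numerically. In this toy NumPy model, activations and weights amplitude-modulate matched frequency tones; a photodetector responds to the squared total field, and the time average of its cross term recovers the dot product, one multiply-accumulate per detector. The tone frequencies and encoding below are assumptions made for illustration, not the paper’s actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 8)          # hypothetical neuron activations
w = rng.uniform(-1, 1, 8)          # hypothetical layer weights

fs, n = 1e6, 100_000               # 0.1 s record: every tone fits exactly
t = np.arange(n) / fs
tones = np.cos(2 * np.pi * np.outer(10e3 * np.arange(1, 9), t))

# Each vector amplitude-modulates its own set of carrier tones.
field_x = tones.T @ x
field_w = tones.T @ w

# A detector measures |E1 + E2|^2, whose cross term is 2 * E1 * E2.
# Matched tones beat down to DC; cos^2 averages to 1/2, cancelling the 2,
# so the time-averaged cross term equals the dot product.
cross = 2 * field_x * field_w
print(cross.mean(), np.dot(x, w))  # the two agree to numerical precision
```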

Results in nanoseconds

MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information on to the edge device for later operations. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries.

One of the primary challenges the researchers encountered when designing MAFT-ONN was determining how to map machine-learning calculations to the optical hardware.

“We couldn’t merely take a run-of-the-mill machine-learning framework and apply it. We had to tailor it to fit the hardware and figure out how to leverage the physics to perform the calculations we required,” notes Davis.

When they evaluated their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot, and it can quickly converge to more than 99 percent accuracy using multiple measurements. MAFT-ONN performed the entire process in about 120 nanoseconds.

“The longer you measure, the more accuracy you will obtain. Since MAFT-ONN computes in nanoseconds, you don’t sacrifice much speed to gain additional accuracy,” Davis adds.
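
One plausible way to read those numbers: if each nanosecond-scale shot is an independent measurement that is correct 85 percent of the time, then combining shots, for example by majority vote (the paper’s exact aggregation scheme may differ), drives the error down quickly. A back-of-the-envelope sketch:

```python
import math

def majority_vote_accuracy(p_single, shots):
    """Probability that the majority of `shots` independent measurements,
    each correct with probability p_single, picks the right class."""
    need = shots // 2 + 1
    return sum(math.comb(shots, k)
               * p_single**k * (1 - p_single)**(shots - k)
               for k in range(need, shots + 1))

for shots in (1, 3, 5, 9, 15):
    print(shots, round(majority_vote_accuracy(0.85, shots), 3))
# 1 -> 0.85, 3 -> 0.939, 5 -> 0.973, 9 -> 0.994, 15 -> 0.999
```

Since each measurement takes only nanoseconds, even 15 shots stay well within real-time budgets, which is the point Davis makes above.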

While cutting-edge digital radio frequency devices can conduct machine-learning inference in microseconds, optics can accomplish it in nanoseconds or even picoseconds.

Looking ahead, the researchers want to use what are known as multiplexing schemes to perform more computations and scale up the MAFT-ONN. They also plan to extend their work to more complex deep-learning architectures that could run transformer models or LLMs.

This research was partially funded by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.

