Intel is working on optical processors for AI: up to 100 times more efficient

Transistor scaling is approaching the end of the road, and within a few years its physical limits may become a reality. New performance gains will therefore have to come from something other than miniaturization. Intel is working on exactly that: its engineers are developing chips that compute with light instead of electrons, which could improve their efficiency for AI by up to 100 times.

It’s all based on silicon photonics, but what exactly is it?

Silicon photonics combines two of the most important inventions of the 20th century: the silicon integrated circuit and the semiconductor laser. The technology allows data to be transferred faster and over longer distances than traditional electronics, while taking advantage of the efficiencies of high-volume silicon manufacturing.

So far, silicon photonics has been used mostly inside data centers, but Intel now wants to take it a step further, towards so-called optical neural networks (ONNs). The idea is simple: photons (light), rather than electrons, are used as the medium of computation, while the circuits themselves are still built in conventional silicon.

One of the basic building blocks of these photonic circuits is the Mach-Zehnder interferometer (MZI), which can be configured to perform a 2 x 2 matrix multiplication; arranged in a triangular mesh, many MZIs can be composed to implement larger matrices.
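
As a rough illustration of how that composition works, here is a minimal Python/NumPy sketch. The function names, the phase-shifter convention and the mesh ordering are illustrative assumptions, not Intel's device model: a single MZI is built from two 50:50 beamsplitters and two phase shifters, and MZIs acting on neighbouring waveguide pairs are chained in a simplified triangular ordering into a larger unitary matrix.

```python
import numpy as np

def mzi(theta, phi):
    """2 x 2 unitary implemented by one Mach-Zehnder interferometer.

    theta: internal phase shift (sets the effective split ratio).
    phi:   phase shift on one input arm.
    Generic textbook parameterization; exact conventions vary between papers.
    """
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beamsplitter
    ps_int = np.diag([np.exp(1j * theta), 1.0])      # internal phase shifter
    ps_ext = np.diag([np.exp(1j * phi), 1.0])        # input phase shifter
    return bs @ ps_int @ bs @ ps_ext

def triangular_mesh(params, n):
    """Compose n*(n-1)//2 MZIs on neighbouring waveguide pairs into an n x n
    unitary. The ordering is a simplified stand-in for a Reck-style layout."""
    U = np.eye(n, dtype=complex)
    k = 0
    for diag in range(n - 1):                # diagonals of the triangle
        for row in range(n - 1 - diag):      # MZIs along this diagonal
            theta, phi = params[k]
            k += 1
            T = np.eye(n, dtype=complex)
            T[row:row + 2, row:row + 2] = mzi(theta, phi)
            U = T @ U
    return U

# Sanity check: the composed mesh conserves optical power (it is unitary).
rng = np.random.default_rng(0)
n = 4
params = [tuple(rng.uniform(0, 2 * np.pi, size=2)) for _ in range(n * (n - 1) // 2)]
U = triangular_mesh(params, n)
print(np.allclose(U.conj().T @ U, np.eye(n)))   # -> True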

The result is a photonic circuit that implements matrix-vector multiplication, the core operation of deep learning. The technology is far from mature: research is ongoing to build larger meshes, to understand their sensitivity to process variations, and to make them more robust through different circuit architectures.
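
An MZI mesh on its own can only realize unitary matrices, so the approach usually described in the ONN literature, taken here as an assumed sketch rather than Intel's exact circuit, is to factor a layer's weight matrix with the singular value decomposition, W = U Σ Vᴴ, implementing the two unitaries as MZI meshes and the singular values as a bank of per-channel attenuators or gains:

```python
import numpy as np

def photonic_layer(W):
    """Factor a general weight matrix into the three stages a photonic circuit
    would implement: unitary mesh, attenuator/gain bank, unitary mesh."""
    U, s, Vh = np.linalg.svd(W)
    return U, s, Vh

def apply_photonic_layer(stages, x):
    U, s, Vh = stages
    x = Vh @ x   # first MZI mesh (unitary V^H)
    x = s * x    # per-channel attenuators/gains (the singular values)
    x = U @ x    # second MZI mesh (unitary U)
    return x

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))   # hypothetical trained layer weights
x = rng.standard_normal(4)        # input activations encoded as optical amplitudes

stages = photonic_layer(W)
print(np.allclose(apply_photonic_layer(stages, x), W @ x))   # -> True
```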

The power of scale

For ONNs to become a viable part of the AI hardware ecosystem, they will need to scale to much larger circuits than today's, and above all manufacturing techniques will have to improve. Intel's progress in this area should make larger circuits possible, but those circuits require ever greater numbers of MZI devices per chip.
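
To get a feel for the numbers: the Reck- and Clements-style meshes described in the literature need N(N-1)/2 MZIs to realize an N x N unitary, each with phase shifters that have to be tuned individually. The short loop below simply tabulates that count (for a single mesh, not a complete layer):

```python
# Rough scaling of mesh size: N*(N-1)/2 MZIs per N x N unitary mesh.
for n in (8, 64, 256, 1024):
    mzis = n * (n - 1) // 2
    print(f"{n:>5} x {n:<5} -> {mzis:>7} MZIs per unitary mesh")
```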

This means that tuning all of those devices becomes increasingly complicated, which is why other strategies are already being studied, such as training the ONN in software and then mass-producing circuits with those pre-computed parameters built in. To test this, the researchers compared a more tunable design called GridNet with a more fault-tolerant architecture called FFTNet.

With no imperfections, GridNet achieved higher accuracy than FFTNet (98% vs 95%). However, when noise was introduced into the photonic circuits, FFTNet proved far more robust: it nearly maintained its accuracy, while GridNet's dropped below 50%.
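
As a toy illustration of why the architecture matters, and emphatically not a reproduction of the GridNet/FFTNet experiment, the sketch below models each device as a simple 2 x 2 rotation, adds Gaussian noise to every device setting, and measures how the gap between the intended and the "fabricated" transform grows with the number of devices in the optical path:

```python
import numpy as np

rng = np.random.default_rng(2)

def rotation(theta):
    """Toy 2 x 2 stand-in for one photonic device: a rotation by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def mean_error(depth, sigma, trials=200):
    """Average distance between an ideal chain of `depth` devices and the same
    chain with Gaussian noise of std `sigma` added to every device setting."""
    errs = []
    for _ in range(trials):
        thetas = rng.uniform(0.0, 2.0 * np.pi, size=depth)
        ideal, noisy = np.eye(2), np.eye(2)
        for t in thetas:
            ideal = rotation(t) @ ideal
            noisy = rotation(t + rng.normal(0.0, sigma)) @ noisy
        errs.append(np.linalg.norm(ideal - noisy))
    return float(np.mean(errs))

# Longer optical paths accumulate more error for the same per-device noise,
# which is one intuition for why shallower architectures hold up better
# under fabrication imperfections.
for depth in (8, 32, 128):
    print(depth, round(mean_error(depth, sigma=0.02), 3))
```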

Based on the results Intel has shown, choosing the right architecture early greatly increases the likelihood that the resulting circuits will achieve the desired performance despite manufacturing variations. In any case, what has been demonstrated so far reduces latency by a factor of 10,000 and improves efficiency by at least 100 times, with Intel even suggesting that the gains could reach orders of magnitude not seen before.

We are getting closer to optical neural network processors, which could change the fundamental basis on which our technology computes. We will see what the future holds.