Tsinghua University has achieved breakthroughs in photonic computing with the large-scale photonic computing chiplet “Taichi” and a fully-forward-mode (FFM) photonic training architecture, addressing the challenges of computing power and energy efficiency in artificial intelligence.
The research team led by Prof. Lu Fang and Prof. Qionghai Dai proposed a general-purpose distributed computing architecture and an on-chip integrated diffractive-interference hybrid model. Based on these architectures and theories, they developed the large-scale photonic chiplet “Taichi” with an energy efficiency of 160 TOPS/W. It achieved on-chip 1,000-category-level classification and high-fidelity AI-generated content, with up to two orders of magnitude improvement in efficiency. The work was published in Science on 12 April 2024.

Moreover, leveraging spatial symmetry and Lorentz reciprocity, the team introduced the fully-forward-mode (FFM) training architecture for photonic neural networks, enabling precise and efficient on-site machine learning. In this way, the compute-intensive training process can be carried out directly on the physical system, alleviating the constraints of numerical modelling.  The work was published in Nature on 8 August 2024.
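The FFM idea can be illustrated with a toy simulation. In a reciprocal linear optical system, the transposed transfer matrix needed to propagate the error signal for gradient computation is itself realisable as a forward propagation with input and output ports exchanged, so no digital model of the hardware is required. The NumPy sketch below is an illustrative assumption, not the authors' implementation: it trains a single phase mask in front of a fixed diffractive layer, obtaining gradients from two forward passes (a data pass and a reciprocal error pass).

```python
# Minimal conceptual sketch of fully-forward-mode (FFM) training.
# All names, sizes, and the single-layer setup are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                        # number of optical modes (assumed)
theta = rng.uniform(0, 2 * np.pi, n)          # trainable phase mask, one phase per mode
T = rng.standard_normal((n, n)) / np.sqrt(n)  # fixed diffractive transfer matrix

def forward(x, theta):
    """Data pass: phase modulation followed by diffractive propagation."""
    return T @ (np.exp(1j * theta) * x)

def forward_reciprocal(e):
    """Error pass: by Lorentz reciprocity, launching the error field from the
    output side applies T^T -- physically still a forward measurement."""
    return T.T @ e

# Toy regression task: steer a fixed input field onto a target output field.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y_target = rng.standard_normal(n) + 1j * rng.standard_normal(n)

loss = lambda th: np.sum(np.abs(forward(x, th) - y_target) ** 2)
print(f"initial loss: {loss(theta):.4f}")

lr = 0.05
for _ in range(500):
    err = forward(x, theta) - y_target       # residual field at the output
    back = forward_reciprocal(np.conj(err))  # second forward (reciprocal) pass
    # dL/dtheta_k = 2 Re[(T^T conj(err))_k * i e^{i theta_k} x_k]
    grad = 2 * np.real(back * (1j * np.exp(1j * theta) * x))
    theta -= lr * grad

print(f"final loss:   {loss(theta):.4f}")
```

In the real system, both passes would be optical measurements on the same hardware, with electronics handling only the lightweight element-wise parameter update; this is what makes on-site training feasible without a numerical model of the device.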
These innovations demonstrate the great potential of photonic computing for both the training and inference of large-scale AI models. It is anticipated that “Taichi” will accelerate the development of ultra-fast, energy-efficient photonic AI solutions, providing critical support for foundation models and a new era of AGI.

Editor: Guo Lili