The parallelism of optics and the miniaturization of optical components using nanophotonic structures, such as metasurfaces, present a compelling alternative to electronic implementations of convolutional neural networks. The lack of a low-power optical nonlinearity, however, requires slow and energy-inefficient conversions between the electronic and optical domains. Here, we propose an architecture that requires only a single electrical-to-optical conversion: a free-space optical frontend implements the linear operations of the first layer, and the subsequent layers are realized electronically. A speed and power analysis indicates that this hybrid photonic-electronic architecture outperforms a fully electronic one for large image and kernel sizes. Benchmarking the photonic-electronic architecture on a modified version of AlexNet achieves high classification accuracies on images from Kaggle's Cats and Dogs challenge and the MNIST database.
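The sketch below is a minimal, illustrative model of the hybrid idea described above, not the authors' implementation: the first-layer convolution is modeled as Fourier-plane filtering in a free-space 4f system (with photodetection of intensity as the single optical-to-electronic conversion), and the later layers are stand-in electronic operations. The kernel values, image size, pooling, and classifier dimensions are assumptions chosen for illustration.

```python
import numpy as np

def optical_conv_frontend(image, kernel):
    """Model the first-layer convolution as Fourier-plane filtering (4f optical system)."""
    H, W = image.shape
    # Zero-pad the kernel to the image size and recenter it for circular convolution.
    kpad = np.zeros((H, W))
    kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Convolution theorem: multiply the spectra, then inverse transform.
    field = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kpad))
    # A photodetector array records intensity, yielding a nonnegative signal
    # before the electronically implemented layers (assumption for this sketch).
    return np.abs(field) ** 2

def electronic_backend(features):
    """Stand-in for the electronically realized later layers."""
    pooled = features[::2, ::2]                        # simple stride-2 pooling
    weights = np.random.default_rng(0).normal(size=(pooled.size, 10))
    return pooled.reshape(-1) @ weights                # toy fully connected layer

image = np.random.default_rng(1).random((64, 64))      # stand-in input image
kernel = np.array([[1., 0., -1.],                      # example edge-detecting kernel
                   [2., 0., -2.],
                   [1., 0., -1.]])
features = optical_conv_frontend(image, kernel)
print(electronic_backend(features).shape)              # (10,)
```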
DOI: 10.1364/AO.58.003179