Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors they employ. The operation of the reconstructor is then detailed, along with the three neural network frameworks used and the CUDA code developed. Changes to the size of the reconstructor influence both the training and execution time of the neural network. The native CUDA code proves to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
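
A CARMEN-style reconstructor is essentially a multi-layer perceptron that maps measured wavefront-sensor slopes to a reconstructed correction, so a native CUDA implementation reduces largely to dense matrix-vector products followed by a non-linear activation. The listing below is a minimal sketch of one such fully connected forward pass, not the authors' implementation; the kernel name denseForward, the layer size of 72 inputs and outputs, the placeholder weights, and the tanh activation are illustrative assumptions.

// Minimal sketch (not the authors' code): one dense layer y = tanh(W x + b),
// evaluated with one CUDA thread per output neuron.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void denseForward(const float *W, const float *x, const float *b,
                             float *y, int nIn, int nOut)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;   // output neuron index
    if (row >= nOut) return;
    float acc = b[row];
    for (int col = 0; col < nIn; ++col)
        acc += W[row * nIn + col] * x[col];            // row-major weight matrix
    y[row] = tanhf(acc);                               // hidden-layer activation
}

int main()
{
    const int nIn = 72, nOut = 72;                     // illustrative layer size (assumption)
    float *hW = new float[nIn * nOut], *hx = new float[nIn];
    float *hb = new float[nOut],       *hy = new float[nOut];
    for (int i = 0; i < nIn * nOut; ++i) hW[i] = 0.01f;  // placeholder weights
    for (int i = 0; i < nIn;        ++i) hx[i] = 1.0f;   // placeholder slope vector
    for (int i = 0; i < nOut;       ++i) hb[i] = 0.0f;   // placeholder biases

    float *dW, *dx, *db, *dy;
    cudaMalloc(&dW, nIn * nOut * sizeof(float));
    cudaMalloc(&dx, nIn * sizeof(float));
    cudaMalloc(&db, nOut * sizeof(float));
    cudaMalloc(&dy, nOut * sizeof(float));
    cudaMemcpy(dW, hW, nIn * nOut * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx, nIn * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, nOut * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 128, blocks = (nOut + threads - 1) / threads;
    denseForward<<<blocks, threads>>>(dW, dx, db, dy, nIn, nOut);
    cudaMemcpy(hy, dy, nOut * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);                      // expect tanh(0.72) ~ 0.617
    cudaFree(dW); cudaFree(dx); cudaFree(db); cudaFree(dy);
    delete[] hW; delete[] hx; delete[] hb; delete[] hy;
    return 0;
}

In a full reconstructor this forward pass would be chained over the network's layers, and the comparison reported in the paper concerns how the training and execution time of such a network scale with its size across the different frameworks and the native CUDA version.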

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5492298
DOI: http://dx.doi.org/10.3390/s17061263
