Publications by authors named "B Rosenow"

Unlike bosons and fermions, quasiparticles in two-dimensional quantum systems, known as anyons, exhibit statistical exchange phases that range between 0 and π. In fractional quantum Hall states, these anyons carry a fraction of the electron charge and propagate along chiral edge channels. This makes it possible to build anyon colliders, in which coupling different edge channels through a quantum point contact enables the observation of two-particle interference effects.

Deep neural networks have been successfully applied to a broad range of problems where overparametrization yields weight matrices that are partially random. A comparison of weight-matrix singular vectors to the Porter-Thomas distribution suggests that there is a boundary between randomness and learned information in the singular value spectrum. Inspired by this finding, we introduce a noise-filtering algorithm that both removes small singular values and reduces the magnitude of large singular values, the latter to counteract the level repulsion between the noise and information parts of the spectrum.
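
As a rough illustration of this kind of filter (not the paper's exact algorithm), the NumPy sketch below zeroes singular values inside an assumed random bulk and shrinks the surviving ones by inverting the spiked-model relation s = θ + 1/θ; the bulk edge of 2 applies to a square matrix with i.i.d. noise of variance 1/n, and the shrinkage rule is a generic stand-in.

```python
import numpy as np

def svd_noise_filter(W, bulk_edge=2.0):
    """Filter a weight matrix by removing the random part of its
    singular value spectrum.

    Singular values below `bulk_edge` are treated as pure noise and set
    to zero.  The surviving large singular values are reduced by
    inverting the spiked-model relation s = theta + 1/theta, since level
    repulsion from the noise bulk inflates the observed values relative
    to the underlying signal strength theta.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    theta = (s + np.sqrt(np.maximum(s**2 - bulk_edge**2, 0.0))) / 2.0
    s_clean = np.where(s > bulk_edge, theta, 0.0)
    return (U * s_clean) @ Vt

# Toy check: a rank-3 signal buried in i.i.d. noise of variance 1/n.
rng = np.random.default_rng(0)
n = 400
signal = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n)) * 4.0 / n
W_clean = svd_noise_filter(signal + rng.standard_normal((n, n)) / np.sqrt(n))
```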

As the complexity of quantum systems such as quantum bit arrays increases, efforts to automate expensive tuning become increasingly worthwhile. We investigate machine-learning-based tuning of gate arrays using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm for the case study of Majorana wires with strong disorder. We find that the algorithm can efficiently improve topological signatures, learn intrinsic disorder profiles, and completely eliminate disorder effects.
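
A minimal sketch of such a tuning loop, using the ask-tell interface of the open-source cma package; the cost function here is a hypothetical placeholder for the actual measurement of topological signatures on the wire, and the gate count, step size, and options are illustrative assumptions.

```python
import numpy as np
import cma  # pip install cma

def tuning_cost(gate_voltages):
    """Hypothetical placeholder: return a scalar that is small when the
    measured topological signature of the wire is good.  In practice
    this would wrap a conductance measurement or a disorder simulation."""
    return float(np.sum(np.asarray(gate_voltages) ** 2))  # dummy optimum at 0 V

n_gates = 8
es = cma.CMAEvolutionStrategy(n_gates * [0.0],  # initial gate voltages
                              0.3,              # initial step size sigma0
                              {'popsize': 12, 'maxiter': 50})
while not es.stop():
    candidates = es.ask()                       # sample candidate voltage settings
    es.tell(candidates, [tuning_cost(x) for x in candidates])
best_voltages = es.result.xbest                 # best setting found so far
```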

Neural networks have been used successfully in a variety of fields, which has led to great interest in developing a theoretical understanding of how they store the information needed to perform a given task. We study the weight matrices of trained deep neural networks using methods from random matrix theory (RMT) and show that the statistics of most of the singular values follow universal RMT predictions. This suggests that those singular values are random and carry no system-specific information, which we investigate further by comparing the statistics of eigenvector entries to the universal Porter-Thomas distribution.
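
As a sketch of this type of comparison (not the paper's analysis), the snippet below rescales squared singular-vector entries and tests them against the Porter-Thomas distribution, which for real random vectors is a chi-squared distribution with one degree of freedom; the Kolmogorov-Smirnov test is just one possible choice of statistic.

```python
import numpy as np
from scipy import stats

def porter_thomas_ks(vectors):
    """Kolmogorov-Smirnov comparison of squared vector entries to the
    Porter-Thomas distribution, i.e. chi-squared with one degree of
    freedom for the rescaled entries n * v_i**2 of random real vectors."""
    n = vectors.shape[0]
    y = n * np.ravel(vectors) ** 2
    return stats.kstest(y, stats.chi2(df=1).cdf)

rng = np.random.default_rng(1)
U, _, Vt = np.linalg.svd(rng.standard_normal((300, 300)))
print(porter_thomas_ks(U))  # a purely random matrix should be consistent
# For a trained network, the vectors attached to large singular values
# are expected to deviate from Porter-Thomas.
```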

Overparametrized deep neural networks trained by stochastic gradient descent are successful at many tasks of practical relevance. One aspect of overparametrization is that the trained network may have greater expressivity than the data-generating process. In a student-teacher scenario, this corresponds to the so-called over-realizable case, in which the student network has more hidden units than the teacher.
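
A minimal PyTorch sketch of the over-realizable setup (sizes, learning rate, and architecture are illustrative assumptions, not the paper's experiments): labels are generated by a fixed random teacher, and a student with ten times as many hidden units is trained on them by mini-batch stochastic gradient descent.

```python
import torch

torch.manual_seed(0)
d, width_teacher, width_student, n = 20, 3, 30, 1000

# Fixed random teacher: the data-generating process.
teacher = torch.nn.Sequential(
    torch.nn.Linear(d, width_teacher), torch.nn.ReLU(),
    torch.nn.Linear(width_teacher, 1))
X = torch.randn(n, d)
with torch.no_grad():
    y = teacher(X)

# Over-realizable student: same architecture, more hidden units.
student = torch.nn.Sequential(
    torch.nn.Linear(d, width_student), torch.nn.ReLU(),
    torch.nn.Linear(width_student, 1))
opt = torch.optim.SGD(student.parameters(), lr=1e-2)

for step in range(5000):
    idx = torch.randint(0, n, (64,))  # mini-batch for SGD
    loss = torch.nn.functional.mse_loss(student(X[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```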
