Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data. Paradoxically, empirical evidence indicates that even systems which are robust to large random perturbations of the input data remain susceptible to small, easily constructed, adversarial perturbations of their inputs. Here, we show that this may be seen as a fundamental feature of classifiers working with high-dimensional input data.
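A minimal numpy sketch (our illustration, not the paper's construction) shows the geometry behind this paradox for a toy linear classifier: a random perturbation of norm ε contributes only about ε/√d to the logit, while the same budget spent along the weight vector shifts it by the full ε. The dimension and norms below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                        # input dimension (arbitrary)
w = rng.standard_normal(d)
w /= np.linalg.norm(w)            # unit weight vector of a toy linear classifier
x = w.copy()                      # a point with logit w.x = 1, label sign(+1)

# A *large* random perturbation (norm 10): its projection onto w
# concentrates around 10/sqrt(d) = 0.1, so the label survives.
r = rng.standard_normal(d)
r *= 10.0 / np.linalg.norm(r)
print("logit after random noise (norm 10):", w @ (x + r))   # ~ 1 +/- 0.1

# A *small* worst-case perturbation (norm 2) aligned with -w flips the label.
a = -2.0 * w
print("logit after adversarial move (norm 2):", w @ (x + a))  # = -1
```

The random perturbation is five times larger in norm, yet only the tiny aligned one changes the decision; in high dimensions almost all of a random vector's mass is orthogonal to any fixed direction.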
IEEE Trans Neural Netw Learn Syst, December 2023
Mammalian brains operate in very special surroundings: to survive, they have to react quickly and effectively to the pool of stimulus patterns previously recognized as dangerous. Many learning tasks encountered by living organisms involve a specific set-up centered around a relatively small set of patterns presented in a particular environment. For example, at a party, people recognize friends immediately, without deep analysis, just by seeing a fragment of their clothes.
Myocardial infarction (MI) occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is cardiovascular magnetic resonance imaging (MRI) with intravenously administered gadolinium-based contrast (with damaged areas apparent as late gadolinium enhancement [LGE]). However, no "gold standard" fully automated method for the quantification of MI exists.
In this article, we consider a version of the challenging problem of learning from datasets whose size is too limited to allow generalisation beyond the training set. To address the challenge, we propose to use a transfer learning approach whereby the model is first trained on a synthetic dataset replicating features of the original objects. In this study, the objects were smartphone photographs of near-complete Roman pottery vessels from the collection of the Museum of London.
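The two-stage recipe described here follows the standard transfer-learning pattern: pretrain on plentiful synthetic data, then fine-tune a small head on the scarce real data. The PyTorch sketch below is generic, not the paper's exact pipeline; the class count of 8 and the random tensors standing in for rendered and photographed vessels are our placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stage 1: train the whole network on the synthetic dataset
# (random tensors stand in for rendered vessel images).
model = resnet18(weights=None, num_classes=8)   # 8 hypothetical vessel classes
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
synthetic_x = torch.randn(16, 3, 224, 224)
synthetic_y = torch.randint(0, 8, (16,))
for _ in range(3):                               # a few illustrative steps
    opt.zero_grad()
    loss_fn(model(synthetic_x), synthetic_y).backward()
    opt.step()

# Stage 2: freeze the backbone and retrain only a fresh head
# on the small set of real photographs.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 8)   # new head, trainable by default
opt = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
real_x = torch.randn(8, 3, 224, 224)             # stand-in for museum photos
real_y = torch.randint(0, 8, (8,))
for _ in range(3):
    opt.zero_grad()
    loss_fn(model(real_x), real_y).backward()
    opt.step()
```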
This work is driven by a practical question: how to correct Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modifying the legacy AI system, we propose special 'external' devices: correctors.
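A corrector can be sketched as a cheap linear discriminant fitted once to the legacy system's recorded mistakes, leaving the legacy model itself untouched. The numpy version below is schematic and rests on our own assumptions (feature matrices for correct and erroneous cases, a single fallback output); the authors' constructions are more general.

```python
import numpy as np

def fit_corrector(feats_ok, feats_err):
    """One-shot Fisher discriminant separating error cases from correct ones."""
    d = feats_ok.shape[1]
    mu_ok, mu_err = feats_ok.mean(0), feats_err.mean(0)
    cov = np.cov(np.vstack([feats_ok, feats_err]).T) + 1e-3 * np.eye(d)
    w = np.linalg.solve(cov, mu_err - mu_ok)     # discriminant direction
    theta = w @ (mu_ok + mu_err) / 2.0           # midpoint threshold
    return w, theta

def corrected_predict(legacy_predict, features, w, theta, fallback):
    """Leave the legacy AI untouched; override only the flagged inputs."""
    flagged = features @ w > theta               # inputs resembling past errors
    out = legacy_predict(features)
    out[flagged] = fallback                      # e.g., a safe default label
    return out
```

Fitting is a single linear solve, so the correction is quick and non-iterative, as the abstract requires.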
The phenomenon of stochastic separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities. In high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust Fisher discriminant (i.e., it is Fisher separable). Errors, or clusters of errors, can likewise be separated from the rest of the data.
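A quick numpy experiment makes the claim concrete: for an i.i.d. Gaussian sample, the fraction of points that are Fisher-separable from all others (in the threshold form ⟨x, y⟩ ≤ α⟨x, x⟩ used in this line of work) rises towards one as the dimension grows. The sample size and α below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 1000, 0.8

for d in (2, 10, 100, 1000):
    X = rng.standard_normal((n, d))        # i.i.d. Gaussian sample
    G = X @ X.T                            # all inner products <x_i, x_j>
    diag = np.diag(G)
    # x_i is Fisher-separable from the rest if <x_i, x_j> <= alpha <x_i, x_i>
    # holds for every j != i.
    mask = G <= alpha * diag[:, None]
    np.fill_diagonal(mask, True)
    frac = mask.all(axis=1).mean()
    print(f"d={d:5d}: fraction Fisher-separable = {frac:.3f}")
```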
High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the "curse of dimensionality" states that many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the "blessing of dimensionality", has attracted much attention.
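One concentration effect underlying both sides of the coin can be seen in a few lines: in high dimensions, pairwise distances in a random sample become nearly indistinguishable. This is our illustration, not an experiment from the article.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
for d in (2, 10, 100, 10_000):
    X = rng.standard_normal((n, d))
    dists = np.linalg.norm(X - X[0], axis=1)[1:]   # distances from one point
    print(f"d={d:6d}: max/min distance ratio = {dists.max() / dists.min():.2f}")
```

As d grows, the ratio approaches 1: nearest-neighbour contrast vanishes (a curse), while the same concentration makes simple linear separation almost always work (a blessing).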
We report a novel state of active matter: the swirlonic state. It consists of swirlons, quasi-particles formed by groups of active particles orbiting their common center of mass. These quasi-particles demonstrate a surprising behavior: in response to an external load they move with a constant velocity proportional to the applied force, just as objects in viscous media do.
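The reported force-velocity response is the hallmark of overdamped, viscous-like dynamics, v = F/γ at steady state. The toy integration below (parameters invented; this is not the swirlon model itself) just confirms that a particle with linear drag settles at a terminal velocity proportional to the applied force.

```python
# Illustrative only: integrate m dv/dt = F - gamma * v and verify that the
# terminal velocity equals F / gamma for several applied forces.
gamma, m, dt = 2.0, 1.0, 0.01
for F in (1.0, 2.0, 4.0):
    v = 0.0
    for _ in range(5000):
        v += dt * (F - gamma * v) / m
    print(f"F={F}: terminal velocity = {v:.3f} (expected F/gamma = {F / gamma})")
```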
Living neuronal networks in dissociated neuronal cultures are widely known for their ability to generate highly robust spatiotemporal activity patterns under various experimental conditions. Such patterns are often treated as neuronal avalanches that satisfy power-law scaling and thereby exemplify self-organized criticality in living systems. A crucial question is how these patterns can be explained and modeled in a way that is biologically meaningful, mathematically tractable, and yet broad enough to account for neuronal heterogeneity and complexity.
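A standard toy model for such avalanches (not the model proposed in the article) is a critical branching process: each spike triggers, on average, exactly one further spike, and avalanche sizes then follow the classic P(s) ~ s^(-3/2) power law. A minimal simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

def avalanche_size(p_branch=0.5, offspring=2, cap=10_000):
    """Each active unit triggers Binomial(2, 0.5) new ones, so the mean
    offspring is exactly 1 -- the critical point of the branching process."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = rng.binomial(offspring, p_branch, size=active).sum()
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])
# At criticality the size distribution has a heavy, power-law-like tail.
for s in (1, 10, 100):
    print(f"P(size >= {s:3d}) = {(sizes >= s).mean():.4f}")
```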
Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite this apparently obvious and widespread consensus on brain complexity, the seeds of the single-neuron revolution emerged in neuroscience in the 1970s. It brought many unexpected discoveries, including grandmother (or concept) cells and sparse coding of information in the brain.
We consider the fundamental question of how a legacy "student" Artificial Intelligence (AI) system could learn from a legacy "teacher" AI system or a human expert without retraining and, most importantly, without requiring significant computational resources. Here, "learning" is understood broadly as the ability of one system to mimic the responses of the other to incoming stimulation, and vice versa. We call such learning Artificial Intelligence knowledge transfer.
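One way to make non-iterative transfer concrete (our schematic, not the paper's construction) is a closed-form ridge-regression map from the student's internal features to the teacher's responses: nothing in either legacy system is retrained, and the map is obtained by a single linear solve. All names and shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: the "student" exposes internal features Phi(x); the "teacher"
# produces responses Y(x) to the same stimuli.
n, d_feat, d_out = 200, 50, 3
Phi = rng.standard_normal((n, d_feat))                 # student features
W_true = rng.standard_normal((d_feat, d_out)) / np.sqrt(d_feat)
Y = Phi @ W_true + 0.1 * rng.standard_normal((n, d_out))  # teacher responses

# One-shot knowledge transfer: closed-form ridge map, no iterative training.
lam = 1e-2
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d_feat), Phi.T @ Y)

mimic = Phi @ W                                        # student mimics teacher
print("mean-squared mimicry error:", np.mean((mimic - Y) ** 2))
```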
Social learning is widely observed in many species. Less experienced agents copy successful behaviors exhibited by more experienced individuals. Nevertheless, the dynamical mechanisms behind this process remain largely unknown.
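A minimal caricature of the copying mechanism (ours; the article asks precisely what richer dynamics underlie it) is imitation dynamics on a ring, where an agent adopts a neighbour's behaviour whenever that behaviour earns a higher payoff:

```python
import numpy as np

rng = np.random.default_rng(5)

n_agents, steps = 100, 2000
behaviour = rng.integers(0, 2, n_agents)      # two behaviours, 0 and 1
payoff = np.array([0.4, 0.6])                 # behaviour 1 pays more (assumed)

for _ in range(steps):
    i = int(rng.integers(n_agents))
    j = (i + rng.choice([-1, 1])) % n_agents  # a random ring neighbour
    if payoff[behaviour[j]] > payoff[behaviour[i]]:
        behaviour[i] = behaviour[j]           # copy the more successful agent

print("fraction adopting the better behaviour:", behaviour.mean())
```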
Complex networks emerging in natural and human-made systems tend to assume small-world structure. Is there a common mechanism underlying their self-organisation? Our computational simulations show that network diffusion (traffic flow or information transfer) steers network evolution towards the emergence of complex network structures. The emergence is brought about through adaptive rewiring: progressive adaptation of structure to use, creating shortcuts where network diffusion is intensive while annihilating underused connections.
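The adaptive-rewiring loop can be sketched directly: run a diffusion process (here, random walkers), add a shortcut where diffusion is intensive, and delete the least-used edge. The networkx sketch below is a toy version under our own parameter choices, not the authors' exact model; starting from a ring lattice, it tends to shorten paths while retaining clustering.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)

n = 100
G = nx.watts_strogatz_graph(n, k=4, p=0.0)   # ring lattice: clustered, long paths
ekey = lambda u, v: (u, v) if u < v else (v, u)
traffic = {ekey(*e): 0 for e in G.edges}

for _ in range(300):
    # Diffusion: one random walker; record how often each edge is traversed.
    walk = [int(rng.integers(n))]
    for _ in range(10):
        walk.append(int(rng.choice(list(G.neighbors(walk[-1])))))
    for u, v in zip(walk, walk[1:]):
        traffic[ekey(u, v)] = traffic.get(ekey(u, v), 0) + 1
    # Adaptive rewiring: shortcut the walk's endpoints (intensive diffusion)
    # and remove the least-used edge that does not disconnect the graph.
    u, v = walk[0], walk[-1]
    if u != v and not G.has_edge(u, v):
        G.add_edge(u, v)
        traffic[ekey(u, v)] = 1
        for e in sorted(G.edges, key=lambda e: traffic.get(ekey(*e), 0)):
            G.remove_edge(*e)
            if nx.is_connected(G):
                traffic.pop(ekey(*e), None)
                break
            G.add_edge(*e)                   # removal would disconnect: undo

print("clustering:", round(nx.average_clustering(G), 3))
print("average path length:", round(nx.average_shortest_path_length(G), 2))
```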