Publications by authors named "Shira Sardi"

Refractoriness is a fundamental property of excitable elements, such as neurons, indicating the probability of re-excitation within a given time lag, and is typically linked to the neuronal hyperpolarization that follows an evoked spike. Here we measured refractory periods (RPs) in neuronal cultures and observed that the average anisotropic absolute RP can exceed 10 ms, with a tail extending to 20 ms, independent of the stimulation frequency over a large range. This is an order of magnitude longer than anticipated and comparable to the decay time scale of the membrane potential.

Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten digit examples converge to zero as a power law of the database size. For rapid decision making with one training epoch, in which each example is presented only once to the trained network, the power-law exponent increased with the number of hidden layers. For the largest dataset, the obtained test error was estimated to be close to that of state-of-the-art algorithms trained for large numbers of epochs.
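
As a rough illustration of the scaling described above, the sketch below fits a power law error ≈ a·N^(−b) to test errors measured at several database sizes; the numbers are made-up placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical test errors at several database sizes (illustrative
# placeholder values, not measurements from the paper).
sizes = np.array([1_000, 2_000, 5_000, 10_000, 20_000, 50_000])
errors = np.array([0.120, 0.095, 0.071, 0.056, 0.044, 0.031])

# A power law error = a * N**(-b) is linear in log-log coordinates:
# log(error) = log(a) - b * log(N), so a least-squares fit on the
# logarithms recovers the exponent b.
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: error ~ {a:.2f} * N^(-{b:.2f})")
```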

Attempting to imitate the brain's functionalities, researchers have sought to bridge neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, we demonstrate that increased training frequency accelerates the neuronal adaptation process. This mechanism was implemented in artificial neural networks, where a local learning step size increases for coherent consecutive learning steps, and tested on a simple dataset of handwritten digits, MNIST.
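
A minimal sketch of that general idea, assuming an Rprop-style rule in which a per-weight step size grows when two consecutive gradients share the same sign and shrinks otherwise; the constants and the update rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def adaptive_step_update(w, grad, prev_grad, step, grow=1.2, shrink=0.5,
                         step_min=1e-6, step_max=1.0):
    """One update in which the local step size increases for coherent
    (same-sign) consecutive gradients and decreases otherwise.
    All constants are illustrative assumptions."""
    coherent = np.sign(grad) == np.sign(prev_grad)
    step = np.clip(np.where(coherent, step * grow, step * shrink),
                   step_min, step_max)
    w = w - step * np.sign(grad)   # sign-based step, as in Rprop
    return w, step

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0, 0.5])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = 2.0 * w
    w, step = adaptive_step_update(w, grad, prev_grad, step)
    prev_grad = grad
print("w after training:", w)
```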

Recently, deep learning algorithms have outperformed human experts in various tasks across several domains; however, their characteristics are distant from current knowledge of neuroscience. The simulation results of the biological learning algorithms presented here outperform state-of-the-art optimal learning curves in supervised learning of feedforward networks. The biological learning algorithms comprise asynchronous input signals with decaying input summation, weight adaptation, and multiple outputs per input signal.
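
The phrase "decaying input summation" can be pictured as a leaky sum of asynchronous inputs, sketched below under an assumed exponential decay; the time constant and the input values are illustrative, not taken from the paper.

```python
import numpy as np

def decaying_sum(input_times, amplitudes, t, tau=5.0):
    """Leaky summation: an input that arrived at time t_i contributes
    amplitude * exp(-(t - t_i) / tau) at readout time t.  tau is an
    assumed decay constant in the same units as the times."""
    dt = t - np.asarray(input_times, dtype=float)
    contrib = np.asarray(amplitudes, dtype=float) * np.exp(-dt / tau)
    return contrib[dt >= 0].sum()   # count only inputs that already arrived

# Three asynchronous unit inputs arriving at 1, 3, and 4 ms, read out at 5 ms.
print(decaying_sum([1.0, 3.0, 4.0], [1.0, 1.0, 1.0], t=5.0))
```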

Experimental evidence recently indicated that neural networks can learn in a different manner than was previously assumed, using adaptive nodes instead of adaptive links. Consequently, all links to a node undergo the same adaptation, resulting in cooperative nonlinear dynamics with oscillating effective link weights. Here we show that the stationary log-normal distribution of effective link weights observed in biological neural networks is a result of such adaptive nodes, even though each effective link weight varies significantly in time.
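
A toy simulation can make the log-normal outcome plausible: if every node repeatedly rescales all of its incoming links by a common random factor, the accumulated multiplicative updates become additive in log space and the weights drift toward a log-normal shape. The update rule and constants below are assumptions for illustration, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, links_per_node, n_steps = 200, 50, 500

# Arbitrary positive initial link weights (the initial spread in log
# space is an assumption made only for illustration).
log_w = rng.normal(0.0, 0.2, size=(n_nodes, links_per_node))

# Node-based adaptation: at every step each node rescales ALL of its
# incoming links by the same random factor.  Repeated multiplication by
# positive random factors is additive in log space, so the weights
# drift toward a log-normal shape (a central-limit argument).
for _ in range(n_steps):
    shared_node_factor = rng.normal(0.0, 0.05, size=(n_nodes, 1))
    log_w += shared_node_factor

weights = np.exp(log_w)
print("log-weight mean / std:", np.log(weights).mean(), np.log(weights).std())
```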

Experimental and theoretical results reveal a new underlying mechanism for fast brain learning, dendritic learning, in contrast to decades of misdirected research in neuroscience based solely on slow synaptic plasticity. The presented paradigm indicates that learning occurs in closer proximity to the neuron, the computational unit; that dendritic strengths are self-oscillating; and that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, play a key role in plasticity. These new learning sites in the brain call for a reevaluation of current treatments for disordered brain functionality and for a better understanding of the chemical drugs and biological mechanisms needed to maintain, control, and enhance learning.

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, rather than to the network links, whose number is significantly larger. The fast nodal (neuronal) adaptation follows the node's relative anisotropic (dendritic) input timings, as indicated experimentally, similar to the slow learning mechanism currently attributed to the links (synapses).

Neurons are the computational elements that compose the brain, and their fundamental principles of activity have been known for decades. According to the long-standing computational scheme, each neuron sums the incoming electrical signals via its dendrites, and when the membrane potential reaches a certain threshold the neuron typically generates a spike along its axon. Here we present three types of experiments, using neuronal cultures, indicating that each neuron functions as a collection of independent threshold units.
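
To make the contrast concrete, here is a schematic comparison between a classical point neuron, which sums all dendritic inputs against a single threshold, and a neuron modeled as a collection of independent threshold units, one per dendrite; the threshold value and input numbers are illustrative assumptions.

```python
import numpy as np

THRESHOLD = 1.0   # single illustrative threshold, shared by both schemes

def point_neuron(dendrite_inputs):
    """Classical scheme: sum ALL dendritic inputs against one threshold."""
    return bool(np.sum(dendrite_inputs) >= THRESHOLD)

def multi_unit_neuron(dendrite_inputs):
    """Alternative scheme: each dendrite acts as an independent threshold
    unit, and the neuron spikes if ANY single dendrite crosses it."""
    return bool(np.any(np.asarray(dendrite_inputs) >= THRESHOLD))

# Three dendrites, each receiving a sub-threshold input.
inputs = [0.4, 0.4, 0.4]
print("point neuron spikes:     ", point_neuron(inputs))       # True, sum = 1.2
print("multi-unit neuron spikes:", multi_unit_neuron(inputs))  # False, none crosses
```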

We present an analytical framework that allows the quantitative study of the statistical dynamic properties of networks with adaptive nodes that have memory, and use it to examine the emergence of oscillations in networks with response failures. The oscillation frequency was quantitatively found to increase with the excitability of the nodes and with the average degree of the network, and to decrease with delays between nodes. For networks of networks, diverse cluster oscillation modes were found as a function of the topology.

The increasing number of recording electrodes enhances the capability of capturing the network's cooperative activity; however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in vitro, we show that the membrane potential of a single neuron is a reliable and highly sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network.
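
The reported (anti-)correlation can be quantified with a plain Pearson correlation between a single neuron's membrane-potential trace and the time-binned activity of the whole network; the synthetic traces below merely stand in for real recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for real recordings (assumptions for illustration):
# a slow macroscopic network rhythm and a single-neuron membrane potential
# that follows it, plus measurement noise.
t = np.arange(0.0, 10.0, 0.001)                        # 10 s sampled at 1 kHz
network_activity = np.sin(2 * np.pi * 1.0 * t)          # macroscopic rhythm
membrane_potential = 0.8 * network_activity + 0.2 * rng.standard_normal(t.size)

# Pearson correlation; values near +1 or -1 indicate that the single neuron
# tracks (or anti-tracks) the whole-network activity.
r = np.corrcoef(membrane_potential, network_activity)[0, 1]
print(f"correlation between single neuron and network: {r:.2f}")
```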

Catastrophic failures are complete and sudden collapses in the activity of large networks such as economies, electrical power grids, and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian recurrence of catastrophic failures, where the repetition times follow a multimodal distribution characterized by timescales of a few tenths of a second and of tens of seconds. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors.

The experimental study of neural networks requires simultaneous measurements of a massive number of neurons, while monitoring properties of the connectivity, synaptic strengths and delays. Current technological barriers make such a mission unachievable. In addition, as a result of the enormous number of required measurements, the estimated network parameters would differ from the original ones.

Broadband spontaneous macroscopic neural oscillations are rhythmic cortical firing patterns that were extensively examined during the last century; however, their possible origin is still controversial. In this work we show how macroscopic oscillations emerge in solely excitatory random networks, without topological constraints. We experimentally and theoretically show that these oscillations stem from a counterintuitive underlying mechanism: intrinsic stochastic neuronal response failures (NRFs).
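
A toy sketch of the NRF idea: a purely excitatory random network in which a node's probability of failing to respond grows with its recent activity, so population activity can collapse, recover, and repeat. Network size, connectivity, the failure rule, and the spontaneous-activation rate below are all illustrative assumptions, not the experimental or theoretical model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, degree, n_steps = 1000, 10, 2000

# Purely excitatory random connectivity: row i lists the targets of node i.
targets = rng.integers(0, n_nodes, size=(n_nodes, degree))

active = rng.random(n_nodes) < 0.05        # initial seed of activity
fatigue = np.zeros(n_nodes)                # drives the response failures
population_activity = []

for _ in range(n_steps):
    # A node is driven if at least one presynaptic neighbor was active.
    drive = np.zeros(n_nodes, dtype=bool)
    for src in np.flatnonzero(active):
        drive[targets[src]] = True
    drive |= rng.random(n_nodes) < 0.001   # weak spontaneous activation
    # Response failure: the chance of ignoring the drive grows with the
    # node's recent firing (fatigue); constants are illustrative.
    p_fail = 1.0 - np.exp(-fatigue)
    active = drive & (rng.random(n_nodes) >= p_fail)
    fatigue = 0.95 * fatigue + active      # accumulate, then slowly decay
    population_activity.append(active.mean())

trace = np.array(population_activity)
print(f"mean activity {trace.mean():.3f}, fluctuation std {trace.std():.3f}")
```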

Realizations of low firing rates in neural networks usually require globally balanced distributions of excitatory and inhibitory links, while the feasibility of temporal coding is limited by neuronal millisecond precision. We show that cooperation, governing global network features, emerges through nodal properties rather than link distributions. Using in vitro and in vivo experiments, we demonstrate microsecond precision of neuronal response timings under low stimulation frequencies, whereas moderate frequencies result in a chaotic neuronal phase characterized by degraded precision.
