Optical tweezers (OT) have become an essential technique in several fields of physics, chemistry, and biology, serving as precise micromanipulation tools and microscopic force transducers. Quantitative measurements require accurate calibration of the trap stiffness and of the diffusion constant of the trapped particle. This is typically done with statistical estimators constructed from the position signal of the particle, recorded by a digital camera or a quadrant photodiode.
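For orientation, a minimal sketch of one standard calibration route (not necessarily the estimator used in this work): the equipartition method, k = k_B T / Var(x). The bead parameters and synthetic Ornstein-Uhlenbeck signal below are illustrative placeholders.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def equipartition_stiffness(x, T=298.0):
    """Trap stiffness from a recorded position signal x (in meters)
    via the equipartition theorem: k = kB*T / Var(x)."""
    return kB * T / np.var(x)

# Synthetic test: overdamped bead in a harmonic trap (Ornstein-Uhlenbeck)
rng = np.random.default_rng(0)
k_true, gamma, T, dt, n = 1e-6, 2e-8, 298.0, 1e-4, 200_000
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i-1] - (k_true / gamma) * x[i-1] * dt \
           + np.sqrt(2 * kB * T / gamma * dt) * rng.standard_normal()
print(f"true k = {k_true:.2e} N/m, estimated k = "
      f"{equipartition_stiffness(x, T):.2e} N/m")
```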
Exponential random graphs are important for modeling the structure of real-world complex networks. Here we solve the two-star model with degree-degree correlations in the sparse regime. The model constrains the average correlation between the degrees of adjacent nodes (nearest neighbors) and between the degrees at the end points of two-stars (next-nearest neighbors).
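As a point of reference, here is a generic Metropolis sampler for the textbook two-star Hamiltonian (edges plus two-stars); note this is the classic variant, not the correlation-constrained model solved in the paper, and the couplings are arbitrary.

```python
import numpy as np

def c2(k):                        # two-stars centered on a degree-k node
    return k * (k - 1) / 2.0

def sample_two_star(n=50, J1=-1.0, J2=0.02, sweeps=500, seed=1):
    """Metropolis sampler for P(A) ~ exp(J1*E + J2*S), with E the edge
    count and S = sum_i c2(k_i) the two-star count (beta absorbed in J's)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    deg = A.sum(axis=1)
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n), rng.integers(n)
        if i >= j:
            continue
        s = 1 - 2 * A[i, j]                        # +1 add edge, -1 remove
        dS = c2(deg[i] + s) - c2(deg[i]) + c2(deg[j] + s) - c2(deg[j])
        if rng.random() < np.exp(J1 * s + J2 * dS):
            A[i, j] = A[j, i] = 1 - A[i, j]
            deg[i] += s
            deg[j] += s
    return A

A = sample_two_star()
print("mean degree:", A.sum(axis=1).mean())
```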
We introduce a powerful analytic method to study the statistics of the number N_{A}(γ) of eigenvalues inside any smooth Jordan curve γ in the complex plane for infinitely large non-Hermitian random matrices A. Our generic approach can be applied to different random matrix ensembles of a mean-field type, even when the analytic expression for the joint distribution of eigenvalues is not known. We illustrate the method on the adjacency matrices of weighted random graphs with asymmetric couplings, for which standard random-matrix tools are inapplicable, and obtain explicit results for the diluted real Ginibre ensemble.
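The empirical counterpart of N_{A}(γ) for a circular γ is easy to sample; the sketch below draws a sparse asymmetric random matrix (the precise dilution convention of the paper's ensemble is an assumption here) and counts eigenvalues inside a circle.

```python
import numpy as np

def count_inside_circle(n=500, c=4.0, z0=0.0, radius=0.5, seed=0):
    """Draw a diluted real asymmetric matrix with mean degree c and
    count eigenvalues inside the circle |z - z0| < radius."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n, n)) < c / n           # independent sparse entries
    J = mask * rng.standard_normal((n, n)) / np.sqrt(c)
    ev = np.linalg.eigvals(J)
    return np.sum(np.abs(ev - z0) < radius)

counts = [count_inside_circle(seed=s) for s in range(20)]
print("mean, var of N:", np.mean(counts), np.var(counts))
```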
A 1929 Gedankenexperiment proposed by Szilárd, often referred to as "Szilárd's engine", has served as a foundation for computing fundamental thermodynamic bounds on information processing. While Szilárd's original box could be partitioned into two halves and contained a single gas molecule, here we calculate the maximal average work that can be extracted in a system of N particles and q partitions, given an observer who counts the molecules in each partition and a work-extraction mechanism limited to pressure equalization. We find that the average extracted work is proportional to the mutual information between the one-particle position and the vector of counts of how many particles are in each partition.
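That mutual information can be evaluated directly for small systems; a minimal sketch, assuming q equal partitions (the paper's setting may be more general):

```python
from itertools import product
from math import factorial, log, prod

def mutual_information_nats(N=4, q=2):
    """I(X; n): X = partition holding a tagged particle, n = multinomial
    count vector of all N particles over q equal partitions. The abstract's
    claim: average extracted work is proportional to kB*T * I(X; n)."""
    I = log(q)                                    # H(X), X uniform over q
    for n in product(range(N + 1), repeat=q):
        if sum(n) != N:
            continue
        p = factorial(N) / prod(factorial(c) for c in n) / q**N
        h = -sum((c / N) * log(c / N) for c in n if c > 0)  # H(X | n)
        I -= p * h
    return I

for N in (1, 2, 4, 8):
    print(f"N={N}: I = {mutual_information_nats(N):.4f} nats")
```

For N=1, q=2 this returns ln 2, recovering the classic Szilárd bound of k_B T ln 2 per cycle.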
Epidemiological models usually contain a set of parameters that must be adjusted based on available observations. Once a model has been calibrated, it can be used as a forecasting tool to make predictions and to evaluate contingency plans. It is customary to employ only point estimators of model parameters for such predictions.
Despite the importance of having robust estimates of the time-asymptotic total number of infections, early estimates for COVID-19 showed enormous fluctuations. Using COVID-19 data from different countries, we show that predictions are extremely sensitive to the reporting protocol and crucially depend on the last available data point before the maximum number of daily infections is reached. We propose a physical explanation for this sensitivity, using a susceptible-exposed-infected-recovered model, where the parameters are stochastically perturbed to simulate the difficulty in detecting patients, the different confinement measures taken by different countries, and changes in the characteristics of the virus.
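A minimal sketch of the underlying mechanism, assuming a basic deterministic SEIR model with a stochastically perturbed transmission rate (the rates below are placeholders, not the paper's fitted values):

```python
import numpy as np

def seir_total(beta, sigma=1/5.2, gamma=1/7.0, N=1e7, I0=10.0,
               days=400, dt=0.05):
    """Forward-Euler SEIR integration; returns total infections R(t_end)."""
    S, E, I, R = N - I0, 0.0, I0, 0.0
    for _ in range(int(days / dt)):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S, E, I, R = S + dS*dt, E + dE*dt, I + dI*dt, R + dR*dt
    return R

# Perturb the transmission rate by a few percent and watch the forecast
# of the asymptotic total number of infections shift substantially.
rng = np.random.default_rng(2)
totals = sorted(seir_total(0.3 * (1 + 0.1 * rng.standard_normal()))
                for _ in range(10))
print([f"{t:.3e}" for t in totals])
```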
We study the one-dimensional motion of a Brownian particle inside a confinement bounded by two reactive boundaries that can partially reflect or absorb the particle. Understanding the effects of such boundaries is important in physics, chemistry, and biology. We compute the probability density of the particle displacement exactly, from which we derive expressions for the survival probability and the mean absorption time as functions of the reactive coefficients.
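The exact results can be checked against simulation; a sketch using one common discretization of a partially absorbing (Robin) boundary, where a walker touching the wall is absorbed with probability proportional to κ√Δt and otherwise reflected (an Erban-Chapman-type rule, assumed here rather than taken from the paper):

```python
import numpy as np

def mean_absorption_time(D=1.0, L=1.0, kappa=5.0, dt=1e-4,
                         n_walkers=200, seed=3):
    """Brownian walkers on [0, L] with partially absorbing walls."""
    rng = np.random.default_rng(seed)
    p_abs = kappa * np.sqrt(np.pi * dt / D)     # absorption prob on contact
    times = []
    for _ in range(n_walkers):
        x, t = L / 2, 0.0                       # start at the midpoint
        while True:
            x += np.sqrt(2 * D * dt) * rng.standard_normal()
            t += dt
            if x <= 0.0 or x >= L:
                if rng.random() < p_abs:
                    times.append(t)
                    break
                x = -x if x < 0 else 2 * L - x  # reflect back inside
    return np.mean(times)

print("mean absorption time ~", mean_absorption_time())
```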
The problem of efficiently reconstructing tomographic images can be mapped onto a Bayesian inference problem over the space of pixel densities. Solutions to this problem are given by pixel assignments that are compatible with the tomographic measurements and maximize a posterior probability density. This maximization can be performed with standard local optimization tools when the log-posterior is concave, but it is generally intractable for realistic nonconcave priors that reflect typical image features such as smoothness or sharpness.
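To make the tractable case concrete, here is a toy MAP reconstruction with a Gaussian (log-concave) smoothness prior, which reduces to a closed-form linear solve; the measurement matrix and prior weight are illustrative, not the paper's setup.

```python
import numpy as np

# MAP with a log-concave prior: minimize ||A x - y||^2 + lam * ||D x||^2
rng = np.random.default_rng(4)
n_pix, n_meas = 64, 40
x_true = np.cumsum(rng.standard_normal(n_pix)) * 0.1       # smooth "image"
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_pix)  # measurements
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

D = (np.eye(n_pix) - np.eye(n_pix, k=1))[:-1]   # discrete gradient
lam = 0.5
x_map = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print("relative error:",
      np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```

With a nonconcave prior (e.g., one favoring sharp edges), this single linear solve is no longer available and local optimizers can get trapped, which is the regime the paper addresses.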
We study the Ginibre ensemble of N×N complex random matrices and compute exactly, for any finite N, the full distribution as well as all the cumulants of the number N_{r} of eigenvalues within a disk of radius r centered at the origin. In the limit of large N, when the average density of eigenvalues becomes uniform over the unit disk, we show that for 0 < r < 1 […]
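The counting statistics N_{r} is straightforward to sample numerically; a minimal sketch (standard complex Ginibre normalization assumed):

```python
import numpy as np

def ginibre_count(N=200, r=0.6, samples=100, seed=5):
    """Sample complex Ginibre matrices (iid complex Gaussian entries of
    variance 1/N) and count eigenvalues in the disk of radius r."""
    rng = np.random.default_rng(seed)
    counts = np.empty(samples)
    for s in range(samples):
        G = (rng.standard_normal((N, N)) +
             1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
        counts[s] = np.sum(np.abs(np.linalg.eigvals(G)) < r)
    return counts

c = ginibre_count()
print("mean/N:", c.mean() / 200, " (circular law predicts r^2 =", 0.6**2, ")")
print("variance:", c.var())
```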
Due to their conceptual and mathematical simplicity, Erdős-Rényi or classical random graphs remain a fundamental paradigm for modeling complex interacting systems in several areas. Although condensation phenomena have been widely considered in complex network theory, the condensation of degrees has hitherto eluded careful study. Here we show that the degree statistics of the classical random graph model undergoes a first-order phase transition between a Poisson-like distribution and a condensed phase, the latter characterized by a large fraction of nodes having degrees in a limited sector of their configuration space.
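The Poisson-like baseline is easy to verify by direct sampling; a sketch (parameters arbitrary):

```python
import numpy as np
from math import exp, factorial

def degree_histogram(n=2000, c=5.0, seed=6):
    """Sample an Erdos-Renyi graph G(n, p=c/n) and compare the empirical
    degree distribution with the Poisson(c) prediction."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < c / n, 1).astype(int)
    deg = (upper + upper.T).sum(axis=1)
    for k in range(10):
        emp = np.mean(deg == k)
        poi = exp(-c) * c**k / factorial(k)
        print(f"k={k}: empirical {emp:.4f}  Poisson {poi:.4f}")

degree_histogram()
```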
Pseudo-random number generators are widely used in many branches of science, mainly in applications related to Monte Carlo methods, although they are deterministic in design and, therefore, unsuitable for tackling fundamental problems in security and cryptography. The natural laws of the microscopic realm provide a fairly simple method to generate non-deterministic sequences of random numbers, based on measurements of quantum states. In practice, however, the experimental devices on which quantum random number generators are based are often unable to pass some tests of randomness.
We present a complete theory for the full particle statistics of the positions of bulk and extremal particles in a one-dimensional Coulomb gas (CG) with an arbitrary potential, in both the typical and large-deviation regimes. Typical fluctuations are described by a universal function that depends solely on the general properties of the external potential. The rate function controlling large deviations is, rather unexpectedly, not strictly convex and has a discontinuous third derivative around its minimum for both extremal and bulk particles.
We develop a theoretical approach to compute the conditioned spectral density of N×N noninvariant random matrices in the limit N→∞. This large deviation observable, defined as the eigenvalue distribution conditioned to have a fixed fraction k of eigenvalues smaller than x∈R, provides the spectrum of random matrix samples that deviate atypically from the average behavior. We apply our theory to sparse random matrices and unveil strikingly different and generic properties, namely, (i) their conditioned spectral density has compact support, (ii) it does not experience any abrupt transition for k around its typical value, and (iii) its eigenvalues do not accumulate at x.
Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_{N}(x) smaller than x∈R^{+}, from which all cumulants of I_{N}(x) and the rate function Ψ_{x}(k) controlling its large-deviation probability Prob[I_{N}(x)=kN]≍e^{-NΨ_{x}(k)} follow.
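The typical fluctuations of I_{N}(x) (though not the large-deviation tails, which require rare-event methods) can be probed by brute force; a sketch whose dilution convention and normalization are assumptions:

```python
import numpy as np

def index_counts(N=200, M=400, c=6.0, x=1.0, samples=100, seed=7):
    """Empirical I_N(x) for diluted Wishart matrices W = X X^T / c,
    where X is N x M with sparse iid Gaussian entries, about c nonzero
    entries per row."""
    rng = np.random.default_rng(seed)
    out = np.empty(samples, dtype=int)
    for s in range(samples):
        mask = rng.random((N, M)) < c / M
        X = mask * rng.standard_normal((N, M))
        W = X @ X.T / c
        out[s] = np.sum(np.linalg.eigvalsh(W) < x)
    return out

I = index_counts()
print("mean fraction below x:", I.mean() / 200, "  var:", I.var())
```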
When complex systems are driven to depletion by some external factor, their non-stationary dynamics can exhibit intermittent behaviour, alternating between relative tranquility and bursts of activity whose consequences are often catastrophic. To understand and ultimately be able to predict such dynamics, we propose an underlying mechanism based on sharp thresholds of a local generalized energy density that naturally leads to negative feedback. We find a transition from a continuous regime to an intermittent one, in which avalanches can be predicted despite the stochastic nature of the process.
Random number generation plays an essential role in technology, with important applications in areas ranging from cryptography to Monte Carlo methods and other probabilistic algorithms. All such applications require high-quality sources of random numbers, yet effective methods for assessing whether a source produces truly random sequences are still missing. Current methods either do not rely on a formal definition of randomness (the NIST test suite) or are inapplicable in principle (the characterization derived from algorithmic information theory), since the latter would require testing all possible computer programs that could produce the sequence under analysis.
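To illustrate what a statistical (rather than algorithmic) test looks like, here is the simplest NIST-style check, the frequency (monobit) test; this is background, not the method proposed in the paper.

```python
import numpy as np
from math import erfc, sqrt

def monobit_test(bits):
    """NIST-style frequency test: under the null hypothesis of iid fair
    bits, S/sqrt(n) is ~ N(0,1); returns the two-sided p-value."""
    n = len(bits)
    s = np.sum(2 * np.asarray(bits) - 1)        # map {0,1} -> {-1,+1}
    return erfc(abs(s) / sqrt(2 * n))

rng = np.random.default_rng(8)
fair = rng.integers(0, 2, 10_000)
biased = (rng.random(10_000) < 0.52).astype(int)
print("fair   p =", monobit_test(fair))
print("biased p =", monobit_test(biased))
```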
We present a general method to obtain the exact rate function Ψ_{[a,b]}(k) controlling the large deviation probability Prob[I_{N}[a,b]=kN]≍e^{-NΨ_{[a,b]}(k)} that an N×N sparse random matrix has I_{N}[a,b]=kN eigenvalues inside the interval [a,b]. The method is applied to study the eigenvalue statistics in two distinct examples: (i) the shifted index number of eigenvalues for an ensemble of Erdős-Rényi graphs and (ii) the number of eigenvalues within a bounded region of the spectrum for the Anderson model on regular random graphs. A salient feature of the rate function in both cases is that, unlike rotationally invariant random matrices, it is asymmetric with respect to its minimum.
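For example (ii), the observable I_{N}[a,b] itself is easy to sample; a sketch assuming the convention H = A + diag(ε) with uniform on-site disorder (sign and normalization conventions are assumptions):

```python
import numpy as np
import networkx as nx

def anderson_count(n=1000, d=3, W=2.0, a=-1.0, b=1.0, seed=9):
    """Count eigenvalues in [a, b] for the Anderson model on a random
    regular graph: H = A + diag(eps), eps_i uniform on [-W/2, W/2]."""
    rng = np.random.default_rng(seed)
    G = nx.random_regular_graph(d, n, seed=seed)
    H = nx.to_numpy_array(G) + np.diag(rng.uniform(-W / 2, W / 2, n))
    ev = np.linalg.eigvalsh(H)
    return np.sum((ev >= a) & (ev <= b))

print("fraction of eigenvalues in [a,b]:", anderson_count() / 1000)
```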
Phys Rev E Stat Nonlin Soft Matter Phys, November 2014
The mechanical properties of molecules are nowadays probed by single-molecule manipulation experiments, which test polymer features at the nanometric scale. Yet devising mathematical models that yield insight beyond the commonly studied force-elongation relation is typically hard. Here we draw on techniques developed in the context of disordered systems to solve models for single- and double-stranded DNA stretching in the limit of a long polymeric chain.
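As a baseline for the force-elongation relation mentioned above, a sketch of the freely jointed chain law ⟨x⟩/L = coth(fb/k_BT) − k_BT/(fb); the Kuhn length below is a placeholder, and this standard model is context rather than the paper's disordered-systems solution.

```python
import numpy as np

def fjc_extension(f, b=0.5e-9, T=298.0):
    """Freely jointed chain relative extension at force f (newtons),
    Kuhn length b (meters): the Langevin function coth(u) - 1/u."""
    kB = 1.380649e-23
    u = f * b / (kB * T)
    return 1.0 / np.tanh(u) - 1.0 / u

forces = np.logspace(-13, -10, 7)      # roughly 0.1 pN to 100 pN
for f, x in zip(forces, fjc_extension(forces)):
    print(f"f = {f:.1e} N  ->  relative extension {x:.3f}")
```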
Phys Rev E Stat Nonlin Soft Matter Phys, November 2014
We study the statistics of the condition number κ=λ_{max}/λ_{min} (the ratio between the largest and smallest squared singular values) of N×M Gaussian random matrices. Using a Coulomb fluid technique, we derive analytically, for large N, the cumulative distribution P(κ < x) […]
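The empirical cumulative distribution of κ is simple to generate for comparison; a minimal sketch (matrix sizes and thresholds arbitrary):

```python
import numpy as np

def kappa_samples(N=50, M=100, samples=500, seed=10):
    """kappa = lambda_max/lambda_min of W = X X^T for N x M Gaussian X,
    i.e. the ratio of extreme squared singular values of X."""
    rng = np.random.default_rng(seed)
    out = np.empty(samples)
    for s in range(samples):
        lam = np.linalg.eigvalsh(rng.standard_normal((N, M)) @
                                 rng.standard_normal((N, M)).T * 0 +
                                 (X := rng.standard_normal((N, M))) @ X.T)
        out[s] = lam[-1] / lam[0]          # eigvalsh returns ascending
    return out

k = kappa_samples()
for x in (20, 30, 50, 80):
    print(f"P(kappa < {x}) ~ {np.mean(k < x):.3f}")
```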
Phys Rev E Stat Nonlin Soft Matter Phys, October 2014
We compute the full order statistics of a one-dimensional gas of spinless fermions (or, equivalently, hard bosons) in a harmonic trap at zero temperature, including its large deviation tails. The problem amounts to computing the probability distribution of the kth smallest eigenvalue λ(k) of a large dimensional Gaussian random matrix. We find that this probability behaves for large N as P[λ(k)=x]≈exp[-βN^{2}ψ(k/N,x)], where β is the Dyson index of the ensemble.
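The random-matrix side of this mapping is easy to sample; a sketch for β=2 (GUE), with eigenvalues left unscaled so their support is roughly [-2√N, 2√N]:

```python
import numpy as np

def kth_eigenvalue_samples(N=100, k=10, samples=300, seed=11):
    """Sample the k-th smallest eigenvalue of GUE matrices (beta = 2),
    the random-matrix counterpart of the k-th fermion position."""
    rng = np.random.default_rng(seed)
    out = np.empty(samples)
    for s in range(samples):
        A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        H = (A + A.conj().T) / 2               # Hermitian GUE-type matrix
        out[s] = np.linalg.eigvalsh(H)[k - 1]  # ascending order
    return out

lam = kth_eigenvalue_samples()
print("mean:", lam.mean(), " std:", lam.std())
```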
Background: The energetics of cerebral activity critically relies on the functional and metabolic interactions between neurons and astrocytes. Important open questions include the relation between neuronal versus astrocytic energy demand, glucose uptake and intercellular lactate transfer, as well as their dependence on the level of activity.
Results: We have developed a large-scale, constraint-based network model of the metabolic partnership between astrocytes and glutamatergic neurons that allows for a quantitative appraisal of the extent to which stoichiometry alone drives the energetics of the system.
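For readers unfamiliar with constraint-based modeling, here is a toy flux-balance problem: maximize an output flux subject to steady-state stoichiometry S·v = 0 and flux bounds. The reactions, metabolites, and stoichiometric coefficients are purely illustrative, not taken from the paper's network.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: uptake, glycolysis, ox_phos, atp_out
S = np.array([[ 1, -1,  0,  0],   # glucose balance
              [ 0,  1, -1,  0],   # pyruvate balance
              [ 0,  2, 28, -1]])  # ATP balance (toy yields)
bounds = [(0, 1), (0, None), (0, None), (0, None)]  # uptake capped at 1
res = linprog(c=[0, 0, 0, -1],    # linprog minimizes, so maximize atp_out
              A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, " max ATP flux:", -res.fun)
```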