Publications by authors named "R Zecchina"

Article Synopsis
  • Researchers are exploring the high-dimensional protein sequence space to understand how its geometric structure influences natural evolution and protein foldability.
  • Using advanced transformer models for structure prediction, they found that natural proteins are mostly located in wide, flat energy minima, a geometry reminiscent of optimization problems in machine learning.
  • Their specialized statistical-mechanics algorithms outperform traditional methods by identifying high-entropy valleys, showing that sampling these regions can yield sequences comparable in stability and features to natural proteins.

We study the binary and continuous negative-margin perceptrons as simple nonconvex neural-network models that learn random rules and associations. We analyze the geometry of the solution landscape in both models and find important similarities and differences. Both models exhibit subdominant minimizers that are extremely flat and wide.
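As a toy illustration of the setting described above (not the authors' actual analysis), the code below sets up a binary perceptron storing random input/label pairs and checks whether a candidate weight vector satisfies every pattern at a negative margin. The sizes, margin value, and normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 201, 100          # number of weights and of random patterns (illustrative)
kappa = -0.5             # negative margin: weaker-than-zero stability is allowed

xi = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
y = rng.choice([-1.0, 1.0], size=P)         # random labels (associations)

def is_solution(w):
    """Binary weights w solve the task if every pattern's normalized
    stability y * (xi . w) / sqrt(N) exceeds the margin kappa."""
    stabilities = y * (xi @ w) / np.sqrt(N)
    return bool(np.all(stabilities > kappa))

w = rng.choice([-1.0, 1.0], size=N)          # a random candidate configuration
print(is_solution(w))
```

With a negative margin the constraint set is larger than in the classical (kappa = 0) perceptron, which is what makes the model's landscape of flat, wide minimizers tractable to probe numerically.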


Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accuracy without overfitting. These are formidable results that defy the predictions of statistical learning theory and pose conceptual challenges for nonconvex optimization.
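The phenomenon described above can be demonstrated in miniature: a small overparameterized two-layer network, trained by plain gradient descent on pure-noise labels, drives its training loss down even though there is no signal to learn. This is a minimal sketch under invented sizes and hyperparameters, not the experiments of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, H = 20, 30, 100    # input dim, samples, hidden units (more weights than data)

X = rng.normal(size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)          # pure-noise labels

W1 = rng.normal(size=(N, H)) / np.sqrt(N)
w2 = rng.normal(size=H) / np.sqrt(H)

def forward(X):
    """Two-layer tanh network with scalar output."""
    return np.tanh(X @ W1) @ w2

loss0 = np.mean((forward(X) - y) ** 2)       # loss at initialization

lr = 0.05
for _ in range(3000):
    hid = np.tanh(X @ W1)
    err = hid @ w2 - y
    # gradients of the mean squared error w.r.t. both layers
    g2 = hid.T @ err / P
    g1 = X.T @ ((err[:, None] * w2) * (1 - hid ** 2)) / P
    w2 -= lr * g2
    W1 -= lr * g1

loss1 = np.mean((forward(X) - y) ** 2)       # loss after training
print(loss1 < loss0)                          # gradient descent fits the noise
print(float(np.mean(np.sign(forward(X)) == y)))
```

The point of the sketch is only that an overparameterized nonconvex model trained with vanilla gradient descent readily interpolates random labels; the generalization side of the puzzle requires held-out data, which this toy omits.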


The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems. In particular, the fact that learning algorithms based on simple variants of gradient methods are able to find near-optimal minima of highly nonconvex loss functions is an unexpected feature of neural networks. Moreover, such algorithms are able to fit the data even in the presence of noise, and yet they have excellent predictive capabilities.


The differing ability of polypeptide conformations to act as the native state of proteins has long been rationalized in terms of differing kinetic accessibility or thermodynamic stability. Building on the successful applications of physical concepts and sampling algorithms recently introduced in the study of disordered systems, in particular artificial neural networks, we quantitatively explore how well a quantity known as the local entropy describes the native state of model proteins. In lattice models and all-atom representations of proteins, we are able to efficiently sample high local entropy states and to provide a proof of concept of enhanced stability and folding rate.
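The local entropy mentioned above can be illustrated with a deliberately simple toy: count (via Monte Carlo) how many configurations at fixed Hamming distance from a reference state remain low in energy. The energy function, sizes, and thresholds here are invented for illustration; they stand in for the lattice and all-atom protein models of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50

# Toy "energy": number of spins misaligned with a fixed random field
h = rng.normal(size=N)
def energy(s):
    return float(np.sum(s * h < 0))

s_ref = np.sign(h)                   # ground state of this toy model, energy 0

def local_entropy(s, d, n_samples=2000, e_max=5.0):
    """Monte Carlo estimate of the log-fraction of configurations at
    Hamming distance d from s whose energy stays at or below e_max."""
    hits = 0
    for _ in range(n_samples):
        flip = rng.choice(N, size=d, replace=False)
        t = s.copy()
        t[flip] *= -1.0
        if energy(t) <= e_max:
            hits += 1
    return np.log(max(hits, 1) / n_samples)

s_rand = rng.choice([-1.0, 1.0], size=N)
print(local_entropy(s_ref, d=3))     # 0.0: every 3-flip neighbor stays low-energy
print(local_entropy(s_rand, d=3))    # strongly negative: few low-energy neighbors
```

A state with high local entropy is surrounded by many other low-energy states, which is the flatness property the abstract links to enhanced stability and folding rate.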
