Here we test the prediction capabilities of the new generation of deep learning predictors in the more challenging setting of multistate multidomain proteins, using as a case study a coiled-coil family of Nucleotide-binding Oligomerization Domain-like (NOD-like) receptors, together with a few extra examples for reference. The results reveal a truly remarkable ability of these platforms to correctly predict the 3D structure of modules that fold into well-established topologies. Lower performance is observed in modeling the morphing regions of these proteins, such as the coiled coils. Predictors also display good sensitivity to local sequence drifts in arriving at a modeling solution for the overall modular configuration. In multivalued 1D-to-3D mappings, the platforms display a marked tendency to model proteins in their most compact configuration and must be steered, for instance by information filtering of their inputs, to drive modeling toward the sparser ones. Bias toward order and compactness is seen at the secondary structure level as well. All in all, using AI predictors to model multidomain multistate proteins is fruitful when global templates are at hand, but the above challenges have to be taken into account. In the absence of global templates, a piecewise modeling approach, with experimentally constrained reconstruction of the global architecture, may give more realistic results.
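The "information filtering" mentioned above is commonly realized in practice as subsampling of the multiple sequence alignment (MSA) supplied to the predictor, which weakens the co-evolutionary signal favoring the dominant compact state and can let alternative conformations emerge. A minimal illustrative sketch follows; the function name and toy alignment are hypothetical and not taken from the study:

```python
import random

def subsample_msa(msa, keep_fraction=0.25, seed=0):
    """Randomly retain a fraction of MSA rows, always keeping the query.

    Shallower alignments are one common way to bias structure predictors
    away from the single most compact solution toward sparser states.
    """
    rng = random.Random(seed)
    query, *homologs = msa
    n_keep = max(1, int(len(homologs) * keep_fraction))
    return [query] + rng.sample(homologs, n_keep)

# Toy alignment: a query sequence plus 20 homologous rows.
msa = ["MKVLT"] + [f"MKVL{c}" for c in "ACDEFGHIKLMNPQRSTVWY"]
shallow = subsample_msa(msa, keep_fraction=0.25)
```

In a real pipeline, several such subsampled alignments would each be passed to the predictor, and the resulting ensemble inspected for alternative conformations.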

Source: http://dx.doi.org/10.3390/ijms26020500
