We test the prediction capabilities of the new generation of deep learning structure predictors in the more challenging setting of multistate, multidomain proteins, using as a case study a coiled-coil family of Nucleotide-binding Oligomerization Domain-like (NOD-like) receptors, together with a few extra examples for reference. The results reveal a remarkable ability of these platforms to correctly predict the 3D structure of modules that fold into well-established topologies. Performance is lower when modeling the morphing regions of these proteins, such as the coiled coils. The predictors also display good sensitivity to local sequence drifts in the modeling of the overall modular configuration. In multivalued 1D-to-3D mappings, the platforms show a marked tendency to model proteins in their most compact configuration and must be retrained by information filtering to drive modeling toward the sparser ones. A bias toward order and compactness is seen at the secondary structure level as well. All in all, using AI predictors to model multidomain, multistate proteins is fruitful when global templates are at hand, but the challenges above must be taken into account. In the absence of global templates, a piecewise modeling approach with experimentally constrained reconstruction of the global architecture may give more realistic results.
DOI: http://dx.doi.org/10.3390/ijms26020500