Depending on what we mean by "explanation," challenges to the explanatory depth and reach of deep neural network models of visual and other forms of intelligent behavior may call for revisions both to the elementary building blocks of neural nets (the explananda) and to the ways in which experimental environments and training protocols are engineered (the explanantia). These two paths assume, and imply, sharply different conceptions of how an explanation explains and of the explanatory function of models.
DOI: http://dx.doi.org/10.1017/S0140525X23001632