Incremental extreme learning machines (I-ELMs) can automatically determine the structure of neural networks and achieve high learning speeds. However, while adding hidden nodes, I-ELMs may include unnecessary nodes that have little relevance to the target. Several studies have addressed this problem by measuring the relevance between hidden nodes and outputs and adding or removing hidden nodes accordingly. Random hidden nodes have the advantage of creating diverse patterns, but nodes whose patterns have little or no relevance to the target can still be added, inflating the network size. Unlike existing I-ELMs, which rely solely on random hidden nodes, we propose a compact I-ELM algorithm that first adds linear regression nodes and then applies a method ensuring that each new hidden node produces a pattern different from the existing ones. On benchmark data, we confirmed that the proposed method constructs a more compact neural network, with fewer hidden nodes than existing I-ELMs.
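To make the incremental procedure concrete, below is a minimal sketch of the classic I-ELM loop (random hidden nodes, one added per step, with its output weight set analytically and the residual updated). This is an illustration of the baseline I-ELM that the abstract contrasts against, not the proposed compact variant; the sigmoid activation, uniform weight ranges, and stopping tolerance are all assumptions chosen for the example.

```python
import numpy as np

def i_elm(X, y, max_nodes=50, tol=1e-3, rng=None):
    """Baseline I-ELM sketch: add one random sigmoid hidden node per step,
    compute its output weight by least squares against the current residual,
    and stop when the residual norm falls below tol or max_nodes is reached."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    e = y.astype(float).copy()            # current residual (target minus prediction)
    nodes = []                            # list of (input weights, bias, output weight)
    for _ in range(max_nodes):
        w = rng.uniform(-1, 1, d)         # random input weights (assumed range)
        b = rng.uniform(-1, 1)            # random bias (assumed range)
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # hidden-node activations
        beta = (h @ e) / (h @ h)          # optimal output weight for this node alone
        e = e - beta * h                  # residual shrinks (or stays) at every step
        nodes.append((w, b, beta))
        if np.linalg.norm(e) < tol:
            break
    return nodes

def predict(nodes, X):
    """Sum the weighted activations of all accepted hidden nodes."""
    out = np.zeros(X.shape[0])
    for w, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ w + b)))
    return out
```

Because each output weight minimizes the residual for that node, the training error is non-increasing as nodes are added; the problem the paper targets is that many such random nodes contribute almost nothing, which is why relevance- and diversity-based node selection yields a smaller network.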
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11445478
DOI: http://dx.doi.org/10.1038/s41598-024-74446-w