In this paper, we explore some aspects of the problem of online unsupervised learning of a switching time series, i.e., a time series which is generated by a combination of several alternately activated sources. This learning problem can be solved by a two-stage approach: 1) separating and assigning each incoming datum to a specific dataset (one dataset corresponding to each source) and 2) developing one model per dataset (i.e., one model per source). We introduce a general data allocation (DA) methodology, which combines the two steps into an iterative scheme: existing models compete for the incoming data; data assigned to each model are used to refine the model. We distinguish between two modes of DA: in parallel DA, every incoming datablock is allocated to the model with lowest prediction error; in serial DA, the incoming datablock is allocated to the first model with prediction error below a prespecified threshold. We present sufficient conditions for asymptotically correct allocation of the data. We also present numerical experiments to support our theoretical analysis.
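The competition-and-refinement loop described above can be sketched in a few lines. The paper does not specify the model class, so this sketch assumes toy online AR(1) predictors updated by recursive least squares; the class name `AR1Model` and the noise/threshold values are illustrative assumptions, not the paper's method.

```python
import random

class AR1Model:
    """Toy AR(1) predictor x_t ~ a * x_{t-1}; 'a' is refined online.
    Illustrative stand-in for the source models (an assumption)."""
    def __init__(self, a0):
        self.a = a0
        self.sxx = 1e-6  # running least-squares statistics
        self.sxy = 0.0

    def error(self, block):
        # squared one-step prediction error over a datablock of (x, y) pairs
        return sum((y - self.a * x) ** 2 for x, y in block)

    def refine(self, block):
        # accumulate statistics and update the coefficient estimate
        for x, y in block:
            self.sxx += x * x
            self.sxy += x * y
        self.a = self.sxy / self.sxx

def parallel_da(models, block):
    """Parallel DA: allocate the block to the model with lowest error."""
    winner = min(models, key=lambda m: m.error(block))
    winner.refine(block)
    return winner

def serial_da(models, block, threshold):
    """Serial DA: allocate the block to the FIRST model whose error
    falls below the threshold; return None if no model qualifies."""
    for m in models:
        if m.error(block) < threshold:
            m.refine(block)
            return m
    return None
```

For example, feeding alternating datablocks generated by two AR(1) sources (coefficients 0.9 and -0.5) through `parallel_da` drives the two competing models' coefficient estimates toward the respective source coefficients, illustrating the asymptotically correct allocation the paper analyzes.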
DOI: http://dx.doi.org/10.1109/TNN.2002.804288