This paper takes a parallel learning approach to continual learning. We define parallel continual learning as learning a sequence of tasks in which the data for previous tasks, whose distributions may have shifted over time, remain available while new tasks are learned. We propose a parallel continual learning method that assigns a subnetwork to each task and simultaneously trains only the assigned subnetworks on their corresponding tasks.
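To make the idea concrete, below is a minimal sketch of per-task subnetwork training under parallel continual learning. It assumes a shared two-layer network whose hidden units are partitioned into equal, disjoint blocks (one block per task) and a simple joint training loop over all tasks at once; the masking scheme, toy data, and schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_TASKS, IN_DIM, HIDDEN, OUT_DIM = 3, 20, 30, 2

class MaskedMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(IN_DIM, HIDDEN)
        self.fc2 = nn.Linear(HIDDEN, OUT_DIM)
        # Assumption: each task is assigned an equal-sized, non-overlapping
        # block of hidden units as its subnetwork.
        block = HIDDEN // NUM_TASKS
        self.masks = []
        for t in range(NUM_TASKS):
            m = torch.zeros(HIDDEN)
            m[t * block:(t + 1) * block] = 1.0
            self.masks.append(m)

    def forward(self, x, task_id):
        h = F.relu(self.fc1(x))
        h = h * self.masks[task_id]  # keep only this task's subnetwork active
        return self.fc2(h)

model = MaskedMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy data for each task; because previous tasks' data stay available,
# every step draws a batch from all tasks and updates them in parallel.
tasks = [(torch.randn(64, IN_DIM), torch.randint(0, OUT_DIM, (64,)))
         for _ in range(NUM_TASKS)]

for step in range(100):
    opt.zero_grad()
    loss = 0.0
    for t, (x, y) in enumerate(tasks):
        loss = loss + F.cross_entropy(model(x, t), y)
    # Gradients for each task flow only through its masked hidden units,
    # so each task effectively trains only its assigned subnetwork.
    loss.backward()
    opt.step()
```

In this sketch the masking makes per-task gradients vanish on hidden units outside the task's block, which is one simple way to realize "training only the assigned subnetworks"; the paper's actual subnetwork assignment may differ.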