Achieving continual learning in deep neural networks through pseudo-rehearsal

Author: Atkinson, Craig Robert

Date: 2020

Publisher: University of Otago

Type: Thesis



Neural networks are powerful computational models, capable of outperforming humans on a variety of tasks. Unlike humans, however, these networks tend to catastrophically forget previous information when learning new information. This thesis aims to solve this catastrophic forgetting problem so that a deep neural network can sequentially learn a number of complex reinforcement learning tasks. The primary model proposed by this thesis, termed RePR, prevents catastrophic forgetting by introducing a generative model and a dual memory system. The generative model learns to produce data representative of previously seen tasks. While a new task is being learnt, this generated data is rehearsed through a process called pseudo-rehearsal, which allows the network to learn the new task without forgetting previous tasks. The dual memory system splits learning into two systems: the short-term system is responsible only for learning the new task through reinforcement learning, and the long-term system retains knowledge of previous tasks as it is taught the new task by the short-term system. RePR was shown to learn and retain a short sequence of reinforcement learning tasks at above-human performance levels. Additionally, RePR substantially outperformed state-of-the-art solutions and prevented forgetting comparably to a model that rehearsed real data from previously learnt tasks. RePR achieved this without increasing in memory size as the number of tasks grows, revisiting previously learnt tasks, or directly storing data from previous tasks. Further results showed that RePR could be improved by informing the generator which image features are most important to retention, and that, when challenged by a longer sequence of tasks, RePR typically exhibited gradual rather than dramatic forgetting. Finally, results also demonstrated that RePR can successfully be adapted to other deep reinforcement learning algorithms.
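The pseudo-rehearsal idea in the abstract can be sketched in a toy form. The snippet below is an illustrative stand-in, not the thesis's implementation: RePR uses a GAN generator and deep reinforcement learning networks, whereas here a random linear "generator" and linear models stand in, and the names `generator`, `train`, `W_long`, and the 50/50 loss weighting are all hypothetical choices. The point it demonstrates is the mechanism itself: targets for the generated (pseudo) items are the long-term system's own current outputs, so fitting the new task while matching those outputs preserves old behaviour without storing any real data from previous tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(n):
    # Stand-in for the generative model: produces pseudo-items that
    # represent previously seen tasks (the thesis uses a GAN here).
    return rng.normal(size=(n, 4))

# Long-term system: a simple linear map standing in for the network
# that must retain knowledge of previous tasks.
W_long = rng.normal(size=(4, 2))

# New-task data; in RePR the targets would come from the short-term
# system that learnt the new task via reinforcement learning.
X_new = rng.normal(size=(32, 4))
y_new = X_new @ rng.normal(size=(4, 2))

# Pseudo-rehearsal pairs: generated inputs labelled with the long-term
# system's *current* responses, so old behaviour becomes a target.
X_pseudo = generator(32)
y_pseudo = X_pseudo @ W_long

def train(W0, use_rehearsal, steps=300, lr=0.05):
    """Gradient descent on the new-task loss, optionally mixed 50/50
    with a rehearsal loss on the generated pseudo-items."""
    W = W0.copy()
    for _ in range(steps):
        g = 2 * X_new.T @ (X_new @ W - y_new) / len(X_new)
        if use_rehearsal:
            g_reh = 2 * X_pseudo.T @ (X_pseudo @ W - y_pseudo) / len(X_pseudo)
            g = 0.5 * g + 0.5 * g_reh
        W -= lr * g
    return W

def drift(W):
    # How far the model's responses on pseudo-items have moved from
    # the original long-term responses: a proxy for forgetting.
    return float(np.mean((X_pseudo @ W - y_pseudo) ** 2))

drift_with = drift(train(W_long, use_rehearsal=True))
drift_without = drift(train(W_long, use_rehearsal=False))
print(f"drift with rehearsal:    {drift_with:.4f}")
print(f"drift without rehearsal: {drift_without:.4f}")
```

Running the sketch shows that training with the rehearsal term keeps the model's responses on pseudo-items far closer to the original ones than plain new-task training, which is the forgetting-prevention effect the abstract describes.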

Subjects: Deep Reinforcement Learning, Pseudo-Rehearsal, Catastrophic Forgetting, Generative Adversarial Network, Continual Learning

Citation: Atkinson, C. R. (2020). Achieving continual learning in deep neural networks through pseudo-rehearsal (Thesis, Doctor of Philosophy). University of Otago.

Copyright: All items in OUR Archive are provided for private study and research purposes and are protected by copyright with all rights reserved unless otherwise indicated.