Publications by authors named "Nat Dilokthanakul"

Legged robots that can instantly switch motor patterns across walking speeds are useful and can accomplish various tasks efficiently. However, state-of-the-art control methods are either difficult to develop or require long training times. In this study, we present a comprehensible neural control framework that integrates probability-based black-box optimization (PI) and supervised learning for robot motor pattern generation at various walking speeds.
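The probability-based black-box optimization referenced above can be illustrated with a minimal sketch: sample parameter perturbations, evaluate their cost, and update the parameters by probability-weighted averaging. The quadratic cost function, step counts, and decay schedule below are illustrative stand-ins, not the paper's robot setup.

```python
import numpy as np

np.random.seed(0)

def cost(theta):
    # Hypothetical stand-in cost: squared distance from an arbitrary target.
    # In the paper this would be a measure of motor-pattern quality on the robot.
    target = np.array([1.0, -2.0, 0.5])
    return np.sum((theta - target) ** 2)

def pi_bb(theta, sigma=0.5, n_samples=20, n_iters=100, h=10.0):
    """Probability-weighted black-box optimization sketch."""
    for _ in range(n_iters):
        # Sample perturbed parameter vectors around the current estimate.
        samples = theta + sigma * np.random.randn(n_samples, theta.size)
        costs = np.array([cost(s) for s in samples])
        # Map costs to probability weights: low cost -> high weight.
        c = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
        w = np.exp(-h * c)
        w /= w.sum()
        # Probability-weighted average of the samples becomes the new estimate.
        theta = w @ samples
        sigma *= 0.98  # gradually reduce exploration noise
    return theta

theta = pi_bb(np.zeros(3))
```

The reward-to-probability exponentiation is what makes the update derivative-free: only cost evaluations are needed, which suits robot experiments where gradients are unavailable.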


Objective: Advances in motor imagery (MI)-based brain-computer interfaces (BCIs) allow control of several applications by decoding neurophysiological phenomena, usually recorded non-invasively by electroencephalography (EEG). Despite significant advances in MI-based BCI, EEG rhythms are subject-specific and vary over time. These issues pose significant challenges to enhancing classification performance, especially in a subject-independent manner.


Identifying sleep stages from bio-signals requires the time-consuming and tedious labor of skilled clinicians. Deep learning approaches have been introduced to tackle the automatic sleep stage classification problem. However, replacing clinicians with an automatic system remains difficult because individual bio-signals differ in many aspects, making the model's performance inconsistent across incoming individuals.


One of the main concerns of deep reinforcement learning (DRL) is the data inefficiency problem, which stems both from an inability to fully utilize acquired data and from naive exploration strategies. To alleviate these problems, we propose a DRL algorithm that improves data efficiency through both the utilization of unrewarded experiences and the exploration strategy, combining ideas from unsupervised auxiliary tasks, intrinsic motivation, and hierarchical reinforcement learning (HRL). Our method is based on a simple HRL architecture with a metacontroller and a subcontroller.
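The metacontroller/subcontroller split can be sketched structurally: a high-level policy picks subgoals, and a low-level policy takes primitive actions toward the current subgoal. The toy one-dimensional corridor, the class names, and the random goal picker below are illustrative assumptions; the sketch omits the paper's learning updates, auxiliary tasks, and intrinsic rewards.

```python
import random

random.seed(0)

class Metacontroller:
    """High-level policy: picks a subgoal (a target position in a 1-D corridor)."""
    def pick_goal(self, state):
        return random.choice([0, 10])

class Subcontroller:
    """Low-level policy: takes a primitive step toward the current subgoal."""
    def act(self, state, goal):
        return 1 if goal > state else -1

def rollout(steps=20):
    state = 5
    meta, sub = Metacontroller(), Subcontroller()
    goal = meta.pick_goal(state)
    for _ in range(steps):
        if state == goal:
            # Subgoal reached: the metacontroller selects a new one.
            goal = meta.pick_goal(state)
            if state == goal:
                continue
        state += sub.act(state, goal)
    return state

final = rollout()
```

The point of the hierarchy is temporal abstraction: the metacontroller decides only at subgoal boundaries, while the subcontroller handles the step-by-step control in between.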
