Publications by authors named "Tetsuya Ogata"

Multifingered robot hands can be extremely effective in physically exploring and recognizing objects, especially if they are extensively covered with distributed tactile sensors. Convolutional neural networks (CNNs) have proven successful in processing high-dimensional data, such as camera images, and are therefore very well suited to analyzing distributed tactile information as well. However, a major challenge is to organize tactile inputs coming from different locations on the hand into a coherent structure that can leverage the computational properties of the CNN.
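
For illustration, the hedged sketch below shows one common way to arrange distributed tactile readings into a 2D "tactile image" that a CNN can process. It is not the authors' implementation; the taxel count, grid layout, class count, and network shape are all assumptions.

```python
# Minimal sketch: map a flat vector of taxel pressures onto an assumed 2D grid
# and feed it to a small CNN. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

N_TAXELS = 64                      # hypothetical number of tactile elements
GRID_H, GRID_W = 8, 8              # assumed 2D layout of the taxels on the hand

def taxels_to_image(readings: torch.Tensor) -> torch.Tensor:
    """Reshape flat taxel pressures into a 1-channel 'tactile image'."""
    return readings.view(-1, 1, GRID_H, GRID_W)

# A small CNN over the tactile image; the object-class count (10) is illustrative.
tactile_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

batch = torch.rand(4, N_TAXELS)    # fake pressure readings in [0, 1]
logits = tactile_cnn(taxels_to_image(batch))
print(logits.shape)                # torch.Size([4, 10])
```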

The brain attenuates its responses to self-produced exteroceptions (e.g., we cannot tickle ourselves).

Robots need robust models to effectively perform tasks that humans do on a daily basis. These models are often costly to develop and maintain because they must be adjusted and adapted over time. Deep reinforcement learning is a powerful approach for acquiring complex real-world models because it removes the need for a human to design the model manually.

We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors (tools, objects, actions, and effects) at the same time, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint-angle information.
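
The sketch below illustrates, under assumed dimensions and an invented architecture, how the three modalities named above (image, force, joint angles) can be encoded and fused in a single DNN that predicts the next joint angles. It is a sketch of the general idea, not the paper's model.

```python
# Minimal multimodal-fusion sketch (illustrative assumptions throughout):
# encode a camera image, concatenate force and joint-angle vectors, and
# regress the next joint angles.
import torch
import torch.nn as nn

class ToolUseNet(nn.Module):
    def __init__(self, n_joints=7, n_force=6):
        super().__init__()
        self.img_enc = nn.Sequential(              # encodes a 64x64 RGB camera image
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(32),
        )
        self.fuse = nn.Sequential(                 # fuses image, force, joint modalities
            nn.Linear(32 + n_force + n_joints, 64), nn.Tanh(),
            nn.Linear(64, n_joints),               # predicted next joint angles
        )

    def forward(self, image, force, joints):
        z = self.img_enc(image)
        return self.fuse(torch.cat([z, force, joints], dim=-1))

net = ToolUseNet()
pred = net(torch.rand(2, 3, 64, 64), torch.rand(2, 6), torch.rand(2, 7))
print(pred.shape)   # torch.Size([2, 7])
```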

Deep neural networks (DNNs) are being actively studied in robotics because of their strong performance. However, existing robotic techniques and DNNs have not been systematically integrated, and packages for beginners have yet to be developed. In this study, we propose a basic educational kit for robotic system development with DNNs.

Neurodevelopmental disorders are characterized by the heterogeneous and non-specific nature of their clinical symptoms. In particular, hyper- and hypo-reactivity to sensory stimuli are diagnostic features of autism spectrum disorder and are reported across many neurodevelopmental disorders. However, the computational mechanisms underlying these unusual, paradoxical behaviors remain unclear.

Neurodevelopmental disorders, including autism spectrum disorder, have been intensively investigated at the neural, cognitive, and behavioral levels, but the accumulated knowledge remains fragmented. In particular, the developmental learning aspects of symptoms and interactions with the physical environment remain largely unexplored in computational modeling studies, although a leading computational theory has posited associations between psychiatric symptoms and an unusual estimation of information uncertainty (precision), an essential aspect of the real world that is estimated through learning. Here, we propose a mechanistic explanation that unifies these disparate observations within a hierarchical predictive coding and developmental learning framework, demonstrated in experiments using a neural-network-controlled robot.

Recently, applying computational models developed in cognitive science to psychiatric disorders has been recognized as an essential approach for understanding the cognitive mechanisms underlying psychiatric symptoms. Autism spectrum disorder is a neurodevelopmental disorder hypothesized to affect information processing in the brain involving the estimation of sensory precision (uncertainty), but the mechanism by which the observed symptoms are generated from such abnormalities has not been thoroughly investigated. Using a humanoid robot controlled by a neural network with a precision-weighted prediction-error minimization mechanism, we suggest that both increased and decreased sensory precision can induce the behavioral rigidity, characterized by resistance to change, that is characteristic of autistic behavior.
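
To make the precision-weighting mechanism concrete, the toy below is an assumption-laden illustration (not the paper's network) of how an assumed sensory precision scales the influence of prediction errors on an internal estimate; it shows only one simple facet of the idea, namely that a low assumed precision leaves the estimate sluggish to update.

```python
# Toy precision-weighted prediction-error minimization: the internal estimate is
# nudged toward each observation in proportion to (assumed precision) x (error).
import numpy as np

def settle(observations, precision, lr=0.1, steps=50):
    """Iteratively update an internal estimate to reduce precision-weighted error."""
    estimate = 0.0
    for o in observations:
        for _ in range(steps):
            error = o - estimate                 # prediction error
            estimate += lr * precision * error   # precision scales the correction
    return estimate

obs = np.full(20, 1.0)                           # constant sensory input
for p in (0.05, 1.0, 20.0):                      # low, moderate, high assumed precision
    print(f"precision={p:5.2f} -> final estimate {settle(obs, p, lr=0.01):.3f}")
```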

We propose an imitative learning model that allows a robot to acquire positional relations between the demonstrator and the robot, and to transform observed actions into robotic actions. Providing robots with imitative capabilities allows us to teach novel actions to them without resorting to trial-and-error approaches. Existing methods for imitative robotic learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots.

An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents.

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task.
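
The sketch below illustrates the kind of data format implied by "sequential data constructed from the actual temporal flow of the task": language tokens and motor states are concatenated at every time step so a single recurrent network sees the task as one temporal stream. The vocabulary, dimensions, and the LSTM itself are assumptions, not the paper's pipeline.

```python
# Hedged sketch: one input vector per time step = one-hot word (or <none> when
# silent) + current joint angles, fed to a recurrent next-step predictor.
import torch
import torch.nn as nn

WORDS = ["<none>", "grasp", "release"]       # toy instruction vocabulary
N_JOINTS = 7
IN_DIM = len(WORDS) + N_JOINTS

def step_vector(word, joints):
    onehot = torch.zeros(len(WORDS))
    onehot[WORDS.index(word)] = 1.0
    return torch.cat([onehot, joints])

# A fake task flow: an utterance arrives mid-trajectory, behavior continues afterwards
seq = torch.stack([
    step_vector("<none>", torch.zeros(N_JOINTS)),
    step_vector("grasp", torch.zeros(N_JOINTS)),
    step_vector("<none>", torch.full((N_JOINTS,), 0.1)),
    step_vector("<none>", torch.full((N_JOINTS,), 0.2)),
])

rnn = nn.LSTM(IN_DIM, 50, batch_first=True)
head = nn.Linear(50, IN_DIM)                 # next-step prediction over the same stream
out, _ = rnn(seq.unsqueeze(0))
pred = head(out[:, :-1])                     # predict each next step from the previous ones
loss = nn.functional.mse_loss(pred, seq.unsqueeze(0)[:, 1:])
print(loss.item())
```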

We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotics learning experiments in which one robot controlled by the S-MTRNN was required to interact with another robot under the condition of uncertainty about the other's behavior.
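
As a hedged illustration of the two properties mentioned above, multiple timescales and uncertainty prediction, the sketch below combines a fast and a slow leaky-integrator layer and outputs both a predicted mean and a variance for the next input. The S-MTRNN itself is more elaborate; the time constants, layer sizes, and loss choice here are assumptions.

```python
# Two-timescale recurrent cell predicting mean and log-variance of the next input.
import torch
import torch.nn as nn

class TwoTimescaleCell(nn.Module):
    def __init__(self, in_dim=4, fast=20, slow=10, tau_fast=2.0, tau_slow=20.0):
        super().__init__()
        self.tau_fast, self.tau_slow = tau_fast, tau_slow
        self.in_fast = nn.Linear(in_dim + fast + slow, fast)
        self.in_slow = nn.Linear(fast + slow, slow)
        self.mean = nn.Linear(fast, in_dim)
        self.logvar = nn.Linear(fast, in_dim)      # predicted uncertainty (log variance)

    def forward(self, x, h_fast, h_slow):
        # Leaky-integrator (continuous-time RNN) updates: larger tau -> slower dynamics
        u_fast = self.in_fast(torch.cat([x, torch.tanh(h_fast), torch.tanh(h_slow)], -1))
        u_slow = self.in_slow(torch.cat([torch.tanh(h_fast), torch.tanh(h_slow)], -1))
        h_fast = (1 - 1 / self.tau_fast) * h_fast + u_fast / self.tau_fast
        h_slow = (1 - 1 / self.tau_slow) * h_slow + u_slow / self.tau_slow
        return self.mean(torch.tanh(h_fast)), self.logvar(torch.tanh(h_fast)), h_fast, h_slow

cell = TwoTimescaleCell()
x = torch.zeros(1, 4)
h_f, h_s = torch.zeros(1, 20), torch.zeros(1, 10)
mu, logvar, h_f, h_s = cell(x, h_f, h_s)
# Gaussian negative log-likelihood of the next observed input under (mu, exp(logvar))
nll = nn.functional.gaussian_nll_loss(mu, torch.zeros(1, 4), logvar.exp())
print(mu.shape, nll.item())
```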

This letter presents a new algorithm for blind dereverberation and echo cancellation based on independent component analysis (ICA) for actual acoustic signals. We focus on frequency domain ICA (FD-ICA) because its computational cost and speed of learning convergence are sufficiently reasonable for practical applications such as hands-free speech recognition. In applying conventional FD-ICA as a preprocessing of automatic speech recognition in noisy environments, one of the most critical problems is how to cope with reverberations.
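
The rough sketch below shows the frequency-domain ICA core referred to above: the multichannel mixture is moved to the time-frequency domain and an unmixing matrix is adapted per frequency bin with a natural-gradient rule. This is a simplified illustration under assumed parameters, not the letter's dereverberation and echo-cancellation algorithm, and it omits the inverse STFT and the scaling/permutation resolution.

```python
# Simplified per-bin FD-ICA with a natural-gradient update for complex signals.
import numpy as np
from scipy.signal import stft

def fd_ica(x, fs=16000, n_fft=512, iters=100, mu=0.1):
    """x: (n_channels, n_samples) time-domain mixture. Returns per-bin unmixed frames."""
    f, t, X = stft(x, fs=fs, nperseg=n_fft)        # X: (n_ch, n_bins, n_frames)
    n_ch, n_bins, _ = X.shape
    W = np.stack([np.eye(n_ch, dtype=complex) for _ in range(n_bins)])
    for k in range(n_bins):                         # independent ICA problem per bin
        Xk = X[:, k, :]                             # (n_ch, n_frames)
        for _ in range(iters):
            Y = W[k] @ Xk
            phi = Y / (np.abs(Y) + 1e-8)            # nonlinearity for complex sources
            grad = np.eye(n_ch) - (phi @ Y.conj().T) / Y.shape[1]
            W[k] = W[k] + mu * grad @ W[k]          # natural-gradient update
    Y = np.einsum('kij,jkt->ikt', W, X)             # apply unmixing to every bin
    return Y                                        # scaling/permutation ambiguity remains

# Two synthetic microphone mixtures of two random sources (illustrative only)
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 16000))
x = np.array([[1.0, 0.6], [0.5, 1.0]]) @ s
Y = fd_ica(x, iters=20)
print(Y.shape)
```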

We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters grouped into words, and words into sentences. The model can control which sentence to generate depending on its initial states (generation phase) and the initial states can be calculated from the target sentence (recognition phase). In an experiment, we trained our model over a set of unannotated sentences from an artificial language, represented as sequences of characters.
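
As a hedged sketch of the "initial state" idea described above (not the MTRNN itself): a character-level RNN generates a sequence from a learnable initial hidden state, and recognition amounts to inferring that initial state for a target sentence by gradient descent. The character set, sizes, and joint optimization of weights and initial state are assumptions for illustration.

```python
# Generation from an initial state, and recognition by optimizing that state.
import torch
import torch.nn as nn

chars = list("abc ")                         # toy character set
V, H = len(chars), 32
rnn, readout = nn.GRUCell(V, H), nn.Linear(H, V)

def generate_logits(h0, length):
    """Roll the RNN forward from initial state h0, feeding back its own output."""
    h, x, logits = h0, torch.zeros(1, V), []
    for _ in range(length):
        h = rnn(x, h)
        step = readout(h)
        logits.append(step)
        x = torch.softmax(step, -1)          # closed-loop feedback of the prediction
    return torch.cat(logits)

target = torch.tensor([chars.index(c) for c in "abc abc"])
h0 = torch.zeros(1, H, requires_grad=True)   # recognition = inferring this state
opt = torch.optim.Adam([h0] + list(rnn.parameters()) + list(readout.parameters()), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(generate_logits(h0, len(target)), target)
    loss.backward()
    opt.step()
print(loss.item())
```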
