The neural mechanisms of motor planning have been extensively studied in rodents. Preparatory activity in the frontal cortex predicts upcoming choice, but limitations of typical tasks have made it challenging to determine whether the spatial information is in a self-centered direction reference frame or a world-centered position reference frame. Here, we trained male rats to make delayed visually guided orienting movements to six different directions, with four different target positions for each direction, which allowed us to disentangle direction versus position tuning in neural activity.
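The logic of this design can be illustrated with a simple regression: because each direction is paired with multiple target positions, direction and position regressors are decorrelated and their contributions to a neuron's firing rate can be estimated separately. Below is a minimal sketch of that idea using simulated data and hypothetical variable names, not the authors' analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 240
direction = rng.integers(0, 6, n_trials)  # 6 movement directions
position = rng.integers(0, 4, n_trials)   # 4 target positions

# Dummy coding with the first level dropped keeps the design full rank:
# intercept + 5 direction columns + 3 position columns.
D = np.eye(6)[direction][:, 1:]
P = np.eye(4)[position][:, 1:]
X = np.hstack([np.ones((n_trials, 1)), D, P])

# Simulated neuron tuned to direction only (plus noise)
tuning = np.array([2., 5., 9., 5., 2., 1.])
rates = tuning[direction] + rng.normal(0, 0.5, n_trials)

beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
print("direction effects:", beta[1:6].round(2))  # recovers tuning differences
print("position effects: ", beta[6:].round(2))   # near zero for this cell
```

For this simulated cell the fitted position effects come out near zero while the direction effects recover the tuning differences, which is the dissociation the crossed task design makes possible.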
As the wastewater sector moves towards net zero greenhouse gas (GHG) emissions, quantifying and understanding fugitive emissions from the various sewage treatment steps is crucial for developing effective GHG abatement strategies. Methane (CH₄) emissions from a sludge drying pan (SDP) were measured at a wastewater treatment plant in Australia for more than a year, using a micrometeorological technique paired with open-path lasers. The emission rate was tightly associated with sludge additions, climatology, and operational processes.
How do humans and other animals learn new tasks? A wave of brain recording studies has investigated how neural representations change during task learning, with a focus on how tasks can be acquired and coded in ways that minimise mutual interference. We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex, and computational models that have exploited these findings to understand how the brain may partition knowledge between tasks. We discuss how ideas from machine learning, including those that combine supervised and unsupervised learning, are helping neuroscientists understand how natural tasks are learned and coded in biological brains.
Humans can learn several tasks in succession with minimal mutual interference but perform more poorly when trained on multiple tasks at once. The opposite is true for standard deep neural networks. Here, we propose novel computational constraints for artificial neural networks, inspired by earlier work on gating in the primate prefrontal cortex, that capture the cost of interleaved training and allow the network to learn two tasks in sequence without forgetting.
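To make the gating idea concrete, here is a minimal sketch (assumed details, not the paper's implementation) in which each task opens a fixed random subset of hidden units, so sequentially trained tasks update partially non-overlapping weights:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 10, 64, 1
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_out))

# Fixed binary gate per task: each task drives a random half of the hidden units
gates = {t: (rng.random(n_hidden) < 0.5).astype(float) for t in (0, 1)}

def forward(x, task):
    h = np.maximum(0.0, x @ W1) * gates[task]  # ReLU, then task-specific gate
    return h @ W2, h

def train_step(x, y, task, lr=0.01):
    """One SGD step on squared error; gradients flow only through open gates."""
    global W1, W2
    y_hat, h = forward(x, task)
    err = y_hat - y                      # shape (n_out,)
    dW2 = np.outer(h, err)               # closed-gate rows of h are 0, so W2 rows stay put
    dh = (err @ W2.T) * (h > 0)          # h > 0 only where the gate is open and ReLU active
    W1 -= lr * np.outer(x, dh)
    W2 -= lr * dW2
```

Because gradients are blocked at closed gates, weights serving the first task are largely untouched while the second task is trained, which is one simple way a gating constraint can reduce, though not eliminate, interference under sequential training.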