Publications by authors named "Conradt J"

Fisheries worldwide face uncertain futures as climate change manifests in environmental effects of hitherto unseen strength. Developing climate-ready management strategies traditionally requires a good mechanistic understanding of stock responses to climate change in order to build projection models for testing different exploitation levels. Unfortunately, model-based projections of fish stocks are severely limited by large uncertainties in the recruitment process, as the required stock-recruitment relationship is usually not well represented by data.

Marine fisheries are increasingly impacted by climate change, affecting species distribution and productivity, and necessitating urgent adaptation efforts. Climate vulnerability assessments (CVA), integrating expert knowledge, are vital for identifying species that could thrive or suffer under changing environmental conditions. This study presents a first CVA for the Western Baltic Sea's fish community, a crucial fishing area for Denmark and Germany.

Dexterous manipulation in robotic hands relies on an accurate sense of artificial touch. Here we investigate neuromorphic tactile sensation with an event-based optical tactile sensor combined with spiking neural networks for edge orientation detection. The sensor incorporates an event-based vision system (mini-eDVS) into a low-form factor artificial fingertip (the NeuroTac).

Neuromorphic hardware enables fast and power-efficient neural network-based artificial intelligence that is well suited to solving robotic tasks. Neuromorphic algorithms can be further developed following neural computing principles and neural network architectures inspired by biological neural systems. In this Viewpoint, we provide an overview of recent insights from neuroscience that could enhance signal processing in artificial neural networks on chip and unlock innovative applications in robotics and autonomous intelligent systems.

Falling is a serious health problem and has become one of the major causes of accidental death among the elderly living alone. In recent years, much effort has been devoted to fall recognition based on wearable sensors or standard vision sensors. However, prior methods carry the risk of privacy leaks, and almost all of them operate on video clips, which cannot localize where falls occur in long videos.

The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, realistically constraining data acquisition speed to about 300 Hz. This obstructs the use of mobile eye trackers to perform, e.g.

Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of μs), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings.
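The event stream described above can be pictured as a list of (timestamp, x, y, polarity) records. The following toy Python sketch (field names and dimensions are illustrative, not tied to any particular camera driver) accumulates event polarities into a crude frame:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    t_us: int       # timestamp in microseconds
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 brightness increase, -1 decrease

def accumulate(events: List[Event], width: int, height: int):
    """Sum event polarities per pixel to form a simple 'event frame'."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

events = [Event(10, 0, 0, +1), Event(25, 0, 0, +1), Event(40, 1, 0, -1)]
frame = accumulate(events, width=2, height=1)
# frame == [[2, -1]]
```

Real drivers deliver such tuples asynchronously; fixed-window accumulation as above is only one of many ways to aggregate them.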

Predicting the future behavior and positions of other traffic participants from observations is a key problem that human drivers and automated vehicles alike must solve to safely navigate their environment and reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector representation that encapsulates the spatial information of multiple objects based on a convolutive power encoding.
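As a generic illustration of the binding operation that such convolutive encodings build on (a holographic-reduced-representation sketch, not the authors' implementation), circular convolution binds two random vectors, and convolving with the approximate inverse recovers a noisy copy of the original:

```python
import math
import random

def circ_conv(a, b):
    """Circular convolution: the VSA binding operator."""
    n = len(a)
    return [sum(a[i] * b[(k - i) % n] for i in range(n)) for k in range(n)]

def involution(v):
    """Approximate inverse of a vector under circular convolution."""
    return [v[0]] + v[:0:-1]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

rng = random.Random(42)
d = 256
obj = [rng.gauss(0, 1 / math.sqrt(d)) for _ in range(d)]  # e.g. a vehicle symbol
pos = [rng.gauss(0, 1 / math.sqrt(d)) for _ in range(d)]  # e.g. a position code
bound = circ_conv(obj, pos)                    # bind object to position
recovered = circ_conv(bound, involution(pos))  # unbind: noisy copy of obj
```

Repeated self-convolution of a position vector (its convolutive powers) is one way to encode graded spatial coordinates, which is the idea the encoding above refers to.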

Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works have addressed object detection with this sensor. In this work, we propose to develop pedestrian detectors that unlock the potential of the event data by leveraging multi-cue information and different fusion strategies.

A neuromorphic vision sensor is a novel passive, frameless sensing modality with several advantages over conventional cameras. Frame-based cameras have an average frame rate of 30 fps, causing motion blur when capturing fast motion, e.g.

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns evoked by different imagination tasks, e.g., hand movements.

Objective: The objective of this work is to use the capability of spiking neural networks to capture the spatio-temporal information encoded in time-series signals, to decode these signals without hand-crafted features or vector-based learning, and to realize the spiking model on low-power neuromorphic hardware.

Approach: The NeuCube spiking model was used to classify different grasp movements directly from raw surface electromyography (sEMG) signals, to estimate the applied finger forces, and to classify two motor imagery movements from raw electroencephalography (EEG) signals. In a parallel investigation, the designed spiking decoder was implemented on SpiNNaker neuromorphic hardware, which allows low-energy real-time processing.

Objective: The objective of this work is to present gumpy, a new free and open-source Python toolbox designed for hybrid brain-computer interfaces (BCIs).

Approach: Gumpy provides state-of-the-art algorithms and includes a rich selection of signal processing methods that have been employed by the BCI community over the last 20 years. In addition, a wide range of classification methods that span from classical machine learning algorithms to deep neural network models are provided.

In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While many algorithms tackle these tasks for traditional frame-based cameras, they have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost.

Ageing has an effect on many parameters of the physical condition, and one of them is the way a person walks. This property, the gait pattern, can be unobtrusively observed by letting people walk over a sensor floor. The electric capacitance sensors built into the floor deliver information about when and where feet come into close proximity and contact with the floor during the phases of human locomotion.
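For illustration, once such a floor has produced per-foot contact timestamps, stride durations and cadence follow from simple differencing. A minimal sketch, using hypothetical timestamps rather than data from the sensor floor:

```python
def stride_times(contacts):
    """Durations between successive ground contacts of the same foot (stride times, s)."""
    return [b - a for a, b in zip(contacts, contacts[1:])]

left_foot = [0.00, 1.10, 2.25, 3.35]       # hypothetical contact times in seconds
strides = stride_times(left_foot)          # approximately [1.10, 1.15, 1.10]
mean_stride = sum(strides) / len(strides)  # average stride duration
cadence = 60.0 / mean_stride               # strides per minute
```

Stride-time variability across such intervals is one of the gait parameters commonly examined when studying ageing effects.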

Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor (ROLLS) to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that performs neurally inspired obstacle avoidance and target acquisition.

Biological and technical systems operate in a rich multimodal environment. Due to the diversity of the incoming sensory streams a system perceives and the variety of motor capabilities it exhibits, there is no single representation and no singular unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture inspired by the distributed macro-architecture of the mammalian cortex.

Artificial light-harvesting systems have so far been unable to self-assemble into structures that have a large photon-capture cross-section and that can reversibly switch into an inactive state upon a stimulus. Here we describe a simple and robust FLFL-dipeptide construct to which a meso-tetraphenylporphyrin has been appended and which self-assembles into fibrils, platelets, or nanospheres depending on the solvent composition. The fibrils, functioning as quenched antennas, give intense excitonic couplets in the electronic circular dichroism spectra, which are mirror-imaged if the unnatural FDFD analogue is used.

After the discovery of grid cells, which are an essential component in understanding how the mammalian brain encodes spatial information, three main classes of computational models were proposed to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time.

We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker).

We propose a prototype of a wearable mobility device that aims to assist the blind with navigation and object avoidance via auditory vision substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies and converted to a 3D output sound using an individualized head-related transfer function.

The present study evaluated the effectiveness of a universal school-based cognitive-behavioral prevention program (the FRIENDS program) for childhood anxiety. Participants were 638 children, ages 9 to 12 years, from 14 schools in North Rhine-Westphalia, Germany. All children completed standardized measures of anxiety and depression, social and adaptive functioning, coping strategies, social skills, and perfectionism before and after the 10-week FRIENDS program and at two follow-up assessments (6 and 12 months), or over a corresponding wait period.

An efficient noncovalent assembly process involving high geometrical control was applied to a linear bis(imidazolyl zinc porphyrin) 7Zn, bearing C(18) substituents, to generate linear multiporphyrin wires. The association process is based on imidazole recognition within the cavity of the phenanthroline-strapped zinc porphyrin. In chlorinated solvents, discrete soluble oligomers were obtained after (7Zn)(n) was end-capped with a terminal single imidazolyl zinc porphyrin derivative 4Zn.

Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, will largely depend on the ecological niche the animal occupies and the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contribution of eye-in-head and head-in-world movements in cats is measured, and the results are compared to recent human data.
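For purely horizontal rotations, the composition of eye-in-head and head-in-world movements reduces to a sum of angles. A minimal sketch of this 1-D simplification (ignoring torsion and translation; the angles are made up for illustration):

```python
def gaze_in_world(head_in_world_deg: float, eye_in_head_deg: float) -> float:
    """Horizontal gaze direction in world coordinates: in 1-D the angles simply add."""
    return head_in_world_deg + eye_in_head_deg

# A 30-degree head turn combined with a 10-degree counter-rotation of the eyes
# leaves gaze 20 degrees off the body midline.
angle = gaze_in_world(30.0, -10.0)  # 20.0
```

Comparing how much of the total gaze shift each term contributes is exactly the kind of eye-versus-head decomposition the study measures, albeit in full 3-D rather than this toy 1-D case.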
