Publications by authors named "Yael Edan"

By supporting autonomy, aging in place, and wellbeing in later life, Socially Assistive Robots (SARs) are expected to help humanity face the challenges posed by the rapid aging of the world's population. For the successful acceptance and assimilation of SARs by older adults, it is necessary to understand the factors affecting their Quality Evaluations (QEs). Previous studies examining Human-Robot Interaction in later life indicated that three aspects shape older adults' overall QEs of robots: uses, constraints, and outcomes. However, those studies were usually limited in duration, focused on acceptance rather than assimilation, and typically explored only one aspect of the interaction.


Mobile robotic telepresence systems require that information about the environment, the task, and the robot be presented to a remotely located user (operator) who controls the robot for a specific task. In this study, two interaction modes, proactive and reactive, which differ in the way the user receives information from the robot, were compared in an experimental system simulating a healthcare setting. The users controlled a mobile telepresence robot that delivered and received items (medication, food, or drink) and obtained vital signs from a simulated patient, while also performing a secondary healthcare-related task (compiling health records displayed on the screen and answering related questions).


Image-based root phenotyping technologies, including the minirhizotron (MR), have expanded our understanding of the in situ root responses to changing environmental conditions. The conventional manual methods used to analyze MR images are time-consuming, limiting their implementation. This study presents an adaptation of our previously developed convolutional neural network-based models to estimate the total (cumulative) root length (TRL) per MR image without requiring segmentation.
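
A minimal sketch of the general idea, assuming a PyTorch setup (this is not the authors' trained model, and the architecture shown is illustrative): a small convolutional network that regresses a single cumulative root-length value directly from a minirhizotron image, with no segmentation step.

```python
# Illustrative only: a tiny CNN that maps an MR image to one scalar (TRL).
import torch
import torch.nn as nn

class TRLRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single output: cumulative root length

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

model = TRLRegressor()
dummy = torch.rand(4, 3, 224, 224)  # a batch of four RGB minirhizotron images
print(model(dummy).shape)           # torch.Size([4, 1])
```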


This paper focuses on how the autonomy level of an assistive robot that offers support for older adults in a daily task and its feedback affect the interaction. Identifying the level of automation (LOA) that prioritizes older adults' preferences while avoiding passiveness and sedentariness is challenging. The feedback mode should match the cognitive and perceptual capabilities of older adults and the LOA.


We studied politeness in human-robot interaction based on Lakoff's politeness theory. In a series of eight studies, we manipulated three different levels of politeness of non-humanoid robots and evaluated their effects. A table-setting task was developed for two different types of robots (a robotic manipulator and a mobile robot).


The agricultural industry is facing a serious threat from plant diseases that cause production and economic losses. Early information on disease development can improve disease control using suitable management strategies. This study sought to detect downy mildew (Plasmopara viticola) on grapevine (Vitis vinifera) leaves at early stages of development using thermal imaging technology and to determine the best time during the day for image acquisition.


This paper investigates human preferences for a robot's eye gaze behavior during human-to-robot handovers. We studied gaze patterns for all three phases of the handover process: reach, transfer, and retreat, whereas previous work focused only on the reaching phase. Additionally, we investigated whether the object's size or fragility, or the human's posture, affects these preferences for the robot's gaze.


Physical exercise has many physical, psychological, and social health benefits leading to improved quality of life. This paper presents a robotic system developed as a personal coach aimed at motivating older adults to participate in physical activities. The robot instructs the participants, demonstrates the exercises, and provides real-time corrective and positive feedback according to the participant's performance, as monitored by an RGB-D camera.
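
As a rough illustration of how skeleton data from an RGB-D camera can drive such feedback (the joint names, target angle, and messages below are assumptions, not the published system):

```python
# Hypothetical example: turn three tracked 3-D joints into exercise feedback.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the 3-D points a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def feedback(shoulder, elbow, wrist, target=170.0, tolerance=15.0):
    angle = joint_angle(shoulder, elbow, wrist)
    if abs(angle - target) <= tolerance:
        return "Great, keep going!"                      # positive feedback
    return "Try to straighten your arm a bit more."      # corrective feedback

print(feedback([0, 0, 0], [0.3, 0, 0], [0.55, 0.1, 0]))  # -> positive feedback
```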


The effect of camera viewpoint and fruit orientation on the performance of a sweet pepper maturity level classification algorithm was evaluated. Image datasets of sweet peppers harvested from a commercial greenhouse were collected using two different methods, resulting in 789 RGB (Red, Green, Blue) images acquired in a photocell and 417 RGB-D (Red, Green, Blue, Depth) images acquired by a robotic arm in the laboratory; both datasets are published as part of this paper. Maturity level classification was performed using a random forest algorithm.
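
A hedged sketch of a random-forest maturity classifier over simple per-image colour statistics (the published feature set and data pipeline are richer; the synthetic data below only stands in for the pepper images):

```python
# Illustrative random-forest maturity classification on toy colour features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def colour_features(rgb_image):
    """Mean and standard deviation of each RGB channel -> 6-dim feature vector."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Synthetic stand-in for the pepper datasets (labels 0..3 = maturity levels).
rng = np.random.default_rng(0)
X = np.stack([colour_features(rng.integers(0, 256, (64, 64, 3))) for _ in range(200)])
y = rng.integers(0, 4, 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```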


This paper presents an automatic parameter tuning procedure developed specifically for a dynamic adaptive thresholding algorithm for fruit detection. One of the algorithm's major strengths is its high detection performance using a small set of training images. The algorithm enables robust detection in highly variable lighting conditions.
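
One way such automatic tuning can be framed, sketched under the assumption of a small labelled training set and a toy adaptive-threshold detector (this is not the published procedure):

```python
# Grid-search tuning of adaptive-threshold parameters on a few labelled images.
import itertools
import numpy as np

def detect(image, block_size, offset):
    """Toy detector: a pixel is 'fruit' if brighter than its local mean + offset."""
    pad = block_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros(image.shape, dtype=bool)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = image[i, j] > padded[i:i + block_size, j:j + block_size].mean() + offset
    return out

def f1(pred, truth):
    tp = np.logical_and(pred, truth).sum()
    return 2 * tp / (pred.sum() + truth.sum() + 1e-9)

def tune(images, masks, block_sizes=(3, 5, 7), offsets=(0, 5, 10, 20)):
    """Return the (block_size, offset) pair with the best mean F1 on the training set."""
    scores = {(b, o): np.mean([f1(detect(im, b, o), m) for im, m in zip(images, masks)])
              for b, o in itertools.product(block_sizes, offsets)}
    return max(scores, key=scores.get)

img = np.random.default_rng(1).integers(0, 255, (32, 32)).astype(float)
print(tune([img], [img > 180]))
```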


Current harvesting robots are limited by low detection rates due to the unstructured and dynamic nature of both the objects and the environment. State-of-the-art algorithms include color- and texture-based detection, which are highly sensitive to the illumination conditions. Deep learning algorithms promise robustness at the cost of significant computational resources and the requirement for extensive databases.


This paper presents the overall design of a prototype home-based system aimed at reducing sedentary behavior of older adults. Quantitative performance indicators were developed to measure the sedentary behavior and daily activities of an older adult. Sedentary behavior is monitored by identifying the individual's position (standing, sitting, or lying) within the field of view of a Microsoft Kinect sensor, using a custom-designed algorithm.
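
A rule-based sketch of position identification from depth-sensor data (the joint names and height thresholds are assumptions, not the custom algorithm described in the paper):

```python
# Hypothetical posture rules based on joint heights (metres above the floor).
def classify_posture(head_y, hip_y, knee_y, standing_head=1.4, lying_head=0.6):
    if head_y < lying_head:
        return "lying"
    if head_y > standing_head and (hip_y - knee_y) > 0.3:
        return "standing"
    return "sitting"

print(classify_posture(head_y=1.65, hip_y=0.95, knee_y=0.50))  # -> "standing"
print(classify_posture(head_y=1.10, hip_y=0.55, knee_y=0.45))  # -> "sitting"
```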


Background: Effective human-robot interaction in rehabilitation necessitates an understanding of how it should be tailored to the needs of the human. We report on a robotic system developed as a partner in a 3-D everyday task, using a gamified approach.

Objectives: To: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks.


Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks that are unknown a priori in their locations and priorities is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA) that is developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach to minimize the task detection times.
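
A rough sketch of the swarm-intelligence flavour of such an allocation (this simplified probabilistic rule is an assumption for illustration, not the published MDBA): each sensor picks a task with probability proportional to the task's priority and inversely related to its distance.

```python
# Simplified bees-style probabilistic allocation of sensors to tasks.
import numpy as np

def allocate(sensor_positions, task_positions, task_priorities, alpha=1.0, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    assignment = []
    for s in sensor_positions:
        dists = np.linalg.norm(task_positions - s, axis=1)
        utility = task_priorities ** alpha / (dists + 1e-9) ** beta
        assignment.append(rng.choice(len(task_positions), p=utility / utility.sum()))
    return assignment

sensors = np.array([[0.0, 0.0], [5.0, 5.0]])
tasks = np.array([[1.0, 1.0], [4.0, 6.0]])
print(allocate(sensors, tasks, task_priorities=np.array([1.0, 2.0])))
```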


Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head-mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on the observed and perceived usability of a teleoperated agricultural sprayer.


Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS.
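
As a hedged sketch of the general approach (not the published algorithm), crude topographic features from a depth image of the cow's back could feed a fitted regressor that outputs a BCS value; everything below, including the synthetic data, is illustrative.

```python
# Illustrative depth-features-to-BCS regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

def depth_features(depth_map):
    """Crude shape descriptors: overall depth spread and hollowness of the midline."""
    midline = depth_map[:, depth_map.shape[1] // 2]
    return np.array([depth_map.std(), midline.max() - midline.min()])

rng = np.random.default_rng(0)
maps = [rng.normal(1.5, 0.05, (60, 80)) for _ in range(50)]  # fake depth maps
scores = rng.uniform(1.0, 5.0, 50)                           # fake expert BCS labels

X = np.stack([depth_features(m) for m in maps])
model = LinearRegression().fit(X, scores)
print(round(float(model.predict(depth_features(maps[0])[None, :])[0]), 2))
```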


This research evaluates new methods for controlling robot motion and camera orientation through the operator's head orientation in robot teleoperation tasks. Specifically, non-invasive head tracking, without immersive virtual reality devices, was combined with and compared against classical control modes for robot movement and camera control. Three control conditions were tested: 1) both the robot's movements and the robot camera were controlled by a classical joystick, 2) the robot's movements were controlled by a joystick and the robot camera by the user's head orientation, and 3) the robot's movements were controlled by hand gestures and the robot camera by the user's head orientation.
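
A minimal sketch of the camera side of conditions 2 and 3, assuming the head tracker provides yaw and pitch in degrees (the limits and dead zone below are illustrative assumptions):

```python
# Map the operator's head orientation to pan/tilt commands for the robot camera.
def head_to_camera(yaw_deg, pitch_deg, max_pan=90.0, max_tilt=45.0, deadzone=3.0):
    """Ignore tiny head movements and clamp to the camera's mechanical range."""
    def shape(angle, limit):
        if abs(angle) < deadzone:
            return 0.0
        return max(-limit, min(limit, angle))
    return shape(yaw_deg, max_pan), shape(pitch_deg, max_tilt)

print(head_to_camera(yaw_deg=25.0, pitch_deg=-2.0))  # -> (25.0, 0.0)
```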

Article Synopsis
  • Image registration aligns multiple images of the same scene captured at different times, perspectives, or with different sensors, focusing on agricultural systems that use visual and thermal sensors.
  • The research develops a method using a distance-dependent transformation matrix (DDTM), created through pre-calibration and compactly represented via regression functions (a brief sketch of this idea follows the list).
  • A unique experimental setup with Artificial Control Points (ACPs) and detection algorithms for both sensors demonstrates the effectiveness of this method through various experiments.
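
A sketch of the distance-dependent idea (the matrices, distances, and polynomial degree below are made-up illustrations, not the paper's calibration data): fit a regression per matrix element against distance, then evaluate it at an arbitrary working distance.

```python
# Illustrative DDTM-style representation: regress affine-matrix elements on distance.
import numpy as np

# Pretend calibration: at each distance (m), a 2x3 visual->thermal affine matrix was estimated.
distances = np.array([1.0, 2.0, 3.0, 4.0])
matrices = np.array([
    [[1.00, 0.0, 12.0], [0.0, 1.00, 8.0]],
    [[0.99, 0.0,  9.0], [0.0, 0.99, 6.0]],
    [[0.98, 0.0,  7.0], [0.0, 0.98, 4.5]],
    [[0.98, 0.0,  6.0], [0.0, 0.97, 3.8]],
])

# One quadratic fit per matrix element gives a compact, distance-dependent model.
coeffs = np.array([[np.polyfit(distances, matrices[:, i, j], 2) for j in range(3)]
                   for i in range(2)])

def transform_at(distance):
    """Evaluate the fitted 2x3 transformation matrix at a given distance."""
    return np.array([[np.polyval(coeffs[i, j], distance) for j in range(3)]
                     for i in range(2)])

print(np.round(transform_at(2.5), 3))
```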

An in-depth evaluation of the usability and situation awareness performance of different robot displays and destination controls is presented. In two experiments, we evaluate the way information is presented to the operator and assess different means of controlling the robot. Our study compares three types of displays: a "blocks" display, a HUD (head-up display), and a radar display, and two types of controls: touch screen and hand gestures.


The use of doctor-computer interaction devices in the operating room (OR) requires new modalities that support medical imaging manipulation while allowing doctors' hands to remain sterile, supporting their focus of attention, and providing fast response times. This paper presents "Gestix," a vision-based hand gesture capture and recognition system that interprets the user's gestures in real time for navigation and manipulation of images in an electronic medical record (EMR) database. Navigation and other gestures are translated to commands based on their temporal trajectories, through video capture.
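
As an illustration of the trajectory-to-command idea only (Gestix itself is a vision-based system; the thresholds and command names below are assumptions): the dominant direction of a tracked hand trajectory can be mapped to an image-navigation command.

```python
# Hypothetical mapping from a tracked hand trajectory to a navigation command.
import numpy as np

COMMANDS = {(1, 0): "next image", (-1, 0): "previous image",
            (0, 1): "scroll up", (0, -1): "scroll down"}

def gesture_command(trajectory, min_travel=40.0):
    """trajectory: sequence of (x, y) hand positions in image coordinates."""
    traj = np.asarray(trajectory, dtype=float)
    dx, dy = traj[-1] - traj[0]
    if np.hypot(dx, dy) < min_travel:
        return "no command"
    # Image y grows downward, so an upward hand motion has negative dy.
    axis = (int(np.sign(dx)), 0) if abs(dx) >= abs(dy) else (0, int(np.sign(-dy)))
    return COMMANDS[axis]

print(gesture_command([(100, 200), (160, 205), (230, 210)]))  # -> "next image"
```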
