Publications by authors named "Y Yitzhaky"

Detecting early-stage stress in broiler farms is crucial for optimising growth rates and animal well-being. This study aims to classify various stress calls in broilers exposed to cold, heat, or wind, using acoustic signal processing and a transformer artificial neural network (ANN). Two consecutive trials were conducted with varying amounts of collected data, and three ANN models with the same architecture but different parameters were examined.


Videos captured in long-distance horizontal imaging through the atmosphere suffer from dynamic spatiotemporal movements and blur caused by air turbulence. Past simulations of atmospheric turbulence in such videos have been computationally expensive. Our goal in this research is to develop an efficient simulation algorithm that, given a single image, produces a video affected by atmospheric turbulence with spatiotemporally varying blur and tilt.


Most human emotion recognition methods depend largely on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or have lower arousal, manifested in less pronounced facial expressions, as may occur during passive video viewing.


Biometric methods currently used to identify humans can potentially identify dairy cows as well. Because animal movements cannot be easily controlled, identification accuracy and system robustness are challenging when deploying an animal biometric recognition system on a real farm. Our proposed method performs multiple-cow face detection and face classification from videos by adapting recent state-of-the-art deep-learning methods.


Atmospheric turbulence (AT) can change the path and direction of light during video capture of a target due to the random motion of the turbulent medium. This phenomenon is most noticeable when shooting video at long ranges, and it results in severe dynamic distortion and blur. To mitigate geometric distortion and reduce spatially and temporally varying blur, we propose a novel Atmospheric Turbulence Video Restoration Generative Adversarial Network (ATVR-GAN) with a specialized Recurrent Neural Network (RNN) generator, trained to predict the scene's turbulent optical-flow (OF) field and to exploit its recurrent structure to capture both spatial and temporal dependencies. The new architecture is trained with a newly combined loss function that accounts for the spatiotemporal distortions and is specifically tailored to the AT problem.

