Publications by authors named "Gilles Gasso"

Recent research has highlighted interest in 1) investigating the effect of variable practice on the dynamics of learning and 2) modeling the dynamics of motor skill learning to better understand individual learners' pathways. Such modeling has not yet proved suitable for predicting future performance, both in terms of retention and of transfer to new tasks. The present study attempted to quantify, by means of a machine learning algorithm, the prediction of skill transfer for three practice conditions in a climbing task: constant practice (without any modifications applied during learning), imposed variable practice (with graded contextual modifications, i.

Article Synopsis
  • Correct rider oscillation and position are crucial for effective horseback riding performance, which is analyzed through cluster analysis in this study.
  • The research involved two groups—riders and non-riders—who performed exercises on a horseback riding simulator with varying horse oscillation frequencies.
  • Findings indicated that rider expertise affected postural coordination, with riders showing less postural displacement and better control compared to the more variable behaviors of non-riders.

Common compartmental models for COVID-19 are based on a priori knowledge and numerous assumptions; additionally, they do not systematically incorporate asymptomatic cases. Our study aimed to provide a framework for data-driven approaches by leveraging the strengths of grey-box system theory, or grey-box identification, known for its robustness in problem solving under partial, incomplete, or uncertain data.
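As a minimal sketch of the grey-box idea, the snippet below keeps a partially known compartmental structure (here an SIR-like model, chosen only for illustration and not necessarily the study's model) and estimates its unknown transmission and recovery rates from observations; the data are synthetic and all names are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Grey-box identification sketch: the model structure is assumed known,
    # the rate parameters (beta, gamma) are estimated from data.
    def sir_rhs(t, y, beta, gamma):
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    def simulate(params, t_obs, y0):
        beta, gamma = params
        sol = solve_ivp(sir_rhs, (t_obs[0], t_obs[-1]), y0,
                        t_eval=t_obs, args=(beta, gamma))
        return sol.y[1]                      # infected compartment

    # Synthetic observations generated with "true" parameters, plus noise.
    t_obs = np.linspace(0, 60, 61)
    y0 = [0.99, 0.01, 0.0]
    rng = np.random.default_rng(0)
    observed = simulate([0.35, 0.1], t_obs, y0) + rng.normal(0, 0.002, t_obs.size)

    # Grey-box fit: keep the known structure, estimate the unknown rates.
    fit = least_squares(lambda p: simulate(p, t_obs, y0) - observed,
                        x0=[0.2, 0.2], bounds=([0.0, 0.0], [2.0, 2.0]))
    print("estimated beta, gamma:", fit.x)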


We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are nonconvex but belong to the class of difference-of-convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to deal with such a situation. The algorithm consists of obtaining a descent direction from an approximation of the loss function and then performing a line search to ensure sufficient descent.
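The sketch below illustrates the general DC proximal scheme with a backtracking line search in a simplified first-order form, not the paper's exact proximal Newton variant; the decompositions (l1 - l2 for the loss, r1 - r2 for the regularizer) and all function names are illustrative assumptions.

    import numpy as np

    def dc_prox_step(w, grad_l1, grad_l2, prox_r1, grad_r2, objective,
                     step=1.0, beta=0.5, sigma=1e-4, max_ls=30):
        """One DC step: convex surrogate minimization + Armijo-type line search."""
        # Linearize the concave parts (-l2 and -r2) at the current point.
        g = grad_l1(w) - grad_l2(w) - grad_r2(w)
        # Proximal step on the remaining convex regularizer r1 gives a candidate.
        w_hat = prox_r1(w - step * g, step)
        d = w_hat - w                        # descent direction
        # Backtracking line search to ensure sufficient decrease of the objective.
        t, f0 = 1.0, objective(w)
        for _ in range(max_ls):
            if objective(w + t * d) <= f0 - sigma * t * np.dot(d, d):
                break
            t *= beta
        return w + t * d

    # Tiny usage example: least-squares loss with an l1 regularizer
    # (the concave parts are zero here, so grad_l2 = grad_r2 = 0).
    A, b, lam = np.eye(3), np.array([1.0, -2.0, 0.5]), 0.1
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
    obj = lambda w: 0.5 * np.sum((A @ w - b) ** 2) + lam * np.sum(np.abs(w))
    w = np.zeros(3)
    for _ in range(50):
        w = dc_prox_step(w, lambda v: A.T @ (A @ v - b), lambda v: 0.0,
                         soft, lambda v: 0.0, obj)
    print(w)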


Recently, there has been much interest in the multitask learning (MTL) problem under the constraint that tasks share a common sparsity profile. Such a problem can be addressed through a regularization framework in which the regularizer induces a joint-sparsity pattern across task decision functions. We follow this principled framework and focus on ℓp-ℓq (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity-inducing penalties.
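As an illustration of the penalty itself, here is a minimal sketch of an ℓp-ℓq mixed norm over a (features × tasks) weight matrix, where each row gathers one feature's coefficients across tasks; the shapes and default values are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def mixed_norm_penalty(W, p=1.0, q=2.0):
        """Sum over features of the per-row lq norm raised to the power p."""
        row_norms = np.sum(np.abs(W) ** q, axis=1) ** (1.0 / q)
        return np.sum(row_norms ** p)

    # p = 1, q = 2 recovers the group-lasso penalty, which zeroes whole feature
    # rows and thus enforces a sparsity profile shared by all tasks.
    W = np.array([[0.0, 0.0], [1.5, -2.0], [0.3, 0.1]])
    print(mixed_norm_penalty(W, p=1.0, q=2.0))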
