Publications by authors named "Tsang I"

The intestinal epithelium has a remarkably high turnover in homeostasis. It remains unresolved how this is orchestrated at the cellular level and how the behavior of stem and progenitor cells ensures tissue maintenance. To address this, we combined quantitative fate mapping in three complementary mouse models with mathematical modeling and single-cell RNA sequencing.

Ensembl (www.ensembl.org) is an open platform integrating publicly available genomics data across the tree of life with a focus on eukaryotic species related to human health, agriculture and biodiversity.

Background: Root hairs are single-celled projections on root surfaces, critical for water and nutrient uptake. Here, we describe the first short root hair mutant in wheat (Triticum aestivum L.), identified in a mutagenized population and termed here short root hair 1.

Policy diversity, encompassing the variety of policies an agent can adopt, enhances reinforcement learning (RL) success by fostering more robust, adaptable, and innovative problem-solving in the environment. The environment in which standard RL operates is usually modeled with a Markov Decision Process (MDP) as the theoretical foundation. However, in many real-world scenarios, the rewards depend on an agent's history of states and actions, leading to a non-Markovian decision process.
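
For illustration only (not from the article), the sketch below shows why a history-dependent reward breaks the Markov property: the payoff cannot be written as a function R(s, a) of the current state and action alone. The "key"/"goal" states are hypothetical.

```python
# Minimal sketch (illustrative only): a reward that depends on the agent's
# history of states, so it cannot be expressed as R(s, a) of the current
# state alone -- the process is non-Markovian in the reward.

def markov_reward(state, action):
    # Standard MDP reward: depends only on the current state and action.
    return 1.0 if state == "goal" else 0.0

def history_dependent_reward(trajectory):
    # Non-Markovian reward: reaching the goal only pays off if a "key"
    # state was visited earlier in the episode.
    states = [s for s, _ in trajectory]
    reached_goal = states[-1] == "goal"
    picked_up_key = "key" in states[:-1]
    return 1.0 if (reached_goal and picked_up_key) else 0.0

print(history_dependent_reward([("start", "right"), ("key", "right"), ("goal", None)]))  # 1.0
print(history_dependent_reward([("start", "right"), ("goal", None)]))                    # 0.0
```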

Introduction: Hyperdimensional Computing (HDC) is a brain-inspired and lightweight machine learning method. It has received significant attention in the literature as a candidate to be applied in the wearable Internet of Things, near-sensor artificial intelligence applications, and on-device processing. HDC is computationally less complex than traditional deep learning algorithms and typically achieves moderate to good classification performance.
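
As a rough illustration of why HDC is computationally lightweight (a generic sketch under assumed bipolar hypervectors, not the article's method), classification reduces to bundling class prototypes and comparing similarities:

```python
# Minimal sketch of hyperdimensional computing (HDC) classification with
# bipolar hypervectors; dimensions and the random sample encodings are
# illustrative stand-ins, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                      # dimensionality of the hyperdimensional space

def random_hv():
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    # Element-wise majority vote combines several hypervectors into one.
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    return np.dot(a, b) / D     # cosine-like similarity for bipolar vectors

# Encode training samples as hypervectors (random stand-ins here),
# then bundle them per class into a prototype.
class_prototypes = {
    label: bundle([random_hv() for _ in range(20)]) for label in ("A", "B")
}

query = random_hv()
prediction = max(class_prototypes, key=lambda c: similarity(query, class_prototypes[c]))
print(prediction)
```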

To meet the demands of a rising human population, plant breeders will need to develop improved crop varieties that maximize yield in the face of increasing pressure on crop production. Historically, the optimization of crop root architecture has represented a challenging breeding target due to the inaccessibility of the root systems. Root hairs, single cell projections from the root epidermis, are perhaps the most overlooked component of root architecture traits.

The state-of-the-art model for zero-shot cross-lingual spoken language understanding performs cross-lingual unsupervised contrastive learning to achieve label-agnostic semantic alignment between each utterance and its code-switched counterpart. However, it ignores the valuable intent/slot labels, whose information could help capture the label-aware semantic structure and enable supervised contrastive learning to improve the semantics of both the source and target languages. In this paper, we propose Hybrid and Cooperative Contrastive Learning to address this problem.
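
For context, a generic supervised contrastive objective of the kind alluded to above (not the article's specific formulation) pulls together utterances that share a label and pushes apart the rest:

```python
# Minimal sketch of a generic supervised contrastive loss; embeddings,
# labels, and the temperature value are illustrative assumptions.
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = z @ z.T / temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = np.sum([np.exp(sims[i, k]) for k in range(n) if k != i])
        # Average log-likelihood of pulling sample i toward its same-label positives.
        loss += -np.mean([np.log(np.exp(sims[i, j]) / denom) for j in positives])
    return loss / n

emb = np.random.randn(6, 8)
labels = [0, 0, 1, 1, 2, 2]
print(supervised_contrastive_loss(emb, labels))
```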

Spiking neural networks (SNNs) distinguish themselves from artificial neural networks (ANNs) through their inherent temporal processing and spike-based computations, enabling power-efficient implementation in neuromorphic hardware. In this study, we demonstrate that data processing with spiking neurons can be enhanced by co-learning the synaptic weights with two other biologically inspired neuronal features: (1) a set of parameters describing neuronal adaptation processes and (2) synaptic propagation delays. The former allows a spiking neuron to learn how to specifically react to incoming spikes based on its past.
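
The sketch below is an illustrative (not the authors') simulation of the two features named above: a leaky integrate-and-fire neuron with spike-triggered threshold adaptation and per-synapse propagation delays. Time constants and weight values are assumptions.

```python
# Minimal sketch of a leaky integrate-and-fire neuron with adaptation and
# per-synapse integer propagation delays; all constants are illustrative.
import numpy as np

def simulate(input_spikes, w, delay, tau_mem=20.0, tau_adapt=100.0, beta=0.2):
    """input_spikes: (T, n_syn) binary array; w, delay: per-synapse arrays."""
    T, n_syn = input_spikes.shape
    v, a = 0.0, 0.0                        # membrane potential, adaptation variable
    out = np.zeros(T)
    for t in range(T):
        # Apply each synapse's propagation delay before integration.
        delayed = np.array([
            input_spikes[t - delay[i], i] if t - delay[i] >= 0 else 0.0
            for i in range(n_syn)
        ])
        v += (-v + np.dot(w, delayed)) / tau_mem
        a += -a / tau_adapt
        if v > 1.0 + beta * a:             # adaptive firing threshold
            out[t] = 1.0
            v = 0.0                        # reset membrane potential
            a += 1.0                       # firing gets harder after each spike
    return out

spikes = (np.random.rand(100, 4) < 0.2).astype(float)
print(simulate(spikes, w=np.array([2.0, 1.5, 1.0, 0.5]), delay=np.array([0, 2, 4, 6])).sum())
```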

Background: Gray matter (GM) and white matter (WM) impairments are both associated with raised blood pressure (BP), although whether elevated BP is differentially associated with the GM and WM aging process remains inadequately examined.

Methods: We included 37 327 participants with diffusion-weighted imaging (DWI) and 39 630 participants with T1-weighted scans from UK Biobank. BP was classified into 4 categories: normal BP, high-normal BP, grade 1, and grade 2 hypertension.

As the brain ages, it almost invariably accumulates vascular pathology, which differentially affects the cerebral white matter. A rich body of research has investigated the link between vascular risk factors and the brain. One of the less studied questions is: among the various modifiable vascular risk factors, which is the most debilitating for white matter health? A white-matter-specific brain age was developed to evaluate overall white matter health from diffusion-weighted imaging, using a three-dimensional convolutional neural network deep learning model in both cross-sectional UK Biobank participants (n = 37,327) and a longitudinal subset (n = 1409).

Recent graph-based models for multi-intent SLU have obtained promising results by modeling the guidance from intent prediction to slot-filling decoding. However, existing methods (1) only model the unidirectional guidance from intent to slot, although there are bidirectional inter-correlations between intent and slot; and (2) adopt homogeneous graphs to model the interactions between slot semantics nodes and intent label nodes, which limits performance. In this paper, we propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving mutual guidance between the two tasks.

Existing deep learning-based shadow removal methods still produce images with shadow remnants. These shadow remnants typically exist in homogeneous regions with low-intensity values, making them untraceable in the existing image-to-image mapping paradigm. We observe that shadows mainly degrade images at the image-structure level (in which humans perceive object shapes and continuous colors).

Hyperdimensional computing (HDC) has become popular for light-weight and energy-efficient machine learning, suitable for wearable Internet-of-Things devices and near-sensor or on-device processing. HDC is computationally less complex than traditional deep learning algorithms and achieves moderate to good classification performance. This letter proposes to extend the training procedure in HDC by taking into account not only wrongly classified samples but also samples that are correctly classified by the HDC model but with low confidence.
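
The sketch below illustrates the confidence-aware retraining idea described above: prototypes are updated not only for misclassified samples but also for correctly classified ones whose similarity margin is low. The margin definition, threshold, and learning rate are illustrative assumptions, not the letter's exact procedure.

```python
# One retraining epoch over HDC class prototypes (float arrays) with a
# confidence criterion based on the margin between the two best similarities.
import numpy as np

def retrain_epoch(prototypes, encoded_samples, labels, margin_threshold=0.05, lr=1.0):
    classes = list(prototypes)                              # assumes >= 2 classes
    for x, y in zip(encoded_samples, labels):
        sims = {c: np.dot(x, prototypes[c]) / x.size for c in classes}
        ranked = sorted(classes, key=sims.get, reverse=True)
        pred, runner_up = ranked[0], ranked[1]
        confidence = sims[pred] - sims[runner_up]           # similarity margin
        if pred != y:
            prototypes[y] += lr * x                         # standard mistake-driven update
            prototypes[pred] -= lr * x
        elif confidence < margin_threshold:
            prototypes[y] += lr * x                         # reinforce low-confidence correct samples
    return prototypes
```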

Deep models have achieved state-of-the-art performance on a broad range of visual recognition tasks. Nevertheless, the generalization ability of deep models is seriously affected by noisy labels. Although deep learning packages provide a variety of loss functions, it is not transparent to users how to choose consistent losses.

Time series analysis is essential to many far-reaching applications of data science and statistics, including economic and financial forecasting, surveillance, and automated business processing. Although the Transformer has been greatly successful in computer vision and natural language processing, its potential as a general backbone for analyzing ubiquitous time series data has not yet been fully realized. Prior Transformer variants for time series rely heavily on task-dependent designs and pre-assumed "pattern biases", revealing their insufficiency in representing the nuanced seasonal, cyclic, and outlier patterns that are highly prevalent in time series.

Deep learning on large-scale data is currently dominant. The unprecedented scale of data has arguably been one of the most important driving forces behind its success. However, there still exist scenarios where collecting data or labels can be extremely expensive.

Dual-task dialog language understanding aims to tackle two correlative dialog language understanding tasks simultaneously via leveraging their inherent correlations. In this paper, we put forward a new framework, whose core is relational temporal graph reasoning. We propose a speaker-aware temporal graph (SATG) and a dual-task relational temporal graph (DRTG) to facilitate relational temporal modeling in dialog understanding and dual-task reasoning.

Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning (SSL). The prediction uncertainty is typically expressed as the entropy computed by the transformed probabilities in output space. Most existing works distill low-entropy prediction by either accepting the determining class (with the largest probability) as the true label or suppressing subtle predictions (with the smaller probabilities).
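
For illustration (not the article's proposed method), the sketch below computes the entropy of a softmax prediction and shows the two common ways of distilling low-entropy predictions mentioned above: accepting the arg-max class as a pseudo-label and sharpening the distribution.

```python
# Prediction uncertainty as the entropy of the softmax output, plus the two
# standard low-entropy distillation strategies; values are illustrative.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

logits = np.array([2.0, 1.0, 0.2])
p = softmax(logits)
print(entropy(p))                       # uncertainty of the prediction on an unlabeled sample

# (1) Pseudo-labeling: accept the class with the largest probability as the true label.
pseudo_label = int(np.argmax(p))

# (2) Sharpening: suppress the smaller probabilities with a temperature < 1.
T = 0.5
sharpened = p ** (1 / T) / np.sum(p ** (1 / T))
print(entropy(sharpened) < entropy(p))  # True: the sharpened prediction has lower entropy
```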

Many machine learning applications encounter situations where model providers are required to further refine a previously trained model to satisfy the specific needs of local users. This problem reduces to the standard model-tuning paradigm if the target data can be fed to the model. However, it is rather difficult in a wide range of practical cases where the target data is not shared with model providers, although some evaluations of the model are commonly accessible.

Many real-world problems deal with collections of data with missing values, e.g., RNA sequential analytics, image completion, video processing, etc.

Distribution Matching for Machine Teaching. IEEE Trans Neural Netw Learn Syst, September 2024.

Machine teaching is an inverse problem of machine learning that aims to steer the student toward its target hypothesis, where the teacher already knows the student's learning parameters. Previous studies on machine teaching focused on balancing the teaching risk and cost to find the best teaching examples derived from the student model. Such optimization is generally ineffective when the student does not disclose any cue about its learning parameters.

Learning with noisy labels has become imperative in the Big Data era, as it saves the expensive human labor of accurate annotation. Previous noise-transition-based methods have achieved theoretically grounded performance under the Class-Conditional Noise model (CCN). However, these approaches build upon an ideal but often impractical anchor set that is assumed to be available for pre-estimating the noise transition.
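
To make the CCN setting concrete (a generic illustration, not the article's method), the noise transition is a matrix T with T[i, j] = P(noisy label = j | true label = i); anchor points are clean samples used to pre-estimate T, and "forward correction" maps the model's clean-class posterior through T during training. The numbers below are illustrative.

```python
# Class-conditional noise transition matrix and forward loss correction.
import numpy as np

T = np.array([
    [0.8, 0.1, 0.1],    # true class 0 is flipped to class 1 or 2 with prob. 0.1 each
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
])

def forward_corrected_nll(clean_probs, noisy_label):
    # Map the model's clean-class posterior through T, then score the
    # observed (noisy) label under the corrected distribution.
    noisy_probs = clean_probs @ T
    return -np.log(noisy_probs[noisy_label] + 1e-12)

clean_probs = np.array([0.9, 0.05, 0.05])   # model believes the true class is 0
print(forward_corrected_nll(clean_probs, noisy_label=1))
```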

Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct the contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results are achieved, these methods are rather blind to a wealth of prior information: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented graph gradually decreases, and 2) the discrimination between all nodes within each augmented view gradually increases.

A liquid state machine (LSM) is a biologically plausible model of a cortical microcircuit. It consists of a random, sparse reservoir of recurrently connected spiking neurons with fixed synapses and a trainable readout layer. The LSM exhibits low training complexity and enables backpropagation-free learning in a powerful yet simple computing paradigm.
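
The sketch below illustrates the LSM idea of a fixed random reservoir with only the readout trained; for brevity it uses rate-based units instead of spiking neurons, ridge regression for the readout, and assumed sizes and constants.

```python
# Fixed random recurrent reservoir driven by an input stream, with a
# linear readout trained in closed form; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 3, 200, 500

W_in = rng.normal(scale=0.5, size=(n_res, n_in))                   # fixed input weights
mask = rng.random((n_res, n_res)) < 0.1                            # sparse connectivity
W_res = rng.normal(scale=1.0 / np.sqrt(0.1 * n_res), size=(n_res, n_res)) * mask

def run_reservoir(inputs, leak=0.3):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)  # leaky reservoir update
        states.append(x.copy())
    return np.array(states)

inputs = rng.normal(size=(T, n_in))
targets = np.roll(inputs[:, 0], 1)                                 # toy task: recall the previous input

states = run_reservoir(inputs)
ridge = 1e-3
# Train only the readout: closed-form ridge regression on reservoir states.
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ targets)
print(np.mean((states @ W_out - targets) ** 2))
```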

Imitation learning (IL) aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors. It promotes interdisciplinary communication and real-world automation applications. However, the process of replicating behaviors still exhibits various problems: performance is highly dependent on demonstration quality, and most trained agents perform well only in task-specific environments.
