Publications by authors named "Lukasiewicz T"

A common problem in deep-learning-based low-level vision for medical images is that most research relies on single-task learning (STL), addressing either low resolution or high noise, but not both. Our motivation is to design a model that performs super-resolution (SR) and denoising (DN) simultaneously, to cope with the realistic setting in which medical images suffer from both low resolution and high noise. By improving an existing single-image super-resolution (SISR) network and introducing the idea of multi-task learning (MTL), we propose RIRGAN, an end-to-end lightweight MTL generative adversarial network (GAN) that uses residual-in-residual blocks (RIR-Blocks) for feature extraction and can accomplish the SR and DN tasks concurrently.
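The residual-in-residual structure referenced above is a standard building block in super-resolution networks; the abstract does not specify RIRGAN's exact architecture, so the following PyTorch sketch is only a generic illustration, with layer counts and channel sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-ReLU-Conv with a local (short) skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class RIRBlock(nn.Module):
    """Residual-in-residual: a stack of residual blocks wrapped in an outer (long) skip."""
    def __init__(self, channels: int, n_blocks: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])

    def forward(self, x):
        return x + self.blocks(x)  # long skip over the short skips

# Example: pass a 64-channel feature map through one RIR block (sizes are illustrative).
features = RIRBlock(channels=64)(torch.randn(1, 64, 48, 48))
```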


Self-supervised learning aims to learn transferable representations from unlabeled data for downstream tasks. Inspired by masked language modeling in natural language processing, masked image modeling (MIM) has achieved some success in computer vision, but its effectiveness on medical images remains unsatisfactory. This is mainly because medical images exhibit higher redundancy and smaller discriminative regions than natural images.
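Masked image modeling itself follows a simple recipe: hide a random subset of image patches and train the network to reconstruct them. The sketch below is a minimal, generic version of that recipe, not the authors' method; the patch size, mask ratio, and encoder-decoder module are placeholders, and image sides are assumed divisible by the patch size.

```python
import torch
import torch.nn as nn

def random_patch_mask(images, patch=16, mask_ratio=0.75):
    """Zero out a random subset of non-overlapping patches; return masked images and the mask."""
    b, c, h, w = images.shape                             # h, w assumed divisible by patch
    keep = torch.rand(b, h // patch, w // patch) > mask_ratio   # True = visible patch
    mask = keep.repeat_interleave(patch, 1).repeat_interleave(patch, 2)
    return images * mask.unsqueeze(1), mask.unsqueeze(1)

def mim_loss(encoder_decoder: nn.Module, images):
    """Reconstruction loss computed only on the masked (hidden) regions."""
    masked, mask = random_patch_mask(images)
    recon = encoder_decoder(masked)
    return ((recon - images) ** 2 * (~mask)).sum() / (~mask).sum().clamp(min=1)
```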


There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants. However, the standard methodology of evaluating LLMs relies on static pairs of inputs and outputs; this is insufficient for making an informed decision about which LLMs are best to use in an interactive setting, and how that varies by setting. Static assessment therefore limits how we understand language model capabilities.


For both humans and machines, the essence of learning is to pinpoint which components of the information-processing pipeline are responsible for an error in the output, a challenge known as 'credit assignment'. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle of credit assignment called 'prospective configuration'.


Although existing deep-reinforcement-learning-based approaches have achieved some success in image augmentation tasks, their effectiveness and adequacy for data augmentation in intelligent medical image analysis remain unsatisfactory. Therefore, we propose a novel Adaptive Sequence-length based Deep Reinforcement Learning (ASDRL) model for Automatic Data Augmentation (AutoAug) in intelligent medical image analysis. The improvements of ASDRL-AutoAug are two-fold: (i) to remedy the problem that some augmented images are invalid, we construct a more accurate reward function based on different variations of the augmentation trajectories.

Article Synopsis
  • Recent advances in automatic medical report generation use deep learning, combining CNNs for image encoding and RNNs for report decoding, but face issues like incomplete optimization, simplistic attention mechanisms, and repeated output generation.
  • The article introduces HReMRG-MR, a new method that employs a hybrid reward system, m-linear attention for improved feature interaction, and a repetition penalty to enhance report accuracy and detail (a generic sketch of such a penalty follows this synopsis).
  • Experimental results confirm that HReMRG-MR outperforms existing methods in efficiency and quality while demonstrating that its components effectively address previous limitations, including significantly reducing weight search time without sacrificing performance.
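Repetition penalties of this kind are a standard decoding-time device in text generation rather than anything specific to HReMRG-MR. A minimal sketch, assuming logits over a vocabulary and a running list of already-generated token ids; the penalty value is illustrative.

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor, generated_ids: list[int], penalty: float = 1.2):
    """Discourage tokens that already appear in the generated report.

    Uses the common convention: divide positive logits by `penalty`,
    multiply negative logits by it.
    """
    for tok in set(generated_ids):
        score = logits[tok]
        logits[tok] = score / penalty if score > 0 else score * penalty
    return logits
```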

Data augmentation is widely applied to medical image analysis tasks with limited datasets, imbalanced classes, and insufficient annotations. However, traditional augmentation techniques cannot supply extra information, leaving diagnostic performance unsatisfactory. GAN-based generative methods have therefore been proposed to obtain additional useful information and realize more effective data augmentation, but existing generative data augmentation techniques mainly encounter two problems: (i) current generative data augmentation lacks the capability to use cross-domain differential information to extend limited datasets.


Existing self-supervised medical image segmentation usually encounters the domain shift problem (i.e., the input distribution of pre-training is different from that of fine-tuning) and/or the multimodality problem.

Article Synopsis
  • Feature Pyramid Networks (FPNs) are important in deep detection models for multi-scale feature utilization, but they face issues such as insufficient feature fusion and equal weighting of features.
  • A new model called Enhanced Feature Pyramid Networks (EFPNs) addresses these problems by adding a top-down pyramid for deeper information fusion, developing a scale enhancement module for diverse feature generation, and introducing a feature fusion attention module for assigning importance to features (a plain FPN fusion sketch follows this synopsis).
  • Experiments on two medical image datasets show that EFPNs significantly improve detection performance compared to existing models, suggesting their effectiveness can extend to other deep learning frameworks.
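For context, a plain Feature Pyramid Network fuses multi-scale backbone features with 1x1 lateral convolutions and a top-down upsampling pathway. EFPN's additional modules are not detailed in the synopsis, so the sketch below shows only this standard fusion step, with channel counts as assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    """Standard top-down FPN fusion over backbone features (finest first, coarsest last)."""
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down pathway: upsample the coarser map and add it to the finer lateral.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(x) for s, x in zip(self.smooth, laterals)]
```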
Article Synopsis
  • Deep learning has significantly advanced AI through artificial neural networks, which mimic brain neuronal networks, leading to diverse applications and mutual benefits in AI and neuroscience.
  • The widely used backpropagation algorithm faces criticism for its lack of biological realism, prompting exploration of predictive coding methods that offer more biologically plausible learning approaches.
  • Recent research introduced a novel method, zero-divergence inference learning (Z-IL), that achieves an exact implementation of backpropagation on multilayer perceptrons, bridging the gap between neuroscience and deep learning and providing a new, efficient approach to parameter updates in neural networks (a generic predictive-coding sketch follows this synopsis).
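Predictive coding in general (independent of Z-IL's specific scheduling) trains a network by letting neuron-level value nodes relax to minimize local prediction errors and then applying purely local weight updates. A minimal numpy sketch under common assumptions: tanh activations, column-vector layers, squared prediction errors; this is not the paper's Z-IL algorithm.

```python
import numpy as np

def f(x):  return np.tanh(x)
def df(x): return 1.0 - np.tanh(x) ** 2

def pc_train_step(W, x_in, y, gamma=0.1, alpha=0.01, T=30):
    """One generic predictive-coding training step (inputs/targets as column vectors).

    W[l] maps layer l to layer l+1; input and target are clamped, and hidden
    value nodes relax to minimise F = 0.5 * sum_l ||x[l] - W[l-1] f(x[l-1])||^2.
    """
    L = len(W)
    x = [x_in]                               # initialise value nodes with a feedforward sweep
    for l in range(L):
        x.append(W[l] @ f(x[l]))
    x[L] = y                                 # clamp the output layer to the target

    for _ in range(T):                       # inference: relax hidden value nodes
        eps = [None] + [x[l] - W[l - 1] @ f(x[l - 1]) for l in range(1, L + 1)]
        for l in range(1, L):
            x[l] += gamma * (-eps[l] + df(x[l]) * (W[l].T @ eps[l + 1]))

    eps = [None] + [x[l] - W[l - 1] @ f(x[l - 1]) for l in range(1, L + 1)]
    for l in range(1, L + 1):                # local, Hebbian-like weight updates
        W[l - 1] += alpha * eps[l] @ f(x[l - 1]).T
    return W
```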

Although existing deep supervised solutions have achieved great successes in medical image segmentation, they have the following shortcomings: (i) the semantic difference problem: since the intermediate masks and predictions in deep supervised baselines are obtained by very different convolution or deconvolution processes, they usually contain semantics of different depths, which hinders the models' learning capabilities; (ii) the low learning efficiency problem: additional supervision signals inevitably make the training of the models more time-consuming. Therefore, in this work, we first propose two deep supervised learning strategies, U-Net-Deep and U-Net-Auto, to overcome the semantic difference problem. Then, to resolve the low learning efficiency problem, building on these two strategies, we further propose a new deep supervised segmentation model, called μ-Net, which achieves not only effective but also efficient deep supervised medical image segmentation by introducing a tied-weight decoder to generate pseudo-labels with more diverse information and to speed up convergence during training.
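Deep supervision in its generic form (not μ-Net specifically) attaches auxiliary prediction heads to intermediate decoder stages and adds their losses to the main loss. A brief PyTorch-style sketch, with the decoder features, heads, and loss weight as placeholders:

```python
import torch.nn.functional as F

def deep_supervised_loss(decoder_feats, target, aux_heads, main_head, aux_weight=0.4):
    """Main segmentation loss plus weighted auxiliary losses on intermediate decoder features.

    decoder_feats: feature maps from shallow to deep decoder stages (deepest = final).
    aux_heads / main_head: 1x1-conv prediction heads (placeholder modules).
    target: (B, H, W) integer label map.
    """
    def loss_at(head, feats):
        logits = F.interpolate(head(feats), size=target.shape[-2:],
                               mode="bilinear", align_corners=False)
        return F.cross_entropy(logits, target)

    loss = loss_at(main_head, decoder_feats[-1])
    for feats, head in zip(decoder_feats[:-1], aux_heads):
        loss = loss + aux_weight * loss_at(head, feats)
    return loss
```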


Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections.
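As a concrete reminder of those two steps, here is a two-layer network written out by hand in numpy; the squared-error loss and tanh hidden units are assumptions for illustration only.

```python
import numpy as np

def forward_backward(W1, W2, x, y, lr=0.01):
    """One BP step: forward pass to the prediction, backward pass for the gradients."""
    # Forward pass
    h = np.tanh(W1 @ x)                      # hidden activations
    y_hat = W2 @ h                           # prediction
    # Backward pass (squared-error loss 0.5 * ||y_hat - y||^2)
    delta2 = y_hat - y                       # error at the output
    delta1 = (W2.T @ delta2) * (1 - h ** 2)  # error propagated to the hidden layer
    W2 -= lr * delta2 @ h.T
    W1 -= lr * delta1 @ x.T
    return W1, W2
```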

Article Synopsis
  • The hippocampus plays a critical role in associative memory tasks, with recent theories linking its predictive coding mechanisms to these memory processes.
  • A new computational model based on hierarchical predictive networks was developed to better reflect the recurrent connections found in the CA3 region of the hippocampus, which are important for associative memory.
  • The proposed models learn covariance information implicitly and are numerically stable, offering a more biologically accurate framework for understanding hippocampal memory formation and its interactions with the neocortex.

Automatic medical image detection aims to use artificial intelligence techniques to detect lesions in medical images accurately and efficiently. It is one of the most important tasks in computer-aided diagnosis (CAD) systems and can be embedded into portable imaging devices for intelligent point-of-care (PoC) diagnostics. Feature Pyramid Network (FPN)-based models are widely used deep-learning-based solutions for automatic medical image detection. However, FPN-based medical lesion detection models have two shortcomings: the object position offset problem and the degradation problem of IoU-based losses.
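The degradation of IoU-based losses is a generic, well-documented issue: when a predicted box and the ground-truth box do not overlap, the IoU is exactly zero regardless of how far apart they are, so a 1 - IoU loss provides no useful gradient. A small illustration with axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Non-overlapping boxes: IoU is flat at zero whether the prediction is slightly
# off or far away, so an IoU-based loss cannot tell the two cases apart.
print(iou((0, 0, 2, 2), (3, 3, 5, 5)))      # 0.0
print(iou((0, 0, 2, 2), (30, 30, 50, 50)))  # 0.0
```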

Article Synopsis
  • A variety of neural network models for associative memory, including classical Hopfield networks and modern continuous Hopfield networks, are discussed, emphasizing their theoretical connections.
  • A new general framework is proposed that outlines the operation of these memory networks through a sequence of operations, extending previous mathematical models and deriving a unified energy function.
  • The study empirically explores the effectiveness of different similarity functions in associative memory models, finding that using Euclidean or Manhattan distances significantly enhances retrieval capabilities and memory capacity compared to traditional dot-product measures (a toy retrieval sketch follows this synopsis).
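To make the role of the similarity function concrete, here is a toy softmax-based retrieval step over a matrix of stored patterns, in the spirit of modern Hopfield-style memories. The paper's unified framework and update rules are not reproduced here; the temperature beta and the similarity options are illustrative.

```python
import numpy as np

def retrieve(memories, query, similarity="euclidean", beta=5.0):
    """Return a similarity-weighted combination of stored patterns.

    memories: (N, d) array of stored patterns; query: (d,) possibly corrupted pattern.
    """
    if similarity == "dot":
        scores = memories @ query
    elif similarity == "euclidean":
        scores = -np.linalg.norm(memories - query, axis=1)
    elif similarity == "manhattan":
        scores = -np.abs(memories - query).sum(axis=1)
    else:
        raise ValueError(similarity)
    weights = np.exp(beta * scores - np.max(beta * scores))  # stable softmax
    weights /= weights.sum()
    return weights @ memories        # soft retrieval of the closest stored pattern
```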

Aims: Deep learning has dominated predictive modelling across different fields, but in medicine it has been met with mixed reception. In clinical practice, simple statistical models and risk scores continue to inform cardiovascular disease risk predictions. This is due in part to the knowledge gap about how deep learning models perform in practice when they are subject to dynamic data shifts, a key criterion that common internal validation procedures do not address.


Pre-processing is widely applied in medical image analysis to remove interfering information. However, existing pre-processing solutions mainly encounter two problems: (i) they rely heavily on the assistance of clinical experts, making it hard to deploy intelligent CAD systems quickly; (ii) due to personnel and information barriers, it is difficult for medical institutions to apply the same pre-processing operations, so a deep model that performs well at one medical institution may fail to achieve similar performance on the same task at other institutions. To overcome these problems, we propose a deep-reinforcement-learning-based, task-oriented, homogenized automatic pre-processing (DRL-HAPre) framework.


Electronic health records (EHR) represent a holistic overview of patients' trajectories. Their increasing availability has fueled new hopes to leverage them and develop accurate risk prediction models for a wide range of diseases. Given the complex interrelationships of medical records and patient outcomes, deep learning models have shown clear merits in achieving this goal.


Semi-supervised learning has great potential for medical image segmentation tasks with few labeled data, but most existing methods consider only single-modal data. The complementary characteristics of multi-modal data can improve the performance of semi-supervised segmentation for each image modality. However, a shortcoming of most existing multi-modal solutions is that, because the processing models for the different modalities are highly coupled, multi-modal data are required not only during training but also at inference, which limits their usage in clinical practice.

Article Synopsis
  • Incorporating repeated vital and lab measurements can enhance mortality risk prediction and help pinpoint critical risk factors for COVID-19 patients in hospitals.
  • An observational study analyzed data from 3,699 COVID-19 patients admitted to five Mount Sinai Health System hospitals between March and June 2020, comparing survivors to non-survivors.
  • The study used a sophisticated model called BEHRTDAY, which achieved high accuracy (precision score of 0.96 and area under the curve of 0.92) in predicting next-day mortality by evaluating the full history of patient vitals and lab results.

Machine learning can be used to identify relevant trajectory-shape features for improved predictive risk modeling, which can help inform decisions for individualized patient management in intensive care during COVID-19 outbreaks. We present explainable random forests to dynamically predict next-day mortality risk in COVID-19 positive and negative patients admitted to the Mount Sinai Health System between March 1st and June 8th, 2020, using patient time-series data of vitals, blood, and other laboratory measurements from the previous 7 days. Three different models were assessed using time series with: 1) the most recent patient measurements, 2) summary statistics of the trajectories (min/max/median/first/last/count), and 3) coefficients of cubic splines fitted to the trajectories.
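The second feature set, trajectory summary statistics, is the easiest to make concrete. A brief sketch of how such features could be derived from a 7-day vitals table and fed to a random forest; the column names and the use of pandas/scikit-learn are assumptions for illustration, not details from the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(vitals: pd.DataFrame) -> pd.DataFrame:
    """Summarise each patient's last-7-day trajectory per measurement.

    vitals columns (assumed): patient_id, measurement, value.
    """
    stats = (vitals.groupby(["patient_id", "measurement"])["value"]
                   .agg(["min", "max", "median", "first", "last", "count"]))
    return stats.unstack("measurement")     # one row per patient, one column per stat

# Hypothetical usage:
# features = trajectory_features(vitals_last_7_days)
# model = RandomForestClassifier(n_estimators=500, class_weight="balanced")
# model.fit(features.fillna(features.median()), next_day_mortality_labels)
```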


Associative memories in the brain receive and store patterns of activity registered by the sensory neurons, and are able to retrieve them when necessary. Due to their importance in human intelligence, computational models of associative memories have been developed for several decades now. In this paper, we present a novel neural model for realizing associative memories, which is based on a hierarchical generative network that receives external stimuli via sensory neurons.

Article Synopsis
  • Predicting complex chronic conditions like heart failure (HF) using deep learning can improve accuracy but lacks explainability, which limits practical use in medicine.
  • The study developed a Transformer-based risk model utilizing extensive electronic health records from over 100,000 patients in the U.K. to predict new cases of HF in six months.
  • The findings indicate that while the model achieved high predictive performance, it also identified important risk factors, some consistent with existing research and others presenting new insights for better risk assessment in medical practice.
Article Synopsis
  • The use of deep learning in clinical decision-making is hindered by the challenge of quantifying confidence in model predictions, with current methods like deep Bayesian neural networks and sparse Gaussian processes facing limitations.
  • A new approach that combines deep Bayesian learning with deep kernel learning aims to enhance uncertainty estimation by addressing the weaknesses of each method, particularly in terms of model interpretability and incorporating uncertainty from raw data.
  • Experiments show this combined method outperforms existing techniques in capturing uncertainty and improving accuracy in predicting health conditions using electronic medical records, while also providing better insights for risk factor analysis.