A point-of-care, non-invasive test for coronary artery disease (CAD), termed POC-CAD, has been previously developed and validated. The test requires the simultaneous acquisition of orthogonal voltage gradient (OVG) and photoplethysmogram signals; this acquisition methodology is the primary subject of this paper. Acquiring the OVG, a biopotential signal, requires placing electrodes on the prepared skin of the patient's thorax (arranged similarly to the Frank lead configuration, comprising six bipolar electrodes and a reference electrode) and a hemodynamic sensor on the finger (using a standard transmission modality).
Many clinical studies have shown wide performance variation among tests used to identify coronary artery disease (CAD). Coronary computed tomography angiography (CCTA) has been identified as an effective rule-out test but is not widely available in the USA, particularly in rural areas. Patients in rural areas are underserved by the healthcare system compared to those in urban areas, making them a priority population for highly accessible diagnostics.
Artificial intelligence, particularly machine learning, has gained prominence in medical research due to its potential to develop non-invasive diagnostics. Pulmonary hypertension presents a diagnostic challenge due to its heterogeneous nature and similarity in symptoms to other cardiovascular conditions. Here, we describe the development of a supervised machine learning model using non-invasive signals (orthogonal voltage gradient and photoplethysmographic) and a hand-crafted library of 3298 features.
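The hand-crafted feature-library approach described above can be illustrated with a minimal sketch: a few descriptive statistics computed per signal channel, standing in for the full 3298-feature library. The feature names and the particular statistics below are illustrative assumptions, not the actual library.

```python
import math

def extract_features(ovg, ppg):
    """Toy stand-in for a hand-crafted feature library: simple
    descriptive statistics per signal (feature names hypothetical)."""
    def stats(x, prefix):
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n  # population variance
        return {
            f"{prefix}_mean": mean,
            f"{prefix}_std": math.sqrt(var),
            f"{prefix}_range": max(x) - min(x),
        }
    feats = {}
    feats.update(stats(ovg, "ovg"))  # orthogonal voltage gradient channel
    feats.update(stats(ppg, "ppg"))  # photoplethysmogram channel
    return feats
```

A feature vector like this would then be fed to a supervised classifier; the real library presumably draws on far richer morphological and spectral descriptors.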
The current standard of care for coronary artery disease (CAD) requires the administration of radioactive tracers or contrast-enhancement dyes, radiation exposure, and stress, and it may take days to weeks for referral to gold-standard cardiac catheterization. The CAD diagnostic pathway would greatly benefit from a test that enables the physician to rule out CAD at the point of care, allowing other diagnoses to be explored more rapidly. We sought to develop a machine learning test to assess for CAD with a rule-out profile, using an easy-to-acquire signal (without stress or radiation) at the point of care.
Background: Phase space analysis is a mechanical-systems approach that provides a large-scale data representation of an object in 3-dimensional space. Whether such techniques can be applied to predict left ventricular pressures non-invasively at the point of care is unknown.
Objective: This study prospectively validated a phase space machine-learned approach based on a novel electro-mechanical pulse wave method of data collection through orthogonal voltage gradient (OVG) and photoplethysmography (PPG) for the prediction of elevated left ventricular end diastolic pressure (LVEDP).
Developers proposing new machine learning for health (ML4H) tools often pledge to match or even surpass the performance of existing tools, yet the reality is usually more complicated. Reliable deployment of ML4H to the real world is challenging, as examples from diabetic retinopathy and COVID-19 screening show. We envision an integrated framework of algorithm auditing and quality control that provides a path towards the effective and reliable application of ML systems in healthcare.
Background: Artificial intelligence (AI) techniques are increasingly applied to cardiovascular (CV) medicine in arenas ranging from genomics to cardiac imaging analysis. Cardiac Phase Space Tomography Analysis (cPSTA), employing machine-learned linear models from an elastic net method optimized by a genetic algorithm, analyzes thoracic phase signals to identify unique mathematical and tomographic features associated with the presence of flow-limiting coronary artery disease (CAD). This novel approach does not require radiation, contrast media, exercise, or pharmacological stress.
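The two ingredients named above can be sketched briefly: the standard elastic net penalty (a weighted blend of L1 and L2 regularization) and a toy genetic algorithm that searches over its hyperparameters. This is a generic illustration of the combination, not cPSTA's actual optimizer; the population size, mutation scale, and fitness function are assumptions.

```python
import random

def elastic_net_penalty(w, alpha, l1_ratio):
    """Standard elastic net penalty: l1_ratio=1 is pure lasso,
    l1_ratio=0 is pure ridge."""
    l1 = sum(abs(v) for v in w)
    l2 = sum(v * v for v in w)
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2)

def ga_tune(fitness, pop_size=8, generations=20, seed=0):
    """Minimal genetic algorithm over (alpha, l1_ratio) pairs:
    elitist selection of the fitter half plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.01, 1.0), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # lower fitness = better
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = [
            (max(1e-3, a + rng.gauss(0, 0.05)),              # mutate alpha
             min(1.0, max(0.0, r + rng.gauss(0, 0.05))))     # mutate l1_ratio
            for a, r in parents
        ]
        pop = parents + children
    return min(pop, key=fitness)
```

In practice the fitness would be cross-validated model error for a model trained with the candidate hyperparameters; here any callable on an (alpha, l1_ratio) pair works.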
As proteomic MS has increased in throughput, so has the demand to catalogue the increasing number of peptides and proteins observed by the community using this technique. As in other 'omics' fields, this brings obvious scientific benefits such as sharing of results and prevention of unnecessary repetition, but also provides technical insights, such as the ability to compare proteome coverage between different laboratories, or between different proteomic platforms. Journals are also moving towards mandating that proteomics data be submitted to public repositories upon publication.
As proteins within cells are spatially organized according to their role, knowledge about protein localization gives insight into protein function. Here, we describe the LOPIT technique (localization of organelle proteins by isotope tagging) developed for the simultaneous and confident determination of the steady-state distribution of hundreds of integral membrane proteins within organelles. The technique uses a partial membrane fractionation strategy in conjunction with quantitative proteomics.
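The core idea of assigning a protein to an organelle from its fractionation behaviour can be sketched as follows: compare the protein's abundance profile across fractions to the profiles of known organelle markers and assign it to the best-correlating organelle. The correlation measure, threshold, and marker names below are illustrative assumptions, not LOPIT's published parameters.

```python
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length profiles."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def assign_organelle(profile, markers, min_corr=0.9):
    """Assign a protein to the organelle whose marker profile its
    fractionation profile most resembles; below the threshold the
    protein is left unassigned rather than forced into a compartment."""
    best, r = max(((org, pearson(profile, m)) for org, m in markers.items()),
                  key=lambda t: t[1])
    return best if r >= min_corr else "unassigned"
```

Leaving ambiguous profiles unassigned mirrors the "confident determination" goal: a protein distributed across fractions unlike any marker should not be forced into a compartment.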
Proteomics based on tandem mass spectrometry is a powerful tool for identifying novel biomarkers and drug targets. Previously, a major bottleneck in high-throughput proteomics has been that the computational techniques needed to reliably identify proteins from proteomic data lagged behind the ability to collect the immense quantity of data generated. This is no longer the case, as fully automated pipelines for peptide and protein identification exist, and these are publicly and privately accessible.
This paper introduces the genome annotating proteomic pipeline (GAPP), a fully automated, publicly available software pipeline for the identification of peptides and proteins from human proteomic tandem mass spectrometry data. The pipeline takes as its input a series of MS/MS peak lists from a given experimental sample and produces a series of database entries corresponding to the peptides observed within the sample, along with related confidence scores. The pipeline is capable of finding any expected peptides, including those that cross intron-exon boundaries and those arising from single nucleotide polymorphisms (SNPs), alternative splicing, and post-translational modifications (PTMs).
A challenging task in the study of the secretory pathway is the identification and localization of new proteins, which increases our understanding of the functions of different organelles. Previous proteomic studies of the endomembrane system have been hindered by contaminating proteins, making it impossible to assign proteins to organelles. Here we have used the localization of organelle proteins by isotope tagging (LOPIT) technique in conjunction with isotope tags for relative and absolute quantitation and 2D liquid chromatography for the simultaneous assignment of proteins to multiple subcellular compartments.
Background: iTRAQ technology for protein quantitation using mass spectrometry is a recent, powerful means of determining relative protein levels in up to four samples simultaneously. Although protein identification of samples generated using iTRAQ may be carried out using any current identification software, the quantitation calculations have been restricted to the ProQuant software supplied by Applied Biosystems. i-Tracker software has been developed to extract reporter ion peak ratios from non-centroided tandem MS peak lists in a format easily linked to the results of protein identification tools such as Mascot and Sequest.
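The ratio-extraction step can be sketched as: sum the peak intensity within a tolerance window around each 4-plex reporter ion m/z (non-centroided peak lists may contribute several data points per reporter) and express each channel relative to a reference channel. This is an illustration of the computation, not i-Tracker's code; the tolerance and reference-channel choice are assumptions.

```python
def reporter_ratios(peaks, reporters=(114.1, 115.1, 116.1, 117.1),
                    tol=0.2, ref=114.1):
    """peaks: list of (m/z, intensity) pairs from one MS/MS spectrum.
    Returns each reporter channel's intensity relative to the reference."""
    intensity = {}
    for mz_r in reporters:
        # Non-centroided data: several points may fall in one window.
        intensity[mz_r] = sum(i for mz, i in peaks if abs(mz - mz_r) <= tol)
    ref_i = intensity[ref]
    return {mz_r: (intensity[mz_r] / ref_i if ref_i else float("nan"))
            for mz_r in reporters}
```

Ratios keyed by reporter m/z can then be joined to peptide identifications from tools such as Mascot or Sequest via the spectrum identifier.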
Perhaps the greatest difficulty in interpreting large sets of protein identifications derived from mass spectrometric methods is whether or not to trust the results. For such experiments, the level of confidence in each protein identification made needs to be far greater than the often-used 95% significance threshold to avoid identifying many false positives. To provide higher-confidence results, we have developed an innovative scoring strategy coupling the recently published Average Peptide Score (APS) method with pre-filtering of peptide identifications using a simple peptide quality filter.
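The filter-then-average strategy can be sketched as: discard peptide identifications that fail a quality pre-filter, average the surviving peptide scores per protein, and keep only proteins whose average exceeds an APS cutoff. Both threshold values below are hypothetical, chosen only to make the sketch concrete.

```python
def average_peptide_score(peptide_ids, quality_min=20.0, aps_min=35.0):
    """peptide_ids: list of (protein, peptide_score) pairs.
    Returns {protein: APS} for proteins passing both filters."""
    by_protein = {}
    for protein, score in peptide_ids:
        if score >= quality_min:  # pre-filter weak peptide identifications
            by_protein.setdefault(protein, []).append(score)
    # Average the surviving peptide scores; keep proteins above the cutoff.
    return {p: sum(s) / len(s) for p, s in by_protein.items()
            if sum(s) / len(s) >= aps_min}
```

The pre-filter matters because a protein matched by many weak peptides can otherwise accumulate a respectable average from identifications that are individually untrustworthy.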
Current proteomics experiments can generate vast quantities of data very quickly, but this has not been matched by data analysis capabilities. Although there have been a number of recent reviews covering various aspects of peptide and protein identification methods using MS, comparisons of which methods are either the most appropriate for, or the most effective at, their proposed tasks are not readily available. As the need for high-throughput, automated peptide and protein identification systems increases, the creators of such pipelines need to be able to choose algorithms that are going to perform well both in terms of accuracy and computational efficiency.
We demonstrate a new approach to determining amino acid composition from peptides fragmented by tandem mass spectrometry, using both experimental and simulated data. The approach has been developed to be used as a search-space filter in a protein identification pipeline, with the aim of performance beyond what could be attained using immonium ion information alone. Three automated methods have been developed and tested: the first is based on a simple peak traversal, in which all intense ion peaks are treated as either b- or y-ions using a wide mass tolerance; the second uses a much narrower tolerance and does not transform ion peaks to the complementary type; and the third, the unique fragments method, allows the b- or y-ion type to be inferred and corroborated using a scan of the other ions present in each peptide spectrum.
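The simplest case of the peak-traversal idea can be sketched as follows: the mass difference between consecutive b-ions equals the mass of the intervening residue, so each gap is matched against a table of residue masses within a tolerance. Gaps matching no residue, or more than one, are left unassigned. The tolerance and the abridged mass table below are illustrative, not the paper's parameters.

```python
RESIDUE_MASS = {  # monoisotopic residue masses (Da), abridged
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "D": 115.02694, "E": 129.04259,
    "K": 128.09496, "F": 147.06841,
}

def residues_from_b_ions(b_ions, tol=0.02):
    """Infer residues from mass differences between consecutive b-ions
    (sorted ascending). Ambiguous or unmatched gaps yield None."""
    out = []
    for lo, hi in zip(b_ions, b_ions[1:]):
        delta = hi - lo
        match = [aa for aa, m in RESIDUE_MASS.items()
                 if abs(m - delta) <= tol]
        out.append(match[0] if len(match) == 1 else None)
    return out
```

Even a partial composition recovered this way can prune the peptide search space before database matching, which is the filtering role described above. Note that isobaric pairs such as leucine/isoleucine cannot be distinguished by mass alone.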