Publications by authors named "N D Hagan"

Neuroinflammation in the central nervous system (CNS), driven largely by resident phagocytes, has been proposed as a significant contributor to disability accumulation in multiple sclerosis (MS) but has not been addressed therapeutically. Bruton's tyrosine kinase (BTK) is expressed in both B-lymphocytes and innate immune cells, including microglia, where its role is poorly understood. BTK inhibition may provide therapeutic benefit within the CNS by targeting adaptive and innate immunity-mediated disease progression in MS.


Data volume is one of the fastest-growing assets of most real-world applications. This growth increases the rate of human errors such as duplicated records, misspellings, and erroneous transpositions, among other data quality issues. Entity Resolution is an ETL process that aims to resolve these inconsistencies by identifying records that refer to the same real-world objects.
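The core idea above can be sketched in a few lines. This is a generic illustration of entity resolution, not the authors' method: records are normalized, scored with a string-similarity ratio, and greedily grouped; the `resolve` function, its threshold, and the sample records are all illustrative assumptions.

```python
# Illustrative entity-resolution sketch (not the paper's algorithm):
# normalize records, score pairwise similarity, group likely duplicates.
from difflib import SequenceMatcher

def normalize(record: str) -> str:
    """Lowercase and collapse whitespace to remove superficial mismatches."""
    return " ".join(record.lower().split())

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; higher means the records more likely
    refer to the same real-world entity."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def resolve(records: list[str], threshold: float = 0.85) -> list[list[str]]:
    """Greedy clustering: each record joins the first existing cluster
    whose representative it matches, else it starts a new cluster."""
    clusters: list[list[str]] = []
    for rec in records:
        for cluster in clusters:
            if similarity(rec, cluster[0]) >= threshold:
                cluster.append(rec)
                break
        else:
            clusters.append([rec])
    return clusters

# "John  Smith" and "john smith" normalize identically and cluster together.
clusters = resolve(["John  Smith", "john smith", "Jane Doe"])
```

Real systems replace the quadratic pairwise comparison with blocking or indexing, and use domain-specific matchers instead of a single generic ratio.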


Leukocytes migrate through the blood and extravasate into organs to surveil the host for infection or cancer. Recently, we demonstrated that intravenous (IV) anti-CD45.2 antibody labeling allowed for precise tracking of leukocyte migration.


Anticancer immunity is predicated on leukocyte migration into tumors. Once recruited, leukocytes undergo substantial reprogramming to adapt to the tumor microenvironment. A major challenge in the field is distinguishing recently recruited from resident leukocytes in tumors.

Article Synopsis
  • Traditional data curation relies heavily on human involvement, but as data volumes grow, there is a push toward automating these processes so they can handle large datasets efficiently and independently.
  • This paper discusses enhancing unsupervised entity resolution (ER), crucial for linking records that refer to the same real-world entities, by parallelizing it using Hadoop MapReduce to improve scalability.
  • The new system, called Hadoop Data Washing Machine (HDWM), overcomes limitations of existing implementations by eliminating reliance on shared memory and enabling efficient clustering of larger datasets, demonstrating promising results for scalability.
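The map-shuffle-reduce flow described in the synopsis can be sketched in a single process. This is a minimal illustration of MapReduce-style blocked entity resolution under stated assumptions; the function names, the first-letter blocking key, and the exact-match reducer are illustrative and do not reflect HDWM's actual implementation.

```python
# Minimal single-process sketch of MapReduce-style entity resolution.
# A Hadoop job would distribute these three phases across workers;
# names and blocking choices here are illustrative, not HDWM's API.
from collections import defaultdict

def map_phase(records):
    """Map: emit (blocking_key, record) pairs. The key (first letter
    of the normalized record) confines comparisons to small blocks."""
    for rec in records:
        yield rec.strip().lower()[0], rec

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, rec in pairs:
        groups[key].append(rec)
    return groups

def reduce_phase(groups):
    """Reduce: within each block, cluster records whose normalized forms
    coincide. A real system would use fuzzy matching; exact matching on
    the normalized string keeps the sketch short."""
    clusters = []
    for recs in groups.values():
        by_norm = defaultdict(list)
        for rec in recs:
            by_norm[" ".join(rec.lower().split())].append(rec)
        clusters.extend(by_norm.values())
    return clusters

records = ["Acme Corp", "acme  corp", "Beta LLC"]
clusters = reduce_phase(shuffle(map_phase(records)))
```

Because each phase only consumes key-value streams, no shared memory is needed between blocks, which is the property that lets the real system scale out across a cluster.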