This paper illustrates the results obtained by using pre-trained semantic segmentation deep learning models for the detection of archaeological sites within the Mesopotamian floodplains environment. The models were fine-tuned using openly available satellite imagery and vector shapes drawn from a large corpus of annotations (i.e., surveyed sites). A randomized test showed that the best model reaches a detection accuracy of approximately 80%. Integrating domain expertise was crucial for defining how to build the dataset and how to evaluate the predictions, since deciding whether a proposed mask counts as a correct prediction is highly subjective. Furthermore, even an inaccurate prediction can be useful when put into context and interpreted by a trained archaeologist. Building on these considerations, we close the paper with a vision for a human-AI collaboration workflow. Starting from an annotated dataset refined by the human expert, we obtain a model whose predictions can either be combined into a heatmap, to be overlaid on satellite and/or aerial imagery, or vectorized to make further analysis in GIS software easier and more automatic. In turn, archaeologists can analyze the predictions, organize their on-site surveys, and refine the dataset with new, corrected annotations.
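The final step of the workflow described above (a fine-tuned segmentation model whose per-pixel predictions are either accumulated into a heatmap or vectorized for GIS analysis) can be sketched in a few lines of Python. The sketch below is purely illustrative: the library choices (segmentation_models_pytorch, rasterio, geopandas), the checkpoint, and all file names are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: run a fine-tuned segmentation model on a satellite tile,
# then vectorize the predicted mask into polygons that can be loaded in a GIS.
import numpy as np
import torch
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape
import geopandas as gpd
import segmentation_models_pytorch as smp

# Pre-trained U-Net assumed fine-tuned on binary site/background annotations.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=3, classes=1)
model.load_state_dict(torch.load("site_segmentation_finetuned.pt"))  # hypothetical checkpoint
model.eval()

with rasterio.open("satellite_tile.tif") as src:            # hypothetical tile, dims divisible by 32
    image = src.read([1, 2, 3]).astype(np.float32) / 255.0  # (C, H, W), scaled to [0, 1]
    transform, crs = src.transform, src.crs

with torch.no_grad():
    logits = model(torch.from_numpy(image).unsqueeze(0))     # (1, 1, H, W)
    prob = torch.sigmoid(logits)[0, 0].numpy()               # per-pixel site probability

# The probability map can be accumulated across tiles into a heatmap overlay;
# here we instead threshold it and vectorize the mask for GIS analysis.
mask = (prob > 0.5).astype(np.uint8)
polygons = [shape(geom) for geom, value in shapes(mask, transform=transform) if value == 1]

gpd.GeoDataFrame(geometry=polygons, crs=crs).to_file("predicted_sites.gpkg", driver="GPKG")
```

The resulting GeoPackage of candidate site polygons could then be reviewed by archaeologists, compared against survey data, and fed back as corrected annotations, closing the human-AI loop described in the abstract.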
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10227033 | PMC |
| http://dx.doi.org/10.1038/s41598-023-36015-5 | DOI Listing |
Front Artif Intell
January 2025
Center for Mind/Brain Sciences, University of Trento, Trento, Italy.
The impressive performance of modern Large Language Models (LLMs) across a wide range of tasks, along with their often non-trivial errors, has garnered unprecedented attention regarding the potential of AI and its impact on everyday life. While considerable effort has been and continues to be dedicated to overcoming the limitations of current models, the potential and risks of human-LLM collaboration remain largely underexplored. In this perspective, we argue that enhancing the focus on human-LLM interaction should be a primary target for future LLM research.
Sci Rep
January 2025
Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
We aimed to develop and evaluate Explainable Artificial Intelligence (XAI) for fetal ultrasound using actionable concepts as feedback to end-users, using a prospective cross-center, multi-level approach. We developed, implemented, and tested a deep-learning model for fetal growth scans using both retrospective and prospective data. We used a modified Progressive Concept Bottleneck Model with pre-established clinical concepts as explanations (feedback on image optimization and presence of anatomical landmarks) as well as segmentations (outlining anatomical landmarks).
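As a rough illustration of the concept-bottleneck idea mentioned above (routing predictions through interpretable clinical concepts that can be shown to end-users as feedback), here is a minimal PyTorch sketch. It is a simplified assumption for illustration, not the authors' modified Progressive Concept Bottleneck Model; the backbone, concept count, and output dimension are placeholders.

```python
# Minimal, hypothetical concept-bottleneck sketch: a backbone predicts
# interpretable concepts (e.g. image-quality flags, landmark presence),
# and the final prediction is made from those concepts only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ConceptBottleneck(nn.Module):
    def __init__(self, num_concepts: int, num_outputs: int):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                 # expose 512-d features
        self.backbone = backbone
        self.concept_head = nn.Linear(512, num_concepts)       # concept logits (explanations)
        self.task_head = nn.Linear(num_concepts, num_outputs)  # prediction from concepts alone

    def forward(self, x):
        features = self.backbone(x)
        concepts = torch.sigmoid(self.concept_head(features))  # interpretable bottleneck
        output = self.task_head(concepts)
        return output, concepts                      # concepts returned as end-user feedback

model = ConceptBottleneck(num_concepts=10, num_outputs=1)
prediction, concepts = model(torch.randn(1, 3, 224, 224))
```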
Int J Low Extrem Wounds
January 2025
Environmental-Occupational Health Sciences and Non Communicable Diseases Research Centre, Research Institute for Health Sciences, Chiang Mai University, Chiang Mai, Thailand.
Artificial Intelligence (AI) is revolutionizing medical writing by enhancing the efficiency and precision of healthcare communication and health research. This review explores the transformative integration of AI in medical writing, highlighting its dual role of enhancing efficiency while maintaining the crucial elements of human expertise. AI technologies, including natural language processing and AI-driven literature review tools, have significantly advanced, facilitating rapid draft generation, literature summarization, and consistency in medical documentation.
J Allergy Clin Immunol Glob
February 2025
Big Data Department, Faculdade Israelita de Ciências da Saúde Albert Einstein, São Paulo, Brazil.
Background: The use of artificial intelligence (AI) in scientific writing is rapidly increasing, raising concerns about authorship identification, content quality, and writing efficiency.
Objectives: This study investigates the real-world impact of ChatGPT, a large language model, on those aspects in a simulated publication scenario.
Methods: Forty-eight individuals representing three levels of medical expertise (medical students, residents, and experts in allergy or dermatology) evaluated three blinded versions of an atopic dermatitis case report: one human-written (HUM), one AI-generated (AI), and one written in combination (COM).
Sci Rep
December 2024
ETH Zurich, Zurich, Switzerland.
Artificial intelligence (AI) provides considerable opportunities to assist human work. However, one crucial challenge of human-AI collaboration is that many AI algorithms operate in a black-box manner, in which how the AI arrives at its predictions remains opaque. This makes it difficult for humans to validate an AI prediction against their own domain knowledge.