Recent years have seen formidable advances in artificial intelligence. Many specialised systems now exist or are planned for scientific research, data analysis, translation, text production, and design, including tools for grammar checking and stylistic revision, plagiarism detection, and scientific review, alongside general-purpose AI systems for internet search and generative AI systems for text, images, video, and music. These systems promise to simplify many aspects of work. Blind trust in them, however, and uncritical, careless use of their output are dangerous: such systems have no inherent understanding of the content they process or generate, but only simulate understanding by reproducing statistical patterns extracted from their training data. This article discusses the potential and risks of using AI in scientific communication and explores the possible systemic consequences of its widespread adoption in this context.
DOI: http://dx.doi.org/10.1055/a-2418-5238