Large language models (LLMs) are sophisticated AI-driven models trained on vast corpora of natural language data. They are adept at generating responses that closely mimic human conversational patterns. One of the most notable examples is OpenAI's ChatGPT, which has been extensively used across diverse sectors.
Deep learning neural networks are often described as black boxes, as it is difficult to trace model outputs back to model inputs due to a lack of clarity over the internal mechanisms. This is even true for those neural networks designed to emulate mechanistic models, which simply learn a mapping between the inputs and outputs of mechanistic models, ignoring the underlying processes. Using a mechanistic model studying the pharmacological interaction between opioids and naloxone as a proof-of-concept example, we demonstrated that by reorganizing the neural networks' layers to mimic the structure of the mechanistic model, it is possible to achieve better training rates and prediction accuracy relative to the previously proposed black-box neural networks, while maintaining the interpretability of the mechanistic simulations.
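The idea of reorganizing layers to mirror a mechanistic model can be illustrated with a minimal sketch. This is not the paper's architecture: the stage names (dose/time to concentration to receptor occupancy to effect), block sizes, and weights below are all illustrative assumptions, showing only how sub-networks can be chained in the same order as the mechanistic stages rather than trained as one monolithic input-to-output mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def make_block(n_in, n_out):
    """One small dense block standing in for one mechanistic stage."""
    return {"W": rng.normal(0, 0.1, (n_in, n_out)),
            "b": np.zeros(n_out)}

def apply_block(block, x):
    return relu(x @ block["W"] + block["b"])

# Three stages chained in the same order as a hypothetical mechanistic
# model: pharmacokinetics -> receptor binding -> physiological effect.
pk_block = make_block(2, 8)        # dose/time -> concentration features
binding_block = make_block(8, 8)   # concentration -> occupancy features
effect_block = make_block(8, 1)    # occupancy -> effect readout

def structured_forward(x):
    """Forward pass through the mechanism-ordered sub-networks."""
    h = apply_block(pk_block, x)
    h = apply_block(binding_block, h)
    return apply_block(effect_block, h)

x = np.array([[10.0, 1.5]])  # e.g., a dose and a time after dose
y = structured_forward(x)
print(y.shape)  # (1, 1)
```

Because each sub-network corresponds to a named mechanistic stage, its intermediate activations can be inspected stage by stage, which is the interpretability advantage the abstract describes.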
In response to a surge of deaths from synthetic opioid overdoses, there have been increased efforts to distribute naloxone products in community settings. Prior research has assessed the effectiveness of naloxone in the hospital setting; however, it is challenging to assess naloxone dosing regimens in the community/first-responder setting, including reversal of respiratory depression effects of fentanyl and its derivatives (fentanyls). Here, we describe the development and validation of a mechanistic model that combines opioid mu receptor binding kinetics, opioid agonist and antagonist pharmacokinetics, and human respiratory and circulatory physiology, to evaluate naloxone dosing to reverse respiratory depression.
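One building block of such a model is competitive binding of an agonist and an antagonist at the mu receptor. The sketch below integrates simple occupancy kinetics with an Euler step; the rate constants and concentrations are illustrative assumptions, not fitted values from the study.

```python
def simulate_occupancy(c_agonist, c_antagonist,
                       kon_a=1.0, koff_a=0.1,
                       kon_n=2.0, koff_n=0.5,
                       dt=0.01, t_end=50.0):
    """Return fractions of receptors bound by agonist (ra) and
    antagonist (rn) after integrating competitive binding kinetics.
    All rate constants here are illustrative, not fitted values."""
    ra = rn = 0.0
    for _ in range(int(t_end / dt)):
        free = 1.0 - ra - rn  # fraction of unbound receptors
        d_ra = kon_a * c_agonist * free - koff_a * ra
        d_rn = kon_n * c_antagonist * free - koff_n * rn
        ra += d_ra * dt
        rn += d_rn * dt
    return ra, rn

# Adding an antagonist lowers steady-state agonist occupancy,
# the basic mechanism by which naloxone reverses opioid effects.
ra_alone, _ = simulate_occupancy(1.0, 0.0)
ra_with_antagonist, _ = simulate_occupancy(1.0, 1.0)
print(ra_with_antagonist < ra_alone)  # True
```

A full model would couple these occupancies to pharmacokinetic concentration curves and to respiratory/circulatory physiology, as the abstract describes; a production implementation would also use a stiff ODE solver rather than fixed-step Euler integration.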
Neural synchrony in the brain is often present in an intermittent fashion, i.e., there are intervals of synchronized activity interspersed with intervals of desynchronized activity.
Front Comput Neurosci
June 2020
Neural synchrony in the brain at rest is usually variable and intermittent, so that intervals of predominantly synchronized activity are interrupted by intervals of desynchronized activity. Prior studies suggested that this temporal structure of the weakly synchronous activity might be functionally significant: many short desynchronizations may be functionally different from a few long desynchronizations even if the average synchrony level is the same. In this study, we used computational neuroscience methods to investigate the effects of spike-timing dependent plasticity (STDP) on the temporal patterns of synchronization in a simple model.
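For readers unfamiliar with STDP, the standard pair-based rule can be sketched in a few lines. This is a generic textbook form, not necessarily the specific plasticity rule used in the study's model; the amplitudes and time constant below are illustrative assumptions.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses,
    with magnitudes decaying exponentially in |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # long-term potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)  # long-term depression
    return 0.0

print(stdp_dw(10.0) > 0)   # True: potentiation
print(stdp_dw(-10.0) < 0)  # True: depression
```

Because the sign and size of each weight update depend on precise spike timing, STDP reshapes network connectivity in a way that can alter how often and how long desynchronized episodes occur, which is the effect the study investigates.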