Objective: To demonstrate the feasibility of an integer programming model to assist in pre-operative planning for open reduction and internal fixation of a distal humerus fracture.
Materials and Methods: We describe an integer programming model based on the objective of maximizing the reward for screws placed while satisfying the requirements for sound internal fixation. The model maximizes the number of bicortical screws placed while avoiding screw collision and favoring screws of greater length that cross multiple fracture planes.
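The abstract does not give the formulation explicitly; one plausible sketch, with a binary variable \(x_{hol}\) indicating that hole \(h\) receives a screw with orientation \(o\) and length \(l\), reward \(r_{hol}\), and a pre-computed set \(C\) of colliding candidate pairs, is:

```latex
\begin{aligned}
\max\quad & \sum_{h,o,l} r_{hol}\, x_{hol} \\
\text{s.t.}\quad & \sum_{o,l} x_{hol} \le 1 \quad \text{for every hole } h \\
& x_{hol} + x_{h'o'l'} \le 1 \quad \text{for every pair } \bigl((h,o,l),(h',o',l')\bigr) \in C \\
& x_{hol} \in \{0,1\}.
\end{aligned}
```

The per-hole constraint enforces at most one screw per plate hole, and each pairwise constraint forbids selecting two candidates whose trajectories intersect.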
Results: The model was tested on three types of total articular fractures of the distal humerus. Solutions were generated using 5, 9, 21, and 33 possible screw orientations per hole. Solutions generated using 33 possible screw orientations per hole and five screw lengths produced the most clinically relevant fixation plan and required identifying 1,191,975 screw pairs that resulted in collision. At this level of complexity, the pre-processor took 104 seconds to generate the constraints for the solver, and a solution was found in under one minute in all three cases.
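The collision pairs above come from pairwise geometric checks on candidate screws. A minimal sketch of such a pre-processing step, treating each candidate as a 3D line segment and flagging pairs whose segments pass closer than one screw diameter (the geometry, hole labels, and diameter here are made-up illustrations, not the authors' pre-processor):

```python
import itertools
import math

def seg_dist(p1, q1, p2, q2):
    """Minimum distance between 3D segments p1-q1 and p2-q2."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
    a, e, f = dot(d1, d1), dot(d2, d2), dot(d2, r)
    if a == 0 and e == 0:                      # both segments are points
        return math.dist(p1, p2)
    if a == 0:
        s, t = 0.0, min(max(f / e, 0.0), 1.0)
    else:
        c = dot(d1, r)
        if e == 0:
            t, s = 0.0, min(max(-c / a, 0.0), 1.0)
        else:
            b = dot(d1, d2)
            denom = a * e - b * b              # zero when segments are parallel
            s = min(max((b * f - c * e) / denom, 0.0), 1.0) if denom != 0 else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, min(max(-c / a, 0.0), 1.0)
            elif t > 1.0:
                t, s = 1.0, min(max((b - c) / a, 0.0), 1.0)
    c1 = tuple(p + s * d for p, d in zip(p1, d1))
    c2 = tuple(p + t * d for p, d in zip(p2, d2))
    return math.dist(c1, c2)

# Hypothetical candidates: (hole, orientation index) -> segment endpoints (mm)
candidates = {
    ("A", 0): ((0, 0, 0), (0, 0, 30)),
    ("A", 1): ((0, 0, 0), (10, 0, 28)),
    ("B", 0): ((5, 0, 0), (5, 0, 30)),
    ("B", 1): ((5, 0, 0), (-4, 0, 28)),
}
DIAMETER = 3.5  # assumed screw diameter, mm

colliding = [
    (k1, k2)
    for k1, k2 in itertools.combinations(candidates, 2)
    if k1[0] != k2[0]  # screws in different holes
    and seg_dist(*candidates[k1], *candidates[k2]) < DIAMETER
]
```

Each pair in `colliding` would then become one pairwise exclusion constraint handed to the solver, which is why the constraint count grows quadratically with orientations per hole.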
Conclusion: Despite the large size of this problem, it can be solved in a reasonable amount of time, making use of the model practical in pre-surgical planning.
DOI: 10.3109/10929080802057306
Sensors (Basel)
December 2024
Intelligent Embedded Systems of Computer Science, University of Duisburg-Essen, 47057 Duisburg, Germany.
This study presents a comprehensive workflow for developing and deploying Multi-Layer Perceptron (MLP)-based soft sensors on embedded FPGAs, addressing diverse deployment objectives. The proposed workflow extends our prior research by introducing greater model adaptability. It supports various configurations (layer counts, neuron counts, and quantization bitwidths) to accommodate the constraints and capabilities of different FPGA platforms.
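Configurable quantization bitwidths like those mentioned above commonly mean mapping floating-point weights to small signed integers before deployment. A toy illustration of symmetric uniform quantization (the helper and values are made up for illustration, not the authors' toolchain):

```python
def quantize_symmetric(weights, bits):
    """Map floats to signed integers representable with the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit, 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

weights = [0.8, -1.2, 0.05, 0.4]               # hypothetical MLP weights
q4, s4 = quantize_symmetric(weights, 4)        # 4-bit integer codes plus scale
# dequantized value is q * s4; per-weight error is at most s4 / 2
```

Lower bitwidths shrink the FPGA multipliers and memory needed per neuron at the cost of larger rounding error, which is the trade-off such a workflow navigates per platform.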
Nat Commun
January 2025
Chair for Bioinformatics, Institute for Computer Science, Friedrich Schiller University Jena, Jena, Germany.
Small molecule machine learning aims to predict chemical, biochemical, or biological properties from molecular structures, with applications such as toxicity prediction, ligand binding, and pharmacokinetics. A recent trend is developing end-to-end models that avoid explicit domain knowledge. These models assume no coverage bias in training and evaluation data, meaning the data are representative of the true distribution.
ACS Appl Mater Interfaces
January 2025
Chemistry and Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore 560064, India.
A material equivalent of a biosynapse is the key to neuromorphic architecture. Here we report a self-forming labyrinthine Ag nanostructure, activated with a few 0.5 V pulses (width and interval set at 50 ms) at a current compliance of 400 nA, serving as the active material for a highly stable device with programmable volatility.
Sensors (Basel)
December 2024
Széchenyi István University, 9026 Győr, Hungary.
Over the past twenty years, camera networks have become increasingly popular. In response to various demands imposed on these networks, several coverage models have been developed in the scientific literature, such as area, trap, barrier, and target coverage. In this paper, a new type of coverage task, the Maximum Target Coverage with k-Barrier Coverage (MTCBC-k) problem, is defined.
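The MTCBC-k formulation itself is defined in the paper; as background, the plain target-coverage building block reduces to a disk-membership test: a target counts as covered when it lies within at least one camera's sensing radius. A toy 2D check (coordinates and radii are made up):

```python
import math

# Hypothetical scene: (camera position, sensing radius) and target points.
cameras = [((0.0, 0.0), 2.0), ((5.0, 0.0), 1.5)]
targets = [(1.0, 1.0), (4.2, 0.0), (9.0, 9.0)]

def covered(target, cams):
    """True if the target lies inside at least one camera's sensing disk."""
    return any(math.dist(target, pos) <= radius for pos, radius in cams)

coverage = [covered(t, cameras) for t in targets]  # [True, True, False]
```

Barrier coverage adds the further requirement that selected sensing disks form unbroken chains across a region, which is what the k-barrier part of MTCBC-k layers on top of this test.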
Entropy (Basel)
December 2024
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Can we turn AI black boxes into code? Although this mission sounds extremely challenging, we show that it is not entirely impossible by presenting a proof-of-concept method, MIPS, that can synthesize programs based on the automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm.