Introduction: Epidemiological research has been incorporating new technologies, such as computer systems, and using the Internet as a tool. Usability is a characteristic of a product that reflects how easy it is to use, how quickly tasks can be completed with it, and how easily its use can be learned. The product should also be free of errors, or any errors that do occur should be easy to resolve, thus providing high satisfaction to users.
Objective: To evaluate the usability of the "System of health and nutrition monitoring - nutrition of school children" (NUTRISIM).
Methods: A sample of 17 information technology professionals evaluated the system and answered the "Questionnaire for System Usability", which determines the usability level of a system using fuzzy logic. The questionnaire contains 30 questions divided into six metrics, and the system's usability is rated against six usability criteria on a broad fuzzy scale.
Results: With the exception of "error control", all metrics were rated "very good". The metrics "error control", "efficiency" and "satisfaction" showed medium amplitude, a better result than the metrics "easy to learn", "easy to remember" and "effectiveness", whose amplitude was assessed as "high".
Conclusion: The study showed that the system is easy to learn and use, although the answers were scattered. The instrument proved to be a useful tool for monitoring and evaluating health and dietary intake in epidemiologic studies.
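The fuzzy-logic scoring described in the Methods can be illustrated with a small sketch. The triangular membership functions, score range, and linguistic labels below are illustrative assumptions, not the actual scales of the "Questionnaire for System Usability":

```python
# Minimal sketch of fuzzy-set scoring for one usability metric.
# Membership functions, score range (0-10) and labels are
# illustrative assumptions, not the questionnaire's real scales.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms over a 0-10 metric score.
TERMS = {
    "poor":      lambda x: triangular(x, -1, 0, 4),
    "good":      lambda x: triangular(x, 2, 5, 8),
    "very good": lambda x: triangular(x, 6, 10, 11),
}

def classify(score):
    """Return the linguistic term with the highest membership degree."""
    return max(TERMS, key=lambda term: TERMS[term](score))

print(classify(8.5))  # -> very good
```

A full fuzzy system would aggregate all 30 questions per metric before classifying; the sketch only shows the membership/classification step.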
DOI: http://dx.doi.org/10.1590/s1415-790x2013000400016
J Chem Phys
January 2025
Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, Tennessee 37830, USA.
The linear scaling divide-expand-consolidate (DEC) framework is expanded to include unrestricted Hartree-Fock references. By partitioning the orbital space and employing local molecular orbitals, the full molecular calculation can be performed as independent calculations on individual fragments, making the method well-suited for massively parallel implementations. This approach also incorporates error control through the fragment optimization threshold (FOT), which maintains precision and consistency throughout the calculations.
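The fragment-based structure described above can be caricatured in a few lines: independent fragment calculations run in parallel and their contributions are assembled into a total, with a threshold playing the role the FOT plays in bounding per-fragment error. The `fragment_energy` stand-in and the threshold logic are illustrative assumptions, not the actual DEC implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Schematic of the divide-expand-consolidate idea: the total energy
# is a sum of independent fragment contributions, so fragments can be
# computed in parallel. `fragment_energy` is a toy stand-in for a
# correlated calculation on one fragment's local orbital space.

def fragment_energy(fragment):
    # Placeholder: in DEC this would be an independent calculation
    # on the fragment's local molecular orbitals.
    return sum(fragment) * 1e-3

def dec_total_energy(fragments, fot=1e-6):
    with ThreadPoolExecutor() as pool:
        contributions = list(pool.map(fragment_energy, fragments))
    # Discard contributions below the threshold, loosely mimicking
    # how the FOT bounds the error committed per fragment.
    return sum(e for e in contributions if abs(e) > fot)
```

The real method partitions the orbital space rather than a list of numbers, but the independence of the per-fragment work is what makes massive parallelism possible.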
Polymers (Basel)
December 2024
Faculty of Printing, Packaging Engineering and Digital Media Technology, Xi'an University of Technology, Xi'an 710048, China.
This paper addresses the issue of the high-precision control of substrate tension in an accumulator during the roll-to-roll coating process. First, a coupling model for tension errors in the substrate within the accumulator is established, along with dynamic models for the input-output rollers, carriage, and the thrust model of the ball screw. Based on these models, a simulation model is built in MATLAB/Simulink to analyze the main causes of substrate tension errors in the accumulator under uncontrolled conditions.
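As a rough illustration of closed-loop tension regulation (not the paper's coupled accumulator model), a discrete-time PI controller driving a first-order plant toward a tension setpoint can be sketched as follows; the gains and plant dynamics are made-up assumptions:

```python
# Toy discrete-time PI tension controller on a first-order plant.
# Gains, plant constant and time step are illustrative assumptions,
# not the coupled accumulator model built in MATLAB/Simulink.

def simulate(setpoint=100.0, steps=1000, dt=0.01, kp=2.0, ki=5.0):
    tension, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - tension
        integral += error * dt                 # integral of the error
        u = kp * error + ki * integral         # PI control effort
        tension += (u - 0.5 * tension) * dt    # first-order plant response
    return tension
```

With these gains the closed loop is stable and the tension settles at the setpoint; a real roll-to-roll model would couple roller inertias, carriage motion, and ball-screw thrust as the paper describes.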
Applications in engineering biology increasingly share the need to run operations on very large numbers of biological samples. This is a direct consequence of the application of good engineering practices, the limited predictive power of current computational models and the desire to investigate very large design spaces in order to solve the hard, important problems the discipline promises to solve. Automation has been proposed as a key component for running large numbers of operations on biological samples.
Mol Syst Biol
January 2025
Research group "Structural Interactomics", Leibniz Forschungsinstitut für Molekulare Pharmakologie, Robert-Rössle-Str. 10, 13125, Berlin, Germany.
Cross-linking mass spectrometry (XL-MS) allows characterizing protein-protein interactions (PPIs) in native biological systems by capturing cross-links between different proteins (inter-links). However, inter-link identification remains challenging, requiring dedicated data filtering schemes and thorough error control. Here, we benchmark existing data filtering schemes combined with error rate estimation strategies utilizing concatenated target-decoy protein sequence databases.
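The target-decoy idea behind such error control can be sketched minimally: matches to decoy sequences estimate how many target matches above a score cutoff are false. The plain decoys/targets ratio below is one common estimator; XL-MS pipelines typically apply refined, cross-link-specific variants:

```python
# Minimal target-decoy false discovery rate (FDR) sketch: decoy hits
# above a score cutoff estimate the number of false target hits.
# The simple decoys/targets ratio is one common estimator, not the
# specific scheme benchmarked in the study.

def estimate_fdr(matches, threshold):
    """matches: list of (score, is_decoy) tuples."""
    targets = sum(1 for s, d in matches if s >= threshold and not d)
    decoys = sum(1 for s, d in matches if s >= threshold and d)
    return decoys / targets if targets else 0.0

matches = [(9.1, False), (8.7, False), (8.2, True),
           (7.9, False), (7.5, True), (6.8, False)]
print(estimate_fdr(matches, 7.0))  # 2 decoys / 3 targets ~ 0.67
```

In practice the threshold is chosen so the estimated FDR stays below a target (e.g. 1%), and inter-link identifications usually need separate filtering from within-protein links.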
View Article and Find Full Text PDFMendelian Randomization analysis is a popular method to infer causal relationships between exposures and outcomes, utilizing data from genome-wide association studies (GWAS) to overcome limitations of observational research by treating genetic variants as instrumental variables. This study focuses on a specific problem setting, where causal signals may exist among a series of correlated traits, but the exposures of interest, such as biological functions or lower-dimensional latent factors that regulate the observable traits, are not directly observable. We propose a Bayesian Mendelian randomization analysis framework that allows joint analysis of the causal effects of multiple latent exposures on a disease outcome leveraging GWAS summary-level association statistics for traits co-regulated by the exposures.