Background: Container virtualization technologies such as Docker are popular in the bioinformatics domain because they improve the portability and reproducibility of software deployment. Together with software packaged in containers, workflow descriptions standardized by the Common Workflow Language (CWL) enable data to be analyzed easily on multiple computing environments. These technologies accelerate the use of on-demand cloud computing platforms, which can be scaled according to the quantity of data. However, to optimize the time and budgetary constraints of cloud usage, users must select an instance type that matches the resource requirements of their workflows.
Results: We developed CWL-metrics, a utility tool for cwltool (the reference implementation of CWL), to collect runtime metrics of Docker containers and workflow metadata in order to analyze workflow resource requirements. To demonstrate the use of this tool, we analyzed 7 transcriptome quantification workflows on 6 instance types. The results showed that an appropriate choice of instance type can lower financial cost and shorten execution time while still providing the required amount of computational resources.
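To illustrate the kind of per-container runtime metrics involved, the sketch below samples memory and CPU usage for a running container with the docker Python SDK. The container name is a placeholder, and this is only an illustration of the data being collected, not the collection mechanism CWL-metrics itself uses.

```python
# Minimal sketch: take one snapshot of per-container runtime metrics with the
# docker Python SDK (requires a running Docker daemon). Illustrative only;
# CWL-metrics has its own collection mechanism.
import docker

client = docker.from_env()
container = client.containers.get("workflow_step")  # hypothetical container name

# stream=False returns a single snapshot of the stats exposed by the daemon
stats = container.stats(stream=False)

mem_usage_bytes = stats["memory_stats"].get("usage", 0)
cpu_total_ns = stats["cpu_stats"]["cpu_usage"]["total_usage"]

print(f"memory usage: {mem_usage_bytes / 2**20:.1f} MiB")
print(f"cumulative CPU time: {cpu_total_ns / 1e9:.1f} s")
```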
Conclusions: CWL-metrics can generate a summary of resource requirements for workflow executions, which can help users to optimize their use of cloud computing by selecting appropriate instances. The runtime metrics data generated by CWL-metrics can also help users to share workflows between different workflow management frameworks.
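As a toy illustration of instance selection, the following sketch picks the cheapest instance type whose capacity covers a workflow's measured peak requirements and estimates the run cost. All instance names, prices, and resource figures are invented for the example and are not taken from the study.

```python
# Toy instance selection: choose the cheapest instance type whose vCPU and
# memory capacity cover the peak requirements measured for a workflow run.
# Instance names, prices, and measurements are illustrative only.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    memory_gib: float
    price_per_hour: float  # hypothetical on-demand price in USD

CANDIDATES = [
    InstanceType("small", 2, 8.0, 0.10),
    InstanceType("medium", 4, 16.0, 0.20),
    InstanceType("large", 8, 32.0, 0.40),
]

def cheapest_fit(peak_vcpus: int, peak_memory_gib: float, runtime_hours: float):
    """Return (instance, estimated cost) for the cheapest instance that fits."""
    fitting = [i for i in CANDIDATES
               if i.vcpus >= peak_vcpus and i.memory_gib >= peak_memory_gib]
    if not fitting:
        raise ValueError("no candidate instance satisfies the requirements")
    best = min(fitting, key=lambda i: i.price_per_hour)
    return best, best.price_per_hour * runtime_hours

instance, cost = cheapest_fit(peak_vcpus=4, peak_memory_gib=12.0, runtime_hours=1.5)
print(f"{instance.name}: estimated cost ${cost:.2f}")
```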
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6479428
DOI: http://dx.doi.org/10.1093/gigascience/giz052
Urban Inform, January 2025
IVL Swedish Environmental Research Institute LTD., PO Box 530 21, SE-400 14 Gothenburg, Sweden.
In response to the demand for advanced tools in environmental monitoring and policy formulation, this work leverages modern software and big data technologies to enhance novel road transport emissions research. This is achieved by making data and analysis tools more widely available and customisable so users can tailor outputs to their requirements. Through the novel combination of vehicle emissions remote sensing and cloud computing methodologies, these developments aim to reduce the barriers to understanding real-driving emissions (RDE) across urban environments.
Sensors (Basel), December 2024
School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia.
Most current research in cloud forensics is focused on tackling the challenges encountered by forensic investigators in identifying and recovering artifacts from cloud devices. These challenges arise from the diverse array of cloud service providers as each has its distinct rules, guidelines, and requirements. This research proposes an investigation technique for identifying and locating data remnants in two main stages: artefact collection and evidence identification.
Sensors (Basel), December 2024
Department of Computer Science & Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea.
The Internet of Things (IoT) has seen remarkable advancements in recent years, leading to a paradigm shift in the digital landscape. However, these technological strides have introduced new challenges, particularly in cybersecurity. IoT devices, inherently connected to the internet, are susceptible to various forms of attacks.
Physiol Meas, January 2025
University of Duisburg-Essen, Bismarckstr. 81 (BB), Duisburg, 47057, Germany.
Objective: In recent years, wearable devices such as smartwatches and smart patches have revolutionized biosignal acquisition and analysis, particularly for monitoring electrocardiography (ECG). However, the limited power supply of these devices often precludes real-time data analysis on the patch itself.
Approach: This paper introduces a novel Python package, tinyHLS (High Level Synthesis), designed to address these challenges by converting Python-based AI models into platform-independent hardware description language (HDL) code accelerators.
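To make the model-to-HDL idea concrete, the sketch below emits a Verilog parameter block that hard-codes the fixed-point weights of a tiny dense layer. This is a conceptual illustration only; it does not use or reproduce the actual tinyHLS API or output format, and the layer, weights, and Q7.8 format are invented for the example.

```python
# Conceptual sketch: turn trained model parameters into HDL constants.
# Illustrative only; not the tinyHLS package or its output format.
weights = [[0.25, -0.5], [1.0, 0.125]]  # toy 2x2 dense-layer weights

def to_fixed_point(x, frac_bits=8):
    """Quantize a float to a signed 16-bit fixed-point value (Q7.8, illustrative)."""
    v = int(round(x * (1 << frac_bits)))
    return max(-(1 << 15), min(v, (1 << 15) - 1))

lines = ["// auto-generated weight constants (illustrative only)"]
for i, row in enumerate(weights):
    for j, w in enumerate(row):
        v = to_fixed_point(w)
        sign = "-" if v < 0 else ""
        lines.append(f"parameter signed [15:0] W_{i}_{j} = {sign}16'sd{abs(v)};")

print("\n".join(lines))
```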
PLoS One, January 2025
School of Humanities, Ningbo University of Finance and Economics, Ningbo, Zhejiang, China.
Lightweight container technology has emerged as a fundamental component of cloud-native computing, with the deployment of containers and the balancing of loads on virtual machines representing significant challenges. This paper presents an optimization strategy for container deployment that consists of two stages: coarse-grained and fine-grained load balancing. In the initial stage, a greedy algorithm is employed for coarse-grained deployment, facilitating the distribution of container services across virtual machines in a balanced manner based on resource requests.
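As an illustration of such a coarse-grained stage, a greedy placement might assign each container to the least-loaded virtual machine that can still satisfy its resource request. The sketch below is a generic version of this kind of heuristic, with made-up VM sizes and container requests; it is not the paper's exact algorithm.

```python
# Generic greedy placement sketch: assign each container to the least-loaded
# VM that still has room for its CPU/memory request. Illustration of a
# coarse-grained load-balancing heuristic, not the paper's exact method.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu_capacity: float
    mem_capacity: float
    cpu_used: float = 0.0
    mem_used: float = 0.0
    containers: list = field(default_factory=list)

    def fits(self, cpu, mem):
        return (self.cpu_used + cpu <= self.cpu_capacity
                and self.mem_used + mem <= self.mem_capacity)

    def load(self):
        # Load score: the more utilized of the two dimensions dominates.
        return max(self.cpu_used / self.cpu_capacity, self.mem_used / self.mem_capacity)

def place(containers, vms):
    """containers: list of (name, cpu_request, mem_request); mutates vms in place."""
    # Place larger requests first (crude size ordering for a toy example).
    for name, cpu, mem in sorted(containers, key=lambda c: -(c[1] + c[2])):
        candidates = [vm for vm in vms if vm.fits(cpu, mem)]
        if not candidates:
            raise RuntimeError(f"no VM can host container {name}")
        target = min(candidates, key=VM.load)
        target.cpu_used += cpu
        target.mem_used += mem
        target.containers.append(name)

vms = [VM("vm-1", 8, 16), VM("vm-2", 8, 16)]
place([("web", 2, 4), ("db", 4, 8), ("cache", 1, 2)], vms)
for vm in vms:
    print(vm.name, vm.containers)
```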