
Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training.

Algorithms Mol Biol

Centre for High-Throughput Biology, Department of Computer Science and Department of Medical Genetics, 2366 Main Mall, University of British Columbia, Vancouver V6T 1Z4, Canada.

Published: December 2010

AI Article Synopsis

  • Hidden Markov models are crucial in bioinformatics for tasks like gene prediction and analyzing micro-array data, but their parameters need fine-tuning for specific data sets to improve accuracy.
  • The authors developed two new algorithms for Viterbi and stochastic EM training, which reduce memory usage and streamline the process by requiring only a single pass through the data.
  • These algorithms enhance the computational efficiency of hidden Markov models, allowing for the training of more complex models and longer sequences, while also being easier to implement compared to traditional methods.

Article Abstract

Background: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example the genome of a particular species, in order to maximize the prediction accuracy. Computationally efficient algorithms for parameter training are thus key to maximizing the usability of a wide range of bioinformatics applications.

Results: We introduce two computationally efficient training algorithms, one for Viterbi training and one for stochastic expectation maximization (EM) training, which render the memory requirements independent of the sequence length. Unlike the existing algorithms for Viterbi and stochastic EM training which require a two-step procedure, our two new algorithms require only one step and scan the input sequence in only one direction. We also implement these two new algorithms and the already published linear-memory algorithm for EM training into the hidden Markov model compiler HMM-CONVERTER and examine their respective practical merits for three small example models.
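For context, the classical two-step Viterbi training scheme that these algorithms improve on can be sketched as follows. This is a minimal, illustrative Python sketch assuming a discrete-emission HMM with transition matrix A, emission matrix B and initial distribution pi; the function names, pseudocounts and toy re-estimation details are assumptions for illustration, not the authors' HMM-CONVERTER implementation. Note how the backpointer matrix makes memory use grow with the sequence length, which is exactly the dependence the new one-pass algorithm removes.

import numpy as np

def viterbi_path(obs, pi, A, B):
    """Most likely state path for one observation sequence (log space)."""
    n_states, T = len(pi), len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, n_states))          # best log-prob ending in each state
    back = np.zeros((T, n_states), int)      # backpointers: O(T) memory
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA    # scores[i, j]: from state i to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = np.zeros(T, int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):               # second pass: backtrace
        path[t] = back[t + 1, path[t + 1]]
    return path

def viterbi_train(obs, pi, A, B, n_iter=20, pseudo=1e-3):
    """Iterate: decode the single best path, then re-estimate A and B from it."""
    n_states, n_symbols = B.shape
    for _ in range(n_iter):
        path = viterbi_path(obs, pi, A, B)
        trans = np.full((n_states, n_states), pseudo)   # pseudocounts avoid zeros
        emit = np.full((n_states, n_symbols), pseudo)
        for t in range(len(obs)):
            emit[path[t], obs[t]] += 1
            if t > 0:
                trans[path[t - 1], path[t]] += 1
        A = trans / trans.sum(axis=1, keepdims=True)
        B = emit / emit.sum(axis=1, keepdims=True)
    return A, B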

Conclusions: Bioinformatics applications employing hidden Markov models can use the two algorithms in order to make Viterbi training and stochastic EM training more computationally efficient. Using these algorithms, parameter training can thus be attempted for more complex models and longer training sequences. The two new algorithms have the added advantage of being easier to implement than the corresponding default algorithms for Viterbi training and stochastic EM training.
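Similarly, classical stochastic EM training samples a state path from the posterior (forward filtering followed by backward sampling) and re-estimates the parameters from the sampled path's counts. The sketch below is again an illustrative assumption (the function names, scaled-forward formulation and pseudocounts are not from the paper); it stores all forward variables, so its memory use grows with the sequence length, which is the dependence the paper's one-pass algorithm avoids.

import numpy as np

def forward_filter(obs, pi, A, B):
    """Scaled forward variables: row t is P(state_t | obs[0..t])."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))                 # keeping every row costs O(T) memory
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    return alpha

def sample_state_path(obs, pi, A, B, rng):
    """Draw one state path from the posterior by backward sampling."""
    alpha = forward_filter(obs, pi, A, B)
    T, n = alpha.shape
    path = np.zeros(T, dtype=int)
    path[-1] = rng.choice(n, p=alpha[-1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * A[:, path[t + 1]]
        path[t] = rng.choice(n, p=w / w.sum())
    return path

def stochastic_em_train(obs, pi, A, B, n_iter=50, pseudo=1e-3, seed=0):
    """Iterate: sample a path, then re-estimate A and B from its counts."""
    rng = np.random.default_rng(seed)
    n_states, n_symbols = B.shape
    for _ in range(n_iter):
        path = sample_state_path(obs, pi, A, B, rng)
        trans = np.full((n_states, n_states), pseudo)
        emit = np.full((n_states, n_symbols), pseudo)
        for t in range(len(obs)):
            emit[path[t], obs[t]] += 1
            if t > 0:
                trans[path[t - 1], path[t]] += 1
        A = trans / trans.sum(axis=1, keepdims=True)
        B = emit / emit.sum(axis=1, keepdims=True)
    return A, B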

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3019189
DOI: http://dx.doi.org/10.1186/1748-7188-5-38

Publication Analysis

Top Keywords

hidden markov (16)
viterbi training (16)
training (15)
efficient algorithms (12)
markov models (12)
computationally efficient (12)
algorithms viterbi (12)
training stochastic (12)
stochastic training (12)
algorithms (9)

