Publications by authors named "Gilles Didier"

Article Synopsis
  • The Fossilized Birth-Death (FBD) process has been used to explore biodiversity evolution by creating models that utilize fossil ages to generate phylogenetic trees without needing divergence times.
  • The researchers developed methods to evaluate hypotheses about diversification, such as detecting mass extinctions or changes in fossilization rates, by applying the skyline FBD model and estimating parameters using simulations.
  • The study applies these methods to an updated dataset on Permo-Carboniferous synapsids to investigate biodiversity dynamics in specific clades and to determine support for previously suggested mass extinction events.

Phylogenetic comparative methods use random processes, such as Brownian motion, to model the evolution of continuous traits on phylogenetic trees. Growing evidence for non-gradual evolution has motivated the development of more complex models, often based on Lévy processes. However, their statistical inference is computationally intensive and currently relies on approximations, high-dimensional sampling, or numerical integration.
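The Brownian-motion baseline mentioned above can be sketched in a few lines; the tree, rate, and seed below are illustrative assumptions, not the paper's data:

```python
import random

# A minimal sketch (not the authors' code): simulating Brownian motion along
# the branches of a small, hard-coded phylogeny. Each node's trait value is
# its parent's value plus a Gaussian increment whose variance is proportional
# to the branch length (rate sigma2).
TREE = {            # child: (parent, branch_length); "root" has no parent
    "A": ("n1", 1.0),
    "B": ("n1", 1.0),
    "n1": ("root", 2.0),
    "C": ("root", 3.0),
}

def simulate_bm(tree, root_value=0.0, sigma2=1.0, rng=None):
    """Return simulated trait values for every node, root included."""
    rng = rng or random.Random(0)
    values = {"root": root_value}
    pending = dict(tree)
    while pending:  # process a child only once its parent's value is known
        for child, (parent, length) in list(pending.items()):
            if parent in values:
                values[child] = values[parent] + rng.gauss(0.0, (sigma2 * length) ** 0.5)
                del pending[child]
    return values

values = simulate_bm(TREE)
```

More complex models replace the Gaussian increment with a Lévy-process increment (e.g. adding jumps), which is what makes their likelihoods hard to compute.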

Article Synopsis
  • Researchers present a new method to calculate extinction times for extinct or mixed taxa using the Fossilized-Birth-Death model, improving on previous methods by considering diversification before extinction and examining the entire phylogenetic tree instead of each branch separately.
  • This approach allows for estimating extinction times for lineages with only one fossil if they belong to a broader group with multiple fossil records, leading to more accurate results.
  • The method was tested on three synapsid taxa from the Permo-Carboniferous era, revealing that their extinction aligns with a gradual decline in biodiversity during the late Kungurian/early Roadian period.
Article Synopsis
  • The study introduces a method to calculate the exact probability distribution of divergence times in a phylogenetic tree using only fossil ages, particularly under the Fossilized Birth-Death model.
  • It specifically focuses on determining the age of Amniota, revealing it to be approximately 322 to 340 million years ago, which is older than the previously assumed range of 310-315 million years.
  • This research is significant as it not only revises the timeline for key evolutionary events but also presents a new technique to understand the probability density of divergence times in evolutionary studies.
Article Synopsis
  • The amniotic egg is crucial in vertebrate evolution, influencing the development of amniotes, which are a group of vertebrates that includes mammals, birds, and reptiles.
  • Researchers tested Carroll's 1970 theory about the egg's origin using a new method that assesses different evolutionary trends by splitting phylogenetic trees at various nodes.
  • Their findings showed that the expected significant changes in body size evolution along the amniote stem were not present, challenging the validity of Carroll's scenario.

As confounding factors, directional trends are likely to make two quantitative traits appear spuriously correlated. By determining the probability distributions of independent contrasts when traits evolve following Brownian motions with linear trends, we show that standard independent contrasts cannot be used to test for correlation in this situation. We propose a multiple regression approach which corrects the bias caused by directional evolution.
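Felsenstein's standard independent contrasts, whose distribution under trends the paper studies, can be sketched as follows (the toy tree and trait values are assumptions for illustration):

```python
from math import sqrt

# A minimal sketch of standard phylogenetic independent contrasts on a binary
# tree given as nested tuples: a tip is (trait_value, branch_length), an
# internal node is ((left, right), branch_length).
def contrasts(node):
    """Return (trait_value, effective_branch_length, list_of_contrasts)."""
    children, length = node
    if not isinstance(children, tuple):          # tip: children is the value
        return children, length, []
    left, right = children
    x1, v1, c1 = contrasts(left)
    x2, v2, c2 = contrasts(right)
    contrast = (x1 - x2) / sqrt(v1 + v2)         # standardized difference
    x = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)  # weighted-average ancestor
    v = length + v1 * v2 / (v1 + v2)             # branch length + pruning term
    return x, v, c1 + c2 + [contrast]

# cherry of two tips with unit branch lengths under a root
tree = (((1.0, 1.0), (3.0, 1.0)), 0.0)
x_root, v_root, cs = contrasts(tree)
```

Under a linear trend, the increments along branches acquire nonzero means, so these contrasts are no longer centered, which is the source of the spurious correlation.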


The identification of communities, or modules, is a common operation in the analysis of large biological networks. The Disease Module Identification DREAM Challenge established a framework to evaluate clustering approaches in a biomedical context, by testing the association of communities with GWAS-derived common trait and disease genes. We implemented here several extensions of the MolTi software, which detects communities by optimizing multiplex (and monoplex) network modularity.
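The quantity MolTi optimizes, multiplex modularity, can be sketched as the sum over layers of Newman's modularity of a single shared partition; the toy layers below are illustrative, and this is not the MolTi implementation:

```python
# A minimal sketch (assumed toy data): multiplex modularity as the sum, over
# the layers of a multiplex network, of Newman's modularity of one shared
# partition of the nodes.
def modularity(edges, community):
    """Newman modularity of one undirected, unweighted layer."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for u, v in edges:                  # observed within-community edges
        if community[u] == community[v]:
            q += 1.0 / m
    for u in degree:                    # expected within-community edges
        for v in degree:
            if community[u] == community[v]:
                q -= degree[u] * degree[v] / (4.0 * m * m)
    return q

def multiplex_modularity(layers, community):
    return sum(modularity(edges, community) for edges in layers)

layers = [[("a", "b"), ("c", "d")], [("a", "b"), ("b", "c")]]
part = {"a": 0, "b": 0, "c": 1, "d": 1}
q = multiplex_modularity(layers, part)
```

MolTi then searches over partitions to maximize this score; the sketch only evaluates it for a fixed partition.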


The time-dependent-asymmetric-linear parsimony is an ancestral state reconstruction method which extends the standard linear parsimony (a.k.a. Wagner parsimony).

Article Synopsis
  • The diversification process in species cannot be directly observed and is instead studied through fossil records and existing taxa, with phylogenetic trees serving as detailed representations of this process.
  • A new method is proposed to calculate the likelihood of phylogenetic trees using only fossil ages for timing, which allows for more accurate estimates of diversification rates compared to relying just on divergence times.
  • An analysis of 50 early synapsid taxa revealed a speciation rate of approximately 0.1 per lineage per million years and a slightly lower extinction rate, highlighting the hidden biodiversity of early synapsids during the Permo-Carboniferous period.

Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under the Brownian model infer the same ancestral states, and can be distinguished only by the distributions they provide to account for reconstruction uncertainty.
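Squared(-change) parsimony, one of the seven methods compared, can be sketched as follows; the toy tree and tip values are illustrative assumptions, not the paper's comparison code:

```python
# A minimal sketch of unweighted squared-change parsimony: internal states are
# chosen to minimize the sum of squared changes along edges, which forces each
# internal node to equal the mean of its neighbors; a few Gauss-Seidel sweeps
# converge on small trees.
def squared_parsimony(neighbors, tip_values, sweeps=200):
    states = dict(tip_values)
    internal = [n for n in neighbors if n not in tip_values]
    for n in internal:
        states[n] = 0.0                 # arbitrary starting point
    for _ in range(sweeps):
        for n in internal:              # replace by the neighbor mean
            states[n] = sum(states[m] for m in neighbors[n]) / len(neighbors[n])
    return states

# unrooted tree: n1 joins tips A, B and node n2; n2 joins tips C, D and n1
neighbors = {
    "n1": ["A", "B", "n2"],
    "n2": ["C", "D", "n1"],
    "A": ["n1"], "B": ["n1"], "C": ["n2"], "D": ["n2"],
}
tips = {"A": 0.0, "B": 0.0, "C": 3.0, "D": 3.0}
states = squared_parsimony(neighbors, tips)
```

On this tree the exact minimizer is n1 = 0.75 and n2 = 2.25, which the iteration reaches quickly.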


Various biological networks can be constructed, each featuring gene/protein relationships of different meanings (e.g., protein interactions or gene co-expression).


Despite its intrinsic difficulty, ancestral character state reconstruction is an essential tool for testing evolutionary hypotheses. Two major classes of approaches to this question can be distinguished: parsimony-based and likelihood-based approaches. We focus here on the second class of methods, more specifically on approaches based on continuous-time Markov modeling of character evolution.
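For a binary character, the continuous-time Markov model admits a closed-form transition matrix; a minimal sketch with assumed rates (not the paper's software):

```python
from math import exp

# Two-state continuous-time Markov model of a binary character: with gain
# rate a (0 -> 1) and loss rate b (1 -> 0), the rate matrix is
# Q = [[-a, a], [b, -b]] and P(t) = exp(Qt) has the closed form below.
def transition_matrix(a, b, t):
    s = a + b
    e = exp(-s * t)
    return [
        [(b + a * e) / s, (a - a * e) / s],   # from state 0
        [(b - b * e) / s, (a + b * e) / s],   # from state 1
    ]

P = transition_matrix(a=1.0, b=2.0, t=0.5)
```

Each row sums to one, P(0) is the identity, and as t grows each row tends to the stationary distribution (b/(a+b), a/(a+b)); likelihood methods multiply such matrices along the branches of the tree.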


Using the fossil record yields more detailed reconstructions of the evolutionary process than what is obtained from contemporary lineages only. In this work, we present a stochastic process modeling not only speciation and extinction, but also fossil finds. Next, we derive an explicit formula for the likelihood of a reconstructed phylogeny with fossils, which can be used to estimate the speciation and extinction rates.
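The speciation-extinction-fossilization process described above can be sketched with a Gillespie simulation; the rates, horizon, and seed below are hypothetical, and this is not the paper's estimator:

```python
import random

# A minimal sketch of a birth-death process with fossil finds: each lineage
# independently speciates at rate lam, goes extinct at rate mu, and leaves a
# fossil find at rate psi, simulated until a time horizon.
def simulate_fbd(lam, mu, psi, horizon, rng=None):
    rng = rng or random.Random(42)
    t, lineages, fossils = 0.0, 1, []
    while lineages > 0:
        total = lineages * (lam + mu + psi)
        t += rng.expovariate(total)      # waiting time to the next event
        if t >= horizon:
            break
        u = rng.random() * (lam + mu + psi)
        if u < lam:
            lineages += 1                # speciation
        elif u < lam + mu:
            lineages -= 1                # extinction
        else:
            fossils.append(t)            # fossil find on one lineage
    return lineages, fossils

n, fossils = simulate_fbd(lam=0.1, mu=0.09, psi=0.05, horizon=10.0)
```

The likelihood formula derived in the paper inverts this picture: given the reconstructed tree and fossil ages, it scores candidate values of the three rates.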

Article Synopsis
  • The paper investigates René Thomas's generalized logical framework for representing regulatory networks through graph theory, where nodes represent genes and edges indicate regulatory relationships.
  • It highlights that while Boolean variables are typically sufficient for representing gene expression levels, there are cases that require multivalued variables, but most existing analysis tools focus only on the Boolean framework.
  • The authors formally demonstrate that a specific mapping from multivalued to Boolean variables, proposed by P. Van Ham, is the only effective way to maintain the integrity of regulatory structures and their dynamics in these models.
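The Van Ham mapping discussed in the synopsis can be sketched as follows (illustrative code, not the authors'):

```python
# A minimal sketch of the Van Ham mapping: a multivalued level x in
# {0, ..., m} is encoded by m Boolean variables b_1..b_m with b_i = 1 iff
# x >= i, so only "staircase" Boolean states (1...10...0) are admissible.
def to_boolean(x, m):
    return tuple(1 if x >= i else 0 for i in range(1, m + 1))

def from_boolean(bits):
    assert all(bits[i] >= bits[i + 1] for i in range(len(bits) - 1)), \
        "non-admissible Boolean state"
    return sum(bits)

codes = [to_boolean(x, 3) for x in range(4)]
```

The encoding is order-preserving and invertible on admissible states, which is the property the paper shows to be essentially unique among such mappings.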

We give a formal study of the relationships between the transition cost parameters and the generalized maximum parsimonious reconstructions of unknown (ancestral) binary character states {0,1} over a phylogenetic tree. As a main result, we show there are two thresholds λ¹n and λ⁰n, generally coinciding, associated with each node n of the phylogenetic tree, such that there exists a maximum parsimonious reconstruction associating state 1 to n (resp. state 0 to n) if the ratio "10-cost"/"01-cost" is smaller than λ¹n (resp. greater than λ⁰n).
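The asymmetric-cost parsimonious reconstructions studied here can be computed with Sankoff's algorithm; a minimal sketch on a toy tree, where the costs and tip states are assumptions for illustration:

```python
# Asymmetric-cost parsimony for a binary character via Sankoff's algorithm:
# cost01 is the cost of a 0 -> 1 change along an edge and cost10 the reverse;
# varying the ratio cost10/cost01 moves nodes across thresholds like those
# described above.
def sankoff(node, tips, children, cost01, cost10):
    """Return (cost_if_node_is_0, cost_if_node_is_1) for the subtree at node."""
    if node in tips:
        state = tips[node]
        return (0.0 if state == 0 else float("inf"),
                0.0 if state == 1 else float("inf"))
    c0 = c1 = 0.0
    for child in children[node]:
        s0, s1 = sankoff(child, tips, children, cost01, cost10)
        c0 += min(s0, cost01 + s1)      # node=0: child stays 0 or gains state 1
        c1 += min(cost10 + s0, s1)      # node=1: child loses state 1 or keeps it
    return c0, c1

children = {"root": ["n1", "C"], "n1": ["A", "B"]}
tips = {"A": 1, "B": 1, "C": 0}
costs = sankoff("root", tips, children, cost01=1.0, cost10=1.0)
```

With equal costs both root states are equally parsimonious here (cost 1 each); raising cost10 to 3 tips the balance toward state 0 at the root, illustrating the threshold behavior.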


Background: While multiple alignment is the first step of usual classification schemes for biological sequences, alignment-free methods are being increasingly used as alternatives when multiple alignments fail. Subword-based combinatorial methods are popular for their low algorithmic complexity (suffix trees, …).


Background: As public microarray repositories are constantly growing, we are facing the challenge of designing strategies to provide productive access to the available data.

Methodology: We used a modified version of the Markov clustering algorithm to systematically extract clusters of co-regulated genes from hundreds of microarray datasets stored in the Gene Expression Omnibus database (n = 1,484). This approach led to the definition of 18,250 transcriptional signatures (TS) that were tested for functional enrichment using the DAVID knowledgebase.
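The plain Markov clustering idea underlying the modified algorithm can be sketched as follows (pure Python on a toy graph, not the modified version used in the study):

```python
# Markov clustering (MCL): random-walk expansion (matrix squaring) alternates
# with inflation (elementwise power, then column renormalization) until the
# flow matrix stabilizes; attractor rows then define the clusters.
def mcl(adj, inflation=2.0, iterations=50):
    n = len(adj)
    # column-stochastic matrix with self-loops
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    for j in range(n):
        s = sum(m[i][j] for i in range(n))
        for i in range(n):
            m[i][j] /= s
    for _ in range(iterations):
        # expansion: m = m @ m
        m = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # inflation: elementwise power, renormalize each column
        m = [[m[i][j] ** inflation for j in range(n)] for i in range(n)]
        for j in range(n):
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    # nodes attracted to the same row form a cluster
    clusters = {}
    for j in range(n):
        attractor = max(range(n), key=lambda i: m[i][j])
        clusters.setdefault(attractor, set()).add(j)
    return list(clusters.values())

# two triangles (nodes 0-2 and 3-5) joined by a single bridge edge
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
clusters = mcl(adj)
```

On this canonical example the flow separates at the bridge, recovering the two triangles as clusters.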


Background: We present the N-map method, a pairwise and asymmetrical approach which allows us to compare sequences by taking into account evolutionary events that produce shuffled, reversed or repeated elements. Basically, the optimal N-map of a sequence s over a sequence t is the best way of partitioning the first sequence into N parts and placing them, possibly complementary reversed, over the second sequence in order to maximize the sum of their gapless alignment scores.

Results: We introduce an algorithm computing an optimal N-map in O(|s| × |t| × N) time using O(|s| × |t| × N) memory space.


Background: In general, the construction of trees is based on sequence alignments. This procedure, however, leads to loss of information when parts of sequence alignments (for instance ambiguous regions) are deleted before tree building. To overcome this difficulty, one of us previously introduced a new and rapid algorithm that calculates dissimilarity matrices between sequences without preliminary alignment.


Subword composition plays an important role in many analyses of sequences. Here we define and study the "local decoding of order N of sequences," an alternative that avoids some drawbacks of "subwords of length N" approaches while keeping information about environments of length N in the sequences ("decoding" is taken here in the sense of hidden Markov modeling, i.e.
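For contrast, the plain "subwords of length N" baseline that local decoding improves on can be sketched as follows (illustrative, not the paper's method):

```python
from collections import Counter

# The "subwords of length N" baseline: two sequences are compared through
# the counts of their overlapping N-letter subwords.
def subword_counts(seq, n):
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def composition_distance(s, t, n=3):
    """L1 distance between normalized subword-frequency profiles."""
    cs, ct = subword_counts(s, n), subword_counts(t, n)
    total_s, total_t = sum(cs.values()), sum(ct.values())
    words = set(cs) | set(ct)
    return sum(abs(cs[w] / total_s - ct[w] / total_t) for w in words)

d_same = composition_distance("ACGTACGT", "ACGTACGT")
d_diff = composition_distance("AAAAAAAA", "ACGTACGT")
```

One drawback of this baseline, which motivates local decoding, is that a position contributes only through the exact subwords covering it, regardless of how its wider context disambiguates it.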


The number of statistical tools used to analyze transcriptome data is continuously increasing, and no single definitive method has yet emerged. There is a need for comparison, and a number of different approaches have been taken to evaluate the effectiveness of the statistical tools available for microarray analyses. In this paper, we describe a simple and efficient protocol to compare the reliability of these tools.
