Reproducible data analysis is an approach that aims to complement the classical printed scientific article with everything required to independently reproduce the results it presents. Here, "everything" covers the data, the computer code, and a precise description of how the code was applied to the data. A brief history of this approach is presented first, starting with what economists have called replication since the early eighties and ending with what is now called reproducible research in computationally oriented data analysis fields such as statistics and signal processing.
J Neurosci Methods, January 2006
We demonstrate the efficacy of a new spike-sorting method based on a Markov chain Monte Carlo (MCMC) algorithm by applying it to real data recorded from Purkinje cells (PCs) in young rat cerebellar slices. This algorithm is unique in its capability to estimate and make use of the firing statistics as well as the spike amplitude dynamics of the recorded neurons. PCs exhibit multiple discharge states, giving rise to multi-modal inter-spike interval (ISI) histograms and to correlations between successive ISIs.
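The following is a minimal, hypothetical sketch (not the paper's MCMC algorithm) of how multiple discharge states can produce the multi-modal ISI histograms and serial ISI correlations mentioned above: a neuron that switches between two assumed firing regimes, with ISIs drawn from an exponential distribution within each regime. The rates, switching probability, and function names are illustrative assumptions.

```python
# Hypothetical illustration (not the paper's MCMC spike-sorting algorithm):
# a neuron alternating between two discharge states yields a multi-modal
# ISI histogram and correlated successive ISIs.
import numpy as np

rng = np.random.default_rng(0)

def simulate_isis(n_spikes=5000, p_switch=0.05, rates_hz=(80.0, 8.0)):
    """Draw ISIs from a two-state renewal process.

    The neuron stays in its current state (burst-like vs. tonic, with the
    assumed rates above) and switches with probability `p_switch` after
    each spike; within a state, ISIs are exponentially distributed.
    """
    state = 0
    isis = np.empty(n_spikes)
    for i in range(n_spikes):
        isis[i] = rng.exponential(1.0 / rates_hz[state])
        if rng.random() < p_switch:
            state = 1 - state
    return isis

isis = simulate_isis()
# On a log scale the histogram shows two modes, one per discharge state,
# and successive ISIs are positively correlated because states persist.
counts, edges = np.histogram(np.log10(isis), bins=50)
print("serial correlation of successive ISIs:",
      np.corrcoef(isis[:-1], isis[1:])[0, 1])
```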
Spike-sorting techniques attempt to classify a series of noisy electrical waveforms according to the identity of the neurons that generated them. Existing techniques perform this classification while ignoring several properties of actual neurons whose use could ultimately improve classification performance. In this study, we propose a more realistic spike train generation model.
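As a hedged illustration of one neuron property such a generation model can incorporate, the sketch below generates spike times together with amplitudes that are attenuated after short ISIs and recover exponentially. The Poisson spike times, the exponential-recovery form, and all parameter values are assumptions made for illustration, not the paper's exact model.

```python
# Minimal sketch of spike-amplitude dynamics in a spike train generation
# model: spikes fired shortly after a previous spike are attenuated and
# recover exponentially with the ISI. Functional form and parameters are
# illustrative assumptions, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(1)

def generate_spike_train(duration_s=10.0, rate_hz=30.0,
                         peak_amplitude=1.0, depression=0.6,
                         recovery_tau_s=0.02, noise_sd=0.05):
    """Generate spike times and noisy peak amplitudes.

    Amplitude of a spike fired `isi` seconds after the previous one:
        a(isi) = peak_amplitude * (1 - depression * exp(-isi / recovery_tau_s))
    """
    # Poisson spike times for simplicity.
    isis = rng.exponential(1.0 / rate_hz, size=int(2 * duration_s * rate_hz))
    times = np.cumsum(isis)
    times = times[times < duration_s]
    isis = np.diff(np.concatenate(([0.0], times)))
    amplitudes = peak_amplitude * (1.0 - depression * np.exp(-isis / recovery_tau_s))
    amplitudes += rng.normal(0.0, noise_sd, size=amplitudes.size)
    return times, amplitudes

times, amps = generate_spike_train()
print(f"{times.size} spikes, amplitude range {amps.min():.2f}-{amps.max():.2f}")
```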