In this paper, we propose new Bayesian hierarchical representations of the lasso, adaptive lasso, and elastic net quantile regression models. These representations rest on the observation that the lasso penalty function corresponds to a scale mixture of truncated normal distributions with exponential mixing densities. We consider fully Bayesian treatments that lead to new Gibbs samplers with tractable full conditional posteriors.
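The abstract does not reproduce the paper's truncated-normal mixture or its sampler. As a purely illustrative sketch, the code below implements a closely related and widely used formulation of Bayesian lasso quantile regression: the asymmetric-Laplace working likelihood in its normal scale-mixture form (Kozumi and Kobayashi) combined with the Park and Casella scale-mixture lasso prior. The function name, hyperparameters, and all numerical choices are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import geninvgauss, invgauss

def gibbs_lasso_qr(y, X, tau=0.5, lam=1.0, n_iter=2000, seed=0):
    """Illustrative Gibbs sampler for lasso quantile regression (not the
    paper's truncated-normal construction).

    Uses the asymmetric-Laplace mixture representation
        y_i = x_i'beta + theta*v_i + psi*sqrt(v_i)*z_i,  v_i ~ Exp(1),
    with a Park-Casella normal scale-mixture lasso prior on beta.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = (1 - 2 * tau) / (tau * (1 - tau))
    psi2 = 2.0 / (tau * (1 - tau))

    beta = np.zeros(p)
    v = np.ones(n)            # latent exponential mixing variables (likelihood)
    s = np.ones(p)            # latent scales of the Laplace (lasso) prior
    draws = np.empty((n_iter, p))

    for it in range(n_iter):
        # beta | v, s, y : multivariate normal with tractable moments
        W = 1.0 / (psi2 * v)                       # observation precisions
        prec = X.T @ (X * W[:, None]) + np.diag(1.0 / s)
        cov = np.linalg.inv(prec)
        mean = cov @ (X.T @ (W * (y - theta * v)))
        beta = rng.multivariate_normal(mean, cov)

        # v_i | beta, y : generalized inverse Gaussian GIG(1/2, a, c_i)
        resid = y - X @ beta
        a = theta**2 / psi2 + 2.0
        c = np.maximum(resid**2 / psi2, 1e-10)
        v = geninvgauss.rvs(0.5, np.sqrt(a * c), scale=np.sqrt(c / a),
                            random_state=rng)

        # 1/s_j | beta_j : inverse Gaussian (Park-Casella conjugate step)
        m = lam / np.maximum(np.abs(beta), 1e-10)  # IG mean, shape lam^2
        inv_s = invgauss.rvs(m / lam**2, scale=lam**2, random_state=rng)
        s = 1.0 / inv_s

        draws[it] = beta
    return draws

# Usage on synthetic data (illustrative only)
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + rng.standard_normal(100)
draws = gibbs_lasso_qr(y, X, tau=0.5, lam=2.0, n_iter=1000)
print(draws[500:].mean(axis=0))   # posterior means after burn-in
```

Because every full conditional is a standard distribution, each sweep is a sequence of exact draws, which is what "tractable full conditional posteriors" buys over a generic Metropolis scheme.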
Classical adaptive lasso regression is known to possess the oracle property: it performs as well as if the correct submodel were known in advance. However, it requires consistent initial estimates of the regression coefficients, which are generally unavailable in high-dimensional settings. In addition, none of the algorithms used to compute the adaptive lasso estimator provides a valid measure of standard error.
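The abstract does not spell out the Bayesian remedy, but the standard device in Bayesian adaptive lasso models is to replace the single penalty lam with coefficient-specific penalties lam_j under conjugate gamma hyperpriors, so that the penalties are learned from the data rather than from initial estimates, and posterior draws of beta yield valid uncertainty measures. A hedged sketch of that conjugate update, compatible with the sampler above (the hyperparameters a and b are illustrative assumptions):

```python
import numpy as np

def update_adaptive_penalties(s, a=1.0, b=0.1, rng=None):
    """Draw coefficient-specific penalties lam_j^2 | s_j ~ Gamma(a + 1, b + s_j/2).

    Conjugate update under s_j | lam_j^2 ~ Exp(lam_j^2 / 2) and a Gamma(a, b)
    hyperprior on each lam_j^2; a and b are illustrative choices.
    """
    rng = rng or np.random.default_rng()
    return rng.gamma(a + 1.0, 1.0 / (b + s / 2.0))  # numpy gamma uses scale = 1/rate
```

In the sampler above, each s_j update would then use its own lam_j in place of the shared lam, and the posterior standard deviations of beta come directly from the retained draws.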