Trautsamwieser, A. (2008). Estimation of time series models with Lasso type algorithms [Diploma Thesis, Technische Universität Wien]. reposiTUm. http://hdl.handle.net/20.500.12708/179726
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/179726
-
dc.description.abstract
This thesis deals mainly with the theory of variable selection in regression, which is particularly important for model interpretation.

Modern computing confronts us with enormous amounts of data. This is one reason to move beyond the common Ordinary Least Squares solution, which usually provides a good prediction mean squared error but is hard to interpret, since it includes all available predictors. We consider methods that offer good interpretability on the one hand and satisfactory prediction accuracy on the other; all of them solve constrained regression problems. We examine the methods both theoretically and practically, the latter by comparing them on time series data.

This work is organised as follows. Chapter 1 states some preliminaries.

In Chapter 2 the Lasso estimator is introduced and discussed. We propose an efficient algorithm and show asymptotic properties which point out that the Lasso is inconsistent in variable selection if the tuning parameter is chosen such that estimation is consistent. We further present cases in which consistent variable selection is possible and mention some other shortcomings of the Lasso.

In Chapter 3, relatives of the Lasso are described. We discuss the Adaptive Lasso, which puts data-dependent weights on the absolute size of the parameters, and prove that this method is able to combine estimation consistency and variable selection consistency. We then describe the Elastic Net procedure, which solves a regression problem with a linear combination of the Ridge penalty and the Lasso penalty as constraint; this method combines the advantages of the Lasso estimator and the Ridge estimator. Afterwards we introduce the Huberised Lasso, a robustified version of the Lasso, and present the Marginal Bridge estimator, which uses a non-convex penalty term.

In Chapter 4 we compare the methods for the orthonormal case, and in Chapter 5 we discuss the optimal choice of the tuning parameters. Finally, we provide empirical results in Chapter 6 and give discussion and conclusions in Chapter 7.
en
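For reference, the penalised least-squares criteria behind the estimators named in the abstract can be sketched in standard textbook notation; the symbols $y$, $X$, $\beta$, the weights $\hat{w}_j$, and the tuning parameters $\lambda$, $\lambda_1$, $\lambda_2$, $\gamma$ are notation assumed for this sketch, not taken from the thesis itself:

\[
\hat{\beta}^{\mathrm{lasso}} = \arg\min_{\beta} \, \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} |\beta_j|,
\qquad
\hat{\beta}^{\mathrm{alasso}} = \arg\min_{\beta} \, \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \hat{w}_j |\beta_j|,
\]

where the Adaptive Lasso uses data-dependent weights such as $\hat{w}_j = 1/|\hat{\beta}_j^{\mathrm{init}}|^{\gamma}$ built from an initial estimate, and

\[
\hat{\beta}^{\mathrm{enet}} = \arg\min_{\beta} \, \|y - X\beta\|_2^2 + \lambda_1 \sum_{j=1}^{p} |\beta_j| + \lambda_2 \sum_{j=1}^{p} \beta_j^2 .
\]

The Huberised Lasso keeps the $\ell_1$ penalty but replaces the squared loss with the Huber loss to gain robustness, and a Bridge-type penalty, as used by the Marginal Bridge estimator, replaces $\sum_j |\beta_j|$ by the non-convex term $\sum_j |\beta_j|^{\gamma}$ with $0 < \gamma < 1$.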
dc.language
English
-
dc.language.iso
en
-
dc.subject
Zeitreihenmodelle
de
dc.subject
konsistente Parameterschätzung
de
dc.subject
konsistente Modellspezifikation
de
dc.subject
Lasso
de
dc.subject
Adaptive Lasso
de
dc.subject
Elastic Net
de
dc.subject
Huberised Lasso
de
dc.subject
Marginal Bridge
de
dc.subject
time series models
en
dc.subject
consistent estimation
en
dc.subject
consistent selection
en
dc.subject
Lasso
en
dc.subject
Adaptive Lasso
en
dc.subject
Elastic Net
en
dc.subject
Huberised Lasso
en
dc.subject
Marginal Bridge
en
dc.title
Estimation of time series models with Lasso type algorithms