
Inference and learning in high-dimensional spaces

Weinstein, Alejandro J.
Abstract
High-dimensional problems have received a considerable amount of attention from numerous scientific communities over the last decade. This thesis considers three research thrusts that fall under the umbrella of inference and learning in high-dimensional spaces. Each of these thrusts tackles the so-called "curse of dimensionality" in a particular way.

The first research thrust focuses on recovering a signal whose amplitudes have been clipped. We present two new algorithms for recovering a clipped signal by leveraging the model assumption that the underlying signal is sparse in the frequency domain. Both algorithms employ ideas commonly used in the field of Compressive Sensing (CS): the first is a modified version of Reweighted ℓ1 minimization, and the second is a modification of a simple greedy algorithm known as Trivial Pursuit. An empirical investigation shows that both approaches can recover signals with significant levels of clipping.

The second research thrust focuses on denoising a signal ensemble by exploiting sparsity at both the inter- and intra-signal level. The problem of signal denoising using thresholding estimators has received a significant amount of attention in the literature, starting in the 1990s when Donoho and Johnstone introduced the concept of wavelet shrinkage. In this approach, the signal is represented in a basis where it is sparse, and each noisy coefficient is thresholded by a parameter that depends on the noise level. We extend this concept to the case where one has a set of signals and the locations of the nonzero coefficients are the same for all of them. Our approach is based on a vetoing mechanism: in addition to thresholding, the inter-signal information is used to "save" a coefficient that would otherwise be "killed". Our method achieves better performance than independent denoising, and we quantify the expected value of this improvement. The results show a consistent improvement over independent denoising, achieving results close to those produced by an oracle. We validate the technique using both synthetic and real-world signals.

The third research thrust focuses on using sparse models in Reinforcement Learning (RL). In RL one is interested in designing an agent able to interact with a given environment. The agent observes its current state and, based on this observation, takes an action; as a consequence, it receives a reward and transitions to a new state. The design objective is to devise a policy, or control rule, that maximizes the aggregated rewards. When the number of states is large, the design of such policies requires the use of function approximation; it also requires the design of feature vectors, i.e., a mapping from a state to a vector that summarizes the state. In this work we propose new algorithms that, by exploiting the structure of the functions to be approximated, simplify the design of the feature vectors. These methods are also more efficient than existing ones in terms of computational complexity and the number of samples required. We evaluate the performance of the proposed methods empirically in a variety of environments.
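As a small illustration of the setting of the first thrust (not of the thesis's recovery algorithms themselves), the sketch below hard-clips a signal and identifies the saturated samples. At those indices the observation provides only a bound on the true amplitude, which is exactly the information a frequency-sparse recovery method such as reweighted ℓ1 must exploit. The function names and tolerance are hypothetical.

```python
def clip(signal, level):
    """Hard-clip every amplitude to the interval [-level, level]."""
    return [max(-level, min(level, s)) for s in signal]

def clipped_indices(signal, level, tol=1e-12):
    """Indices where the observation is saturated. At these samples only a
    lower bound on the true amplitude is known, so a recovery algorithm
    must treat them as constraints rather than as exact measurements."""
    return [i for i, s in enumerate(signal) if abs(abs(s) - level) <= tol]
```

For example, clipping `[0.5, 2.0, -3.0]` at level `1.0` yields `[0.5, 1.0, -1.0]`, and `clipped_indices` then flags samples 1 and 2 as unreliable.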
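The thresholding-with-veto idea of the second thrust can be sketched as follows. The soft-threshold operator and the universal level σ√(2 log n) follow Donoho and Johnstone's wavelet shrinkage; the veto rule shown here (save a sub-threshold coefficient when enough sibling signals exceed the threshold at the same location) is a hypothetical majority-vote variant for illustration, not the exact mechanism derived in the thesis.

```python
import math

def soft_threshold(x, lam):
    """Soft-threshold a single coefficient: shrink its magnitude by lam."""
    if abs(x) <= lam:
        return 0.0
    return math.copysign(abs(x) - lam, x)

def denoise_ensemble(coeffs, sigma, vote_fraction=0.5):
    """Denoise an ensemble of coefficient vectors that share a common
    support. Each coefficient is soft-thresholded at the universal level
    sigma * sqrt(2 log n), but a sub-threshold coefficient is 'saved'
    (kept as observed) when at least vote_fraction of the sibling signals
    exceed the threshold at the same location."""
    n = len(coeffs[0])
    lam = sigma * math.sqrt(2.0 * math.log(n))  # universal threshold
    out = []
    for i, sig in enumerate(coeffs):
        denoised = []
        for k, c in enumerate(sig):
            siblings = [s[k] for j, s in enumerate(coeffs) if j != i]
            votes = sum(abs(s) > lam for s in siblings)
            if abs(c) <= lam and votes >= vote_fraction * len(siblings):
                denoised.append(c)  # vetoed: the siblings save this one
            else:
                denoised.append(soft_threshold(c, lam))
        out.append(denoised)
    return out
```

With independent denoising, every coefficient below λ is killed; here a coefficient at a location where the other signals show energy survives, which is the inter-signal sparsity the abstract describes.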
Rights
Copyright of the original work is retained by the author.