Sparse Modeling with Universal Priors and Learned Incoherent Dictionaries (Preprint)
University of Minnesota, Minneapolis, Institute for Mathematics and its Applications
Sparse data models have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. The learning of sparse models has mostly been concerned with adapting the dictionary to tasks such as classification and reconstruction, optimizing extrinsic properties of the trained dictionaries. In this work, we first propose a learning method aimed at enhancing both extrinsic and intrinsic properties of the dictionaries, such as the mutual and cumulative coherence and the Gram matrix norm, characteristics known to improve the efficiency and performance of sparse coding algorithms. We then use tools from information theory to propose a sparsity regularization term which has several desirable theoretical and practical advantages over the more standard ℓ0 or ℓ1 ones. These new sparse modeling components lead to improved coding performance and accuracy in reconstruction tasks.
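The intrinsic properties mentioned above can be illustrated concretely. As a minimal sketch (not code from the preprint), the following computes the mutual coherence of a dictionary: the largest absolute off-diagonal entry of the Gram matrix of the column-normalized dictionary. Lower coherence generally gives better-conditioned sparse coding. The random dictionary here is purely illustrative.

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence of dictionary D (columns are atoms).

    Defined as the maximum absolute inner product between distinct
    unit-normalized atoms, i.e. the largest off-diagonal magnitude
    of the Gram matrix of the column-normalized dictionary.
    """
    # Normalize each atom (column) to unit l2 norm.
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = Dn.T @ Dn                # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)     # ignore self-correlations
    return np.abs(G).max()

# Illustrative overcomplete dictionary: 16-dimensional atoms, 32 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
mu = mutual_coherence(D)
```

Since the normalized atoms are unit vectors, `mu` always lies in [0, 1]; a coherence-aware learning method of the kind described above would drive this value down while still fitting the training data.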
- Information Science
- Radiofrequency Wave Propagation