Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses

Fabien Lauer
ABC - Machine Learning and Computational Biology, LORIA - ALGO Department (Algorithms, Computation, Image and Geometry)
Abstract: This paper proposes a simple approach to deriving efficient error bounds for learning multiple components with sparsity-inducing regularization. We show that, for such regularization schemes, known decompositions of the Rademacher complexity over the components can be exploited more efficiently to yield tighter bounds with little additional effort. We give example applications to switching regression and center-based clustering/vector quantization, and then illustrate the complete workflow on subspace clustering, for which decomposition results were not previously available. For all these problems, the proposed approach yields risk bounds with mild dependencies on the number of components, and entirely removes this dependency for nonconvex regularization schemes that previous methods could not handle.
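To illustrate the kind of losses the title refers to (a sketch, not the paper's formal development): in center-based clustering and switching regression, each data point is charged the cost of its best-fitting component, so relabeling (permuting) the components leaves the empirical risk unchanged. A minimal NumPy sketch with synthetic data; all variable names here are hypothetical:

```python
import numpy as np

def clustering_loss(X, centers):
    """Permutation-invariant clustering/vector-quantization risk:
    each point pays the squared distance to its nearest center."""
    # pairwise squared distances, shape (n_points, n_centers)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def switching_regression_loss(X, y, W):
    """Permutation-invariant switching-regression risk:
    each sample is explained by the best of c linear models (rows of W)."""
    residuals = (y[:, None] - X @ W.T) ** 2  # shape (n_samples, c)
    return residuals.min(axis=1).mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
centers = rng.standard_normal((3, 2))

# Invariance check: permuting the components gives the same risk.
perm = np.array([2, 0, 1])
assert np.isclose(clustering_loss(X, centers),
                  clustering_loss(X, centers[perm]))
```

The `min` over components is what makes the loss permutation-invariant, and it is also why a naive union bound over component labelings is loose, motivating the tighter decompositions studied in the paper.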
Document type: Preprint / Working Paper

Cited literature: 22 references

Contributor: Fabien Lauer
Submitted on: Tuesday, April 16, 2019 - 11:35:00 AM
Last modification on: Wednesday, April 17, 2019 - 1:32:55 AM



  • HAL Id: hal-02100779, version 1
  • arXiv: 1904.07594



Fabien Lauer. Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses. 2019. ⟨hal-02100779⟩


