Tag Archives: Kernel methods

Learning Output Embeddings in Structured Prediction (submitted to AISTATS’21)

By Luc Brogat-Motte, Alessandro Rudi, Céline Brouard, Juho Rousu, Florence d’Alché-Buc.

Submitted to AISTATS, 2021

https://arxiv.org/abs/2007.14703

Abstract. A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension by means of output kernels, and then solving a regression problem in this output space. A prediction in the original space is computed by solving a pre-image problem. In such an approach, the embedding, linked to the target loss, is defined prior to the learning phase. In this work, we propose to jointly learn a finite approximation of the output embedding and the regression function into the new feature space. For that purpose, we leverage prior information on the outputs as well as unexploited unsupervised output data, both of which are often available in structured prediction problems. We prove that the resulting structured predictor is a consistent estimator, and derive an excess risk bound. Moreover, the novel structured prediction tool enjoys a significantly smaller computational complexity than previous output kernel methods. Empirically tested on various structured prediction problems, the approach proves versatile and able to handle large datasets.
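For readers who want a concrete picture of the pipeline this paper improves on, here is a minimal Python sketch of the classical fixed-embedding scheme: kernel ridge regression into the output feature space, followed by a candidate-based pre-image (decoding) step. This is not the authors' learned-embedding method; all function names, kernels, and hyperparameters are illustrative.

```python
# Minimal sketch of the fixed-output-embedding baseline: ridge regression
# into an output feature space, then decoding over a candidate set.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fit_ridge(X_train, lam=1e-3, gamma_x=1.0):
    """Precompute the ridge matrix (K_x + n*lam*I)^{-1}."""
    Kx = rbf_kernel(X_train, X_train, gamma_x)
    n = Kx.shape[0]
    return np.linalg.solve(Kx + n * lam * np.eye(n), np.eye(n))

def predict(X_train, Y_train, Y_candidates, M, x_test, gamma_x=1.0, gamma_y=1.0):
    """Pre-image step: pick the candidate whose output embedding best
    matches the regressed point, computed via the output kernel."""
    kx = rbf_kernel(x_test[None, :], X_train, gamma_x)   # (1, n)
    alpha = (kx @ M).ravel()                             # regression weights
    Ky = rbf_kernel(Y_candidates, Y_train, gamma_y)      # (m, n)
    scores = Ky @ alpha                                  # <psi(y_c), f(x)>
    return Y_candidates[np.argmax(scores)]

# Toy usage: outputs are vectors here, but any structure with a kernel works.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(50, 5)), rng.normal(size=(50, 3))
M = fit_ridge(X)
y_hat = predict(X, Y, Y, M, rng.normal(size=5))
```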

Interpretable time series kernel analytics by pre-image estimation (in Artificial Intelligence 2020)

by Thi Phuong Thao Tran, Ahlame Douzal-Chouakria, Saeed Varasteh Yazdi, Paul Honeine, Patrick Gallinari.

Paper published in Artificial Intelligence (Volume 286, September 2020, 103342):
https://www.sciencedirect.com/science/article/abs/pii/S0004370220300989

Abstract. Kernel methods are known to be effective for analysing complex objects by implicitly embedding them into some feature space. To interpret and analyse the obtained results, it is often necessary to map the results obtained in the feature space back to the input space, using pre-image estimation methods. This work proposes a new closed-form pre-image estimation method for time series kernel analytics that consists of two steps. In the first step, a time warp function, driven by distance constraints in the feature space, is defined to embed time series in a metric space where analytics can be performed conveniently. In the second step, the time series pre-image estimation is cast as learning a linear (or a nonlinear) transformation that ensures a local isometry between the time series embedding space and the feature space. The proposed method is compared to the state of the art through three major tasks that require pre-image estimation: 1) time series averaging, 2) time series reconstruction and denoising, and 3) time series representation learning. Extensive experiments conducted on 33 publicly available datasets show the benefits of pre-image estimation for time series kernel analytics.
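As a rough illustration of the "learn a transformation back to the input space" idea, here is a hedged Python sketch that computes kernel PCA coordinates and fits a closed-form ridge map back to the inputs. The paper's actual method additionally builds a time-warp embedding for time series and enforces a local isometry; none of that is reproduced here, and all names are illustrative.

```python
# Hedged sketch: closed-form pre-image by learning a linear map from
# kernel-PCA coordinates back to the input space (ridge regression).
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kpca_coords(K, n_components=5):
    """Centered kernel PCA coordinates of the training set."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    return Kc @ (V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12)))

def fit_preimage_map(Z, X, lam=1e-3):
    """Closed-form ridge map B with X ~ Z @ B (feature coords -> inputs)."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ X)

# Toy usage: denoise by projecting in feature space, then mapping back.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))           # 40 series of length 20
Z = kpca_coords(rbf_kernel(X, X))       # feature-space coordinates
B = fit_preimage_map(Z, X)
X_rec = Z @ B                           # closed-form pre-images
```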

Duality in RKHSs with Infinite Dimensional Outputs: Application to Robust Losses (in ICML’20)

By Pierre Laforgue, Alex Lambert, Luc Brogat-Motte, Florence d’Alché-Buc.

In Proceedings of the 37th International Conference on Machine Learning (ICML), Online, PMLR 119, 2020.

https://arxiv.org/pdf/1910.04621.pdf

Abstract. Operator-Valued Kernels (OVKs) and the associated vector-valued Reproducing Kernel Hilbert Spaces provide an elegant way to extend scalar kernel methods when the output space is a Hilbert space. Although primarily used in finite dimension for problems like multi-task regression, the ability of this framework to deal with infinite dimensional output spaces unlocks many more applications, such as functional regression, structured output prediction, and structured data representation. However, these sophisticated schemes crucially rely on the kernel trick in the output space, so that most previous works have focused on the square norm loss function, completely neglecting robustness issues that may arise in such surrogate problems. To overcome this limitation, this paper develops a duality approach that makes it possible to solve OVK machines for a wide range of loss functions. The infinite dimensional Lagrange multipliers are handled through a Double Representer Theorem, and algorithms for epsilon-insensitive losses and the Huber loss are thoroughly detailed. Robustness benefits are emphasized by a theoretical stability analysis, as well as empirical improvements on structured data applications.
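To fix ideas, below is a minimal Python sketch of the square-loss baseline that the paper's duality approach generalizes: vector-valued kernel ridge regression with a separable operator-valued kernel K(x, x') = k(x, x')A, solved in closed form by diagonalizing A. The dual treatment of epsilon-insensitive and Huber losses is the paper's contribution and is not shown here; all names are illustrative.

```python
# Hedged sketch: vector-valued kernel ridge regression with a separable
# operator-valued kernel K(x, x') = k(x, x') * A (square loss only).
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-gamma * sq)

def fit_ovk_ridge(K, Y, A, lam=1e-2):
    """Closed-form solve via the eigendecomposition A = U diag(d) U^T:
    each eigen-direction reduces to a scalar kernel ridge problem."""
    n = K.shape[0]
    d, U = np.linalg.eigh(A)              # A symmetric PSD
    Yt = Y @ U
    Ct = np.zeros_like(Yt)
    for j, dj in enumerate(d):
        if dj > 1e-12:                    # skip null directions of A
            Ct[:, j] = np.linalg.solve(dj * K + n * lam * np.eye(n), Yt[:, j])
    return Ct @ U.T                       # dual coefficients C (n x p)

def predict_ovk(C, A, K_test_train):
    """f(x) = sum_i k(x, x_i) A c_i, stacked for a test batch."""
    return K_test_train @ C @ A

# Toy usage with outputs in R^3 and A encoding output correlations.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(60, 4)), rng.normal(size=(60, 3))
A = np.eye(3) + 0.3 * np.ones((3, 3))
C = fit_ovk_ridge(rbf_kernel(X, X), Y, A)
Y_hat = predict_ovk(C, A, rbf_kernel(X, X))
```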

Pixel-wise linear/nonlinear nonnegative matrix factorization for unmixing of hyperspectral data (in ICASSP’20)

By Fei Zhu, Paul Honeine, Jie Chen.

In Proc. 45th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020.

doi:10.1109/ICASSP40776.2020.9053239

Abstract. Nonlinear spectral unmixing is a challenging and important task in hyperspectral image analysis. The kernel-based bi-objective nonnegative matrix factorization (Bi-NMF) has shown its usefulness in nonlinear unmixing; however, it suffers from several issues that prohibit its practical application. In this work, we propose an unsupervised nonlinear unmixing method that overcomes these weaknesses. Specifically, the new method introduces into each pixel a parameter that adjusts the nonlinearity therein. These parameters are jointly optimized with the endmembers and abundances by applying multiplicative update rules to a carefully designed objective function. Experiments on synthetic and real datasets confirm the effectiveness of the proposed method.
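For context, here is a hedged Python sketch of the linear-mixing NMF baseline (Lee-Seung multiplicative updates) that pixel-wise linear/nonlinear models extend. The paper's method additionally learns one nonlinearity parameter per pixel jointly with the endmembers and abundances; that part is not reproduced here, and all names are illustrative.

```python
# Hedged sketch: linear unmixing with NMF multiplicative updates,
# factoring X (bands x pixels) as endmembers E times abundances A.
import numpy as np

def nmf_unmix(X, R, n_iter=200, eps=1e-9, seed=0):
    """Minimize ||X - E A||_F^2 with E (bands x R) and A (R x pixels)
    kept nonnegative by multiplicative updates."""
    rng = np.random.default_rng(seed)
    B, P = X.shape
    E = rng.random((B, R))
    A = rng.random((R, P))
    for _ in range(n_iter):
        A *= (E.T @ X) / (E.T @ E @ A + eps)   # update abundances
        E *= (X @ A.T) / (E @ A @ A.T + eps)   # update endmembers
    return E, A

# Toy usage: 50 spectral bands, 100 pixels, 3 endmembers.
rng = np.random.default_rng(1)
X = rng.random((50, 3)) @ rng.random((3, 100))
E, A = nmf_unmix(X, R=3)
```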