Efficient Optimization for Rank-based Loss Functions

Pritish Mohapatra, Michal Rolínek, C.V. Jawahar, Vladimir Kolmogorov and M. Pawan Kumar.

arXiv technical report, April 2016.


Abstract

The accuracy of information retrieval systems is often measured using complex non-decomposable loss functions such as the average precision (AP) or the normalized discounted cumulative gain (NDCG). Given a set of positive (relevant) and negative (non-relevant) samples, the parameters of a retrieval system can be estimated using a rank SVM framework, which minimizes a regularized convex upper bound on the empirical loss. However, the high computational complexity of loss-augmented inference, which is required to learn a rank SVM, prohibits its use on large training datasets. To alleviate this deficiency, we present a novel quicksort-flavored algorithm for a large class of non-decomposable loss functions. We provide a complete characterization of the loss functions that are amenable to our algorithm. Furthermore, we prove that no comparison-based algorithm can asymptotically improve upon the computational complexity of our approach. We demonstrate that it is possible to reduce the constant factors of the complexity by exploiting the special structure of the AP loss. Using the PASCAL VOC action recognition and object detection datasets, we show that our approach provides significantly better results than baseline methods that use a simpler decomposable loss, in comparable runtime.
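For reference, the AP loss mentioned in the abstract is one minus the average precision of a ranking: the mean, over all positive samples, of the precision at each positive sample's rank. A minimal sketch of this computation (not the paper's loss-augmented inference algorithm, just the loss itself), assuming binary relevance labels and real-valued scores:

```python
def average_precision(scores, labels):
    """AP of a ranking induced by descending scores.

    scores: list of real-valued scores, one per sample.
    labels: list of 0/1 relevance labels (1 = positive/relevant).
    """
    # Rank samples by descending score.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    num_pos = sum(labels)
    if num_pos == 0:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precision_sum += hits / rank  # precision at this positive's rank
    return precision_sum / num_pos


def ap_loss(scores, labels):
    # The rank-based loss minimized (via a convex upper bound) in the paper.
    return 1.0 - average_precision(scores, labels)
```

For example, with scores `[0.9, 0.8, 0.7]` and labels `[1, 0, 1]`, the positives sit at ranks 1 and 3, giving AP = (1/1 + 2/3) / 2 = 5/6. Note that AP does not decompose over individual samples, which is what makes loss-augmented inference expensive in the rank SVM framework.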


Links

arXiv