Efficient Dual Approach to Distance Metric Learning. Shen, C., Kim, J., Liu, F., Wang, L., & van den Hengel, A. IEEE Transactions on Neural Networks and Learning Systems, 25(2):394--406, 2014.

Abstract: Distance metric learning is of fundamental interest in machine learning because the distance metric employed can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. Standard interior-point SDP solvers typically have a complexity of O(D^6.5) (with D the dimension of the input data), and can thus only practically solve problems with fewer than a few thousand variables. Since the number of variables is D(D+1)/2, this limits the problems that can practically be solved to around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here we propose a significantly more efficient approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of datasets demonstrate that the proposed method achieves accuracy comparable to the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to approximately solve more general Frobenius-norm regularized SDP problems.
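To ground the numbers in the abstract: for D = 100 input dimensions the learned symmetric matrix already has D(D+1)/2 = 5050 free variables, and the count grows quadratically, so a solver ceiling of a few thousand variables translates into a dimension limit in the low hundreds. The O(D^3) cost of the dual approach, by contrast, is the cost of a single eigendecomposition: recovering a valid metric from the dual variables reduces to projecting a symmetric matrix onto the positive semidefinite (PSD) cone. The NumPy sketch below shows that projection primitive in isolation; it illustrates the kind of O(D^3) operation involved rather than the paper's full algorithm, and the function name is illustrative.

import numpy as np

def project_psd(M):
    # Project a symmetric matrix onto the PSD cone: keep the
    # eigenvectors of M and clip negative eigenvalues to zero.
    # One eigendecomposition of a D x D matrix costs O(D^3),
    # matching the per-step cost quoted in the abstract.
    M = 0.5 * (M + M.T)               # symmetrize against numerical drift
    w, V = np.linalg.eigh(M)          # eigendecomposition, O(D^3)
    w = np.clip(w, 0.0, None)         # zero out negative eigenvalues
    return (V * w) @ V.T              # V diag(w) V^T

# Example: a symmetric matrix with mixed-sign eigenvalues.
A = np.array([[ 2.,  1., 0.,  0.],
              [ 1., -3., 1.,  0.],
              [ 0.,  1., 1.,  1.],
              [ 0.,  0., 1., -1.]])
X = project_psd(A)
assert np.all(np.linalg.eigvalsh(X) >= -1e-10)   # X is now PSD

Clipping eigenvalues instead of re-solving an SDP is what makes each step cheap here: eigh on a D x D matrix is O(D^3), versus the O(D^6.5) interior-point cost for the same matrix size.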
@article{Shen2014Metric,
  author   = {Chunhua Shen and Junae Kim and Fayao Liu and Lei Wang and Anton {van den Hengel}},
  title    = {Efficient Dual Approach to Distance Metric Learning},
  journal  = {IEEE Transactions on Neural Networks and Learning Systems},
  volume   = {25},
  number   = {2},
  year     = {2014},
  pages    = {394--406},
  eprint   = {1302.3219},
  venue    = {TNN},
  abstract = {Distance metric learning is of fundamental interest in machine
    learning because the distance metric employed can significantly affect
    the performance of many learning methods. Quadratic Mahalanobis metric
    learning is a popular approach to the problem, but typically requires
    solving a semidefinite programming (SDP) problem, which is
    computationally expensive. Standard interior-point SDP solvers typically
    have a complexity of O(D^6.5) (with D the dimension of the input data),
    and can thus only practically solve problems with fewer than a few
    thousand variables. Since the number of variables is D(D+1)/2, this
    limits the problems that can practically be solved to around a few
    hundred dimensions. The complexity of the popular quadratic Mahalanobis
    metric learning approach thus limits the size of problem to which metric
    learning can be applied. Here we propose a significantly more efficient
    approach to the metric learning problem based on the Lagrange dual
    formulation of the problem. The proposed formulation is much simpler to
    implement, and therefore allows much larger Mahalanobis metric learning
    problems to be solved. The time complexity of the proposed method is
    O(D^3), which is significantly lower than that of the SDP approach.
    Experiments on a variety of datasets demonstrate that the proposed
    method achieves accuracy comparable to the state of the art, but is
    applicable to significantly larger problems. We also show that the
    proposed method can be applied to approximately solve more general
    Frobenius-norm regularized SDP problems.},
}
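For the more general Frobenius-norm regularized SDPs mentioned at the end of the abstract, the structure a dual approach exploits can be sketched generically. Consider min_X 0.5*||X - C||_F^2 subject to <A_i, X> >= b_i and X PSD. Dualizing only the linear constraints, the inner minimizer has the closed form X(u) = P_PSD(C + sum_i u_i A_i), and by Danskin's theorem the dual gradient in u_i is b_i - <A_i, X(u)>, so each dual step costs one O(D^3) projection. The sketch below uses plain projected gradient ascent for brevity; it follows this dual structure but is not the authors' implementation (their formulation and solver details differ), and the problem instance is a made-up toy.

import numpy as np

def project_psd(M):
    # PSD-cone projection via eigendecomposition, O(D^3).
    w, V = np.linalg.eigh(0.5 * (M + M.T))
    return (V * np.clip(w, 0.0, None)) @ V.T

def frob_sdp_dual_ascent(C, A_list, b, steps=1000, lr=0.05):
    # Hypothetical sketch: projected dual ascent for
    #   min_X 0.5*||X - C||_F^2  s.t.  <A_i, X> >= b_i,  X PSD.
    # For fixed multipliers u >= 0 the inner problem is a projection:
    #   X(u) = P_PSD(C + sum_i u_i * A_i),
    # and the dual gradient in u_i is b_i - <A_i, X(u)>.
    u = np.zeros(len(A_list))
    for _ in range(steps):
        X = project_psd(C + sum(ui * Ai for ui, Ai in zip(u, A_list)))
        grad = b - np.array([np.sum(Ai * X) for Ai in A_list])
        u = np.maximum(0.0, u + lr * grad)    # keep multipliers nonnegative
    return project_psd(C + sum(ui * Ai for ui, Ai in zip(u, A_list)))

# Toy usage: nudge an identity-seeded metric to satisfy one constraint.
D = 5
C = np.eye(D)
A1 = np.zeros((D, D))
A1[0, 1] = A1[1, 0] = 0.5        # so that <A1, X> = X[0, 1]
X = frob_sdp_dual_ascent(C, [A1], np.array([0.3]))
print(round(X[0, 1], 3))         # -> 0.3, with X remaining PSD

The point of the exercise is that the only heavy linear-algebra operation in the loop is the eigendecomposition inside project_psd, which is why this family of methods scales as O(D^3) per step rather than O(D^6.5).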
{"_id":"bJdZYi4RcfNEnkYF8","bibbaseid":"shen-kim-liu-wang-vandenhengel-efficientdualapproachtodistancemetriclearning-2014","downloads":0,"creationDate":"2016-08-12T01:53:44.371Z","title":"Efficient Dual Approach to Distance Metric Learning","author_short":["Shen, C.","Kim, J.","Liu, F.","Wang, L.","van den Hengel , A."],"year":2014,"bibtype":"article","biburl":"http://cs.adelaide.edu.au/~chhshen/cs.bib","bibdata":{"bibtype":"article","type":"article","author":[{"firstnames":["Chunhua"],"propositions":[],"lastnames":["Shen"],"suffixes":[]},{"firstnames":["Junae"],"propositions":[],"lastnames":["Kim"],"suffixes":[]},{"firstnames":["Fayao"],"propositions":[],"lastnames":["Liu"],"suffixes":[]},{"firstnames":["Lei"],"propositions":[],"lastnames":["Wang"],"suffixes":[]},{"firstnames":["Anton"],"propositions":["van den Hengel"],"lastnames":[],"suffixes":[]}],"title":"Efficient Dual Approach to Distance Metric Learning","journal":"IEEE Transactions on Neural Networks and Learning Systems","volume":"25","number":"2","year":"2014","month":"","pages":"394--406","eprint":"1302.3219","venue":"TNN","note":"","abstract":"Distance metric learning is of fundamental interest in machine learning because the distance metric employed can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. Standard interior-point SDP solvers typically have a complexity of O(D^6.5) (with D the dimension of input data), and can thus only practically solve problems exhibiting less than a few thousand variables. Since the number of variables is D(D+1)/2, this implies a limit upon the size of problem that can practically be solved of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here we propose a significantly more efficient approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of datasets demonstrate that the proposed method achieves an accuracy comparable to the state-of-the-art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius-norm regularized SDP problems approximately. ","bibtex":"@article{Shen2014Metric,\r\n author = {Chunhua Shen and Junae Kim and Fayao Liu and Lei Wang and Anton {van den Hengel}},\r\n title = {Efficient Dual Approach to Distance Metric Learning},\r\n journal= {IEEE Transactions on Neural Networks and Learning Systems},\r\n volume = {25},\r\n number = {2},\r\n year = {2014},\r\n month = {},\r\n pages = {394--406},\r\n eprint = {1302.3219},\r\n venue = {TNN},\r\n note = {},\r\n abstract= {\r\n Distance metric learning is of fundamental interest in machine learning\r\n because the distance metric employed can significantly affect the\r\n performance of many learning methods. Quadratic Mahalanobis metric\r\n learning is a popular approach to the problem, but typically requires\r\n solving a semidefinite programming (SDP) problem, which is\r\n computationally expensive. 
Standard interior-point SDP solvers\r\n typically have a complexity of O(D^6.5) (with D the dimension of input\r\n data), and can thus only practically solve problems exhibiting\r\n less than a few thousand variables. Since the number of variables is\r\n D(D+1)/2, this implies a limit upon the size of problem that can\r\n practically be solved of around a few hundred dimensions. The\r\n complexity of the popular quadratic Mahalanobis metric learning\r\n approach thus limits the size of problem to which metric learning can\r\n be applied. Here we propose a significantly more efficient approach to\r\n the metric learning problem based on the Lagrange dual formulation of\r\n the problem. The proposed formulation is much simpler to implement, and\r\n therefore allows much larger Mahalanobis metric learning problems to be\r\n solved. The time complexity of the proposed method is O(D^3), which is\r\n significantly lower than that of the SDP approach. Experiments on a\r\n variety of datasets demonstrate that the proposed method achieves an\r\n accuracy comparable to the state-of-the-art, but is applicable to\r\n significantly larger problems. We also show that the proposed method\r\n can be applied to solve more general Frobenius-norm regularized SDP\r\n problems approximately.\r\n },\r\n}\r\n\r\n","author_short":["Shen, C.","Kim, J.","Liu, F.","Wang, L.","van den Hengel , A."],"key":"Shen2014Metric","id":"Shen2014Metric","bibbaseid":"shen-kim-liu-wang-vandenhengel-efficientdualapproachtodistancemetriclearning-2014","role":"author","urls":{},"downloads":0,"html":""},"search_terms":["efficient","dual","approach","distance","metric","learning","shen","kim","liu","wang","van den hengel "],"keywords":[],"authorIDs":["57ad2c26482541360800049f"],"dataSources":["QpRy9mEoxTeyawTNL"]}