Incorporating query constraints for autoencoder enhanced ranking. Xu, B., Lin, H., Lin, Y., & Xu, K. Neurocomputing, 356:142–150, September 2019.
Learning to rank has been widely used in information retrieval tasks to construct ranking models for document retrieval. Existing learning to rank methods adopt supervised machine learning methods as core techniques and classical retrieval models as document features. The quality of document features can significantly affect the effectiveness of ranking models. Therefore, it is necessary to generate effective document features that extend the feature space of learning to rank and better model the relevance between queries and their corresponding documents. Recently, deep neural network models have been used to generate effective features for various text mining tasks. Autoencoders, one type of building block of neural networks, capture semantic information as effective features based on an encoder-decoder framework. In this paper, we incorporate autoencoders into the construction of ranking models based on learning to rank. In our method, autoencoders are used to generate effective document features that capture the semantic information of documents. We propose a query-level semi-supervised autoencoder that considers three types of query constraints based on Bregman divergence. We evaluate the effectiveness of our model on datasets from LETOR 3.0 and LETOR 4.0, and show that our model significantly outperforms competing methods in retrieval performance.
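
The abstract sketches the mechanism at a high level: an autoencoder compresses each document's feature vector into a semantic code, and query-level constraints grounded in Bregman divergence shape those codes so that documents associated with the same query encode similarly. As a rough illustration only, the PyTorch sketch below combines a reconstruction loss (squared Euclidean distance, itself a Bregman divergence) with a single hypothetical pairwise constraint; the paper's three concrete constraint types are not detailed in the abstract, and every name here (QueryConstrainedAE, loss_fn, same_query_pairs, lam) is illustrative rather than taken from the authors' code.

import torch
import torch.nn as nn

class QueryConstrainedAE(nn.Module):
    """Toy autoencoder whose codes serve as learned document features."""
    def __init__(self, in_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Tanh())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)          # compact semantic code per document
        return z, self.decoder(z)    # code and its reconstruction

def loss_fn(model, x, same_query_pairs, lam=0.1):
    # x: (n_docs, in_dim) document-feature matrix.
    # same_query_pairs: (i, j) index pairs of documents relevant to the
    # same query -- a stand-in for the paper's query constraints.
    z, x_hat = model(x)
    # Squared Euclidean distance is the Bregman divergence generated by
    # the squared norm, matching the framework named in the abstract.
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()
    constraint = x.new_zeros(())
    for i, j in same_query_pairs:
        constraint = constraint + ((z[i] - z[j]) ** 2).sum()
    return recon + lam * constraint / max(len(same_query_pairs), 1)

In such a setup, the trained encoder's output would be concatenated with the classical LETOR features to extend the feature space handed to a standard learning-to-rank method, in line with the abstract's description.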
@article{xu_incorporating_2019,
	title = {Incorporating query constraints for autoencoder enhanced ranking},
	volume = {356},
	issn = {0925-2312},
	url = {http://www.sciencedirect.com/science/article/pii/S0925231219304084},
	doi = {10.1016/j.neucom.2019.03.068},
	abstract = {Learning to rank has been widely used in information retrieval tasks to construct ranking models for document retrieval. Existing learning to rank methods adopt supervised machine learning methods as core techniques and classical retrieval models as document features. The quality of document features can significantly affect the effectiveness of ranking models. Therefore, it is necessary to generate effective document features that extend the feature space of learning to rank and better model the relevance between queries and their corresponding documents. Recently, deep neural network models have been used to generate effective features for various text mining tasks. Autoencoders, one type of building block of neural networks, capture semantic information as effective features based on an encoder-decoder framework. In this paper, we incorporate autoencoders into the construction of ranking models based on learning to rank. In our method, autoencoders are used to generate effective document features that capture the semantic information of documents. We propose a query-level semi-supervised autoencoder that considers three types of query constraints based on Bregman divergence. We evaluate the effectiveness of our model on datasets from LETOR 3.0 and LETOR 4.0, and show that our model significantly outperforms competing methods in retrieval performance.},
	language = {en},
	urldate = {2020-05-18},
	journal = {Neurocomputing},
	author = {Xu, Bo and Lin, Hongfei and Lin, Yuan and Xu, Kan},
	month = sep,
	year = {2019},
	keywords = {Autoencoders, Learning to rank, Query constraints, Semi-supervised learning},
	pages = {142--150}
}