Improving Universal Language Model Fine-Tuning using Attention Mechanism. Santos, F., Ponce-Guevara, K., Macêdo, D., & Zanchettin, C. In Proceedings of the International Joint Conference on Neural Networks, volume 2019-July, 2019. doi: 10.1109/IJCNN.2019.8852398
Abstract: © 2019 IEEE. Inductive transfer learning is widespread in computer vision applications, but it is still an under-explored area in natural language processing (NLP). The most common transfer learning method in NLP is the use of pre-trained word embeddings. Universal Language Model Fine-Tuning (ULMFiT) is a recent approach that trains a language model and transfers its knowledge to a final classifier. During the classification step, ULMFiT uses max and average pooling layers to select the useful information from an embedding sequence. We propose to replace the max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information in the embedding sequence rather than assuming it corresponds to the max and average values. We evaluate the proposed approach on six datasets and achieve the best performance on all of them compared with approaches from the literature.
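To illustrate the idea described in the abstract, the following is a minimal PyTorch sketch of a soft attention pooling layer that could stand in for the max/average pooling over the encoder outputs. This is not the authors' implementation; the module name, the single linear scoring layer, and the hidden size used in the usage note are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttentionPooling(nn.Module):
    # Pools a sequence of hidden states into one vector via a learned
    # weighted sum, instead of taking fixed max/average values.
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)  # one scalar score per time step

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_dim) from the fine-tuned language model
        scores = self.score(hidden_states).squeeze(-1)      # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)                 # soft attention weights
        pooled = torch.bmm(weights.unsqueeze(1), hidden_states).squeeze(1)
        return pooled                                       # (batch, hidden_dim)

# Usage sketch: the pooled vector would feed the classifier head in place of
# the concatenated max- and average-pooled states (hidden_dim=400 is assumed).
# pooled = SoftAttentionPooling(hidden_dim=400)(lm_outputs)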
@inproceedings{santos2019improving,
title = {Improving Universal Language Model Fine-Tuning using Attention Mechanism},
type = {inproceedings},
year = {2019},
volume = {2019-July},
id = {7915d8a1-6b7d-3f5d-aca5-37bafcdfa86c},
created = {2019-10-20T23:59:00.000Z},
file_attached = {false},
profile_id = {74e7d4ea-3dac-3118-aab9-511a5b337e8f},
last_modified = {2021-01-15T18:17:59.305Z},
read = {false},
starred = {false},
authored = {true},
confirmed = {false},
hidden = {false},
private_publication = {false},
abstract = {© 2019 IEEE. Inductive transfer learning is widespread in computer vision applications, but it is still an under-explored area in natural language processing (NLP). The most common transfer learning method in NLP is the use of pre-trained word embeddings. Universal Language Model Fine-Tuning (ULMFiT) is a recent approach that trains a language model and transfers its knowledge to a final classifier. During the classification step, ULMFiT uses max and average pooling layers to select the useful information from an embedding sequence. We propose to replace the max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information in the embedding sequence rather than assuming it corresponds to the max and average values. We evaluate the proposed approach on six datasets and achieve the best performance on all of them compared with approaches from the literature.},
bibtype = {inproceedings},
author = {Santos, F.A.O. and Ponce-Guevara, K.L. and Macêdo, D. and Zanchettin, C.},
doi = {10.1109/IJCNN.2019.8852398},
booktitle = {Proceedings of the International Joint Conference on Neural Networks}
}
{"_id":"Nw73t4XwxLovGNGsK","bibbaseid":"santos-ponceguevara-macedo-zanchettin-improvinguniversallanguagemodelfinetuningusingattentionmechanism-2019","authorIDs":["PtDsdiZ3iPSFZKH6J"],"author_short":["Santos, F.","Ponce-Guevara, K.","MacEdo, D.","Zanchettin, C."],"bibdata":{"title":"Improving Universal Language Model Fine-Tuning using Attention Mechanism","type":"inproceedings","year":"2019","volume":"2019-July","id":"7915d8a1-6b7d-3f5d-aca5-37bafcdfa86c","created":"2019-10-20T23:59:00.000Z","file_attached":false,"profile_id":"74e7d4ea-3dac-3118-aab9-511a5b337e8f","last_modified":"2021-01-15T18:17:59.305Z","read":false,"starred":false,"authored":"true","confirmed":false,"hidden":false,"private_publication":false,"abstract":"© 2019 IEEE. Inductive transfer learning is widespread in computer vision applications. However, in natural language processing (NLP) applications is still an under-explored area. The most common transfer learning method in NLP is the use of pre-trained word embeddings. The Universal Language Model Fine-Tuning (ULMFiT) is a recent approach which proposes to train a language model and transfer its knowledge to a final classifier. During the classification step, ULMFiT uses a max and average pooling layer to select the useful information of an embedding sequence. We propose to replace max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information of the embedding sequence rather than assuming that they are max and average values. We evaluate the proposed approach in six datasets and achieve the best performance in all of them against literature approaches.","bibtype":"inproceedings","author":"Santos, F.A.O. and Ponce-Guevara, K.L. and MacEdo, D. and Zanchettin, C.","doi":"10.1109/IJCNN.2019.8852398","booktitle":"Proceedings of the International Joint Conference on Neural Networks","bibtex":"@inproceedings{\n title = {Improving Universal Language Model Fine-Tuning using Attention Mechanism},\n type = {inproceedings},\n year = {2019},\n volume = {2019-July},\n id = {7915d8a1-6b7d-3f5d-aca5-37bafcdfa86c},\n created = {2019-10-20T23:59:00.000Z},\n file_attached = {false},\n profile_id = {74e7d4ea-3dac-3118-aab9-511a5b337e8f},\n last_modified = {2021-01-15T18:17:59.305Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {false},\n hidden = {false},\n private_publication = {false},\n abstract = {© 2019 IEEE. Inductive transfer learning is widespread in computer vision applications. However, in natural language processing (NLP) applications is still an under-explored area. The most common transfer learning method in NLP is the use of pre-trained word embeddings. The Universal Language Model Fine-Tuning (ULMFiT) is a recent approach which proposes to train a language model and transfer its knowledge to a final classifier. During the classification step, ULMFiT uses a max and average pooling layer to select the useful information of an embedding sequence. We propose to replace max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information of the embedding sequence rather than assuming that they are max and average values. We evaluate the proposed approach in six datasets and achieve the best performance in all of them against literature approaches.},\n bibtype = {inproceedings},\n author = {Santos, F.A.O. and Ponce-Guevara, K.L. and MacEdo, D. 
and Zanchettin, C.},\n doi = {10.1109/IJCNN.2019.8852398},\n booktitle = {Proceedings of the International Joint Conference on Neural Networks}\n}","author_short":["Santos, F.","Ponce-Guevara, K.","MacEdo, D.","Zanchettin, C."],"biburl":"https://bibbase.org/service/mendeley/74e7d4ea-3dac-3118-aab9-511a5b337e8f","bibbaseid":"santos-ponceguevara-macedo-zanchettin-improvinguniversallanguagemodelfinetuningusingattentionmechanism-2019","role":"author","urls":{},"metadata":{"authorlinks":{"zanchettin, c":"https://zanche.github.io/publications/"}},"downloads":0},"bibtype":"inproceedings","biburl":"https://bibbase.org/service/mendeley/74e7d4ea-3dac-3118-aab9-511a5b337e8f","creationDate":"2020-12-26T17:53:36.609Z","downloads":0,"keywords":[],"search_terms":["improving","universal","language","model","fine","tuning","using","attention","mechanism","santos","ponce-guevara","macedo","zanchettin"],"title":"Improving Universal Language Model Fine-Tuning using Attention Mechanism","year":2019,"dataSources":["fvRdkx56Jpp5ebtSw","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}