Distributional semantics using neural networks. Svoboda, L. Technical Report DCSE/TR-2016-0, University of West Bohemia in Pilsen, Brno, June 2016. Abstract: In recent years, neural networks have shown substantial improvements in capturing the semantics of words and sentences. They have also improved language modeling, which is crucial for many tasks in Natural Language Processing (NLP). One of the most widely used artificial neural network (ANN) architectures in NLP is the Recurrent Neural Network (RNN), which is not restricted to a fixed-size context: through its recurrent connections, information can cycle inside the network for an arbitrarily long time. The thesis summarizes state-of-the-art approaches to distributional semantics and also focuses on further uses of ANNs for NLP problems.
@techreport{svoboda_distributional_2016,
address = {Brno},
title = {Distributional semantics using neural networks},
url = {https://dspace5.zcu.cz/bitstream/11025/25377/1/Svoboda.pdf},
abstract = {In recent years, neural networks have shown substantial
improvements in capturing the semantics of words and sentences. They have
also improved language modeling, which is crucial for many tasks in
Natural Language Processing (NLP).
One of the most widely used artificial neural network (ANN) architectures
in NLP is the Recurrent Neural Network (RNN), which is not restricted to a
fixed-size context. Through recurrent connections, information can cycle
inside the network for an arbitrarily long time.
The thesis summarizes state-of-the-art approaches to distributional
semantics and also focuses on further uses of ANNs for NLP problems.},
number = {DCSE/TR-2016-0},
urldate = {2017-04-02},
institution = {University of West Bohemia in Pilsen},
author = {Svoboda, Lukas},
month = jun,
year = {2016},
pages = {47},
}
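To illustrate the point made in the abstract about recurrent connections removing the fixed-size context limit, the following is a minimal sketch (not code from the report) of an Elman-style recurrent step: the hidden state is fed back at every time step, so the prediction can in principle depend on an arbitrarily long prefix. All names and dimensions here are illustrative assumptions.

# Minimal illustrative RNN step (assumed toy example, not the thesis's model).
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 10, 8          # toy dimensions (assumed)

W_xh = rng.normal(scale=0.1, size=(hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (recurrent)
W_hy = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # hidden -> output

def step(token_id, h):
    """One time step: combine the current input with the previous hidden state."""
    x = np.zeros(vocab_size)
    x[token_id] = 1.0                                  # one-hot input token
    h = np.tanh(W_xh @ x + W_hh @ h)                   # recurrence carries the history
    logits = W_hy @ h
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), h                      # next-word distribution, new state

h = np.zeros(hidden_size)
for token in [1, 4, 2, 7]:                             # an arbitrary toy "sentence"
    probs, h = step(token, h)                          # h accumulates the whole prefix

Because h is updated from its own previous value rather than from a fixed window of inputs, the context the model can exploit is bounded only by training dynamics (e.g. vanishing gradients), not by an architectural window size.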