Clinical Concept Embeddings Learned from Massive Sources of Multimodal Medical Data. Beam*, A. L., Kompa*, B., Fried, I., Palmer, N. P., Shi, X., Cai, T., & Kohane, I. S. arXiv preprint arXiv:1804.01486, 2019.
Word embeddings are a popular approach to unsupervised learning of word relationships that are widely used in natural language processing. In this article, we present a new set of embeddings for medical concepts learned using an extremely large collection of multimodal medical data. Leaning on recent theoretical insights, we demonstrate how an insurance claims database of 60 million members, a collection of 20 million clinical notes, and 1.7 million full text biomedical journal articles can be combined to embed concepts into a common space, resulting in the largest ever set of embeddings for 108,477 medical concepts. To evaluate our approach, we present a new benchmark methodology based on statistical power specifically designed to test embeddings of medical concepts. Our approach, called cui2vec, attains state-of-the-art performance relative to previous methods in most instances. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.
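Since the pre-trained embeddings are distributed for reuse, a minimal sketch of loading them and comparing two concepts might look like the following. This assumes a CSV layout with the UMLS CUI in the first column and the vector components in the remaining columns; the file name and the example CUIs are illustrative, not taken from the paper.

import numpy as np
import pandas as pd

def load_embeddings(path: str) -> dict[str, np.ndarray]:
    """Load concept embeddings keyed by UMLS CUI (assumed CSV layout)."""
    df = pd.read_csv(path, index_col=0)
    return {cui: row.to_numpy(dtype=np.float64) for cui, row in df.iterrows()}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file name; the actual download is linked from the paper.
embeddings = load_embeddings("cui2vec_pretrained.csv")

# Illustrative CUIs: C0011849 (diabetes mellitus) vs. C0011860 (type 2 diabetes).
sim = cosine_similarity(embeddings["C0011849"], embeddings["C0011860"])
print(f"similarity: {sim:.3f}")

Related concepts should score higher under cosine similarity than unrelated ones, which is the intuition behind the paper's statistical-power benchmark.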
@article{beam2018clinical,
  title={Clinical Concept Embeddings Learned from Massive Sources of Multimodal Medical Data},
  author={Beam*, Andrew L and Kompa*, Benjamin and Fried, Inbar and Palmer, Nathan P and Shi, Xu and Cai, Tianxi and Kohane, Isaac S},
  journal={arXiv preprint arXiv:1804.01486},
  keywords={Distributed Representations, NLP},
  url_Paper={https://www.dropbox.com/s/mkoc9l6ma3e0bze/PSB__Clinical_Concept_Embeddings_Learned_from_Massive_Sources_of_Multimodal_Medical_Data.pdf?dl=1},
  abstract={Word embeddings are a popular approach to unsupervised learning of word relationships that are widely used in natural language processing. In this article, we present a new set of embeddings for medical concepts learned using an extremely large collection of multimodal medical data. Leaning on recent theoretical insights, we demonstrate how an insurance claims database of 60 million members, a collection of 20 million clinical notes, and 1.7 million full text biomedical journal articles can be combined to embed concepts into a common space, resulting in the largest ever set of embeddings for 108,477 medical concepts. To evaluate our approach, we present a new benchmark methodology based on statistical power specifically designed to test embeddings of medical concepts. Our approach, called cui2vec, attains state-of-the-art performance relative to previous methods in most instances. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.},
  year={2019}
}
