FaceChat: An Emotion-Aware Face-to-face Dialogue Framework. Alnuhait, D., Wu, Q., & Yu, Z. arXiv.org, March 2023.
While current dialogue systems like ChatGPT have made significant advancements in text-based interactions, they often overlook the potential of other modalities in enhancing the overall user experience. We present FaceChat, a web-based dialogue framework that enables emotionally-sensitive and face-to-face conversations. By seamlessly integrating cutting-edge technologies in natural language processing, computer vision, and speech processing, FaceChat delivers a highly immersive and engaging user experience. The FaceChat framework has a wide range of potential applications, including counseling, emotional support, and personalized customer service. The system is designed to be simple and flexible as a platform for future researchers to advance the field of multimodal dialogue systems. The code is publicly available at https://github.com/qywu/FaceChat.
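The abstract describes a pipeline in which a vision module reads the user's facial expression and the dialogue model conditions its reply on the detected emotion. Below is a minimal sketch of one such emotion-conditioned dialogue turn, under stated assumptions: all names here (`detect_emotion`, `build_prompt`, `respond`) are hypothetical placeholders for illustration, not FaceChat's actual API; a real system would plug in a facial-expression classifier, ASR, and an LLM call at the marked points.

```python
"""Minimal sketch of an emotion-aware dialogue turn, in the spirit of the
pipeline the abstract describes (vision -> emotion -> conditioned reply).
All function and class names are hypothetical, not FaceChat's actual API."""

from dataclasses import dataclass


@dataclass
class Turn:
    user_text: str     # would come from speech recognition in a real system
    user_emotion: str  # would come from a facial-expression model on a webcam frame


def detect_emotion(frame_bytes: bytes) -> str:
    """Placeholder: a real system would run a facial-expression classifier here."""
    return "sad"


def build_prompt(history: list[Turn], current: Turn) -> str:
    """Condition the language model on the detected emotion, so the reply can be
    emotionally sensitive rather than text-only."""
    lines = [f"User ({t.user_emotion}): {t.user_text}" for t in history + [current]]
    lines.append("Assistant (respond with empathy matching the user's emotion):")
    return "\n".join(lines)


def respond(prompt: str) -> str:
    """Placeholder for an LLM call; the abstract does not pin down the model."""
    return "I'm sorry to hear that. Do you want to talk about what happened?"


if __name__ == "__main__":
    turn = Turn(
        user_text="I failed my exam today.",
        user_emotion=detect_emotion(b"<webcam frame>"),
    )
    print(respond(build_prompt([], turn)))
```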
@article{alnuhait_facechat_2023,
	title = {{FaceChat}: {An} {Emotion}-{Aware} {Face}-to-face {Dialogue} {Framework}},
	url = {https://www.proquest.com/working-papers/facechat-emotion-aware-face-dialogue-framework/docview/2786648899/se-2},
	abstract = {While current dialogue systems like ChatGPT have made significant advancements in text-based interactions, they often overlook the potential of other modalities in enhancing the overall user experience. We present FaceChat, a web-based dialogue framework that enables emotionally-sensitive and face-to-face conversations. By seamlessly integrating cutting-edge technologies in natural language processing, computer vision, and speech processing, FaceChat delivers a highly immersive and engaging user experience. FaceChat framework has a wide range of potential applications, including counseling, emotional support, and personalized customer service. The system is designed to be simple and flexible as a platform for future researchers to advance the field of multimodal dialogue systems. The code is publicly available at https://github.com/qywu/FaceChat.},
	language = {English},
	journal = {arXiv.org},
	author = {Alnuhait, Deema and Wu, Qingyang and Yu, Zhou},
	month = mar,
	year = {2023},
	keywords = {Artificial Intelligence, Computation and Language, Natural language processing, User experience, Computer vision, Customer services, Speech processing},
	annote = {Published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/. Last updated 2023-03-15.},
}
