Large Language Models as Knowledge Engineers. Brand, F., Malburg, L., & Bergmann, R. In Proceedings of the Workshops at the 32nd International Conference on Case-Based Reasoning (ICCBR-WS 2024), co-located with the 32nd International Conference on Case-Based Reasoning (ICCBR 2024), Merida, Mexico, July 1, 2024, CEUR Workshop Proceedings. CEUR-WS.org, 2024. Accepted for publication.
Many Artificial Intelligence (AI) systems require human-engineered knowledge at their core to reason about new problems based on this knowledge, with Case-Based Reasoning (CBR) being no exception. However, the acquisition of this knowledge is a time-consuming and laborious task for the domain experts who provide the needed knowledge. We propose an approach that supports the creation of this knowledge by leveraging Large Language Models (LLMs) in conjunction with existing knowledge to create the vocabulary and case base for a complex real-world domain. We find that LLMs are capable of generating knowledge, with results improving when natural language and instructions are used. Furthermore, permissively licensed models like CodeLlama and Mixtral perform similarly to or better than closed state-of-the-art models like GPT-3.5 Turbo and GPT-4 Turbo.
@inproceedings{Brand.2024_LLMKnowledgeEngineer,
  author       = {Brand, Florian and Malburg, Lukas and Bergmann, Ralph},
  title        = {{Large Language Models as Knowledge Engineers}},
  booktitle    = {Proceedings of the Workshops at the 32nd International Conference
                  on Case-Based Reasoning {(ICCBR-WS} 2024) co-located with the 32nd
                  International Conference on Case-Based Reasoning {(ICCBR} 2024), Merida,
                  Mexico, July 1, 2024},
  series       = {{CEUR} Workshop Proceedings},
  publisher    = {CEUR-WS.org},
  note         = {Accepted for Publication},
  year         = {2024},
  abstract     = {Many Artificial Intelligence (AI) systems require human-engineered knowledge at their core to reason about new problems based on this knowledge, with Case-Based Reasoning (CBR) being no exception. However, the acquisition of this knowledge is a time-consuming and laborious task for the domain experts who provide the needed knowledge. We propose an approach that supports the creation of this knowledge by leveraging Large Language Models (LLMs) in conjunction with existing knowledge to create the vocabulary and case base for a complex real-world domain. We find that LLMs are capable of generating knowledge, with results improving when natural language and instructions are used. Furthermore, permissively licensed models like CodeLlama and Mixtral perform similarly to or better than closed state-of-the-art models like GPT-3.5 Turbo and GPT-4 Turbo.},
  keywords     = {Case-Based Reasoning, Knowledge Engineering, Knowledge Acquisition Bottleneck, Large Language Models, Prompting}
}
