LLsiM: Large Language Models for Similarity Assessment in Case-Based Reasoning. Lenz, M., Hoffmann, M., & Bergmann, R. In Bichindaritz, I. & López, B., editors, Case-Based Reasoning Research and Development, volume 15662 of Lecture Notes in Computer Science, pages 126–141, Cham, 2025. Springer Nature Switzerland.
In Case-Based Reasoning (CBR), past experience is used to solve new problems. Determining the most relevant cases is a crucial aspect of this process and is typically based on one or more manually defined similarity measures, which require deep domain knowledge. To overcome this knowledge-acquisition bottleneck, we propose the use of Large Language Models (LLMs) to automatically assess similarities between cases. We present three distinct approaches where the model is used for different tasks: (i) to predict similarity scores, (ii) to assess pairwise preferences, and (iii) to automatically configure similarity measures. Our conceptual work is accompanied by an open-source Python implementation that we use to evaluate the approaches on three different domains by comparing them to manually crafted similarity measures. Our results show that directly using LLM-based scores does not align well with the baseline rankings, but letting the LLM automatically configure the measures yields rankings that closely resemble the expert-defined ones.
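A minimal sketch of approach (i), prompting an LLM for a similarity score and using it to rank cases. The `call_llm` stub, the prompt wording, and the 0–1 score scale are illustrative assumptions, not taken from the paper's implementation:

```python
# Hypothetical sketch of LLM-based similarity scoring for CBR retrieval.
# `call_llm` is a stub standing in for any chat-completion API; a real
# system would send the prompt to an actual model.
import re


def call_llm(prompt: str) -> str:
    # Stub: returns a canned reply so the sketch is runnable offline.
    return "Similarity: 0.85"


def llm_similarity(query_case: str, candidate_case: str) -> float:
    prompt = (
        "Rate the similarity of the two cases below on a scale from 0.0 "
        "(unrelated) to 1.0 (identical). Answer with a single number.\n\n"
        f"Query case: {query_case}\nCandidate case: {candidate_case}"
    )
    reply = call_llm(prompt)
    match = re.search(r"\d+(?:\.\d+)?", reply)
    score = float(match.group()) if match else 0.0
    return max(0.0, min(1.0, score))  # clamp to the expected range


def retrieve(query: str, casebase: list[str]) -> list[tuple[str, float]]:
    # Rank candidate cases by LLM-assessed similarity to the query.
    scored = [(case, llm_similarity(query, case)) for case in casebase]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

As the paper's evaluation suggests, such raw scores may not align well with expert rankings; the parsing step also needs to tolerate free-form model replies, which is why a regex extracts the first number here.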
