My Bachelor's Thesis: Analyzing the Consistency of Semantical Capabilities of Large Language Models - a Word-in-Context Benchmark Evaluation Framework and Utility Library
This repository contains the evaluation framework from my Bachelor's thesis, which analyzes the semantic consistency of Large Language Models using the Word-in-Context (WiC) benchmark. It allows researchers to test the semantic understanding of any* Hugging Face language model and to generate detailed statistics about its performance.
- Evaluate Hugging Face language models on the Word-in-Context (WiC) benchmark
- Modular evaluation pipeline
- Automatic dataset preparation
- Model inference using Hugging Face Transformers
- Detailed evaluation statistics
- Designed for reproducible research
Planned features:
- Automatic result plotting using TikZ
- Extended model compatibility
src/Framework - The module where it happens
The framework consists of three stages:
1️⃣ ModelInputPreparer
- Converts WiC dataset records into prompts suitable for LLM inference.
2️⃣ HuggingFaceModelInferencer
- Runs the chosen Hugging Face model on the prepared inputs.
3️⃣ ModelOutputProcessor
- Processes the outputs and computes evaluation statistics.
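For illustration, here is a minimal sketch of the kind of record-to-prompt conversion the first stage performs (the exact prompt template used by ModelInputPreparer may differ):

# Hypothetical sketch: turning a WiC-style record into a Yes/No prompt.
def build_prompt(target: str, context1: str, context2: str) -> str:
    return (
        f'Does the word "{target}" have the same meaning '
        "in the following two sentences? Answer Yes or No.\n"
        f"1. {context1}\n"
        f"2. {context2}"
    )

print(build_prompt("carry",
                   "You must carry your camping gear.",
                   "Sound carries well over water."))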
src/
└── Framework/
    ├── ModelInputPreparer/
    │   └── main.py
    ├── HuggingFaceModelInferencer/
    │   └── main.py
    ├── ModelOutputProcessor/
    │   └── main.py
    └── globalMain.py

Requirements:
- Python 3.10+
- pip
- git
Clone the repository into a folder of your choice, e.g.:

cd ~\PycharmProjects
git clone https://github.com/Fabbernat/Thesis
cd Thesis

Install the required packages (the exact set may vary based on the chosen model):

pip install torch transformers accelerate huggingface_hub

Then run the modules one by one:

py -3.13 -m src.Framework.ModelInputPreparer.main
py -3.13 -m src.Framework.HuggingFaceModelInferencer.main
py -3.13 -m src.Framework.ModelOutputProcessor.main

Or run all three at once:

py -3.13 -m src.Framework.globalMain

Running in PyCharm:
- Clone the repo. A Python interpreter is needed; using PyCharm is recommended.
- Navigate to src/Framework/ModelInputPreparer/main.py and run main() (in PyCharm, just click the green triangle). You will see the results in the .out files.
- Do the same with the HuggingFaceModelInferencer and ModelOutputProcessor modules, or just run src/Framework/globalMain.py to execute all three modules at once.
- Check the results in the .out files.
- That's it!
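For reference, here is a minimal sketch of what the inference stage does under the hood, assuming a chat-style instruct model such as Qwen/Qwen2.5-0.5B-Instruct (the actual HuggingFaceModelInferencer implementation may differ):

# Minimal sketch of Hugging Face inference on a single prepared WiC prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    'Does the word "carry" have the same meaning in these two sentences? '
    "Answer Yes or No.\n"
    "1. You must carry your camping gear.\n"
    "2. Sound carries well over water."
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=5)
# Decode only the newly generated tokens, i.e. the model's Yes/No answer.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))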
- Any number of records from the Word in Context dataset (or records in the same format, of course 🙂)
- Any* Hugging Face model
- Detailed statistics and analytics of the model's answers to the input. Unfortunately there are no plots yet, but I'm currently working on automatically plotting the results using TikZ.
* Almost any: Qwen and Google models are the most compatible; you will need to write your own scripts to test unsupported models. The framework has been thoroughly tested on:
- Qwen/Qwen2.5-0.5B-Instruct, so this and similar models are guaranteed to work.
- google/gemma-2-2b-it has also been tested extensively, so this and similar models will work. Note: as Gemma is a gated model, you'll need to log in to use it.
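Since gated models require authentication, you can log in via the Hugging Face CLI (huggingface-cli login) or from Python; a minimal sketch, assuming you have an access token from huggingface.co/settings/tokens:

# Log in to Hugging Face to access gated models such as google/gemma-2-2b-it.
from huggingface_hub import login

login()  # prompts for your access token interactively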
Analyzing the Consistency of Semantical Capabilities of Large Language Models
By design, word embeddings are unable to model the dynamic nature of word semantics, i.e., the property of words to correspond to potentially different meanings. To address this limitation, dozens of specialized meaning representation techniques, such as sense or contextualized embeddings, have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks exist that specifically focus on the dynamic semantics of words. The WiC paper shows that existing models have surpassed the performance ceiling of the standard evaluation dataset for this purpose, Stanford Contextual Word Similarity, and highlights its shortcomings. To address the lack of a suitable benchmark, Pilehvar and Camacho-Collados put forward a large-scale Word in Context dataset, called WiC, based on annotations curated by experts, for the generic evaluation of context-sensitive representations. WiC is released at https://pilehvar.github.io/wic/.
This repository contains an algorithm that aims to achieve the highest possible accuracy on the WiC binary classification task. Each instance in WiC has a target word w for which two contexts are provided, each invoking a specific meaning of w. The task is to determine whether the occurrences of w in the two contexts share the same meaning or not, clearly requiring an ability to identify the word's semantic category. The WiC task is defined over supersenses (Pilehvar and Camacho-Collados, 2019): the negative examples include a word used in two different supersenses, and the positive ones include a word used in the same supersense.
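For illustration, a single WiC instance can be represented like this (the field names below are this sketch's own; the released dataset uses tab-separated lines with the target word, its part of speech, token indices, and the two contexts, plus a separate gold-label file):

# Illustrative WiC-style record; field names are not the official format.
record = {
    "target": "carry",                                # the target word w
    "pos": "V",                                       # part of speech
    "context1": "You must carry your camping gear.",
    "context2": "Sound carries well over water.",
    "label": False,   # True = same meaning (same supersense) in both contexts
}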
Comparison of the number of correct predictions made by four models on the 200 individual questions (100 word pairs), measured against the gold-standard labels.
Number of consistent, partially correct, and fully correct predictions made by four models across the 100 word pairs. The number of fully correct pairs is the intersection of the consistent and partially correct sets.
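A minimal sketch of how these three per-pair categories relate, assuming each of the 100 word pairs is asked as two separate questions and predictions are "Yes"/"No" strings:

# Sketch: per-pair consistency statistics. A pair is consistent if both
# answers agree, partially correct if at least one matches the gold label,
# and fully correct exactly when it is both consistent and partially correct.
def pair_stats(pairs):
    """pairs: list of (prediction_1, prediction_2, gold_label) tuples."""
    consistent = partially_correct = fully_correct = 0
    for p1, p2, gold in pairs:
        is_consistent = p1 == p2
        is_partial = p1 == gold or p2 == gold
        consistent += is_consistent
        partially_correct += is_partial
        fully_correct += is_consistent and is_partial
    return consistent, partially_correct, fully_correct

print(pair_stats([("Yes", "Yes", "Yes"), ("Yes", "No", "No"), ("No", "No", "Yes")]))
# -> (2, 2, 1)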
Distribution of "Yes" and "No" predictions made by the four models across the 200 questions, compared to the expected label distribution.
Absolute difference between the models' "Yes"/"No" prediction distribution and the expected label distribution.
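As a hypothetical worked example of this statistic (the balanced 100/100 expected split and the model's counts below are assumptions):

# Hypothetical counts over the 200 questions.
expected = {"Yes": 100, "No": 100}
predicted = {"Yes": 132, "No": 68}
print(abs(predicted["Yes"] - expected["Yes"]))  # 32: the model's skew toward "Yes"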
REAL Phi_4_mini_instruct.ipynb inferrer
- The Google Colab notebook running the models can be found at this link.
- This software can be downloaded from the github.com/Fabbernat/Thesis GitHub repository.
- Testing and evaluation of language models can be viewed in the Generative Language Models spreadsheet.