
⚠️ Parts of the research description are written in Hungarian, as they originate from the Bachelor's thesis documentation.
English explanations are provided alongside them for international readers.

My Bachelor's Thesis: Analyzing the Consistency of Semantical Capabilities of Large Language Models - a Word-in-Context Benchmark Evaluation Framework and Utility Library

This repository contains the evaluation framework used in my Bachelor's thesis to assess the semantic consistency of Large Language Models on the Word-in-Context (WiC) benchmark. It allows researchers to test the semantic understanding capabilities of any* Hugging Face language model and to generate detailed statistics about its performance.

Features

  • Evaluate Hugging Face language models on the Word-in-Context (WiC) benchmark
  • Modular evaluation pipeline
  • Automatic dataset preparation
  • Model inference using Hugging Face Transformers
  • Detailed evaluation statistics
  • Designed for reproducible research

Planned features:

  • automatic result plotting using TikZ
  • extended model compatibility

src/Framework - The module where it happens


Evaluation Pipeline

The framework consists of three stages:

1️⃣ ModelInputPreparer

  • Converts WiC dataset records into prompts suitable for LLM inference.

2️⃣ HuggingFaceModelInferencer

  • Runs the chosen Hugging Face model on the prepared inputs.

3️⃣ ModelOutputProcessor

  • Processes the outputs and computes evaluation statistics.
  • src/Framework/ModelInputPreparer/main.py
  • src/Framework/HuggingFaceModelInferencer/main.py
  • src/Framework/ModelOutputProcessor/main.py
  • src/Framework/globalMain.py
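The three stages above can be sketched end to end; the function names below (`prepare_prompt`, `infer`, `score`) are illustrative stand-ins, not the framework's actual API:

```python
# Minimal sketch of the three-stage pipeline (hypothetical names,
# not the framework's actual API).

def prepare_prompt(record: dict) -> str:
    # Stage 1 (ModelInputPreparer): turn a WiC record into an LLM prompt.
    return (
        f"Does the word '{record['word']}' have the same meaning in both "
        f"sentences?\n1. {record['sentence1']}\n2. {record['sentence2']}\n"
        "Answer Yes or No."
    )

def infer(prompt: str) -> str:
    # Stage 2 (HuggingFaceModelInferencer): run the model; stubbed here.
    return "Yes"

def score(answers: list[str], gold: list[str]) -> float:
    # Stage 3 (ModelOutputProcessor): compare answers to gold labels.
    correct = sum(a == g for a, g in zip(answers, gold))
    return correct / len(gold)

records = [{"word": "bank", "sentence1": "He sat on the river bank.",
            "sentence2": "She works at a bank."}]
answers = [infer(prepare_prompt(r)) for r in records]
print(score(answers, ["No"]))  # 0.0 for this single stubbed example
```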

Installation

Requirements:

  • Python 3.10+
  • pip
  • git

Clone the repository (in PowerShell or any terminal that supports git and pip) into a folder of your choice, e.g.:

cd ~\PycharmProjects
git clone https://github.com/Fabbernat/Thesis

Install the required packages (the exact set may vary based on the chosen model):

cd Thesis
pip install torch transformers accelerate huggingface_hub

Then run the modules one by one:

py -3.13 -m src.Framework.ModelInputPreparer.main

py -3.13 -m src.Framework.HuggingFaceModelInferencer.main

py -3.13 -m src.Framework.ModelOutputProcessor.main

Or run all three:

py -3.13 -m src.Framework.globalMain
In PyCharm
  1. Clone the repo and open it in PyCharm (a Python interpreter is required).
  2. Navigate to src/Framework/ModelInputPreparer/main.py and run main() (in PyCharm, just click the green triangle).
  3. Do the same with the HuggingFaceModelInferencer and ModelOutputProcessor modules, or run src/Framework/globalMain.py to execute all three modules at once.
  4. Check the results in the .out files.
  5. That's it!

Input:

  • Records of the WiC dataset, which the framework prepares automatically.

Output:

  • Detailed statistics and analytics of the model's answers to the input. Unfortunately there are no plots yet, but I'm currently working on automatically plotting the results using TikZ.
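As a sketch of what output processing involves, raw model replies can be normalized to Yes/No before statistics are computed (this is a hypothetical helper, not the framework's actual code):

```python
# Hypothetical sketch of output post-processing: normalizing raw model
# replies to Yes/No and tallying simple statistics (not the framework's code).
from collections import Counter

def normalize(raw: str) -> str:
    # Take the first Yes/No that appears near the start of the model's reply.
    text = raw.strip().lower()
    if text.startswith("yes") or " yes" in text[:20]:
        return "Yes"
    if text.startswith("no") or " no" in text[:20]:
        return "No"
    return "Unknown"

raw_answers = ["Yes, the meaning is the same.", "no", "No.", "I think yes"]
normalized = [normalize(a) for a in raw_answers]
print(Counter(normalized))  # Counter({'Yes': 2, 'No': 2})
```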

* Almost any: Qwen and Google models are the most compatible; you need to write your own scripts to test unsupported models. The framework has been thoroughly tested on:

  1. Qwen/Qwen2.5-0.5B-Instruct, so this and similar models are guaranteed to work.
  2. google/gemma-2-2b-it has also been tested extensively, so this and similar models will work. Note: as gemma is a gated model, you'll need to log in to use it.
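Since gemma is gated, one way to authenticate before running the inferencer is to set the `HF_TOKEN` environment variable, which `huggingface_hub` picks up automatically; the `hf_...` value below is a placeholder, not a real token. Alternatively, run `huggingface-cli login` once in a terminal.

```python
# Authenticate for gated models such as google/gemma-2-2b-it by setting
# HF_TOKEN, which huggingface_hub reads automatically. Create a token under
# Settings -> Access Tokens on huggingface.co; "hf_..." is a placeholder.
import os

os.environ["HF_TOKEN"] = "hf_..."  # placeholder; use your real token
```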

The paper [Overleaf Project]:

Analyzing the Consistency of Semantical Capabilities of Large Language Models

Pdf TeX Source:

GitHub Thesis-paper

My home page:

Bernát Fábián

Word in Context (WiC) Task

By design, word embeddings are unable to model the dynamic nature of words' semantics, i.e., the property of words to correspond to potentially different meanings. To address this limitation, dozens of specialized meaning representation techniques, such as sense or contextualized embeddings, have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks exist that specifically focus on the dynamic semantics of words. In their paper, Pilehvar and Camacho-Collados show that existing models have surpassed the performance ceiling of the standard evaluation dataset for this purpose, the Stanford Contextual Word Similarity dataset, and highlight its shortcomings. To address the lack of a suitable benchmark, they put forward WiC, a large-scale Word in Context dataset based on annotations curated by experts, for the generic evaluation of context-sensitive representations. WiC is released at https://pilehvar.github.io/wic/.

This repository contains an algorithm that aims to achieve the highest possible accuracy on the WiC binary classification task. Each instance in WiC has a target word w for which two contexts are provided, each invoking a specific meaning of w. The task is to determine whether the occurrences of w in the two contexts share the same meaning or not, clearly requiring an ability to identify the word's semantic category. The WiC task is defined over supersenses (Pilehvar and Camacho-Collados, 2019): the negative examples include a word used in two different supersenses, and the positive ones include a word used in the same supersense.
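For illustration, here is the well-known "bed" example from the WiC website, expressed as a minimal record (the dict field names are my own shorthand, not the dataset's actual column format):

```python
# Illustrative WiC instance (the "bed" example from the WiC website;
# field names are my own shorthand, not the dataset's column layout).
instance = {
    "word": "bed",
    "context1": "There's a lot of trash on the bed of the river.",
    "context2": "I keep a glass of water next to my bed when I sleep.",
}
gold = "F"  # False: the two occurrences invoke different supersenses

# The binary task: predict True ("T") if the target word has the same
# meaning in both contexts, False ("F") otherwise.
prediction = "F"
print(prediction == gold)  # True
```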

Research results (Hungarian) [updated: March 13, 2026.]:


English:
Comparison of the number of correct predictions made by four models on the 200 individual questions (100 word pairs), measured against the gold-standard labels.

Magyar: Négy modell pontos egyenkénti predikcióinak (gold standard címkékkel való egyezések számának) összehasonlítása a 200 kérdésen (100 kérdéspáron).



English:
Number of consistent, partially correct, and fully correct predictions made by four models across the 100 word pairs.
The number of fully correct pairs is the intersection of the consistent and partially correct sets.

Magyar:
Négy modell konzisztens, részben és teljesen pontos párjainak száma a 100 kérdéspáron. A teljesen pontos párok száma a konzisztens és a részben pontos párok matematikai metszete.
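These pair-level categories can be stated precisely; the sketch below is my own reading of the definitions (not the framework's code), assuming both questions of a pair share one gold label:

```python
# Sketch of the pair-level categories (my own reading of the definitions,
# not the framework's code). Each pair yields two predictions; both
# questions in a pair are assumed to share the same gold label.

def categorize(pred1: str, pred2: str, gold: str) -> dict:
    consistent = pred1 == pred2
    partially_correct = pred1 == gold or pred2 == gold
    # Fully correct pairs are exactly the intersection of the two sets:
    fully_correct = consistent and partially_correct
    return {"consistent": consistent,
            "partially_correct": partially_correct,
            "fully_correct": fully_correct}

print(categorize("Yes", "Yes", "Yes"))  # all three True
print(categorize("Yes", "No", "Yes"))   # only partially_correct is True
```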



English:
Distribution of „Yes” and „No” predictions made by the four models across the 200 questions, compared to the expected label distribution.

Magyar: A négy modell „Yes”-„No” válaszainak eloszlása a 200 egyenkénti kérdésre, összevetve a címkék elvárt eloszlásával.



English:
Absolute difference between the models' „Yes”-„No” prediction distribution and the expected label distribution.

Magyar: A négy modell „Yes”-„No” válaszainak eloszlásának különbözete (különbségének abszolút értéke) az elvárt eloszláshoz képest.
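The distribution comparison described in the last two figures boils down to a short computation; the counts below are made up for illustration (the real numbers are in the figures):

```python
# Sketch of the Yes/No distribution comparison (made-up counts for
# illustration; the real numbers are in the research-results figures).
from collections import Counter

predictions = ["Yes"] * 130 + ["No"] * 70  # hypothetical model output
expected = {"Yes": 100, "No": 100}         # balanced gold-label distribution

observed = Counter(predictions)
abs_diff = {label: abs(observed[label] - expected[label]) for label in expected}
print(observed)   # Counter({'Yes': 130, 'No': 70})
print(abs_diff)   # {'Yes': 30, 'No': 30}
```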


The Google Colab notebook used for GPU-powered runs [updated: January 10, 2026.]:

REAL Phi_4_mini_instruct.ipynb inferrer

Usage of the scripts [outdated]:


About

My Bachelor's Thesis: Reviewing the Consistency of Semantical Capabilities of Large Language Models. For documentation see [https://github.com/Fabbernat/Thesis-paper]
