This paper introduces a transparent and verifiable protocol for evaluating the fairness of open-source LLMs by leveraging smart contracts on the Internet Computer Protocol (ICP) blockchain. The protocol executes on-chain HTTP requests to Hugging Face endpoints, storing datasets, prompts, and fairness metrics on-chain to ensure reproducibility and immutability. The authors benchmarked Llama, DeepSeek, and Mistral models using the PISA dataset (for academic performance fairness), StereoSet (for social bias), and Kaleidoscope (for multilingual fairness), revealing disparities across models and languages.
Blockchain-based fairness evaluations expose significant disparities in open-source LLMs across academic performance, social bias, and multilingual contexts, demanding more rigorous and transparent auditing.
Large language models (LLMs) are increasingly deployed in real-world applications, yet concerns about their fairness persist, especially in high-stakes domains such as criminal justice, education, healthcare, and finance. This paper introduces a transparent evaluation protocol for benchmarking the fairness of open-source LLMs using smart contracts on the Internet Computer Protocol (ICP) blockchain (Foundation, 2023). Our method ensures verifiable, immutable, and reproducible evaluations by executing on-chain HTTP requests to hosted Hugging Face endpoints and storing datasets, prompts, and metrics directly on-chain. We benchmark the Llama, DeepSeek, and Mistral models on the PISA dataset for academic performance prediction (OECD, 2018), a dataset suitable for fairness evaluation using statistical parity and equal opportunity metrics (Hardt et al., 2016). We also evaluate structured Context Association Metrics derived from the StereoSet dataset (Nadeem et al., 2020) to measure social bias in contextual associations. We further extend our analysis with a multilingual evaluation across English, Spanish, and Portuguese using the Kaleidoscope benchmark (Salazar et al., 2025), revealing cross-linguistic disparities. All code and results are open source, enabling community audits and longitudinal fairness tracking across model versions.
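To make the two fairness metrics from the abstract concrete, the sketch below computes them in their standard form (Hardt et al., 2016): statistical parity compares positive-prediction rates across groups, while equal opportunity compares true-positive rates. This is an illustrative implementation only; the function names, the toy data, and the binary group encoding are assumptions, not the paper's actual on-chain code.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(Y_hat = 1 | A = 1) - P(Y_hat = 1 | A = 0):
    the gap in positive-prediction rates between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """P(Y_hat = 1 | Y = 1, A = 1) - P(Y_hat = 1 | Y = 1, A = 0):
    the gap in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    pos = y_true == 1
    tpr_1 = y_pred[pos & (group == 1)].mean()
    tpr_0 = y_pred[pos & (group == 0)].mean()
    return tpr_1 - tpr_0

# Hypothetical toy data: binary "high academic performance" labels
# for eight students, split across a binary demographic attribute.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))           # → 0.0
print(equal_opportunity_difference(y_true, y_pred, group))
```

A model is perfectly fair under each metric when the corresponding difference is zero; in practice, audits report how far each model deviates from zero per group pairing.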