AutoFair: Human-Compatible AI with Guarantees

The AutoFair project (Human-Compatible Artificial Intelligence with Guarantees) is a Horizon Europe-funded research initiative dedicated to making artificial intelligence systems more transparent, explainable, and inherently fair. As AI increasingly powers critical decision-making in society, AutoFair addresses the risk of "black-box" algorithms by developing a technical framework that guarantees non-discrimination. By combining insights from computer science, control theory, optimization, law, and ethics, the project seeks to move beyond simple bias mitigation toward a system of "automatic fairness" that aligns AI behavior with human values and European regulatory standards.


At its core, AutoFair is building a comprehensive suite of tools for the modular certification of AI pipelines. The project targets three primary technological pillars: a priori guarantees, which use hard constraints during the training process to prevent bias from the start; post-hoc explicability, which provides thorough communication of the trade-offs involved in algorithmic choices; and user-in-the-loop design, which allows developers and end-users to interactively navigate fairness-accuracy trade-offs. These methodologies are being validated through three high-impact industrial use cases: fair evaluation in recruitment (HR), the elimination of gender inequality in digital advertising, and the prevention of discrimination against bank clients in FinTech.
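The "a priori guarantees" pillar, and the fairness-accuracy trade-off it exposes, can be illustrated with a minimal sketch: logistic regression trained on synthetic data with a demographic-parity penalty added to the loss. Everything here (the data, the penalty form, the weight `lam`) is an illustrative assumption, not the project's actual toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two features shifted by a binary protected attribute,
# so an unconstrained model inherits the correlation with the group.
n = 2000
group = rng.integers(0, 2, n)                      # protected attribute (0/1)
x = rng.normal(size=(n, 2)) + group[:, None] * 0.8
y = (x[:, 0] + x[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=3000, lr=0.1):
    """Logistic regression with a demographic-parity penalty.

    lam = 0 gives the plain model; larger lam pushes the mean predicted
    score of the two groups together (a soft 'a priori' constraint).
    """
    w, b = np.zeros(2), 0.0
    n1, n0 = (group == 1).sum(), (group == 0).sum()
    for _ in range(steps):
        p = sigmoid(x @ w + b)
        g = (p - y) / n                            # log-loss gradient (per sample)
        # demographic-parity gap: difference in mean score between groups
        gap = p[group == 1].mean() - p[group == 0].mean()
        s = p * (1 - p)                            # d(sigmoid)/dz
        # gradient of lam * gap^2 with respect to the per-sample logits
        g_fair = 2 * gap * (s * (group == 1) / n1 - s * (group == 0) / n0)
        grad = g + lam * g_fair
        w -= lr * (x.T @ grad)
        b -= lr * grad.sum()
    p = sigmoid(x @ w + b)
    acc = ((p > 0.5) == y).mean()
    gap = abs(p[group == 1].mean() - p[group == 0].mean())
    return acc, gap

acc0, gap0 = train(lam=0.0)    # unconstrained baseline
acc1, gap1 = train(lam=10.0)   # fairness-penalised
print(f"unconstrained: acc={acc0:.2f}, parity gap={gap0:.2f}")
print(f"penalised:     acc={acc1:.2f}, parity gap={gap1:.2f}")
```

Comparing the two runs makes the trade-off concrete: the penalty shrinks the between-group score gap, typically at some cost in raw accuracy, which is exactly the trade-off the user-in-the-loop tools are meant to let practitioners navigate.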

A key feature of the project is its interdisciplinary approach to trust. AutoFair recognizes that fairness is not just a mathematical constraint but a socio-legal requirement. The project collaborates with major industry partners, including IBM Research and various European startups, to ensure that the developed fairness toolkits are practical for real-world deployment. By providing open-source libraries and clear visualization tools, the project aims to empower practitioners to build AI systems that are not only high-performing but also legally compliant and socially responsible.

By bridging the gap between technical optimization and legal frameworks, AutoFair seeks to set new standards for “trustworthy AI.” The project contributes directly to the implementation of the EU AI Act by providing the mathematical and computational “guarantees” needed to ensure that automated systems do not inherit or amplify societal inequalities.

My role

My role in the AutoFair project is situated at the intersection of AI, law, and ethics. I focus on the development of fairness and explainability techniques specifically grounded in counterfactual reasoning. This involves designing methods that provide "what-if" explanations to help users and auditors understand how a decision might have changed if certain attributes had been different.
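The "what-if" idea can be sketched as a toy counterfactual search over a hand-made linear scoring model. The feature names, weights, and greedy search below are all hypothetical illustrations of the general technique, not the project's actual methods.

```python
# A toy scoring model standing in for a credit decision (illustrative
# weights, not taken from any real system): approve if score > 0.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
bias = -1.0

def score(applicant):
    return sum(weights[k] * applicant[k] for k in weights) + bias

def counterfactual(applicant, step=0.1, max_iter=1000):
    """Greedy 'what-if' search: repeatedly nudge the most influential
    feature in the score-improving direction until the decision flips.
    Returns the changed applicant and the per-feature deltas."""
    cf = dict(applicant)
    for _ in range(max_iter):
        if score(cf) > 0:                      # decision has flipped
            break
        # pick the feature with the largest absolute weight and move it
        # in the direction that raises the score
        k = max(weights, key=lambda f: abs(weights[f]))
        cf[k] += step if weights[k] > 0 else -step
    deltas = {k: round(cf[k] - applicant[k], 2) for k in weights}
    return cf, deltas

applicant = {"income": 1.0, "debt": 1.5, "years_employed": 2.0}
print("original decision:", "approve" if score(applicant) > 0 else "reject")
cf, deltas = counterfactual(applicant)
print("counterfactual decision:", "approve" if score(cf) > 0 else "reject")
print("what-if changes:", {k: v for k, v in deltas.items() if v})
```

The returned deltas are the explanation itself: "your application would have been approved had your debt been lower by this amount", which is the kind of actionable, auditable statement counterfactual explanations aim to provide.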

Factsheet

Funding Program: Horizon Europe
Call: HORIZON-CL4-2021-HUMAN-01-01
Grant Agreement No.: 101070568
Type of Action: HORIZON Research and Innovation Actions (RIA)
Duration: 1 October 2022 – 30 September 2025
Funding: €3.84 million
Consortium: 8 partners from 5 countries
Coordinator: Czech Technical University in Prague (CTU), Czechia