# Fairlogue: Intersectional Fairness Toolkit for Clinical AI Models Detects Hidden Disparities
Most AI fairness tools evaluate one demographic attribute at a time, missing the compounded disparities that affect intersectional populations. Fairlogue is a new toolkit that addresses this gap in clinical machine learning.
## The Problem
Traditional fairness analysis examines one protected attribute at a time, comparing outcomes by race or by gender separately but never jointly. This misses critical intersectional effects:
- A model might seem fair by race AND fair by gender
- But could severely disadvantage specific race-gender combinations
- These compounded disparities are invisible to single-axis analysis
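The effect above is easy to demonstrate on toy data. The sketch below (illustrative numbers only, not from the Fairlogue evaluation, and plain pandas rather than Fairlogue's API) constructs predictions whose positive rates are identical across races and identical across genders, yet differ by 0.4 between race-gender subgroups:

```python
import pandas as pd

# Toy predictions: equal-sized race x gender subgroups of 10 patients each.
# Values are the subgroup's positive-prediction rate (illustrative only).
rates = {("A", "M"): 0.8, ("A", "F"): 0.4,
         ("B", "M"): 0.4, ("B", "F"): 0.8}
rows = []
for (race, gender), rate in rates.items():
    n = 10
    pos = int(rate * n)
    rows += [{"race": race, "gender": gender, "pred": 1}] * pos
    rows += [{"race": race, "gender": gender, "pred": 0}] * (n - pos)
df = pd.DataFrame(rows)

# Single-axis demographic parity: both axes look perfectly fair.
print(df.groupby("race")["pred"].mean())    # A: 0.6, B: 0.6
print(df.groupby("gender")["pred"].mean())  # F: 0.6, M: 0.6

# Intersectional view: a 0.4 gap that the marginal analysis hides.
inter = df.groupby(["race", "gender"])["pred"].mean()
print(round(inter.max() - inter.min(), 3))  # 0.4
```

Because the disadvantaged cells (A, F) and (B, M) are offset by advantaged cells within each marginal group, every single-axis audit passes while the intersectional gap is large.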
## Fairlogue's Three Components
| Component | What It Does |
|---|---|
| Observational Framework | Extends demographic parity, equalized odds, and equal opportunity to intersectional populations |
| Counterfactual Framework | Evaluates counterfactual fairness in treatment-based clinical contexts |
| Generalized Counterfactual | Assesses fairness under interventions on intersectional group membership |
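The observational framework's core move, extending a single-axis metric to every intersection of the protected attributes, can be sketched as follows. This is a minimal illustration of the idea, not Fairlogue's actual API; the function name, signature, and metric choices are assumptions:

```python
from itertools import product

import numpy as np


def intersectional_gaps(y_true, y_pred, groups):
    """Worst-case subgroup gaps for demographic parity and equal
    opportunity across all intersections of the protected attributes.

    groups: dict mapping attribute name -> per-sample value array.
    (Illustrative helper, not Fairlogue's API.)
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    names = list(groups)
    levels = [np.unique(groups[n]) for n in names]
    sel_rates, tprs = [], []
    # Enumerate every race x gender x ... cell and score it separately.
    for combo in product(*levels):
        mask = np.ones(len(y_pred), dtype=bool)
        for name, val in zip(names, combo):
            mask &= np.asarray(groups[name]) == val
        if not mask.any():
            continue  # skip empty intersections
        sel_rates.append(y_pred[mask].mean())  # P(Yhat=1 | subgroup)
        pos = mask & (y_true == 1)
        if pos.any():
            tprs.append(y_pred[pos].mean())    # P(Yhat=1 | Y=1, subgroup)
    return {
        "demographic_parity_gap": float(max(sel_rates) - min(sel_rates)),
        "equal_opportunity_gap": float(max(tprs) - min(tprs)),
    }


# Tiny example: four patients, one per race-gender cell. The two
# disadvantaged cells get no positive predictions, so both gaps are 1.0.
gaps = intersectional_gaps(
    y_true=[1, 1, 1, 1],
    y_pred=[1, 0, 0, 1],
    groups={"race": ["A", "A", "B", "B"], "gender": ["M", "F", "M", "F"]},
)
print(gaps)
```

A single-axis audit would instead compare only the marginal race groups and the marginal gender groups; the subgroup enumeration over `product(*levels)` is what makes the analysis intersectional.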
## Clinical Evaluation
Tested on electronic health records from the All of Us research program:
- Task: Glaucoma surgery prediction
- Protected attributes: Race and gender
- Finding: Substantial intersectional disparities that single-axis analysis missed entirely
## Why It Matters
Healthcare AI models make life-critical decisions. If these models have hidden biases at the intersection of multiple demographic factors, certain patient populations could systematically receive worse care. Fairlogue provides the tooling to detect and address these invisible disparities.
## Availability
Fairlogue is Python-based and open-source, designed for practical deployment in clinical settings.