Models

No model

Signifies that the analysis should focus solely on the fairness of the dataset.

Trivial predictor

Creates a trivial predictor that returns the most common label value among the provided data. If the label is numeric, the median is computed instead. This model serves as an informed baseline of what happens even for an uninformed predictor. Several kinds of class biases may exist, for example due to different class imbalances across sensitive attribute dimensions (e.g., for old white men compared to young hispanic women).
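
As a sketch of the idea (not the toolkit's exact implementation), such a predictor could be built like this:

```python
from collections import Counter
from statistics import median

def trivial_predictor(labels):
    """Build a constant predictor from training labels:
    the median for numeric labels, the most common value otherwise."""
    if all(isinstance(y, (int, float)) and not isinstance(y, bool) for y in labels):
        constant = median(labels)
    else:
        constant = Counter(labels).most_common(1)[0][0]
    return lambda x: constant  # ignores the input entirely

predict = trivial_predictor(["no", "no", "yes"])  # always predicts "no"
```

Comparing a real model against this baseline reveals how much of its (un)fairness is already explained by label imbalance alone.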

Onnx

Loads an inference model stored in ONNX format. This is a generic cross-platform format for representing machine learning models with a common set of operations, and several machine learning frameworks can export to it. The loaded model should be compatible with the dataset being analysed; for example, on tabular data it should have been trained on the same data columns as those in the dataset.

ONNX supports several different runtimes, but this loader's implementation selects the CPUExecutionProvider runtime, thereby maintaining compatibility with most machines. For inference on GPUs, prefer storing and loading models in formats that are guaranteed to preserve all features that could be included in the architectures of the respective frameworks; this can be achieved with different model loaders.

Here are some quick links on how to export ONNX models from popular frameworks:

Parameters

Onnx ensemble

The ONNX ensemble module enables predictions through a boosting ensemble mechanism, which is ideal for combining multiple weak learners to improve prediction accuracy. Boosting, a powerful technique in machine learning, trains a series of simple models (weak learners), often single-depth decision trees, and combines them into a strong ensemble model. However, the model loader accepts any models converted to ONNX format and zipped inside a directory, along with other meta-information (if any) stored in .npy format.

Usage instructions: to load a model, supply a zip file path. The zip file should include one or more trained models, each saved in ONNX format, as well as parameters, such as weights (often denoted as ‘alphas’), that define each learner’s contribution to the final model. For an example of preparing this file, please see our notebook.
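
The boosting combination itself reduces to a weighted vote over the weak learners' outputs. The following numpy sketch is illustrative (not the module's actual code); the name `alphas` mirrors the weights described above:

```python
import numpy as np

def boosted_predict(weak_predictions, alphas):
    """Combine binary weak-learner predictions in {-1, +1} with
    per-learner weights (alphas), AdaBoost-style."""
    weak_predictions = np.asarray(weak_predictions, dtype=float)  # (n_learners, n_samples)
    alphas = np.asarray(alphas, dtype=float)
    scores = alphas @ weak_predictions  # weighted sum per sample
    return np.where(scores >= 0, 1, -1)

# three weak learners, three samples; predicts [1, -1, -1]
preds = boosted_predict([[1, -1, 1], [1, 1, -1], [-1, -1, -1]],
                        alphas=[0.5, 0.3, 0.4])
```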

The module recommends using the MMM-Fair models. MMM-Fair is a fairness-aware machine learning framework designed to support high-stakes AI decision-making under competing fairness and accuracy demands. The three M’s stand for:

  • Multi-Objective: optimizes across classification accuracy, balanced accuracy, and fairness (specifically, maximum group-level discrimination).
  • Multi-Attribute: supports multiple protected groups (e.g., race, gender, age) simultaneously, analyzing group-specific disparities.
  • Multi-Definition: evaluates and compares fairness under multiple definitions: Demographic Parity (DP), Equal Opportunity (EP), and Equalized Odds (EO).

MMM-Fair enables developers, researchers, and decision-makers to explore the full spectrum of possible trade-offs and select the model configuration that aligns with their social or organizational goals. For a theoretical understanding of MMM-Fair, it is recommended to read the published scientific article that introduced the foundation of the MMM algorithms.

Train and upload: to create and integrate your own MMM-Fair model trained on your intended data, please follow the instructions given in the PyPI package guidance.

Parameters

Torch

Loads a PyTorch model that comprises Python code initializing the architecture and a file of trained parameters. For safety, the architecture's definition is allowed to directly import only specified libraries.
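
The import restriction can be understood as an allowlist check over the architecture's source code. Here is a hypothetical sketch using the standard `ast` module; the allowlist contents and function name are illustrative, not the loader's actual implementation:

```python
import ast

def check_imports(source, allowed):
    """Return the names of directly imported modules that are not
    in the allowlist (an empty list means the code is acceptable)."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module.split(".")[0]] if node.module else []
        else:
            continue
        violations += [n for n in names if n not in allowed]
    return violations

# e.g., allow only torch imports in an architecture definition
check_imports("import torch\nimport os", allowed={"torch"})  # → ["os"]
```

Rejecting code that imports unexpected modules reduces (but does not eliminate) the risk of executing untrusted architecture definitions.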

Parameters

Torch2onnx

Loads a PyTorch model that comprises Python code initializing the architecture and a file of trained parameters, and converts it to the ONNX format for inference. For safety, the architecture's definition is allowed to directly import only specified libraries.

Parameters

Fair node ranking

Constructs a node ranking algorithm that is a variation of non-personalized PageRank. The base algorithm computes a notion of centrality/structural importance for each node in the graph, and employs a diffusion parameter in the range [0, 1). More details on how the algorithm works can be found in the following seminal paper:

Page, L. (1999). The PageRank citation ranking: Bringing order to the web. Technical Report.
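
For intuition, the base iteration can be sketched in numpy, with `alpha` playing the role of the diffusion parameter mentioned above; this is an illustrative textbook version, not the library's implementation:

```python
import numpy as np

def pagerank(adjacency, alpha=0.85, iters=100):
    """Power-iteration PageRank on a column-normalized adjacency
    matrix; alpha in [0, 1) is the diffusion parameter."""
    A = np.asarray(adjacency, dtype=float)
    out_degree = A.sum(axis=0)
    A = A / np.where(out_degree == 0, 1, out_degree)  # column-stochastic
    n = A.shape[0]
    scores = np.full(n, 1 / n)
    for _ in range(iters):
        # diffuse alpha of the mass through edges, restart uniformly
        scores = alpha * A @ scores + (1 - alpha) / n
    return scores / scores.sum()

# on a symmetric triangle graph, all nodes are equally central
pagerank([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # → [1/3, 1/3, 1/3]
```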

The base node ranking algorithm is enriched with fairness-aware interventions implemented by the pygrank library. The latter can run on various computational backends, but numpy is selected due to its compatibility with a broad range of software and hardware. All implemented algorithms transfer node score mass from over-represented groups of nodes to those with lower average mass, using different strategies that determine the redistribution details. Fairness is imposed in the sense that centrality scores achieve similar score mass between groups. The three available strategies are described here:

  • `none` does not employ any fairness intervention and runs the base algorithm.
  • `uniform` applies a uniform rank redistribution strategy.
  • `original` tries to preserve the order of original node ranks by distributing more score mass to those.
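
For intuition only (the actual interventions are implemented in pygrank), the effect of a uniform redistribution can be sketched as follows; all names and details are illustrative:

```python
import numpy as np

def uniform_redistribution(scores, sensitive):
    """Shift score mass so the sensitive group's share of the total
    matches its share of nodes (illustrative, not pygrank's code)."""
    scores = np.asarray(scores, dtype=float)
    sensitive = np.asarray(sensitive, dtype=bool)
    target_ratio = sensitive.mean()  # parity with group size
    total = scores.sum()
    deficit = target_ratio * total - scores[sensitive].sum()
    adjusted = scores.copy()
    # add the missing mass uniformly inside the group,
    # remove the same mass uniformly outside it
    adjusted[sensitive] += deficit / sensitive.sum()
    adjusted[~sensitive] -= deficit / (~sensitive).sum()
    return adjusted

# the two sensitive nodes end up holding half of the total mass
uniform_redistribution([0.4, 0.3, 0.2, 0.1], sensitive=[0, 0, 1, 1])
```

The `original` strategy would instead weight the added mass by each node's original score, better preserving within-group orderings.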

Parameters

Mitigation ranking

Loads the researcher ranking model, incorporating a mitigation strategy.

This function implements a fair ranking mechanism that utilizes a sampling technique. It applies a mitigation strategy based on Statistical Parity, which aims to ensure equitable treatment across different groups by mitigating bias in the ranking process. Additionally, it compares the result of this fair ranking with a standard ranking derived from one of the numerical columns.

Returns: a ResearcherRanking instance containing both the mitigation-based ranking and the standard ranking for comparison.
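
As a rough illustration (not this model's actual code) of how statistical parity can be imposed on a ranking, one greedy scheme picks each next item from the group whose share of the ranking prefix most lags its overall share:

```python
def parity_ranking(items, groups):
    """Re-rank items so every prefix keeps group proportions close to
    the overall proportions (illustrative statistical-parity sketch).
    Items are assumed pre-sorted by score, best first."""
    pools = {}
    for item, g in zip(items, groups):
        pools.setdefault(g, []).append(item)
    shares = {g: len(p) / len(items) for g, p in pools.items()}
    ranked, counts = [], {g: 0 for g in pools}
    while any(pools.values()):
        # pick the non-empty group with the largest representation deficit
        g = max((g for g in pools if pools[g]),
                key=lambda g: shares[g] * (len(ranked) + 1) - counts[g])
        ranked.append(pools[g].pop(0))  # each group keeps its own order
        counts[g] += 1
    return ranked

# the minority-group item is pulled up into the top of the ranking
parity_ranking(["a1", "a2", "a3", "b1"], ["A", "A", "A", "B"])
```

A sampling-based variant would draw the next group at random with probabilities proportional to these deficits instead of taking the maximum.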