This collection of visual explanations is produced by MAI-BIAS using the FaceX library to explain how face attribute classifiers make decisions. It evaluates 19 facial regions such as eyes, nose, mouth, hair, and skin, showing which areas influence predictions the most.
Rather than analyzing images one by one, FaceX aggregates activations across the dataset to reveal common patterns. It highlights high-impact regions and patches, helping identify potential biases and improving transparency in how the model interprets faces.
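The dataset-level aggregation described above can be sketched roughly as follows. This is an illustrative example, not the FaceX API: the region names, array shapes, and the use of a simple mean are all assumptions.

```python
import numpy as np

# Hypothetical input: per-image importance scores for facial regions,
# e.g. produced by an attribution method. FaceX uses 19 regions; a
# shortened list is used here for brevity.
REGIONS = ["left_eye", "right_eye", "nose", "mouth", "hair"]

rng = np.random.default_rng(0)
# Toy data: 100 images x 5 regions, each score in [0, 1).
scores = rng.random((100, len(REGIONS)))

# Aggregate across the dataset: the mean importance per region shows
# which areas the classifier relies on most overall, rather than in
# any single image.
mean_importance = scores.mean(axis=0)
ranking = np.argsort(mean_importance)[::-1]  # most influential first

for idx in ranking:
    print(f"{REGIONS[idx]}: {mean_importance[idx]:.3f}")
```

In practice the per-region scores would come from the model's activations rather than random data, but the aggregation step is the same idea: averaging over many faces so that image-specific quirks cancel out and systematic patterns remain.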
FaceX produces heatmaps in which blue marks less important regions and red marks highly influential ones. These maps answer the question "Where does the model focus?", while high-impact patches add detail on what visual features trigger that focus.
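A minimal sketch of how an importance map can be rendered as a blue-to-red heatmap of this kind. The function name and the linear blue/red interpolation are illustrative assumptions, not how FaceX is implemented.

```python
import numpy as np

def to_heatmap(importance: np.ndarray) -> np.ndarray:
    """Map an importance array to RGB: blue for low, red for high values."""
    lo, hi = importance.min(), importance.max()
    # Normalise to [0, 1]; the small epsilon guards against a flat map.
    t = (importance - lo) / (hi - lo + 1e-8)
    rgb = np.zeros(importance.shape + (3,))
    rgb[..., 0] = t        # red channel grows with importance
    rgb[..., 2] = 1.0 - t  # blue channel shrinks with importance
    return rgb

# Toy 2x2 importance map: the 0.9 cell comes out red, the 0.1 cell blue.
heat = to_heatmap(np.array([[0.1, 0.9], [0.5, 0.2]]))
```

Real heatmaps would typically be blended over the input face image, but the core mapping from importance to color follows the same principle.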
By combining regional importance with patch-level analysis, the report helps spot possible biases in model reasoning, for example whether the classifier over-relies on irrelevant features such as accessories instead of actual facial attributes.