
Sep 21, 2025

New AI Study Tackles the Transparency Problem in Black-Box Models

This study addresses the disagreement problem in post hoc feature attribution: explainers such as SHAP, LIME, and gradient-based methods often produce contradictory feature-importance rankings for the same model. To counter this, the authors introduce Post hoc Explainer Agreement Regularization (PEAR), a consensus loss term added during model training that encourages explainers to agree without significantly sacrificing accuracy. Experiments on three datasets show that PEAR offers a tunable trade-off between explanation consensus and predictive performance, and that it improves agreement even among explainers not directly used in training. By turning disagreement into a controllable parameter, PEAR makes explanations more dependable and credible in high-stakes machine learning applications.
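To make the idea concrete, here is a minimal sketch of a consensus-regularized loss. It is an illustration, not the paper's implementation: the function names (`pear_loss`, `cosine_distance`), the choice of cosine distance as the disagreement measure, and the mixing weight `lam` are all assumptions for this example, standing in for whatever agreement metric and attribution methods the paper actually uses.

```python
import math

def cosine_distance(a, b):
    """Disagreement between two attribution vectors: 1 - cosine similarity.

    0.0 means the explainers rank/weight features identically (up to scale);
    values near 2.0 mean the attributions point in opposite directions.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def pear_loss(task_loss, attributions, lam=0.5):
    """Blend the usual task loss with an explainer-consensus penalty.

    task_loss    -- ordinary training loss (e.g. cross-entropy) for a batch
    attributions -- list of attribution vectors, one per explainer, for the
                    same input (hypothetical stand-ins for SHAP/LIME/gradients)
    lam          -- trade-off knob: 0.0 = accuracy only, 1.0 = consensus only
    """
    n = len(attributions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Average pairwise disagreement across all explainer pairs.
    disagreement = sum(
        cosine_distance(attributions[i], attributions[j]) for i, j in pairs
    ) / len(pairs)
    return (1.0 - lam) * task_loss + lam * disagreement

# Two explainers that agree perfectly: the penalty vanishes,
# so only the down-weighted task loss remains.
print(pear_loss(1.0, [[1.0, 2.0, 0.5], [2.0, 4.0, 1.0]], lam=0.5))  # → 0.5
```

In a real training loop this scalar would be backpropagated each batch, so the model's parameters are pushed both toward low task error and toward producing inputs on which different explainers agree; sweeping `lam` traces out the consensus-versus-accuracy trade-off the study reports.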

Source: HackerNoon →

