Nov 10, 2025

Quantifying Attribute Association Bias in Latent Factor Recommendation Models

This paper introduces an evaluation framework to measure attribute association bias in recommendation systems, expanding fairness research beyond traditional allocation harms. Building on NLP bias-detection methods, it quantifies representational harms in latent factor models, focusing on gender associations as a case study. By analyzing how stereotypes can be encoded and amplified through vector embeddings, the study enhances transparency and offers new directions for mitigating bias in AI-driven recommendations.
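The NLP bias-detection methods the paper builds on include association tests such as WEAT, which score how strongly two sets of embeddings (e.g., item vectors) differ in their similarity to two attribute sets (e.g., gendered latent directions). A minimal illustrative sketch, using hypothetical toy latent factors rather than the paper's actual data or metric:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # Mean cosine similarity to attribute set A minus attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how differently target embedding sets X and Y
    associate with attribute embedding sets A and B (bounded roughly in [-2, 2])."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Toy latent factors (hypothetical, not from the paper):
rng = np.random.default_rng(0)
dim = 8
g = rng.normal(size=dim)  # illustrative "gender" direction in latent space
A = [g + 0.1 * rng.normal(size=dim) for _ in range(4)]   # attribute vectors near +g
B = [-g + 0.1 * rng.normal(size=dim) for _ in range(4)]  # attribute vectors near -g
X = [g + 0.5 * rng.normal(size=dim) for _ in range(5)]   # items skewed toward +g
Y = [-g + 0.5 * rng.normal(size=dim) for _ in range(5)]  # items skewed toward -g

print(round(weat_effect_size(X, Y, A, B), 2))  # large positive value: strong stereotypical association
```

A score near the upper bound indicates the item sets are strongly, oppositely associated with the attribute directions; values near zero indicate little encoded association.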

Source: HackerNoon
