Researchers Can Now Identify If Your AI Stole Its Training Data
Text-to-image models can be misappropriated when attackers train new models on the outputs of commercial models. The paper introduces an injection-free method for determining whether a suspect model's training data came from a given source model. By leveraging the source model's inherent memorization patterns, the approach achieves over 80% instance-level accuracy and over 85% statistical-level accuracy without modifying the source model.
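The two accuracy levels suggest a two-stage decision: score individual samples, then aggregate those per-sample verdicts into a model-level conclusion. As a rough illustration only, here is a minimal Python sketch of that aggregation step; the function names, thresholds, and memorization scores are hypothetical placeholders, not the paper's actual scoring method:

```python
import random

def instance_decision(score, threshold=0.5):
    """Instance-level check: flag one generated sample as likely
    derived from the source model when its (hypothetical)
    memorization-similarity score exceeds a threshold."""
    return score > threshold

def statistical_decision(scores, threshold=0.5, min_fraction=0.3):
    """Statistical-level verdict: declare the suspect model trained
    on source outputs if enough individual samples are flagged."""
    flagged = sum(instance_decision(s, threshold) for s in scores)
    return flagged / len(scores) >= min_fraction

random.seed(0)
# Synthetic scores: a suspect model's samples skew high,
# an independent model's samples skew low.
suspect = [random.gauss(0.7, 0.15) for _ in range(200)]
independent = [random.gauss(0.3, 0.15) for _ in range(200)]

print(statistical_decision(suspect))      # True
print(statistical_decision(independent))  # False
```

Aggregating over many samples is what lets the statistical-level accuracy exceed the instance-level accuracy: individual errors wash out across the sample set.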
Source: HackerNoon