The Illusion of Scale: Why LLMs Are Vulnerable to Data Poisoning, Regardless of Size
The research challenges the conventional wisdom that an attacker must control a fixed percentage of the training data (e.g., 0.1% or 0.27%) for a poisoning attack to succeed. Instead, the study found that a near-constant number of poisoned documents, on the order of 250, was sufficient regardless of model size. For the largest model tested (13B parameters), those 250 poisoned samples represented a minuscule 0.00016% of the total training tokens, yet the attack success rate remained nearly identical across all tested model scales for a fixed number of poisoned documents.
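To see why a fixed count of poisoned documents becomes a vanishing share of the corpus as models grow, here is a minimal back-of-the-envelope sketch in Python. The token budget per parameter (Chinchilla-style, ~20 tokens per parameter) and the average poisoned-document length are illustrative assumptions, not figures from the study, so the resulting percentages are only ballpark-comparable to the 0.00016% cited above.

```python
# Illustrative sketch (assumptions, not figures from the study):
# how a fixed number of poisoned documents shrinks as a fraction of
# the training corpus as model size (and hence token budget) grows.

POISONED_DOCS = 250        # fixed number of poisoned documents
AVG_DOC_TOKENS = 1_000     # assumed average tokens per poisoned document
TOKENS_PER_PARAM = 20      # assumed Chinchilla-style training-token budget

for params_b in (0.6, 2, 7, 13):  # model sizes in billions of parameters
    total_tokens = params_b * 1e9 * TOKENS_PER_PARAM
    poisoned_tokens = POISONED_DOCS * AVG_DOC_TOKENS
    fraction = poisoned_tokens / total_tokens
    print(f"{params_b:>5}B params: poisoned share of corpus ~ {fraction:.8%}")
```

Under these assumptions the poisoned share drops by more than an order of magnitude from the smallest to the largest model, while the absolute number of poisoned documents, and per the study the attack success rate, stays the same.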
Source: HackerNoon →