Model Poisoning Turns Helpful AI Into a Trojan Horse
Model poisoning is the malicious manipulation of a machine learning model's training data or parameters to embed hidden "backdoor" behaviors. The attack works in four steps: poisoning the model's weights, activating the trigger, exfiltrating data, and hiding the exfiltrated data.
Source: HackerNoon
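
To make the first two steps concrete, here is a minimal sketch of a classic data-poisoning backdoor in the style of BadNets: a small trigger patch is stamped onto a fraction of training images whose labels are flipped to an attacker-chosen class, so a model trained on the tainted set learns to associate the patch with that class. The parameter names and values (`poison_rate`, `target_label`, the 3x3 corner trigger) are illustrative assumptions, not details from the article.

```python
# Illustrative sketch of training-data poisoning for a backdoor attack.
# All parameters here are assumptions for demonstration, not values
# taken from the source article.
import numpy as np

def poison_dataset(images, labels, target_label=7, poison_rate=0.05,
                   trigger_value=1.0, trigger_size=3, seed=0):
    """Stamp a trigger patch onto a random subset of images and relabel
    them to the attacker's target class. A model trained on this data
    behaves normally on clean inputs but predicts `target_label` for
    any input carrying the patch -- the hidden "backdoor" behavior."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_label
    return images, labels

if __name__ == "__main__":
    # Toy 28x28 grayscale dataset standing in for real training data.
    X = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    Xp, yp = poison_dataset(X, y)
    # Some chosen samples may already carry the target label,
    # so the relabel count can be slightly below 5% of the set.
    print("relabeled samples:", int((yp != y).sum()))
```

At inference time, the attacker triggers the backdoor simply by adding the same patch to an input; the misclassification then serves as the foothold for the later exfiltration and concealment steps the article describes.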