The Fragile Memory of Neural Networks, and the Metrics We Trust
This study examines how catastrophic forgetting in neural networks is measured and how it is influenced by training choices. Evaluating several metrics (retention, relearning, activation overlap, and interference) across different testbeds, it finds that no single metric fully captures the phenomenon. Optimizer choice matters in particular: Adam tends to worsen forgetting, while SGD retains prior-task performance more reliably. The findings argue for multi-metric evaluation and caution against overgeneralizing results across tasks; future work points toward deeper networks, broader testbeds, and improved measurement methods.
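To make the retention metric concrete, here is a minimal toy sketch of the measurement loop: train on task A, train sequentially on a conflicting task B, then re-test task A. Everything in it is an illustrative assumption rather than the study's actual setup: the 2-D synthetic tasks (task B is simply task A with flipped labels, so interference is guaranteed), the bias-free logistic model, the small L2 penalty, and the hand-rolled SGD/Adam updates.

```python
# Toy "retention" measurement: accuracy on task A before vs. after training on task B.
# All modeling choices here are illustrative assumptions, not the study's setup.
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Two linearly separable Gaussian blobs in 2-D.
    pos = rng.normal([2.0, 2.0], 0.5, size=(200, 2))
    neg = rng.normal([-2.0, -2.0], 0.5, size=(200, 2))
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(200), np.zeros(200)])
    return X, y

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == y))

def train(w, X, y, optimizer="sgd", lr=0.1, steps=500):
    # Full-batch logistic-regression gradients; SGD and Adam written out by hand.
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        g = X.T @ (p - y) / len(y) + 1e-2 * w  # small L2 term keeps |w| bounded
        if optimizer == "sgd":
            w = w - lr * g
        else:  # adam, with standard bias correction
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g**2
            w = w - lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
    return w

XA, yA = make_task()
XB, yB = XA, 1.0 - yA  # task B: same inputs, flipped labels -> maximal interference

for opt in ("sgd", "adam"):
    w = train(np.zeros(2), XA, yA, opt)
    before = accuracy(w, XA, yA)     # task-A accuracy right after learning task A
    w = train(w, XB, yB, opt)        # sequential training on task B
    after = accuracy(w, XA, yA)      # re-test task A
    print(f"{opt}: task-A accuracy {before:.2f} -> {after:.2f}, "
          f"retention {after / before:.2f}")
```

In this deliberately adversarial toy both optimizers forget task A almost completely, so it illustrates how the retention number is computed rather than reproducing the Adam-versus-SGD gap the study reports; the other metrics (relearning, activation overlap, interference) would each need their own probe on top of this loop.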
Source: HackerNoon →