Study Finds Optimizer Choice Significantly Impacts Model Retention
This work revisits catastrophic forgetting in machine learning, showing that the choice of optimizer, alongside dataset and evaluation metrics, plays a far larger role than previously understood. By comparing modern gradient-based optimizers such as SGD, RMSProp, and Adam across supervised and reinforcement learning settings, the study finds that forgetting depends not only on model architecture and data exposure but also on how learning itself is optimized.
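To make the comparison concrete, here is a minimal sketch (not the study's code) of the standard forgetting-measurement protocol: train a model on task A, then on a conflicting task B, and report how much accuracy on task A is lost, with the optimizer (plain SGD vs. Adam, both hand-rolled here) as the only variable. The synthetic two-blob tasks and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: sequential-task forgetting with SGD vs. Adam,
# using a NumPy logistic-regression model so no ML framework is needed.
import numpy as np


def make_task(rng, mean):
    # Two Gaussian blobs with labels 0/1; `mean` sets the boundary
    # orientation, so tasks with opposite means conflict.
    X0 = rng.normal(loc=-mean, scale=1.0, size=(200, 2))
    X1 = rng.normal(loc=+mean, scale=1.0, size=(200, 2))
    return np.vstack([X0, X1]), np.array([0] * 200 + [1] * 200)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))


def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)


def train(X, y, w, b, optimizer="sgd", lr=0.1, steps=300):
    # Adam state (unused for plain SGD).
    m, v = np.zeros(3), np.zeros(3)
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        p = sigmoid(X @ w + b)
        g = np.append(X.T @ (p - y) / len(y), np.mean(p - y))
        if optimizer == "adam":
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g**2
            g = (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
        w -= lr * g[:2]
        b -= lr * g[2]
    return w, b


def forgetting(optimizer):
    # Same seed per run so both optimizers see identical data.
    rng = np.random.default_rng(0)
    XA, yA = make_task(rng, np.array([2.0, 0.0]))
    XB, yB = make_task(rng, np.array([-2.0, 0.0]))  # conflicts with A
    w, b = train(XA, yA, np.zeros(2), 0.0, optimizer)
    acc_before = accuracy(w, b, XA, yA)
    w, b = train(XB, yB, w, b, optimizer)
    acc_after = accuracy(w, b, XA, yA)
    return acc_before - acc_after  # accuracy lost on task A


for opt in ("sgd", "adam"):
    print(f"{opt}: forgetting = {forgetting(opt):.2f}")
```

Because the two tasks directly conflict, both optimizers forget heavily here; the study's point is that on realistic task sequences the size of this gap varies with the optimizer's update rule, not just the model or the data.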
Source: HackerNoon →