Sep 21, 2025
The Geek’s Guide to ML Experimentation
We use tabular datasets originally from OpenML, compiled into a benchmark suite by the Inria-Soda team on HuggingFace. We train on 28,855 samples and test on the remaining 9,619. All the MLPs are trained with a batch size of 64, for 64 epochs, and a learning rate of 0.0005, and we study MLPs with 3 hidden layers of 100 neurons each. We define the six key metrics used in our work here.
Source: HackerNoon