How Transformer Models Detect Anomalies in System Logs
This study evaluates a transformer-based framework for detecting anomalies in large-scale system logs. Experiments were conducted on four public datasets—HDFS, BGL, Spirit, and Thunderbird—using adaptive log-sequence generation to handle varying sequence lengths and data rates. The model architecture includes two transformer encoder layers with multi-head attention and was optimized using AdamW and OneCycleLR. Implemented in PyTorch and trained on an HPC system, the setup demonstrates an efficient and scalable approach for benchmarking log anomaly detection methods.
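The described setup (two transformer encoder layers with multi-head attention, trained with AdamW and a OneCycleLR schedule in PyTorch) can be sketched roughly as below. This is a minimal illustration, not the authors' code: the class name, dimensions, vocabulary size, and the mean-pooling classification head are all assumptions.

```python
import torch
import torch.nn as nn

class LogAnomalyTransformer(nn.Module):
    """Hypothetical sketch of a two-layer transformer encoder over log-template IDs."""

    def __init__(self, vocab_size=200, d_model=64, nhead=4, num_layers=2, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learned positional embedding; dims are illustrative assumptions.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 2)  # normal vs. anomalous

    def forward(self, x):
        h = self.encoder(self.embed(x) + self.pos[:, : x.size(1)])
        return self.head(h.mean(dim=1))  # pool over the log sequence

model = LogAnomalyTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=100)

# A batch of 8 log sequences, each 32 template IDs long (synthetic data).
batch = torch.randint(0, 200, (8, 32))
logits = model(batch)
print(logits.shape)  # torch.Size([8, 2])
```

Adaptive log-sequence generation would sit upstream of this model, grouping raw log lines into variable-length windows before they are mapped to template IDs.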
Source: HackerNoon