Executive Summary
The classical Rocks vs Mines sonar benchmark (60 time–frequency features, 208 labelled pings) exposes the limitations of linear models: an inability to capture feature interactions, rigid linear decision boundaries, and high bias. Re‑implementing the task in Rust with a modern Gradient‑Boosted Decision‑Tree (GBDT) framework lifted hold‑out accuracy from 76 % (logistic regression) to ≈85 % (GBDT) while maintaining millisecond‑level inference latency and a minimal binary footprint. This paper explains why the logistic approach plateaued, how boosting overcomes those bottlenecks, and what was required to engineer an end‑to‑end, self‑contained Rust solution.
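To give a flavour of the kind of self‑contained inference the Rust implementation aims for, the sketch below hand‑rolls prediction for a tiny boosted ensemble of depth‑1 trees (stumps) over a 60‑feature ping. It is a minimal illustration only: the feature indices, thresholds, and leaf scores are hypothetical placeholders, not the trained model discussed later in the paper.

```rust
// Minimal sketch of self-contained GBDT inference in pure Rust (no external crates).
// All stump parameters below are made-up placeholders for illustration.

/// A depth-1 regression tree: split one feature at a threshold, emit a leaf score.
struct Stump {
    feature: usize,
    threshold: f32,
    left_score: f32,  // contribution when ping[feature] < threshold
    right_score: f32, // contribution when ping[feature] >= threshold
}

/// Sum the stump scores and squash through a sigmoid to get P(mine).
fn predict(ensemble: &[Stump], ping: &[f32]) -> f32 {
    let raw: f32 = ensemble
        .iter()
        .map(|s| if ping[s.feature] < s.threshold { s.left_score } else { s.right_score })
        .sum();
    1.0 / (1.0 + (-raw).exp())
}

fn main() {
    // Two illustrative stumps over a 60-feature ping (values are arbitrary).
    let ensemble = vec![
        Stump { feature: 10, threshold: 0.23, left_score: -0.8, right_score: 0.6 },
        Stump { feature: 44, threshold: 0.05, left_score: -0.3, right_score: 0.4 },
    ];
    let ping = vec![0.1_f32; 60];
    println!("P(mine) = {:.3}", predict(&ensemble, &ping));
}
```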