ONNX Inference from MQL5: Bridging Research ML Models into MetaTrader
Graph export, operator support limits, latency budgets, numeric precision and deployment packaging for EAs.
Authored by Onur Erkan Yıldız (editorially reviewed) — Founder, Financial Engineer, CMB-licensed. Higher education in Financial Engineering and Money & Capital Markets; SPK (Turkey CMB) licence; 16 years across institutional markets, research, and quant-driven analytics.
Why ONNX
Research teams train in Python (PyTorch, scikit-learn pipelines), but production must run inside or adjacent to MT5. ONNX offers a portable graph representation — provided the operator coverage of your chosen runtime matches your export profile.
Engineering checklist
- Operator coverage — verify every node is supported by your chosen native runtime.
- Latency budget — per-tick inference must fit within the microsecond- or millisecond-level SLA stated in your design docs.
- Numeric drift — floating-point behaviour differs across runtimes; align calibration thresholds with the training environment.
- Versioning — embed the model hash in EA comments for audit.
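The versioning item above can be sketched as a small hashing helper on the research side — a minimal illustration, assuming you tag each deployed graph with a short SHA-256 prefix (the function name and comment format are illustrative, not a MetaTrader convention):

```python
# Sketch: fingerprint an ONNX graph's bytes so the same hash can be
# embedded in EA comments/logs and cross-checked during audits.
import hashlib


def model_fingerprint(model_bytes: bytes, length: int = 12) -> str:
    """Return a short SHA-256 hex prefix identifying the model bytes."""
    return hashlib.sha256(model_bytes).hexdigest()[:length]


# In practice you would read the .onnx file's bytes; a placeholder here:
fp = model_fingerprint(b"onnx-graph-bytes-placeholder")
print(f"EA comment tag: model={fp}")
```

Recomputing the fingerprint at load time and comparing it against the tag in the EA comment catches silent graph swaps before they reach live orders.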
Failure mode: silent feature skew
If live feature engineering diverges from training by even a single line, ONNX outputs become sophisticated random-number generators.
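One cheap defence is a parity check: replay a reference bar through the live feature pipeline and compare against values captured at training time. A minimal sketch, assuming you have both vectors available (names and tolerances are illustrative assumptions):

```python
# Sketch: detect silent feature skew by comparing live-computed features
# against training-time reference values for the same input bar.
import math


def features_match(live, reference, rel_tol=1e-6, abs_tol=1e-9):
    """True iff every live feature agrees with its training-time value."""
    if len(live) != len(reference):
        return False  # schema skew: a feature was added or dropped
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(live, reference))


# A one-line divergence (e.g. simple return vs log return) trips the check:
reference = [0.0123, 1.8e-4, 0.55]
assert features_match([0.0123, 1.8e-4, 0.55], reference)
assert not features_match([0.0123, 1.8e-4, 0.549], reference)
```

Running this as a startup self-test, before the EA trades, turns "sophisticated random numbers" into a hard failure you can see.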
Finvestopia context
We emphasise disclosed model boundaries; ship changelogs when you rotate ONNX graphs, just as we version radar weighting heuristics.
Related entries
Avoiding Curve Fitting & Over-Optimisation
Too many free parameters + short history = beautiful backtest, fragile live.
DLL Integration in Expert Advisors: Engineering Risk & Control
Where native code helps (SIMD, ONNX runtimes), where it hurts (ABI drift, trust boundary), and broker policy constraints.
Educational content authored by our team — informational only, not investment advice.
