
ONNX Inference from MQL5: Bridging Research ML Models into MetaTrader

Graph export, operator support limits, latency budgets, numeric precision and deployment packaging for EAs.

Onur Erkan Yıldız
Founder, Financial Engineer · CMB-licensed

Why ONNX

Research teams train in Python (PyTorch, scikit-learn pipelines), but production must run inside or adjacent to MT5. ONNX offers a portable graph representation, provided the operator coverage of your target runtime matches your export profile.
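The operator-coverage caveat can be sketched in a few lines of Python. The op inventories below are placeholders for illustration; in practice you would collect the `op_type` of each node in the exported graph and compare it against the target runtime's documented operator set:

```python
def unsupported_ops(exported_ops, runtime_ops):
    """Ops present in the exported graph but missing from the target runtime."""
    return sorted(set(exported_ops) - set(runtime_ops))

# Hypothetical inventories; real code would read graph.node op_type values
# from the exported .onnx file and the runtime's operator documentation.
exported = {"Gemm", "Relu", "ReduceMean", "LayerNormalization"}
runtime = {"Gemm", "Relu", "ReduceMean", "Softmax", "Conv"}

missing = unsupported_ops(exported, runtime)
# A non-empty result means the export profile must change (lower the opset,
# or decompose the offending op) before this graph can be deployed.
```

Running the gap check at export time, rather than discovering it on the first live tick, keeps the failure in the research environment where it is cheap.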

Engineering checklist

  • Operator coverage — verify that every node in the exported graph is supported by your chosen native runtime.
  • Latency budget — per-tick inference must fit within the microsecond- or millisecond-level SLA stated in your design docs.
  • Numeric drift — floating-point behaviour differs across runtimes; align calibration thresholds with the training environment.
  • Versioning — embed the model hash in EA comments for audit.
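Two of the items above lend themselves to short stdlib-only sketches: a model fingerprint for the versioning bullet and a crude latency probe. Here `stub_predict` stands in for a real ONNX session's run call, the placeholder file bytes stand in for an exported graph, and the budget constant is a hypothetical SLA, not a recommendation:

```python
import hashlib
import time
from pathlib import Path

LATENCY_BUDGET_MS = 5.0  # hypothetical per-tick SLA; take yours from design docs

def model_fingerprint(path):
    """Short SHA-256 digest of an .onnx file, suitable for an EA comment string."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()[:16]

def mean_latency_ms(predict, features, runs=200):
    """Average wall-clock time of one inference call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        predict(features)
    return (time.perf_counter() - start) * 1000.0 / runs

def stub_predict(x):
    """Stand-in for real ONNX inference; replace with the actual session call."""
    return sum(v * v for v in x)

# Placeholder bytes so the sketch is self-contained; a real file would be
# the exported graph itself.
Path("model.onnx").write_bytes(b"placeholder-onnx-bytes")
fp = model_fingerprint("model.onnx")
lat = mean_latency_ms(stub_predict, [0.1] * 32)
# Gate deployment on lat <= LATENCY_BUDGET_MS, and log fp alongside every trade.
```

The fingerprint is deterministic for a given file, so the same short hash can be written into the EA comment at load time and checked against the changelog during an audit.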

Failure mode: silent feature skew

If live feature engineering diverges from the training pipeline by even a single line, the ONNX model's outputs become sophisticated random number generators: the graph still runs, but its inputs no longer mean what it was trained on.
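A cheap guard is to run the training-time feature function and the live reimplementation over the same bars and assert agreement before the EA ever trades. The two 3-bar-return implementations below are hypothetical stand-ins for your real pipeline:

```python
def training_features(closes):
    """3-bar simple return, as the research pipeline computes it (hypothetical)."""
    return [(closes[i] - closes[i - 3]) / closes[i - 3] for i in range(3, len(closes))]

def live_features(closes):
    """Independent live reimplementation; must match training bit-for-bit."""
    return [(c - p) / p for p, c in zip(closes, closes[3:])]

def max_abs_skew(a, b):
    """Largest elementwise disagreement between two feature vectors."""
    if len(a) != len(b):
        raise ValueError("feature vectors differ in length")
    return max(abs(x - y) for x, y in zip(a, b))

closes = [100.0, 101.0, 99.5, 102.0, 103.5, 101.2]
skew = max_abs_skew(training_features(closes), live_features(closes))
# Gate deployment on skew == 0.0, or on an explicit tolerance you can defend.
```

Wiring this parity check into CI, on a fixed fixture of historical bars, turns silent skew into a loud build failure.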

Finvestopia context

We emphasise disclosed model boundaries: ship a changelog whenever you rotate ONNX graphs, just as we version our radar weighting heuristics.

Educational content authored by our team — informational only, not investment advice.