How Backtesting Works

Backtesting is the process of evaluating a trading strategy against historical market data to understand how it would have performed in the past. Arconomy's backtesting engine replays real market conditions tick by tick, applying your strategy's rules at each step to produce a detailed performance report.

Backtest configuration panel with date range, instrument, and data source options

What Is Backtesting?

At its core, backtesting answers a simple question: if I had been running this strategy during a specific historical period, what would the outcome have been? Instead of risking real capital on an untested idea, backtesting lets you simulate thousands of trades across months or years of market data in seconds.

Backtesting serves several critical purposes in the strategy development process:

  • Validation — Confirm that your trading logic produces a positive edge before committing capital.
  • Refinement — Identify weaknesses in your entry, exit, or risk management rules and iterate on them.
  • Comparison — Evaluate multiple strategy variants side by side to determine which configuration performs best.
  • Confidence — Develop conviction in your approach by seeing it perform across different market regimes.

How the Engine Works

When you run a backtest on Arconomy, the engine performs the following sequence of operations:

  1. Load market data — The engine retrieves historical tick data or OHLC bar data for your selected instrument and date range from Arconomy's data store.
  2. Initialise strategy state — All rules are loaded with their configured parameters. Portfolio balance, position state, and indicator buffers are initialised to their starting values.
  3. Step through each data point — The engine processes each tick (or bar) sequentially. At every step, each rule in your strategy is evaluated in the order defined on your canvas.
  4. Execute signals — When entry or exit conditions are met, the engine simulates order execution, accounting for spread, slippage, and commission costs.
  5. Record results — Every trade, position change, and portfolio value is logged. At the end of the simulation, the engine compiles the full performance report.

Backtest engine pipeline: data loading, rule evaluation, signal execution, results compilation
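
The five steps above can be sketched as a minimal replay loop. This is an illustrative simplification, not Arconomy's actual engine: `run_backtest`, the lambda rules, and the fixed spread are all hypothetical stand-ins for the configurable components described in this guide.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    entry: float
    exit: float
    pnl: float

def run_backtest(prices, entry_rule, exit_rule, spread=0.0):
    """Replay a price series sequentially, applying entry/exit rules at each step."""
    trades, position = [], None
    for price in prices:                      # step through each data point
        if position is None and entry_rule(price):
            position = price + spread         # simulated fill pays the spread
        elif position is not None and exit_rule(price):
            trades.append(Trade(position, price, price - position))
            position = None
    return trades                             # compiled into the performance report

# Hypothetical rules: enter below 100, exit above 102, 0.5 spread per fill.
trades = run_backtest(
    [101, 99, 100, 103, 98, 104],
    entry_rule=lambda p: p < 100,
    exit_rule=lambda p: p > 102,
    spread=0.5,
)
# Two round trips: entry 99.5 → exit 103, entry 98.5 → exit 104.
```

Note that each fill is adjusted for costs at execution time (step 4) rather than afterwards, so the logged trades already reflect realistic prices when results are compiled (step 5).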

Rule Evaluation Order

The order in which rules are evaluated matters. Arconomy evaluates rules in the sequence they appear on your canvas, from top to bottom and left to right. Filter rules are evaluated first to determine whether the strategy should be active at all. If filters pass, entry rules are checked. If a position is open, exit and risk management rules are evaluated on each subsequent tick.

This deterministic evaluation order ensures that backtest results are reproducible. Running the same strategy with the same parameters and data will always produce identical results.
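
The evaluation order described above can be expressed as a small dispatch function. This is a conceptual sketch, assuming a simplified rule model; the `evaluate_step` function, the `ctx` dictionary, and the session-hours filter are hypothetical, not part of Arconomy's API.

```python
def evaluate_step(rules, ctx):
    """Evaluate rules in canvas order: filters gate everything else."""
    for passes in rules["filters"]:
        if not passes(ctx):
            return "inactive"        # a failing filter halts all further evaluation
    if ctx["position"] is None:
        for should_enter in rules["entries"]:
            if should_enter(ctx):
                return "enter"
    else:
        for should_exit in rules["exits"]:
            if should_exit(ctx):
                return "exit"
    return "hold"

rules = {
    "filters": [lambda c: 8 <= c["hour"] <= 17],   # hypothetical session filter
    "entries": [lambda c: c["price"] < 100],
    "exits":   [lambda c: c["price"] > 102],
}

signal = evaluate_step(rules, {"position": None, "hour": 14, "price": 99.0})
# → "enter": the filter passes and no position is open, so entry rules fire.
```

Because the same inputs always walk the rules in the same order, rerunning the step with identical context yields an identical signal, which is what makes backtest results reproducible.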

Real Tick Data vs OHLC Bar Data

Most backtesting platforms operate on OHLC (Open, High, Low, Close) bar data. While this is sufficient for many strategies, it introduces an inherent ambiguity: within each bar, you do not know the actual sequence of price movements. Did price hit the high before the low, or the other way around? This ambiguity can lead to unrealistic fill assumptions and inflated results.

Arconomy addresses this by offering backtesting on real tick data — the actual sequence of price changes as they occurred in the market. Tick data preserves the exact order of every price movement, eliminating bar-level ambiguity and producing more accurate simulations.

  Feature                     OHLC Bar Data       Real Tick Data
  Price sequence within bar   Unknown             Exact
  Fill accuracy               Approximate         High
  Processing speed            Faster              Slower
  Best for                    Longer timeframes   All timeframes
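
The bar-level ambiguity can be made concrete with a short example. Both tick paths below are consistent with the same OHLC bar (open 100, high 105, low 95, close 101), yet they produce opposite outcomes for a long position holding a stop and a target inside that bar. The `fill_outcome` helper is a hypothetical illustration, not an engine API.

```python
def fill_outcome(path, stop, target):
    """Walk a tick path in order; report which level is touched first."""
    for price in path:
        if price <= stop:
            return "stopped"
        if price >= target:
            return "target"
    return "none"

# Two tick sequences that collapse to the identical OHLC bar:
high_first = [100, 105, 95, 101]   # price rallied before it dropped
low_first  = [100, 95, 105, 101]   # price dropped before it rallied

# A long position with a stop at 96 and a target at 104:
fill_outcome(high_first, stop=96, target=104)  # → "target"
fill_outcome(low_first,  stop=96, target=104)  # → "stopped"
```

A bar-based backtest must guess which sequence occurred; a tick-based backtest simply replays the one that actually did.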

For a deeper dive into tick data and its advantages, see Real Tick Data.

The Iteration Model

Arconomy supports an iterative backtesting model, where a single backtest can be split into multiple iterations across different data slices. Rather than running your strategy once across the entire date range, the engine can partition the data into segments and run the strategy independently on each segment.

This approach provides several benefits:

  • Statistical robustness — Performance measured across many independent segments is more reliable than a single run.
  • Regime analysis — You can observe how your strategy performs during different market conditions (trending, ranging, volatile, quiet).
  • Overfitting detection — A strategy that performs well on one segment but poorly on others is likely overfitted.

The number of iterations available depends on your plan. Learn more in Iterative Backtesting.
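
Conceptually, iterative backtesting partitions the data into contiguous, non-overlapping slices and runs the strategy on each one independently. The sketch below shows one plausible partitioning scheme; the `partition` function is illustrative and not how Arconomy necessarily segments data internally.

```python
def partition(data, n_iterations):
    """Split a series into n contiguous, non-overlapping segments of near-equal size."""
    size, rem = divmod(len(data), n_iterations)
    segments, start = [], 0
    for i in range(n_iterations):
        end = start + size + (1 if i < rem else 0)  # spread the remainder evenly
        segments.append(data[start:end])
        start = end
    return segments

ticks = list(range(10))          # stand-in for a loaded tick series
segments = partition(ticks, 3)   # → [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each segment would then be backtested with freshly initialised strategy state, so a result that holds across all segments is less likely to be an artifact of one favourable stretch of data.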

Execution Assumptions

The backtest engine applies several realistic execution assumptions to produce results that more closely resemble live trading:

  • Spread — Each simulated fill includes the instrument's typical bid-ask spread. You can configure a fixed spread or use historical spread data where available.
  • Slippage — A configurable slippage value is applied to each fill to account for the difference between the expected price and the actual execution price.
  • Commission — Trade commissions are deducted based on the fee structure of your selected broker profile.
  • Margin — For leveraged instruments, margin requirements are enforced. Positions that exceed available margin are rejected.
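
The cost adjustments above compose in a straightforward way: spread and slippage each worsen the fill price in the direction of the trade, and commission is deducted from the result. The functions and cost values below are hypothetical examples, not Arconomy's broker-profile settings.

```python
def fill_price(mid, side, spread, slippage):
    """Adjust a mid price for half-spread plus slippage; both worsen the fill."""
    adj = spread / 2 + slippage
    return mid + adj if side == "buy" else mid - adj

def net_pnl(entry_mid, exit_mid, units, spread, slippage, commission):
    """Profit on a long round trip after spread, slippage, and commission."""
    buy = fill_price(entry_mid, "buy", spread, slippage)
    sell = fill_price(exit_mid, "sell", spread, slippage)
    return (sell - buy) * units - commission

# Hypothetical costs: 0.02 spread, 0.01 slippage per fill, 2.00 round-trip commission.
pnl = net_pnl(entry_mid=100.00, exit_mid=101.00, units=10,
              spread=0.02, slippage=0.01, commission=2.00)
# Raw move is 1.00 per unit; costs reduce the net result to 7.60.
```

Even modest per-fill costs compound quickly for high-frequency strategies, which is why a backtest that ignores them can look profitable while the same strategy loses money live.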

Limitations of Backtesting

Past performance does not guarantee future results. Backtesting is a valuable tool for strategy development, but it has inherent limitations that every trader should understand before deploying capital.

No backtesting engine can perfectly replicate live market conditions. Be aware of the following limitations:

  • Slippage approximation — Real slippage varies with market conditions, order size, and liquidity. The fixed or average slippage used in backtesting is an approximation.
  • Partial fills — In live markets, large orders may be partially filled at different prices. Backtesting assumes complete fills at a single price.
  • Market impact — Your orders can move the market, especially in less liquid instruments. Backtesting does not account for the price impact of your own trades.
  • Liquidity gaps — During high-volatility events, liquidity can disappear entirely. Backtesting may assume fills at prices that would not have been available in reality.
  • Data quality — Even real tick data may contain gaps, errors, or stale quotes. While Arconomy applies data quality filters, no historical dataset is perfect.
  • Survivorship bias — Historical instrument lists may not include delisted or bankrupt instruments, creating an upward bias in results.
  • Overfitting — Optimising parameters too aggressively against historical data can produce strategies that perform well in backtests but fail in live trading. See Configurable Parameters for guidance on avoiding overfitting.

Treat backtest results as one input in your decision-making process, not as a prediction of future performance. Combine backtesting with paper trading, out-of-sample testing, and sound risk management practices.
