Configurable Parameters
Configurable parameters let you define variable inputs in your strategy that can be adjusted between backtest runs without rebuilding the strategy from scratch. Combined with parameter sweeps, they provide a systematic way to explore how different settings affect your strategy's performance.
What Are Configurable Parameters?
Every rule on your canvas has settings that control its behaviour — a moving average period, a stop-loss percentage, an RSI threshold. By default, these values are fixed. When you mark a parameter as configurable, it becomes a variable that can be changed at backtest time without opening the strategy editor.
This is useful when you want to answer questions like:
- Does a 20-period moving average work better than a 50-period moving average for this strategy?
- What stop-loss percentage produces the best risk-adjusted returns?
- Is the strategy sensitive to the RSI overbought threshold, or does it perform similarly across a range of values?
Marking a Parameter as Configurable
In the Strategy Designer, click on any rule to open its configuration panel. Next to each numeric or select parameter, you will see a toggle to mark it as configurable. When enabled, the parameter appears in the backtest configuration panel, where you can set its value (or a range of values for parameter sweeps) before each run.
// Example: Making MA period and stop-loss configurable
{
  "rules": [
    {
      "type": "ma_crossover",
      "params": {
        "fast_period": { "value": 20, "configurable": true, "min": 5, "max": 200 },
        "slow_period": { "value": 50, "configurable": true, "min": 10, "max": 500 }
      }
    },
    {
      "type": "stop_loss",
      "params": {
        "percentage": { "value": 2.0, "configurable": true, "min": 0.5, "max": 10.0, "step": 0.5 }
      }
    }
  ]
}
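To illustrate how a schema like this behaves at backtest time, here is a minimal sketch of the resolution logic. The field names (`value`, `configurable`, `min`, `max`) come from the example above; the `resolve_param` helper itself is hypothetical, not part of the platform's API.

```python
# Sketch: resolving one parameter spec at backtest time.
# Assumption: overrides are only honoured for configurable parameters,
# and are clamped into the declared [min, max] range.

def resolve_param(spec: dict, override=None):
    """Return the value to use for a single parameter."""
    if override is None or not spec.get("configurable", False):
        return spec["value"]             # fixed value, or no override supplied
    lo = spec.get("min", float("-inf"))
    hi = spec.get("max", float("inf"))
    return max(lo, min(hi, override))    # clamp the override into [min, max]

fast = {"value": 20, "configurable": True, "min": 5, "max": 200}
print(resolve_param(fast))         # 20  (default used)
print(resolve_param(fast, 35))     # 35  (override accepted)
print(resolve_param(fast, 1000))   # 200 (clamped to max)
```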
Parameter Sweeps and Optimisation Workflows
A parameter sweep runs your strategy multiple times, each time with a different combination of parameter values. The engine systematically tests every combination within the ranges you define and presents the results in a comparative format.
Setting Up a Sweep
For each configurable parameter, specify:
- Start value — The lowest value to test.
- End value — The highest value to test.
- Step size — The increment between each tested value.
The engine computes the total number of combinations and estimates the processing time before you start the sweep. For example, testing a fast MA period from 10 to 50 (step 5) and a slow MA period from 30 to 100 (step 10) produces 9 x 8 = 72 combinations.
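The combination count from the example above can be reproduced with a few lines of Python. Both ranges are inclusive of their end value, which is what yields the 9 x 8 = 72 figure:

```python
# Sketch: enumerating the sweep grid for the example above
# (fast MA 10-50 step 5, slow MA 30-100 step 10).
from itertools import product

fast_periods = range(10, 51, 5)     # 10, 15, ..., 50 -> 9 values
slow_periods = range(30, 101, 10)   # 30, 40, ..., 100 -> 8 values

combos = list(product(fast_periods, slow_periods))
print(len(combos))  # 72
```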
Interpreting Sweep Results
Sweep results are displayed as a grid or heatmap, with each cell representing one parameter combination and its key performance metrics. You can sort by any metric (net profit, Sharpe ratio, max drawdown) to identify the best-performing configurations.
Look for parameter plateaus — regions in the heatmap where performance is consistently good across a range of values. A strategy that performs well only at a single exact parameter value is fragile. A strategy that performs well across a wide range of parameter values is robust.
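One simple way to make the plateau idea concrete is to score each heatmap cell by the average metric over its 3x3 neighbourhood, so a cell only scores well if its neighbours do too. This is an illustrative sketch, not the platform's scoring method, and the grid values are made up:

```python
# Sketch: neighbourhood-averaged robustness score for a sweep heatmap.
# An isolated spike is dragged down by weak neighbours; a plateau is not.
def neighbourhood_mean(grid, r, c):
    vals = [grid[i][j]
            for i in range(max(0, r - 1), min(len(grid), r + 2))
            for j in range(max(0, c - 1), min(len(grid[0]), c + 2))]
    return sum(vals) / len(vals)

sharpe = [
    [0.2, 0.3, 0.2, 0.1],
    [0.3, 1.8, 0.2, 0.1],   # isolated spike at (1, 1): fragile
    [0.2, 0.3, 0.9, 1.0],
    [0.1, 0.2, 1.0, 1.1],   # cluster of good cells: plateau
]
print(round(neighbourhood_mean(sharpe, 1, 1), 2))  # 0.49 -> spike penalised
print(round(neighbourhood_mean(sharpe, 3, 3), 2))  # 1.0  -> plateau holds up
```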
Avoiding Overfitting
Parameter optimisation is a double-edged sword. While it helps you find the best settings for your strategy, aggressive optimisation can lead to overfitting — a state where the strategy is so finely tuned to historical data that it fails to perform on new, unseen data.
If a strategy only works with very specific parameters, it is likely overfitted. A robust strategy should produce positive results across a reasonable range of parameter values — not just at one magic number.
Walk-Forward Testing
Walk-forward testing is the gold standard for validating optimised parameters. The process works as follows:
- Split the data — Divide your data into an in-sample (IS) portion and an out-of-sample (OOS) portion. A common split is 70% IS and 30% OOS.
- Optimise on IS data — Run your parameter sweep on the in-sample data to find the best parameter values.
- Validate on OOS data — Run the strategy with the optimised parameters on the out-of-sample data. This data was not used during optimisation, so it provides an unbiased estimate of future performance.
- Roll forward — Advance the IS and OOS windows by a fixed step and repeat. This produces multiple IS/OOS pairs, each providing an independent validation.
If the strategy performs well on the out-of-sample data consistently across multiple walk-forward windows, you can have greater confidence that the parameters are not overfitted.
// Walk-forward configuration example
{
  "walk_forward": {
    "enabled": true,
    "in_sample_pct": 70,
    "out_of_sample_pct": 30,
    "step_forward": "3M",
    "windows": 8
  }
}
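The rolling windows this kind of configuration describes can be sketched in a few lines. Bars are represented here as integer indices rather than timestamps, and the `walk_forward_windows` helper is illustrative, not a platform function:

```python
# Sketch: generating rolling IS/OOS windows in the spirit of the
# walk_forward config above (70/30 split inside each window, fixed step).
def walk_forward_windows(n_bars, window, step, is_pct=70):
    is_len = window * is_pct // 100
    windows = []
    start = 0
    while start + window <= n_bars:
        windows.append({
            "in_sample": (start, start + is_len),               # optimise here
            "out_of_sample": (start + is_len, start + window),  # validate here
        })
        start += step
    return windows

wins = walk_forward_windows(n_bars=1000, window=500, step=100)
print(len(wins))                  # 6 rolling windows
print(wins[0]["in_sample"])       # (0, 350)
print(wins[0]["out_of_sample"])   # (350, 500)
```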
Out-of-Sample Testing
Even without full walk-forward analysis, you can protect against overfitting by reserving a portion of your data for out-of-sample testing:
- Never optimise on your full date range. Always hold back a segment — typically the most recent 20-30% — that you do not touch during parameter exploration.
- Run your final strategy on the held-out data. If performance degrades significantly compared to the optimised period, the strategy is likely overfitted.
- Do not go back and re-optimise. Once you have tested on the out-of-sample data, treat the results as final. Repeatedly optimising and retesting on the same data defeats the purpose of out-of-sample validation.
Signs of Overfitting
Watch for these red flags during parameter optimisation:
- Sharp performance cliffs — Performance drops dramatically when a parameter changes by a small amount (e.g., MA period 23 works great, but 22 and 24 produce losses).
- Too many parameters — The more parameters you optimise, the higher the risk of overfitting. Limit configurable parameters to the most meaningful ones.
- Unrealistic returns — If your optimised strategy shows returns that seem too good to be true, they probably are. Compare against realistic benchmarks.
- Low trade count — Optimised parameters that produce very few trades may have found a handful of lucky trades rather than a genuine edge.
- IS/OOS divergence — A large gap between in-sample and out-of-sample performance is a strong indicator of overfitting.
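The IS/OOS divergence flag above can be quantified with a simple retention ratio. The 0.5 threshold here is an illustrative assumption, not a platform rule: if out-of-sample performance keeps less than half of the in-sample figure, treat the result as a likely overfit.

```python
# Sketch: how much of the in-sample Sharpe ratio survives out of sample.
# A retention ratio well below ~0.5 is a strong overfitting signal.
def oos_retention(is_sharpe, oos_sharpe):
    return oos_sharpe / is_sharpe if is_sharpe > 0 else float("nan")

print(round(oos_retention(1.6, 1.2), 2))  # 0.75 -> acceptable retention
print(round(oos_retention(2.4, 0.6), 2))  # 0.25 -> strong overfit signal
```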
Optimisation should be used to confirm that a strategy is robust across a range of parameters, not to find the single best-performing configuration. The goal is to find parameters that work well enough across many market conditions, not parameters that work perfectly on one specific historical period.
Best Practices
- Start with your trading thesis. Choose parameter values based on a logical rationale first, then use sweeps to validate that small variations do not destroy the edge.
- Keep the number of configurable parameters small. Two or three parameters are usually sufficient. Each additional parameter multiplies the search space and increases the risk of overfitting.
- Use coarse step sizes first. Start with large steps to identify promising regions, then narrow down with finer steps around those regions.
- Always validate with out-of-sample data. Never deploy a strategy based solely on in-sample optimised results.
- Document your rationale. Record why you chose specific parameter values and what your walk-forward results showed. This makes it easier to revisit and refine strategies later.
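The coarse-then-fine practice from the list above can be sketched as a two-stage search. The `score` function here is a toy stand-in for a real backtest metric:

```python
# Sketch of coarse-to-fine parameter search: a wide grid with big steps
# locates the promising region, then a narrow grid with small steps
# refines it. score() is illustrative, not a real backtest.
def score(period):                  # toy objective: peaks at period 42
    return -abs(period - 42)

coarse = range(10, 101, 10)                            # big steps, full range
best_coarse = max(coarse, key=score)
fine = range(best_coarse - 10, best_coarse + 11, 2)    # small steps, near best
best_fine = max(fine, key=score)
print(best_coarse, best_fine)  # 40 42
```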