Why you must differentiate between an analytical COA Wargame and training wargames

The diagram and explanatory text on the What is Wargaming page show the different types, or categories, of wargame. Further discussion can also be found elsewhere on the LBS Blog: see ‘There are many different types of wargame, and no "one size fits all" solution’. This Blog entry discusses in more detail the main distinctions between analytical Course of Action (COA) wargaming and training wargames (educational wargames are, for brevity, grouped here under ‘training’).

The first obvious difference is the aim of each type of wargame. The aim of a COA Wargame is to identify the risks and issues in a developing plan for subsequent analysis. Whether the COA Wargame takes place with multiple COAs still open, comparing and judging them for validity, or once a single COA has been selected and is being refined, the process and mechanics are the same. Identified risks (areas of uncertainty) will lead to a COA being discarded or to contingency plans for branches and sequels being developed. Identified issues will lead to the rejection of the COA or to improvements to the selected plan. This helps decision makers make better real-world decisions. The aim of a training wargame will depend on the event’s Training Objectives (TOs), which can be diverse; the overall aim, however, is to make commanders and/or their staff better decision makers.

These are very different aims that can be served – indeed, have to be served – in very different ways. The first table below explains key characteristic differences. The second table explains the necessary differences in approach and the impact these have on the wargame and any supporting simulation.

Analytical COA Wargame: Can be used to examine one complete COA from start to finish; discrete vignettes from one selected COA; and/or several COAs for comparative purposes. The ability to replay a refined COA or vignette is often required.
Training Wargame: Tends to play one scenario through to a point where the TOs have been achieved, although one or more Time Jumps to a different point can occur. There is seldom any requirement to replay part of the exercise.

Analytical COA Wargame: Contemporary Operating Environment Force (COEFOR) actions tend to be pre-determined, usually adopting most-likely or worst-case behaviours. COEFOR is usually controlled by the team conducting the analysis.
Training Wargame: COEFOR actions are determined as play progresses to ensure the correct level of pressure on trainees and to steer the event to achieve the TOs. COEFOR actions are usually dictated by a Game Controller depending on how events unfold.

Analytical COA Wargame: Preparation and execution must be utterly scientific and rigorous. Processes to identify, capture and measure elements such as Measures of Performance/Effectiveness, metrics and data must all be logical and robust, and ensure that outcomes are quantifiable.
Training Wargame: While there is no suggestion that a training wargame can be anything other than rigorously planned, there is an element of ‘art’ involved. This is akin to Rommel’s Fingerspitzengefühl; a good wargame designer will sense that a certain action will, or will not, lead to a successful training outcome.

From these basic characteristics, a number of differences in approach become evident: in planning the wargame, in selecting or writing any supporting simulation software, and in executing the event.

Analytical COA Wargame: Event outcomes must be highly verified and validated. They must be as true to life as possible (or at least to the extent required to base a real-world decision on them with confidence). They must be transparent, logical and understandable.
Training Wargame: Event outcomes need not be realistic so long as they enable achievement of the TOs. Although desirable, they do not necessarily need to be transparent; they can come from a ‘black box’. They just have to be reasonable, actionable and traceable (for AAR purposes).
Impact: Entirely different levels of verification and validation are required. Also, COA Wargame outcomes, and those of any supporting simulation, must be transparent, while training wargame outcomes need not be.

Analytical COA Wargame: Outcomes are deterministic (i.e. results are not random), or random results are smoothed by repeated runs to determine the mean result.
Training Wargame: Outcomes are stochastic (i.e. determined by chance). Although some training wargames feature an adjudication process to smooth results from the ends of the distribution curve, many allow these outcomes to occur, replicating the chance (luck) inherent in real operations.
Impact: This is a fundamental difference in the design of a wargame and any supporting simulation.

Analytical COA Wargame: Repeatability underpins most analytical wargames. A COA Wargame often requires the replay of a complete scenario or selected vignettes after variables have been adjusted. Furthermore, in order to achieve confidence in the outcome, a large number of repeated executions is desirable (the Monte Carlo approach).
Training Wargame: The same training audience will seldom repeat a wargame. They will execute it, derive lessons, conduct an AAR and move on.
Impact: COA Wargames must be repeatable, either in the entirety of the COA(s) or in vignettes within a COA. The ability to ‘turn back time’ and to speed up time is fundamental. Training wargames are seldom repeated in the same event (although they can, of course, be used as the basis for subsequent events). Some use faster-than-real-time to ‘fast forward’ to a different point in an operation.

Analytical COA Wargame: The set-up time can be long. This is to ensure that all variables and behaviours have been correctly identified and pre-considered. If this is not done properly then outcomes and findings can be invalid.
Training Wargame: Set-up time is as short as possible. This is to maximise the time spent by the training audience actually training.
Impact: Many elements of the scenario in a COA Wargame are determined as part of the COA Wargame preparation. Training wargames are usually based on a pre-written ‘menu’ of scenarios, vignettes, MEL/MIL etc, which are selected for use as appropriate.
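The contrast between accepting single chance outcomes and smoothing them by repeated runs (the Monte Carlo approach) can be sketched in a few lines. The toy engagement model below, with its hit probability and shot count, is an invented placeholder rather than anything drawn from a real simulation:

```python
import random
import statistics

def simulate_engagement(hit_prob=0.6, shots=10, seed=None):
    """One stochastic run: count hits out of a number of shots.
    hit_prob and shots are illustrative placeholders, not doctrine.
    A training wargame might let this single, chancy result stand."""
    rng = random.Random(seed)
    return sum(1 for _ in range(shots) if rng.random() < hit_prob)

def monte_carlo(runs=10_000):
    """Repeat the stochastic run many times and report the mean and
    spread, as an analytical COA wargame would, rather than basing
    a real-world decision on one lucky (or unlucky) outcome."""
    results = [simulate_engagement(seed=i) for i in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

mean, stdev = monte_carlo()
print(f"mean hits: {mean:.2f}, spread: {stdev:.2f}")
```

A single run may return anything from zero hits to ten; only the repeated executions converge on a stable mean that can support a confident real-world decision, which is why repeatability and fast run times matter so much more to the analytical case.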
