The key basic definitions that the M&S industry should agree on are listed below. They are fundamental to communication within the M&S and professional wargaming fields, and deliberately exclude the industry's more technical terms. Agreeing them requires full working group action with the requisite Subject Matter Experts in attendance, circulation around the industry for validation and verification, agreement and then publication. Publication should be through the Simulation Interoperability Standards Organization (SISO) and/or as a NATO Standardisation Agreement (STANAG), a Five Nations Agreement, an ABCA standard, etc.
You will find an article on the Resources page and a Case Study that go into more detail and suggest definitions, but the basic terms that require agreement are:
Setting and Scenario
Simulation, Stimulation, Model and Representation
You might be surprised at this list, thinking that the industry is sufficiently mature to have agreed definitions for these basic terms. The point needs reiterating: LBS sees experts constantly miscommunicating even when using these apparently fundamental words. Maybe you'll read this and think 'rubbish'. Go and ask two or three colleagues to write down what they think 'scenario' or 'wargame' means – the diversity of their answers will surprise you!
The situation is even worse when it comes to distinctions. The critical pairs of terms that anyone involved in professional wargaming must be able to tell apart are:
Validation versus Verification versus Accreditation
Training versus Education
‘A War’ versus ‘The War’ (and ‘Future War’)
Training Wargame versus Analytical Wargame
LBS sees these terms being used either interchangeably or downright erroneously all the time; to do so risks designing and delivering a poor wargame (whatever one of those is!).
This article provides a list of all the common abbreviations and acronyms that the professional wargamer needs. Let us know if any are missing and we'll add them (Download here).
Erdal Çayirci and Dušan Marinčič have recently published Computer Assisted Exercises (CAX) and Training – A Reference Guide. This explains how CAX are designed and delivered from a NATO perspective, and is structured as an educational course. It is an excellent book and, having been published in September 2009, describes current NATO best practice. Erdal is Chief of the CAX Support Branch at the NATO Joint Warfare Centre and knows more than anyone else I have met in NATO circles about CAX.
Paste the ISBN 978-0-470-41229-9 into Amazon or any other book retailer to find it. It’s expensive at about £60 ($100) even for a used copy, so maybe only for the committed professional wargamer!
The ABCA Interoperability Gap Analysis Study (IGAS) was an analytical event held in Australia during 2006.
The approach taken was based on Seminar Wargaming (SWG). The aim of a SWG is defined by the UK’s Dstl as being ‘to promote structured discussion between experts in several fields and to elicit opinions and judgements from them, and to increase understanding.’
The ABCA IGAS used the following method:
1. Introduction. The facilitator explained the scenario, methodology and scoring system. The scenario was based on events during the Iraq War and the 2nd Battle of Fallujah.
2. Vignette brief. A specific vignette, derived from the scenario, was briefed. Each vignette was designed to draw out as many interoperability issues as possible e.g. cross boundary casevac, the forward passage of lines of a reserve, coordination of fires etc.
3. Discussion. Syndicates discussed pre-determined questions based on the vignette just briefed. There were 6 syndicates, each facilitated by analysis staff intimately familiar with the vignette, its branches and sequels, and the potential interoperability gaps arising from it. Discussions were encouraged to be wide-ranging so as to capture as many issues as possible. Participants were drawn from all nations and military arms. They were expected to lead on their own area of expertise and to introduce as many interoperability gaps as possible, explaining the significance of each. Each syndicate would then draw up a list of its top 10 interoperability gaps and their significance.
4. Plenary. Central plenary sessions were held, chaired by the lead facilitator. Each syndicate briefed the others on their findings and reasoning. This was designed to give individuals as great an insight as possible into each potential interoperability gap.
5. Formal data capture. All participants were asked to subjectively score each identified interoperability gap in categories including: impact; likelihood of occurrence; and ease of mitigation.
6. Repetition of steps 2–5 for each vignette. There were 10 separate vignettes. Each cycle took approximately 3 hours, so 3 vignettes were analysed each day.
7. Analysis and recording. Various analytical methods were used to sift and rank the identified interoperability gaps. This took the entire second week. The output was a table showing 50 well-defined interoperability gaps, their significance and ranking according to the judgement of expert participants. This formed the basis of ABCA interoperability activities for the next 2 years.
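The scoring and sifting described in steps 5 and 7 can be sketched in code. The snippet below is purely illustrative: the gap names, scores and the significance formula are invented for demonstration, and IGAS's actual analytical methods were more sophisticated than this.

```python
# Illustrative sketch: aggregating subjective scores for interoperability
# gaps and ranking them. All data and the weighting scheme are invented.
from statistics import mean

# Each participant scores each gap 1-5 in three categories.
scores = {
    "Cross-boundary casevac": {
        "impact":     [5, 4, 5, 4],
        "likelihood": [4, 4, 3, 5],
        "mitigation": [2, 3, 2, 2],   # ease of mitigation (1 = hard, 5 = easy)
    },
    "Coordination of fires": {
        "impact":     [4, 5, 4, 4],
        "likelihood": [3, 3, 4, 3],
        "mitigation": [3, 4, 3, 3],
    },
}

def significance(gap_scores):
    """One simple way to combine categories: high impact and likelihood,
    and low ease of mitigation, make a gap more significant."""
    return (mean(gap_scores["impact"])
            * mean(gap_scores["likelihood"])
            / mean(gap_scores["mitigation"]))

# Rank the gaps by descending significance, as in the final output table.
ranking = sorted(scores, key=lambda g: significance(scores[g]), reverse=True)
for rank, gap in enumerate(ranking, start=1):
    print(f"{rank}. {gap}: significance {significance(scores[gap]):.1f}")
```

The point of writing it down, even crudely, is that the combination rule (multiply? weight? threshold?) must be agreed before the event, not improvised during analysis.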
Observations and lessons arising:
1. SWG is a qualitative technique. It is therefore useful for conducting a preliminary sift of options, but lacks the resolution to compare broadly similar ones. However, a well-run SWG increases the chances of getting the subsequent analysis right.
2. SWGs can:
– Identify key issues in a plan or operating model, promote understanding, confirm assumptions and stimulate potential collaboration between stakeholders;
– Prompt discussion of concepts of operation of novel systems;
– Elicit opinions on the relative merits of widely differing system concepts;
– Focus and narrow the scope of a study;
– Suggest measures of effectiveness for subsequent quantitative analysis;
3. SWGs are not suitable for:
– Providing any form of quantitative measure of effectiveness;
– Comparing generally similar systems (e.g. one type of tank vs another);
– Representing C4ISR systems;
4. Because SWGs lack a combat simulation system, operational CIS and a structured battle rhythm, most people think that it is possible to 'make things up as they go' during one, and that they are no more than a BOGSAT (Bunch of Guys Sat Around a Table). This is absolutely wrong; SWGs often need more rigorous preparation and planning than other types of wargame precisely because there are few external stimuli other than people's imaginations. During IGAS the selection of the scenario and preparation of vignettes was key to drawing out interoperability gaps.
5. This prior preparation and planning must include rehearsals with all facilitators and analysts.
6. Success depends on having the right people participate. This includes a strong Red and White Cell; both are often overlooked. During IGAS, the Red Cell ensured that frictions were introduced that drew out interoperability gaps.
7. SWG strengths are:
– They are rapid.
– They are relatively inexpensive.
– They require relatively few resources.
– They are good at capturing knowledge from a wide variety of sources.
– Ideas from ‘outside the box’ can be considered.
– Real-time statistical testing can examine the alignment of SME sub-groups, thus allowing differences of opinion to be investigated immediately.
8. SWG weaknesses are:
– Results are totally panel dependent; people think they know how events will unfold – but are they right?
– They are open to abuse through misconceptions.
– Perceptions can be formed that prove difficult to change.
– They can be anti-innovation, since they are based upon knowledge and past experience of panel members.
– They are open to the panel being ‘led’ by poor questioning and preparation (including items such as scenarios, ORBATs, timeframe selection etc).
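The 'real-time statistical testing' of SME alignment mentioned in the strengths above can be as simple as a concordance measure over each sub-group's rankings. The sketch below uses Kendall's coefficient of concordance (W); the rankings are invented, and this is just one possible measure, not necessarily what any particular study used.

```python
# Illustrative sketch: Kendall's coefficient of concordance (W) measures
# how closely a sub-group of SMEs agree in their rankings (1 = total
# agreement, 0 = none). All rankings below are invented.

def kendalls_w(rankings):
    """rankings: one rank list per rater, each ranking the same n items
    (1 = most significant). Assumes no tied ranks."""
    m = len(rankings)            # number of raters
    n = len(rankings[0])         # number of items ranked
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Four SMEs rank five interoperability gaps identically:
aligned = [[1, 2, 3, 4, 5]] * 4
print(kendalls_w(aligned))                   # 1.0 – perfect agreement

# Two SMEs with exactly opposite views:
split = [[1, 2, 3, 4, 5], [5, 4, 3, 2, 1]]
print(kendalls_w(split))                     # 0.0 – no agreement
```

A low W for a sub-group flags a difference of opinion the facilitator can investigate on the spot, rather than discovering it during post-event analysis.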