Lecture by Neil F. Byrne
The three main reasons to model war situations are:
With this in mind, Tactical Training Groups offer present-day simulations of present-day capabilities. The analysis of the results of such a simulation leads back to the three reasons above. However, the Navy must be careful with that analysis, because Navy missions change. For example, NAVTAG was a manual simulation that was automated from 1982 to 1989 across 247 systems at a cost of $32 million. However, it modeled only US-Soviet superpower confrontations, not our current "third-world, meals-on-wheels missions." Junior officers would play it at night and ask their COs the next day why they had been "waxed."

Is it a valid model? Well, Navy sims are the purview of the civil bureaucracy, which treats naval officers as ventriloquist dummies. But the rules do change, and the data is there to update the simulations, if the non-naval types recognize the need. For example, in the realism-versus-training trade-off, anti-submarine warfare sims should teach all four stages of the mission: detect, identify, locate, and kill. The most important step is the first, which is also the most time consuming. With limited training time, the sims "give 'em the detect," defeating the purpose of the training.

As for Officer Tactical Training, which focuses on the decision-makers who have to know "what's that and what to do about it," the courses suffer from inadequate facilities and insufficient length. The USN gives 16 weeks; the Royal Navy gives 51. And it gets even worse. AEGIS training should happen once a week, but the best it gets is once a month and the average is once per quarter. And much of the "training" is really an assessment of "how much you suck."

Getting back to garbage in, garbage out: realism needs improvement in the current crop of sims. In the case of ASW ranges, 30 years of SHAREM exercises show the fleet doesn't detect subs anywhere near the theoretical rate. Part of the problem lies with weapon prototype data: the contractor supplies data from hand-built "optimum" prototypes--and production runs NEVER equal the optimum data. Part also falls on "human error," where misses aren't counted as misses but as "non-runs" that must be done over. In combat, it's still a miss!

As for data validity, or the lack thereof, the big Navy Warfare Tactical Database is 30% complete and riddled with inaccuracies. For example, the flight characteristics of a maneuvering F-18 fighter are modeled the same as those of a C-5 cargo plane. How about the Enhanced Naval Warfare Gaming System? Another inaccurate mass of data: run the system and we sink 25 Soviet subs in 24 hours! Compare that to actual fleet actions and you see a 100:1 error rate. Then there's the $1 billion Joint Simulation System Maritime, which has nobody to double-check the data. We have amphibious combat systems that can't kill time, yet in training runs they often score hits and kills.

Navy sims need to get better, and it should start with the data. There's money available, but it needs to be spent on data integrity, and that means more than just hotwiring systems together.
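The kind of data-integrity check the lecture is calling for can be sketched in a few lines: compare what a simulation claims against what exercises actually produce, and flag any model that overstates reality beyond a tolerance. The sketch below is purely illustrative; the function, data names, and the 2:1 tolerance are assumptions, and only the 25-subs-in-24-hours figure and the 100:1 comparison echo the lecture.

```python
# Minimal sketch of a simulation-vs-exercise sanity check.
# All names and thresholds here are hypothetical, not from the lecture.

SIM_RESULTS = {
    # scenario: (simulated kills per 24 h, observed kills per 24 h in exercises)
    "sub_hunt_scenario": (25, 0.25),  # sim sinks 25 subs/day; exercises suggest far fewer
    "asw_range_detects": (40, 12),
}

MAX_ALLOWED_RATIO = 2.0  # assumed tolerance: sim may not overstate results by more than 2:1


def validate(results, max_ratio=MAX_ALLOWED_RATIO):
    """Return (name, ratio) pairs for scenarios whose sim output overstates observed data."""
    flagged = []
    for name, (simulated, observed) in results.items():
        if observed <= 0:
            # No real-world baseline at all: cannot validate, so flag for review.
            flagged.append((name, float("inf")))
            continue
        ratio = simulated / observed
        if ratio > max_ratio:
            flagged.append((name, ratio))
    return flagged


if __name__ == "__main__":
    for name, ratio in validate(SIM_RESULTS):
        print(f"{name}: simulation overstates exercise results by about {ratio:.0f}:1")
```

Run against the numbers above, the first scenario comes out at roughly 100:1, which is the scale of discrepancy the lecture describes between the gaming system and actual fleet actions; the point is that the check is cheap once someone is actually tasked with keeping the underlying data honest.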