Introduction
To deal with the ‘anarchy’ of information from different sources, and in particular to assess the value that could be given to different pieces of information, the military, intelligence services, and law enforcement agencies use so-called grading or evaluation systems. Most of these systems distinguish between a value for the reliability of the source (usually expressed as a capital letter) and a value for the credibility of the information (usually expressed as a number). The resulting two-character index is then used as a condensed measure to communicate the value of that piece of information.
I have not come across widespread use of grading systems by private organisations, even though evaluation is a necessary step before any further analysis if the value of information from open sources is to be understood correctly. In a series of blog posts, of which this is the first, I will dive into the world of information grading in intelligence analysis to see how it could benefit OSINT work.
Before looking into the different grading systems, as well as the criticism that has been voiced against them, in this first post I will dive into the history of the so-called Admiralty System, from which most information grading systems currently in use (in Western countries) originate.
Where it began
The first recorded use of an information grading system can be found in the practices of the British Admiralty’s Naval Intelligence Division (NID). When the new Director of Naval Intelligence (DNI), John Godfrey, arrived in 1939, on the brink of the Second World War, he found the anarchy of information reports without a clear indication of their value ‘intolerable’ (see D. McLachlan 1968, ‘Room 39’, p. 22). He therefore set out to introduce a method that would briefly and clearly show the value of a report and, if necessary, evaluate the information it contained.
A simple system was devised in which the source and the information itself were evaluated separately, coding the evaluation with letters and numbers from “A1” to “D5”. The letter indicates the degree of reliability of the source, and the number represents the probability that the information is correct. The reason for separating source evaluation from the evaluation of the information itself was that valuable information may come from a source with a bad reputation and, conversely, disinformation may come from a source that is usually reliable. This method of grading information has since been used in different variations and is known as the ‘Admiralty System’ or ‘Admiralty Code’.
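To make the two independent judgements concrete, here is a minimal Python sketch of such a grade. The class and constant names are my own, and the scales simply follow the “A1” to “D5” range described above; this is an illustration, not any official standard:

```python
from dataclasses import dataclass

# Letter scale for the source, number scale for the information,
# following the original "A1" to "D5" range. Names are illustrative.
RELIABILITY = "ABCD"   # A = most reliable source, D = least reliable
CREDIBILITY = "12345"  # 1 = most credible information, 5 = least credible

@dataclass(frozen=True)
class Grade:
    reliability: str  # assessed for the source
    credibility: str  # assessed for the information itself

    def __post_init__(self):
        if self.reliability not in RELIABILITY:
            raise ValueError(f"unknown reliability grade: {self.reliability!r}")
        if self.credibility not in CREDIBILITY:
            raise ValueError(f"unknown credibility grade: {self.credibility!r}")

    @property
    def code(self) -> str:
        # The two independent judgements collapse into the familiar
        # two-character index, e.g. "B2".
        return self.reliability + self.credibility

# The two axes can move independently: improbable information from a
# good source, or credible information from a disreputable one.
print(Grade("A", "5").code)  # -> "A5"
print(Grade("D", "1").code)  # -> "D1"
```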
Source evaluation
To show how the evaluation of a source works, McLachlan quotes in his book (p. 23) an officer involved in devising the system:
“A good source in some hospital in Brest might be graded “A” as to the number killed and wounded in an air attack, but might be a “C” source on the extent of mechanical damage caused to a ship in the dockyard. An equally good source in the Port’s Chaplain office who had seen the ships defect list, might be “A” on damage caused, but “C” on the number killed and wounded. A very junior engine-room rating taken prisoner and speaking in good faith may be an “A” or “B” source on the particulars of the engine of which he is in charge, but will be “C”, “D”, or even “E” on the intended area of operations of the U-boat in which he served.”
This quote shows two important implications to be taken into account when evaluating a source. Firstly, ‘reliability’ covers more than just the truthfulness of a source; it should also take into account the source’s competence on the subject of the information. Even though competence is often included when these grading systems are described, in practice – especially in environments where disinformation is abundant, such as open sources – the focus is more often than not solely on the perceived truthfulness of the source.
Secondly, the example also makes very clear that the same source can be evaluated differently depending on the subject matter. Again, this may seem obvious; in practice, however, I often observe a tendency to assign a static evaluation score to a source, rather than a dynamic one that takes its competence on the subject matter into account.
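As a minimal sketch of what such a dynamic evaluation could look like, the snippet below treats reliability as a property of a (source, subject) pair rather than of the source alone. The source names and subjects are illustrative, loosely following McLachlan’s Brest example:

```python
# Reliability keyed on (source, subject) rather than on the source alone.
source_reliability = {
    ("hospital_source", "casualties"): "A",
    ("hospital_source", "ship_damage"): "C",
    ("chaplains_office", "ship_damage"): "A",
    ("chaplains_office", "casualties"): "C",
}

def grade_source(source: str, subject: str) -> str:
    # Fall back to "reliability cannot be judged" (F in the current
    # NATO scale) when the source has no track record on this subject.
    return source_reliability.get((source, subject), "F")

print(grade_source("hospital_source", "casualties"))   # -> "A"
print(grade_source("hospital_source", "ship_damage"))  # -> "C"
print(grade_source("hospital_source", "u_boat_ops"))   # -> "F"
```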
Information evaluation
In the original grading system as devised at the Admiralty, the source was graded by the collector of the information, while the final grading of the information was given within the Naval Intelligence Division. As McLachlan (p. 23) quotes the officer involved in devising the system:
“The authority responsible for grading – on whose discrimination, honesty and integrity the whole system depends – was under an obligation not to suppress evidence merely because he did not understand or because it looked improbable, unless he had a good reason for disbelieving it. Low grade information may prove to be of the highest importance; for example, the first reports of V1s and V2s were received in this form.”
Assigning a value to the credibility of information is a difficult task that generally can only be performed adequately by a subject matter expert. Current grading systems based on the Admiralty Code often rate information higher when it is consistent with information from other sources. That could, of course, result in a systemic confirmation bias in which weak signals receive a lower grading than is justified. McLachlan (p. 23) gives us an example:
“A special problem of grading might arise from language used. A very accurate report might be made about a weapon e.g. a flying bomb – giving correct dimensions, method of propulsion, range and so on – but the source might call it a ‘Torpedo with wings’. When the report was circulated, Director of Torpedoes and Mining in the Admiralty might say that to talk of a torpedo with wings and a range of 140 km was nonsense. The report might then be discredited, despite the fact that everything except the name and purpose was absolutely correct.”
In a later blog post I will dive into the intricacies of information evaluation and the criticism that has been voiced against the way information is evaluated under the Admiralty System.
Current use
In essence, the grading system devised 80 years ago is still in use in one form or another in Western militaries, intelligence services and law enforcement. For NATO it is codified in the current standard STANAG 2511 (Allied Joint Doctrine for Intelligence Procedures, AJP-2.1, Edition B). Although the grading system is often still referred to as the Admiralty System, it has evolved: the current NATO standard includes six grades for the reliability of the source (A-F) and six grades to indicate the credibility of the information (1-6).
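For reference, here is a sketch of the two six-point scales as they are commonly tabulated; the descriptor wording is paraphrased and varies slightly between editions and national implementations:

```python
# The two six-point scales of the current NATO standard. Descriptors are
# paraphrased from commonly published tables; exact wording varies.
SOURCE_RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

INFORMATION_CREDIBILITY = {
    "1": "Confirmed by other sources",
    "2": "Probably true",
    "3": "Possibly true",
    "4": "Doubtful",
    "5": "Improbable",
    "6": "Truth cannot be judged",
}

def describe(code: str) -> str:
    # Expand a two-character grade such as "B2" into its verbal form.
    letter, number = code[0], code[1]
    return f"{SOURCE_RELIABILITY[letter]} source, {INFORMATION_CREDIBILITY[number].lower()}"

print(describe("B2"))  # -> "Usually reliable source, probably true"
```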
This standard has been transposed into different national standards, such as the UK Ministry of Defence joint doctrine publication ‘Understanding and Intelligence Support to Joint Operations’.
The US Army, in its Field Manual for Human Intelligence Collector Operations (FM 2-22.3), presents the following grading standard:
Source reliability: A (reliable), B (usually reliable), C (fairly reliable), D (not usually reliable), E (unreliable), F (cannot be judged).
Information content: 1 (confirmed), 2 (probably true), 3 (possibly true), 4 (doubtful), 5 (improbable), 6 (cannot be judged).
There are small differences in implementation, which I will discuss in a later blog post.
Conclusion
Reading McLachlan on the origin of the Admiralty Code, and on its wartime use to enhance the intelligence production of the British Navy, shows that lessons can still be learned from history. Similar challenges related to the value of information exist today, in particular in the ‘information anarchy’ we call open sources. Having a (somewhat) standardised method of evaluating pieces of information could be very helpful, if only to force analysts to give the value of their ingredients sufficient thought. Whether the Admiralty Code is the best system for that is something I will explore in upcoming blog posts.
(Photo credit to @SuzyHazelwood)