Nuances in the application of the Admiralty Code

Recently I had a discussion within an organisation on the evaluation of open sources and the information collected from them. During the discussion I realised that my previous blogposts on this topic were insufficiently nuanced in the application of the Admiralty Code methodology. In particular, the separation of the evaluation of the source and of the information deserves more attention. That separation cannot be applied as absolutely as I might have suggested, because there are overlapping attributes. In this blogpost I provide some further thoughts on the matter.

To that end I will first revisit the original idea behind the Admiralty Code methodology and the importance of looking separately at the reliability of the source and at the credibility of the information. Thereafter I show why source and information cannot be evaluated completely separately from each other, and I point out the relevant overlapping attributes.

The original idea

The Admiralty Code concept uses two distinct scales, one for source reliability and one for information credibility. After all, valuable information may come from a source with a tarnished reputation and, conversely, disinformation may come from a source that is usually reliable. From its conception in 1939 the methodology not only uses two scales, but also suggests (or actually presupposes) a separation in the evaluation process itself. This is done to avoid cross-contamination, in which the judgement of the credibility of the information could influence the judgement of the reliability of the source, or the other way around.
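To make the two-scale idea concrete, here is a minimal sketch in Python. The scale descriptors follow the common NATO-style wording of the A–F and 1–6 scales; the class and field names are my own and merely illustrative. The point is that the two ratings live side by side, with neither derived from the other:

```python
from dataclasses import dataclass
from enum import Enum

# Common NATO-style descriptors for the Admiralty Code scales.
class SourceReliability(Enum):
    A = "Completely reliable"
    B = "Usually reliable"
    C = "Fairly reliable"
    D = "Not usually reliable"
    E = "Unreliable"
    F = "Reliability cannot be judged"

class InformationCredibility(Enum):
    ONE = "Confirmed by other sources"
    TWO = "Probably true"
    THREE = "Possibly true"
    FOUR = "Doubtful"
    FIVE = "Improbable"
    SIX = "Truth cannot be judged"

@dataclass
class Evaluation:
    """Two independent ratings; neither field is computed from the other."""
    source_reliability: SourceReliability
    information_credibility: InformationCredibility

# A usually reliable source can still deliver doubtful information: a "B4" rating.
report = Evaluation(SourceReliability.B, InformationCredibility.FOUR)
```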

Originally in the British Admiralty, the source was evaluated by the collector of the information, while the information was evaluated by specialists within the Naval Intelligence Division. Another reason for this division was that evaluating the credibility of information often requires a subject matter expert, while the handler of the source – the methodology was originally devised for HUMINT sources – was better placed to evaluate the reliability of that source.

And there are good reasons to separate the two evaluations, as highlighted by Baker, McKendry and Mace (1968) in their research for the US Army, in which they found a strong correlation between the source reliability rating and the information credibility rating in army field reports. In fact, 87 percent of the ratings fell along the diagonal A1, B2, C3 etc., which according to Baker et al. implies that the two scales are not independent (1968: 13). The question, though, is whether the scales themselves are not independent, or whether the two scales were simply not applied independently. I believe the latter is the more likely explanation.
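To illustrate what "falling along the diagonal" means, here is a small sketch with made-up ratings; the data below is purely hypothetical, while the 87 percent figure itself of course came from the real field reports Baker et al. studied:

```python
# Hypothetical combined ratings, purely to illustrate the diagonal effect.
ratings = ["A1", "B2", "B2", "C3", "A1", "B3", "C3", "D4"]

# The diagonal: each reliability grade paired with its "matching" credibility grade.
DIAGONAL = {"A1", "B2", "C3", "D4", "E5", "F6"}

share = sum(r in DIAGONAL for r in ratings) / len(ratings)
print(f"{share:.0%} of the ratings lie on the diagonal")
# A share this high suggests the two scales were not applied independently.
```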

Of course, for those who have read Thinking, Fast and Slow by Daniel Kahneman, it should not come as a surprise that the human brain cannot be trusted and – if not forced into System 2 thinking – will take the ‘easy’ path, likely applying the heuristic that a reliable source produces credible information. That could very well be the cause of the findings by Baker et al.

Nonetheless, a number of elements in the evaluation of source and information cannot be assessed in perfect isolation, as there are overlapping attributes.

Overlapping attributes

The first post on the subject of source and information evaluation explained that it is important to give a source a dynamic evaluation score, in which the competence of the source on the subject matter is also taken into account. For example, your old uncle, who may be a competent source on local politics in his city, is likely blissfully ignorant of contemporary geopolitical issues surrounding the Arctic. His source evaluation score would therefore depend on the subject matter.
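A minimal sketch of such a dynamic, subject-dependent score could look as follows; the sources, topics and grades are of course made up for the example:

```python
# Reliability is scored per (source, topic) pair rather than per source alone.
source_scores = {
    ("uncle", "local politics"):     "B",  # competent on his own city
    ("uncle", "Arctic geopolitics"): "E",  # blissfully ignorant here
}

def reliability(source: str, topic: str) -> str:
    # Fall back to F ("reliability cannot be judged") without a track record on the topic.
    return source_scores.get((source, topic), "F")

print(reliability("uncle", "local politics"))      # B
print(reliability("uncle", "Arctic geopolitics"))  # E
print(reliability("uncle", "vaccines"))            # F
```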

The inclusion of the competence attribute, however, already reveals that it is not possible to maintain Chinese walls between the evaluation of the source and that of the information. After all, how would one be able to evaluate the competence of a source while completely unaware of the type of information the source provides? Understanding not only the subject matter, but also the level of detail in the information, as well as its conceptual correctness, is directly linked to the competence of the source.

In open sources this is particularly visible when we attempt to determine the competence of the journalist who wrote a certain article. Only when the source evaluator knows what information the article contains is it possible to determine the competence of the journalist on that subject. The depth and detail of the information also play a role: an article on the basics of vaccines, for example, requires less profound expertise than an article discussing in detail the characteristics of the spike proteins of coronaviruses.

A second attribute in the evaluation of the source, for which at least some understanding of the information provided is necessary, is the access that the source has or had to that information. Although access often goes hand in hand with competence, it should be a separate element in the evaluation.

For evaluating the source we would need to know exactly what information the source provided in order to ask the question: “Is it realistic that the source had access to that specific information?” A recently published old KGB manual also shows that Soviet intelligence deployed a similar methodology to evaluate the reliability of the source:

Информационная Работа в Разведке (Information Work in Intelligence), p. 18

Loosely translated it reads: “Based on the capabilities of the source of information, his personal qualities, the operational officer can conclude whether the source could have received such information, whether he could talk with this or that official on this issue, whether the source has a position in the information flow on the matter. All this will allow the intelligence officer to get an accurate idea of the reliability of the material.”

Interestingly, the manual shows that the KGB methodology explicitly focusses on determining the reliability of the information, yet does so by using source attributes in that determination. That approach deviates from (a strict application of) the Admiralty Code methodology, but it is not completely wrong.

The third attribute of a source that is directly linked to the information provided is its history of trustworthiness. This attribute by definition cannot be determined without understanding the credibility of the information previously provided by the source.
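As an illustration of that dependency, a simple sketch could derive a source’s track record from the credibility ratings of its earlier reporting. The thresholds below are entirely my own invention and merely show the feedback loop from information credibility back into source reliability:

```python
from statistics import mean

def track_record(past_credibility: list[int]) -> str:
    """Map past credibility ratings (1 = best, 6 = 'cannot be judged') to a grade."""
    judged = [c for c in past_credibility if c < 6]  # ignore "cannot be judged"
    if not judged:
        return "F"  # no usable history: reliability cannot be judged
    avg = mean(judged)
    if avg <= 1.5:
        return "A"
    if avg <= 2.5:
        return "B"
    if avg <= 3.5:
        return "C"
    return "D"

print(track_record([1, 2, 2, 1]))  # "A": consistently credible information
print(track_record([4, 5, 3]))     # "D": a history of doubtful reporting
```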

Lastly, one could argue that the motivation of a source may be an overlapping attribute, as the motivation can only be understood when one knows exactly what information has been provided by the source. The devil here is in the details (and the timing).

Conclusion

In sum, in spite of the understandable principle of evaluating source and information separately, in both theory and practice there are clear and necessary linkages between the evaluation of the source and that of the information it provides. The reliability of a source cannot be fully established without some knowledge of the information provided, and the source evaluation score is therefore to some extent dependent on that information.

This realisation could lead to an interesting debate on the cascade of consequences when older information provided by a source suddenly turns out to be false. I will, however, leave that debate for another blogpost. Thank you for reading, and feel free to contact me with questions.