JSP 936 - a sign of things to come?

Headline image from JSP 936

Introduction

I was recently pleased to see that the UK Ministry of Defence has published a Joint Service Publication (JSP) on Dependable Artificial Intelligence.

This is - to my awareness at least - the first non-strategic publication on the subject. As the JSP discusses, there have been other publications on the subject of AI to date, but most of these have been big-picture strategy works (e.g. Ambitious, Safe, Responsible or the Defence AI Strategy) rather than answering the core question of "what do we do with all this AI stuff, anyway?"

Positioning

JSPs occupy a space akin to regulation for UK Armed Forces personnel, and their reach naturally extends to the wider mixed civilian and military organisation of the Ministry of Defence and to its arm's-length procurement body, Defence Equipment & Support (DE&S).

As a result, this isn't chiefly a document for consumption by industry, although there are clearly key elements that industry will increasingly need to address (e.g. many of the design-level requirements).

As the Ministry and its associated bodies grow more familiar with AI technology and with the capability of industry, we can reasonably expect to see a Defence Standard written specifically on the subject of safe (and secure) AI development. More on that later.

Headlines

My first draft of this blog was enormous, unwieldy and excessive. I've pared it back to the punchiest points.

Scope

The JSP is interestingly positioned: it reflects an interest not just in AI technologies as they might apply to defence materiel (e.g. munitions, command & control systems, reconnaissance capabilities) but also in considerations such as the use of AI in Human Resources decision-making.

Relationship to existing approaches

There is a specific call-out to Defence Standard 00-055 (my favourite) and to the fact that most ML-based technologies are going to be - fundamentally - software, and will require similar assurance. I'm glad to see this, as it reflects a significant maturity of thinking. The alternative - giving AI a wholly new approach and discarding the hard-fought lessons learned over the past ~60-70 years of software engineering and software safety - would be pretty questionable.

There's significant coverage of the legal and ethical responsibilities involved in developing and applying AI - it occupies 12 of the document's 41 pages. This is also pleasing to see: it reflects a clear up-front commitment to addressing the common worries many people have with respect to handing over increasing degrees of control to autonomous or ML-based solutions. My only criticism here is that the AI Ethical Risk Rating appears to rest on a framework that is not public at this time; a shame, but understandable.

Development requirements

Getting into the aspects I feel actually qualified to discuss: there is significant commonality between Defence Standard 00-055's overall requirements on the development of software and JSP 936's expectations for the development of AI. Some key points:

  1. The adoption of established terminology within the field like Operational Design Domain (ODD) is fantastic and reduces the barrier to entry.
  2. The articulated need for hazard analysis and requirements decomposition is greatly appreciated, as is the fact that the JSP recognises the challenge of maintaining traceability between e.g. neural network weights and the higher- and lower-level requirements coming out of the system safety process.
  3. I'm glad there isn't any sort of lifecycle being mandated here, as it's likely most developments wouldn't fit into a single prescribed approach.

Outstanding challenges

What the JSP does not say is also interesting: the lack of international standards on "what good looks like" for safe AI development is one of the major challenges that both design and acceptance organisations are wrestling with. It's also a multi-dimensional challenge, covering questions such as:

  • What does Explainable AI look like?
  • What does a good development lifecycle for AI look like?
  • What metrics suggest an AI or ML-based development has actually been successful? What about safe? Are they the same metrics?

Assurance is challenging, particularly when there is no publicly agreed best practice against which to assess a given safety or compliance claim. That said, I can't hold this against the JSP authors, as it's a broader industry, academia and government challenge. The lack of standards does mean that safety arguments, claims, etc. - as well as the associated challenge and assurance activity - will mostly be bespoke and handled case-by-case.

I would expect that a significant body of effort will go into gaining approval for use of any AI-based solutions in the UK defence environment, at least until those standards do eventually become available.

Bringing it back together

I think my main headline is that I'm really glad to see the Ministry thinking these subjects through, and I think JSP 936 is a fantastic first step. I'd definitely recommend anyone interested in this space gives it a read to understand what the UK Defence environment is thinking on the subject.