Artificial Intelligence (AI)
What is it? How does it work? Can we afford it? Can we afford not to pursue it? What will it do?
These questions, amongst others, reverberate within and between governments globally, generating a direction of thought which manifests in policy terms as: this is a ‘must have’. This overarching set of questions not only excites politicians about the potential of next-generation capabilities but also raises the spectre of adversarial advantage being won or lost today through the decision of whether or not to develop what is, in reality, ones and zeros: coded algorithms and software.
This electronic arms race is now driving the possibility of machine-versus-machine combat, and not just in the form of autonomous defensive or offensive machines replacing human-controlled platforms and systems, but more widely in the thinking and reacting space across all aspects of Defence and Security. Cognitive intelligence is difficult to benchmark beyond each individual task the intelligent human or machine faces; there is no single, coherent measure of ability, nor of the subsequent analysis of that ability. Therefore, we need a coherent understanding of what we want to develop and apply AI to, as only then will the conditions be set for focused development and application of the technology.
The question of what AI should do for air and space power in the UK is an extension of the broader question of what AI should do for humanity, albeit with the caveat that whatever AI UK air and space power pursues must be more intelligent than our adversaries’, be they today’s or tomorrow’s, and must operate within the framework of the UK’s interpretation of its international legal obligations. That the military need to embrace new technology to achieve advantage is not new, and the race for machine intelligence began almost with the birth of computing. However, one interesting aspect of this recent accelerated development is engagement with the major ethical and legal questions raised by humanity’s relationship with other-than-human intelligence.
So why pursue AI? In much the same way as humans learn all the time, computers can now take the pre-loaded program they start with and change it, improve it, arm it with knowledge and keep adding to that knowledge base: in simplistic terms, the machine learns. To be more accurate, the software (the algorithm) changes, and this change increases the number and type of input variables it can handle. The number and type of output variables should not increase in sympathy; quite the contrary, they should decrease. This is the key to AI today, or as it should be known, Machine Learning (ML), and it leads to capabilities that are explicable and repeatable, and that therefore engender appropriate trust.
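To make that distinction concrete, the minimal Python sketch below (using the scikit-learn library, with an entirely hypothetical stand-in dataset and labels) shows the shape of the idea: many, and growing, input variables; a small, fixed set of outputs; and software that changes through re-training on new data rather than through re-programming.

```python
# A minimal sketch of supervised machine learning: the algorithm's
# parameters change as data accumulates, while the set of possible
# outputs stays small and fixed. Data and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Many input variables per observation (e.g. speed, altitude, heading,
# emitter characteristics...); only three possible outputs.
N_FEATURES = 20
LABELS = ["friendly", "neutral", "hostile"]  # small, fixed output set

# Stand-in training data; in practice this would be curated sensor data.
X_train = rng.normal(size=(500, N_FEATURES))
y_train = rng.choice(LABELS, size=500)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# As new observations arrive, the model is re-fitted on the enlarged
# dataset: the software "changes" through its learned parameters,
# not through hand-written rules.
X_new = rng.normal(size=(50, N_FEATURES))
y_new = rng.choice(LABELS, size=50)
model.fit(np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))

print(model.predict(X_new[:5]))  # outputs drawn only from LABELS
```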
The key is understanding the data, because humans give up trying to understand and assimilate information once they reach their individual cognitive overload threshold. The ML must keep learning: sifting the data to find the information needed to understand, decide and act, then repeating this in a never-ending cycle of performance improvement, all the while acting within a legal and ethical framework that is internationally understood and accepted.
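That never-ending cycle can be sketched as an incremental-learning loop. The sketch below assumes a hypothetical stream of sensor observations; scikit-learn’s SGDClassifier supports exactly this style of step-by-step updating via its partial_fit method.

```python
# A sketch of the sift -> understand -> decide -> act -> learn cycle
# as an incremental learning loop. The data stream is hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=1)
LABELS = np.array(["friendly", "neutral", "hostile"])

model = SGDClassifier()
first_batch = True

for step in range(100):  # stands in for a continuous operational feed
    # Sift: take the latest batch of raw observations.
    X_batch = rng.normal(size=(32, 20))
    y_batch = rng.choice(LABELS, size=32)  # ground truth, once known

    # Understand and decide: classify the batch before learning from it.
    if not first_batch:
        decisions = model.predict(X_batch)
        # Act: decisions would be passed to a human or downstream
        # system here, inside the agreed legal and ethical framework.

    # Learn: update the model on the new batch, then repeat the cycle.
    model.partial_fit(X_batch, y_batch, classes=LABELS)
    first_batch = False
```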
Some thoughts on Artificial General Intelligence (AGI)
Fully cognitive Artificial General Intelligence (AGI) could be described as a non-human system with the ability to react appropriately to unknown inputs, that is, inputs it has not been trained on. Based on the latest scientific and academic thought, AGI is considered unachievable until at least the 2050 to 2060 timeframe, and potentially never. Why never? Because intelligence is defined by the human perception of what intelligence is; any future AGI system may not fit that perception, and therefore a cognitive equivalent to a human may be unachievable.
Even if development does reach maturity within the above timeframe, the result is anticipated to be, at best, the intellectual equivalent of a nine- to ten-year-old child educated in a developed country, albeit without the capricious nature of its human counterpart. This might indicate limited success; however, both the development timeframe and the achievable level of cognitive ability may well improve with widespread investment in AI technologies over the next decade.
The development of AGI is not implicitly tied to the ongoing development of quantum computing, and beyond one or two organisations the industrial appetite to invest heavily to gain the quantum compute advantage has yet to surface. Current systems are expensive, in most cases costing multiple millions of pounds, and they presently embody neither a SWaP (Size, Weight and Power) footprint nor a cooling system compatible with installation onboard air or space platforms. Importantly, current systems offer quantum computing power, measured in qubits, of around 5,000 qubits or fewer, and it is believed that AGI at even fairly low levels of general cognitive ability would require 100,000+ qubits. Therefore, while qubit-driven computer development may accelerate over the coming years, as yet there is no sign of this becoming the norm.
The development of traditional computing is, of course, outstripping quantum in terms of computational power and desirable SWaP, within a more attractive associated cost model. Moore’s Law is an empirical relationship, an observation of gains from experience in production, and it has long been applied as a predictive model of the annual growth of computing power. That model fell out of step with reality with the widespread availability of Graphics Processing Unit (GPU) and Field Programmable Gate Array (FPGA) processing, enabled through enhanced semiconductor construction. The availability of GPU and FPGA processing is driving an exponential, rather than merely linear, increase in computing performance. In the near future the same may also be true of quantum systems and their development.
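For reference, Moore’s Law in its usual form, capability doubling roughly every two years, is a simple compounding rule. The sketch below is purely illustrative; the starting value and horizon are arbitrary assumptions, not measured figures.

```python
# Moore's Law as a simple compounding rule: capability doubles every
# DOUBLING_PERIOD years. Starting value and horizon are illustrative.
DOUBLING_PERIOD = 2.0  # years per doubling (the classic formulation)

def projected(capability_now: float, years_ahead: float) -> float:
    """Project capability forward under a fixed doubling period."""
    return capability_now * 2 ** (years_ahead / DOUBLING_PERIOD)

# A tenfold increase takes roughly 2 * log2(10), i.e. about 6.6 years.
print(projected(1.0, 10))  # ~32x after a decade of steady doubling
```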
This somewhat bleak picture of the potential for AGI should not cloud judgement on the pursuit of ML as a capability, nor should the output of some of Hollywood’s most creative filmmakers and their painting of an even darker picture of AGI reaching singularity: the ability to think for itself and therefore act independently of human oversight or control. The Hollywood dystopian vision with which we have all become familiar leads to the destruction of the human race, or enslavement as batteries in the case of the Matrix film series, in which most of the human populace is physically and mentally pacified within a virtual world created and managed by an ever-self-developing artificial intelligence. Against this populist image, there is a clear need to engage with the issue of what we want the future of AI to look like and to focus on achieving it.
Read blog two here