As an aside, I was reminded that it is only recently that the term ‘Artificial General Intelligence’ (AGI) has become associated primarily or exclusively with the cognitive aspect of intelligence. The original vision articulated by Marvin Minsky (in 1970, predicting it within a few years) was that “we will have a machine with the general intelligence of an average human being - it will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight”. This is in stark contrast to the recent statement by Demis Hassabis that “AGI is a system that should be able to do pretty much any cognitive task that humans can do”, which focuses solely on the cognitive aspect of intelligence.
So, if we accept the above definition, we now have a reasonable basis for assessing the ability of machines or computers to possess or acquire human intelligence: the essential definition of ‘artificial intelligence’. I must admit that, until recently, I was swayed by the logic that a machine could compete with, or outperform, a human on cognitive tasks, but would be relatively inferior in terms of emotional, practical or creative intelligence, as it would lack the requisite human experience and abilities that underpin these intelligences. However, as Harari points out, there are numerous cases where AI systems like ChatGPT or DALL-E have demonstrated emotional or creative intelligence that matches or exceeds that of their human counterparts. And robots such as Spot (developed by Boston Dynamics) have equally demonstrated human or super-human physical and practical capabilities.
But the more salient point is not whether machines can possess the same level of intelligence in each domain as a human, but rather whether they can successfully emulate a human in each of these capacities, so as to be effectively indistinguishable from a human in every respect - a sort of generalized Turing test.
And the answer, in all probability, must be ‘yes’: this will indeed be the case. That is not to say that these machines will possess ‘AGI’, as they will not possess the full complement of human intelligences; rather, they will possess what Harari calls ‘alien intelligence’, which I will designate as ‘ai’[1], and which we can define as an intelligence that is able to successfully emulate human intelligence, while potentially also exhibiting non-human intelligence and capabilities.
One of the key attributes of this ‘ai’ is that, owing to its non-human/alien methodology and construction, its analysis and decision-making algorithms will be inherently impenetrable to human comprehension. So, in effect, we will face the alarming prospect of being unable either to distinguish an ai from a human or to understand its reasoning. And therein lies the essential peril: we will not be able to control or manage the influence of these ai’s, as we won’t even be able to detect their presence or methods, or understand their goals. This is an undeniably bleak conjecture, but one that need not come to pass if we apply the appropriate measures. The conundrum is elegantly described by Paul Siemens in a recent article, in which he posits that we need to define a new ontological entity - one that is neither human, nor a God, nor just a ‘thing’ - in order to make progress towards understanding the role that ai could or should play in our future.