
Reflections: To live well with ai technology, transparency is the key

Living Well With Technology Thought-Leadership Series
Dr Marcus Weldon

Chair, Advisory Council, Digital Futures Institute

24 September 2024

In the first of the Living Well With Technology Thought-Leadership Series, actor, author and comedian Stephen Fry delivered the lecture AI: The means to an end or a means to the end? Dr Marcus Weldon, Chair of the Advisory Council of the Digital Futures Institute at King's, details his thoughts on some of the themes of the lecture.

Over the past week, I have been reflecting on both the extraordinary lecture given by Stephen Fry at King's and the equally remarkable new opus, Nexus, by Yuval Noah Harari, published in the same week. The serendipitous confluence of these two works has crystallized a wide variety of thoughts on the nature of intelligence (including what we should really mean by ‘Artificial General Intelligence’), and on how we should distinguish and blend human intelligence and machine intelligence in a way that allows appropriate control but also the manifest assistance that we seek.

In short, how to balance peril and potential.

It is helpful to start by defining what we mean by ‘intelligence’. I, like many others of late, have been pondering and researching this question, and it seems that there is a reasonable consensus that intelligence comprises three essential components:

  1. The ability to process inputs from multiple sources (and senses)
  2. The ability to glean an essential understanding or derive a perspective
  3. The ability to produce original output(s), i.e. a decision, judgement, artifact, scenario/hypothesis, etc., of value

Of these, the first and third are necessary but not sufficient; it is the second ability that allows the condition of ‘sufficient’ to be met, as it is the one that moves us beyond the so-called ‘Chinese Room’ scenario, which merely maps inputs to outputs in a mechanical way. Furthermore, the word ‘original’ to describe the output is also key, as it is invariably true that the more original the output, the more intelligence we ascribe.

In his Inner Cosmos podcast series, David Eagleman also highlights factors such as the ability to balance exploration (of diverse ideas, new concepts) with exploitation (selection of best hypotheses for action/implementation) by suppressing what he calls ‘distractors’, and I think these are well encapsulated by the second and third components of intelligence above.

But, given this general definition of intelligence, it is important to then recognize that there are multiple intelligences that comprise human intelligence.

Generation by DALL-E of the multiple aspects of human intelligence. Image: OpenAI — DALL-E generation prompted by Dr Marcus Weldon

Humans use multiple methods to make any judgement, some of which are classically analytical (acquiring knowledge and applying logic or heuristics to that knowledge), and some of which are more sensory-input based (fear- and desire-based, social, and aesthetic systems). Various theories of intelligence exist (Sternberg, Gardner, etc.), but there is an essential common core, which can be summarized as four key components:

i) Cognitive Intelligence: The ability to analytically reason to develop intellectual/rational outcomes
ii) Emotional Intelligence: The ability to utilize human emotions (one’s own and others’) to develop interpersonal outcomes (and communal belief systems/moral codes)
iii) Practical Intelligence: The ability to utilize physical experiences to develop novel practical outcomes
iv) Creative Intelligence: The ability to utilize aesthetic appreciation and imagination to develop artistic outcomes

Each intelligence has varying degrees of nature versus nurture, and clearly also varies between individuals. But any concept of general intelligence cannot be confined to just type i), which always seems to be the implicit primary association; it has to span all four.

As an aside, I was reminded of the fact that it is only recently that the term ‘Artificial General Intelligence’ (AGI) has been associated primarily or exclusively with the cognitive aspect; the original definition by Marvin Minsky was that (by 1970) “we will have a machine with the general intelligence of an average human being - it will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight”. This is in stark contrast to the recent statement by Demis Hassabis that “AGI is a system that should be able to do pretty much any cognitive task that humans can do”, which focusses solely on the cognitive aspect of intelligence.

So, if we accept the above definition, we now have a reasonable basis for assessing the ability of machines or computers to possess or acquire human intelligence: the essential definition of ‘artificial intelligence’. I must admit that, until recently, I was swayed by the logic that a machine could compete with, or out-perform, a human on cognitive tasks, but would be relatively inferior in terms of emotional, practical or creative intelligence, as it would lack the requisite human experience and abilities that underpin these intelligences. However, as Harari points out, there are numerous cases where AI systems like ChatGPT or DALL-E have demonstrated emotional or creative intelligence that meets or exceeds that of their human counterparts. And robots such as Spot (developed by Boston Dynamics) have equally demonstrated human or super-human physical or practical capabilities.

But the more salient point is not whether machines can possess the same level of intelligence in each domain as a human, but rather whether they can successfully emulate a human in each capacity, so as to be effectively indistinguishable from a human in every aspect - a sort of generalized Turing test.

And the answer must, with all probability, be ‘yes’: this will indeed be the case. That is not to say that these machines will possess ‘AGI’, as they will not possess the full complement of human intelligences; rather, they will possess what Harari calls ‘alien intelligence’, which I will designate ‘ai’[1] and define as an intelligence that is able to successfully emulate human intelligence, in addition to potentially exhibiting non-human intelligence and capabilities.

One of the key attributes of this ‘ai’ is that, due to its non-human/alien methodology and construction, its analysis and decision-making algorithms will be inherently impenetrable or unfathomable to human comprehension. So, in effect, we will face the alarming prospect of being unable either to distinguish an ai from a human or to understand its reasoning. And therein lies the essential peril: we will not be able to control or manage the influence of these ai systems, as we won’t even be able to detect their presence or methods, or understand their goals. This is an undeniably bleak conjecture, but one that need not come to pass if we apply the appropriate measures. This conundrum is elegantly described by Paul Siemens in a recent article, in which he posits that we need to define a new ontological entity - one that is neither human, nor a God, nor just a ‘thing’ - in order to make progress towards understanding the role that ai could or should play in our future.

Generation by DALL-E of humans and machines operating in harmony. Image: OpenAI — DALL-E generation prompted by Dr Marcus Weldon

I have spent a good deal of time reading different thoughts on elemental solutions, such as those proposed by Harari in Nexus, and engaging with creative thinkers on the subject like Stephen Fry, Paolo Benanti, Cassie Kozyrkov, and David Eagleman, and I think there is an emerging consensus that the nexus of the way forward is to require transparency.

‘Transparency’ in this context has multiple dimensions; the base set that might allow us to navigate the alien landscape (or quagmire) is, in my view, the following (a speculative sketch of how such disclosures might be encoded is given after the list):

· Transparency of Identification: An ai system must identify itself as such when queried by a human.
· Transparency of Validity: An ai system must reveal its target goal(s), the origin(s) of the training data set(s) used, and verification of the correctness (‘truth’) of its answers on known test sets.
· Transparency of Influence: An ai system that curates collective human experience (for example, by recommending content or by moderating or participating in communications or communities) must reveal its presence and curating principle, e.g. the prior ‘maximum engagement’ edict of social media platforms.
· Transparency of Benevolence: An ai system must guarantee that it will not use data that is gleaned, mined, or provided to the material detriment of a human, e.g. inferred medical diagnoses or legal status will not be used in any way beyond the specific context.
· Transparency of Explanation: An ai system must provide an explanation of its essential logic in a narrative (not algorithmic) form that is understandable by humans, e.g. “the recommendation/answer is based on primary factors x and y that apply to your specific context/query, compared to the general population. The following alternative answers are more probable if different factors, such as a and b, are more relevant.”
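
To make the five laws concrete, here is a minimal, purely illustrative sketch of how an ai system might expose such disclosures programmatically. Everything in it - the TransparencyDisclosure record, its field names, and the example values - is a hypothetical construction of mine, not an existing standard or API:

```python
# Hypothetical sketch: the five Transparency Laws as a machine-readable
# disclosure record. All names and values are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class TransparencyDisclosure:
    """A disclosure an 'ai' system could return when queried by a human."""

    # Transparency of Identification: the system declares itself non-human.
    is_ai: bool = True
    # Transparency of Validity: goals, training-data origins, test accuracy.
    target_goals: list[str] = field(default_factory=list)
    training_data_origins: list[str] = field(default_factory=list)
    test_set_accuracy: float | None = None
    # Transparency of Influence: whether and how it curates human experience.
    curates_content: bool = False
    curating_principle: str | None = None
    # Transparency of Benevolence: pledge not to use inferred data beyond
    # the specific context in which it was gleaned, mined, or provided.
    data_use_limited_to_context: bool = True
    # Transparency of Explanation: a narrative, non-algorithmic rationale.
    explanation: str = ""

    def narrative(self) -> str:
        """Render the disclosure in plain language for a human reader."""
        lines = [
            "I am an ai system." if self.is_ai else "I am not an ai system.",
            f"Goals: {', '.join(self.target_goals) or 'undeclared'}.",
            "Training data origins: "
            f"{', '.join(self.training_data_origins) or 'undeclared'}.",
        ]
        if self.test_set_accuracy is not None:
            lines.append(
                f"Verified accuracy on known test sets: {self.test_set_accuracy:.0%}."
            )
        if self.curates_content:
            lines.append(f"Curating principle: {self.curating_principle}.")
        lines.append(
            "Inferred personal data will not be used beyond this context."
            if self.data_use_limited_to_context
            else "Warning: data use is not limited to this context."
        )
        if self.explanation:
            lines.append(f"Reasoning: {self.explanation}")
        return "\n".join(lines)


# Example: a content recommender disclosing itself under the five laws.
print(TransparencyDisclosure(
    target_goals=["recommend relevant articles"],
    training_data_origins=["licensed news archive"],
    test_set_accuracy=0.92,
    curates_content=True,
    curating_principle="relevance to stated interests, not maximum engagement",
    explanation="based on primary factors x and y that apply to your query, "
                "compared to the general population",
).narrative())
```

The point of the sketch is simply that each law maps naturally onto a declarable, inspectable field, with the narrative() method standing in for the requirement that explanations be rendered in human-readable, non-algorithmic form.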

In many ways, these transparency requirements or ‘laws’ are akin to Asimov’s Laws of Robotics, which govern (physical) robot-human interactions. Clearly, compared to the physical or ‘hardware’ harms anticipated by Asimov, the much more subjective ‘software’ harms are more complex to define and protect against, but the above set seems to me to be a good starting point.

Now, returning to Harari’s treatise Nexus: in it he describes chilling scenarios in which, in the absence of such a framework, the fundamental principles that underpin human civilization are in real jeopardy. Specifically, he highlights that, in order for humans to survive and prosper, critical tradeoffs must be made between truth and order, and between power and wisdom, and that the networks formed by information flow play a pivotal role in determining the chosen paths. But there is also a manifest need for strong, intelligent self-checking mechanisms throughout the composite and complex human socio-political panorama, as illustrated in my reworking of Harari’s information flow schema below.

Figure 1) Extension of Harari's socio-political information flow schema, highlighting how networks are formed by, and allow the propagation of, information, which must be turned into knowledge by the application of intelligence; in turn, this knowledge must be intelligently transformed into a balance of truth, order, wisdom and power, with strong self-checking mechanisms throughout in order to ensure the appropriate balance and stability of the resulting regimen. Image: Dr Marcus Weldon

Yuval Harari and Stephen Fry both highlight the suggestion by the philosopher Dan Dennett that, just as counterfeiting protections have been in place for centuries to build and maintain trust in monetary systems, similar anti-fraud mechanisms must be put in place for ai systems that fraudulently (either explicitly or implicitly) pose as human.

Referring to the schema in Figure 1, I see the five Transparency Laws as one way to prevent such fraudulent behavior and to allow humans to safely utilize ai systems to assist with the self-checking equilibria and with the intelligent distillation of knowledge into a rational, stable combination of truth, order, wisdom and power.

Parting thought

It now seems appropriate to return to the question that the Digital Futures Institute poses on how to live well with such technologies, to which my answer would now be: by “using ai technology to augment and complement the complexity of human intelligence, with full transparency and accountability”.

 

[1] I choose to use the lower-case designation ‘ai’ to distinguish this from the conventional use of ‘AI’ to mean artificial human-like intelligence, and in homage to Stephen Fry’s proposal to make matters less confusing in sans-serif fonts (like this one), in which ‘AI’ is indistinguishable from the name ‘Al’.

 
 

----

Living Well With Technology

Thought-Leadership Series

Professor Marion Thain
Director, Digital Futures Institute

This series aims to lead a cross-sector conversation about our digital futures by convening tech leaders, academics, public figures, and policy makers in a series of public events and associated open-access online publications that will provide a tangible and impactful outcome to the annual programme. 

The events will be solutions-focussed, aiming to avoid the yay/nay-technology binary by offering thoughtful and impactful analysis and ideas that both acknowledge the challenges and point the way forward to solutions.

The annual keynote lecture will become the centrepiece of an online publication series that will convene a national and international debate on how we can create a better digital future, bringing together thought leaders from across the sectors to engage in a series of collaborative workshops and create publishable blogs, podcasts, interviews, etc., enabling a rich and active exploration of the issues.

This series is generously supported by MAIS SpA.

In this story

Marcus Weldon

Chair of the Digital Futures Institute Advisory Council