Looking to the future, what other issues could a SAGE AI advise on? Philosophers, psychologists and researchers in the Digital Humanities are now raising concerns about AI systems triggering the human 'anthropomorphic' instinct: the tendency to ascribe rich, human-like mental lives to entities, particularly robots, that exhibit human-like behaviours.
While our interactions with robots promise to be transformative, there is also the potential for serious societal disruption without adequate and intelligent regulation. Could the use of robots to support human carers (e.g., in the under-resourced care sector) be more nourishing, and more acceptable to those being cared for, if their humanoid designs suggest that they genuinely care and experience empathy?
But then how would widespread use of care robots affect the extent to which we humans feel responsible for caring for our elderly? Sex robots, on the other hand, will be designed to simulate arousal and reciprocal attraction. How will treating these humanoid robots as subservient 'sex slaves', while simultaneously regarding them as conscious, affect their owners' capacity to develop respectful sexual relations with other humans, possibly leading to tragedy?
Could a SAGE AI promote and consolidate research that seeks to answer these questions, and, for example, advise regulating against humanoid sex robots while advocating their use in care settings?
Pandora's box is open: ChatGPT-4 and other powerful AI systems have been released, and there is no going back. We must minimise the risk of unexpected consequences, and forearm ourselves against those we can anticipate, while also positioning ourselves to reap the transformative benefits of AI. To do that we need smart regulation. An interdisciplinary understanding of AI and its impact on society needs to be front and centre in our thinking about AI safety and regulation.