AI, confirmation bias and poisoning the internet
Consider one of the significant societal challenges we are already facing: the polarisation of societies into rival “tribes” with increasingly entrenched political and cultural beliefs. Could regulation, informed by a SAGE-AI, have helped mitigate the role that social media’s AI filtering and recommendation algorithms have played in exacerbating our contemporary post-truth, polarised predicament? To answer this question, consider the following interdisciplinary account of how these algorithms effectively operationalise confirmation bias: the human instinct to attend selectively to evidence and opinion that support, and so further entrench, our existing beliefs.
Our distant ancestors lived in small farming communities, in which confirmation bias might have served to entrench each group’s shared tribal beliefs; that is, beliefs relating to values, governance, resource allocation, religion, mythology and so on. The effect would be to strengthen bonds amongst tribal members, and so promote cooperation and a shared resolve to repel incursions from rival groups. We have thus evolved to experience dopamine-mediated rewarding feelings when our tribal beliefs are confirmed.
Our ancestors relied only on each other to mutually reinforce tribal beliefs. With the internet, however, the available information is not only vastly greater, but has the potential to expose us to misinformation and extremist views on an unparalleled scale. Moreover, the “attention economics” of the internet, and of social media platforms such as Facebook in particular, have incentivised filtering this vast repository of online information to maximise our engagement.
Our search and click histories effectively provide a profile of the tribal beliefs that we engage with. Algorithms then selectively feed us more of the same, and the rewarding feelings that accompany the confirmation and reinforcement of our cherished tribal beliefs entice us to spend more time online, increasing our exposure to revenue-generating adverts. Thus, AI filtering and recommendation algorithms are technological incarnations of our innate confirmation bias, selectively feeding, confirming and entrenching our existing opinions.
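The feedback loop described above can be sketched in a few lines of code. The following is a deliberately simplified toy model, not any platform’s actual system: items are assumed to carry a single topic tag, the user “profile” is just a count of topics clicked, and recommendation is simply ranking items by how often their topic has already been engaged with.

```python
from collections import Counter

def update_profile(profile, clicked_topic):
    # Each click strengthens the profile's weight for that topic.
    profile[clicked_topic] += 1
    return profile

def recommend(profile, catalogue, k=3):
    # Rank items by how often the user has already engaged with their
    # topic: the more a topic is clicked, the more of it is shown.
    return sorted(catalogue, key=lambda item: -profile[item["topic"]])[:k]

# Hypothetical catalogue for illustration.
catalogue = [
    {"title": "A", "topic": "politics"},
    {"title": "B", "topic": "sport"},
    {"title": "C", "topic": "politics"},
    {"title": "D", "topic": "science"},
]

# Simulate a user who clicks one political item; the loop then pushes
# political items to the top of the next recommendation, inviting more
# such clicks and further entrenching the profile.
profile = update_profile(Counter(), "politics")
feed = recommend(profile, catalogue, k=2)
print([item["topic"] for item in feed])  # → ['politics', 'politics']
```

Even this crude sketch exhibits the self-reinforcing dynamic: a single click biases every subsequent feed towards the same topic, which in turn generates more clicks on it.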