Data-driven machine learning approaches are trained on vast amounts of data to learn to identify patterns, make predictions and generate content. These are the types of AI system you typically read about in the press, such as generative AI tools like ChatGPT. These systems are usually ‘black boxes’, meaning we can’t tell how a model comes to its decisions or verify its behaviour, and they will learn to replicate whatever biases and societal inequalities are captured in the data they’re trained on.
Symbolic AI systems, on the other hand, hold explicit beliefs about the world and apply explicit reasoning procedures to those beliefs. This explicit model of reasoning makes it easier to see why an AI is reaching a particular decision, which makes symbolic AI well suited to ensuring safety and trust: it more readily supports explanations and guarantees that a system is operating correctly and safely.
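To make that contrast concrete, here is a minimal sketch (in Python, with hypothetical facts and rules invented purely for illustration) of the kind of explicit reasoning a symbolic system performs: beliefs and rules are stated up front, and every conclusion can be traced back to the facts and rules that justified it.

```python
# Explicit beliefs (facts) and explicit rules, applied by forward chaining.
# The facts and rule names are hypothetical, purely for illustration.
facts = {"has_valid_licence", "passed_safety_check"}

rules = [
    # (premises that must all hold, conclusion that then follows)
    ({"has_valid_licence", "passed_safety_check"}, "may_operate"),
    ({"may_operate"}, "schedule_inspection"),
]

def forward_chain(facts, rules):
    """Apply rules until no new conclusions appear, recording which
    premises justified each derived conclusion."""
    derived = set(facts)
    justification = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                justification[conclusion] = premises
                changed = True
    return derived, justification

conclusions, why = forward_chain(facts, rules)
for conclusion, premises in why.items():
    print(f"{conclusion} because {sorted(premises)}")
```

Because the beliefs and rules are explicit, the ‘because’ line printed for each conclusion is a genuine explanation of the decision, something a black-box model cannot offer directly.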
We need to find ways of combining data-driven and symbolic approaches in hybrid AI systems that are scalable and that people can verify and trust. Hybrid AI also has the potential to be more environmentally sustainable, by reducing the need for energy-hungry training on large datasets.
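One way such a combination might look in practice is sketched below. This is a toy illustration under assumed names, not a prescribed architecture: a learned model proposes actions, while an explicit, inspectable rule layer checks each proposal against safety constraints and can explain any override.

```python
def learned_policy(sensor_reading: float) -> str:
    """Stand-in for a trained, black-box model; here just a threshold."""
    return "accelerate" if sensor_reading > 0.5 else "hold"

# Explicit, human-readable safety rules: each returns True when the
# proposed action is acceptable, paired with an explanation.
SAFETY_RULES = [
    (lambda action, state: not (action == "accelerate" and state["obstacle_near"]),
     "never accelerate when an obstacle is near"),
]

def decide(sensor_reading: float, state: dict) -> tuple[str, str]:
    """Combine the learned proposal with symbolic safety checks."""
    proposal = learned_policy(sensor_reading)
    for rule_ok, explanation in SAFETY_RULES:
        if not rule_ok(proposal, state):
            return "hold", f"overridden: {explanation}"
    return proposal, "accepted by all safety rules"

action, reason = decide(0.9, {"obstacle_near": True})
print(action, "-", reason)  # hold - overridden: never accelerate when an obstacle is near
```

The data-driven part handles the messy, pattern-recognition side of the problem, while the symbolic part remains small enough for people to read, verify and trust.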
What we might stand to lose with AI
Large language models like ChatGPT have shown AI's potential to change the way we do things, with people using them to write emails and essays, and companies using them to replace customer service functions.
But what are the consequences of delegating more and more tasks like this to AI? Might we lose the ability to construct well-formed arguments, or to think critically about the world? How will our relationships with other humans be impacted? What will AI mean for the types of work that we do? We must think seriously about these questions now, before it’s too late.
Undeniably, AI has the potential to significantly benefit society, if we get it right. We need to be able to trust that AI will uphold and promote the values that are important to us, like fairness, safety, accountability and privacy. We need to make sure that AI will empower us to tackle the societal challenges we face, not create new ones.