If we are to believe everything we read, there is a good chance that AI will soon have taken over. In fact, by the time you get to the end of this article, it is more than likely that an AI government will have assumed control and that you’ll be living under its dictatorship. Everywhere we look, there seems to be a frenzied mainstream media article intent on spreading hysteria about the potential evils of AI.
My tongue is, of course, firmly in my cheek. There are undoubtedly reasons to be cautious about the rise of AI as it begins to seep into almost every facet of our daily lives. One of the most eye-catching and thought-provoking pieces I recently read on the subject was by Dr Neil Saunders, Senior Lecturer in Mathematics at the University of Greenwich.
In an article for The Conversation, he made the point that while there are reasons for concern around AI, we frequently treat or talk about AIs as if they were human. He goes on to write that ‘Stopping this, and realising what they actually are, could help us maintain a fruitful relationship with the technology.’ It’s a great point, and one that makes perfect sense. So, to find out more about AI’s potential, its possible benefits and its concerns, The Atlantic Dispatch caught up with Dr Neil Saunders.
Why do we as individuals and businesses treat AI like some sort of human that we have no control over?
Saunders: Having computers that act like us and interact with us as humans has been, it seems to me, a long-held dream of humanity. One can see why a business would like to have it, as it would ultimately cut costs and hopefully increase profitability.
I’m not sure people have really thought through the consequences of what happens when these machines have genuine autonomy. I think that’s why we’re seeing so many headlines about AI experts either resigning from their companies, or testifying to Congress that regulation is needed.
What are your thoughts and feelings when it comes to AI technology like ChatGPT? Can you see any potential benefits and/or concerns with it?
Saunders: On the one hand, I think it’s an amazing piece of technology. With all the data that is available and with the added computational power, we’re really seeing what neural nets are capable of. There are great benefits to be had with the technology – for example in medical research: finding cures for cancers and other diseases; also in my field of mathematics, we can use it to start tackling some really hard mathematical problems. But on the other, we always have to worry about the ‘bad actors’ who will (simply because they can) use it for subversion.
You mention in your article how Geoffrey Hinton resigned from his role at Google and warned of the dangers of technology ‘becoming more intelligent than us’, fearing that AI will one day ‘succeed in manipulating people to do what it wants’. What are your thoughts when you hear statements like this? Is the onus not on humans to use AI more responsibly?
Saunders: I think we have to be a little careful with statements like “it will make us do what it wants”. It’s not clear to me that AI is going to develop a mind of its own, replete with goals and desires. But with our tendency to ‘over-attribute’ human-like qualities to AI and to needlessly anthropomorphise it, we can easily get fooled into believing that it has wants and desires and cares for us when it simply doesn’t.
The onus is certainly on us: individuals, corporations and governments to use, regulate, and promote the safe use of AI. This is quite urgent. For example, very few people have been talking about the carbon footprint that training all these large language models entails – it’s huge.
Do you think we will see the introduction of government legislation that limits the use of certain AI?
Saunders: I certainly hope we will. Whether that materialises into effective legislation remains to be seen.
Despite the concerns around AI do you think we are just at the beginning of our journey into exploring its capabilities? Where can you see us going with AI technology from here?
Saunders: This is really difficult to answer. My hope is that AI is specifically targeted at solving really hard problems, say in medical research, and is used to help make progress that genuinely benefits humanity. However, it’s difficult not to see the spectre of a perfect storm: widespread deployment of AI, a lack of government regulation, and huge unanswered questions about what damage this technology is capable of in the wrong hands. Like any advance in technology, we have to think carefully about the benefits and the risks.
With thanks to Dr Neil Saunders, Senior Lecturer in Mathematics at the University of Greenwich.