The term ‘artificial intelligence’ was coined in 1956, but “AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage”.
In essence, AI is “a collection of technologies that can be
used to imitate or even to outperform tasks performed by humans using
machines”. AI is being used in a wide range of applications, from internet search engines to self-teaching programs that learn from experience, such as Google’s DeepMind technology. Now, it seems, “Machines are rapidly taking on ever more challenging cognitive tasks, encroaching on the fundamental ability that sets humans apart as a species: to make complex decisions, to solve problems and, most importantly, to learn”.
The ethics of AI
AI poses some fundamental ethical questions for society. For
example, how should we view the potential for AI to be used in the military
arena? Although there is currently a consensus that “giving robots the agency to kill humans would trample over a red line that should never be crossed”, it should be noted that robots are already present in bomb disposal, mine clearance and anti-missile systems. Some, such as software engineer Ronald Arkin, think that developing ‘ethical robots’ programmed to follow strict ethical codes could be beneficial in the military, if they are programmed never to break rules of combat that humans might flout. Similarly, the potential for increased autonomy and decision-making that AI embodies opens up a moral vacuum that some suggest needs to be addressed by society, governments and legislators, while others argue that a code of ethics for robotics is urgently needed. After all, who would be
responsible for a decision badly made by a machine? The programmer, the
engineer, the owner or the robot itself?
Furthermore, critics say that driverless cars may face situations requiring a split-second decision: either to swerve, possibly killing the passengers, or not to swerve, possibly killing another
road user. How should a machine decide? To what extent should we even allow
machines to decide? Others argue that technology is fundamentally ‘morally neutral’: “The same technology that launched deadly missiles in WWII
brought Neil Armstrong and Buzz Aldrin to the surface of the moon. The
harnessing of nuclear power laid waste to Hiroshima and Nagasaki but it also
provides power to billions without burning fossil fuels”. In this sense, “AI is
another tool and we can use it to make the world a better place, if we wish.”
A threat to humanity?
A brave new world?
For advocates, the advance of AI has the potential to change the world in unimaginable ways, and they largely dismiss warnings about the dangers that it may pose. As Adam Jezard observes: “Such concerns are not new…From the weaving machines of the industrial revolution to the bicycle, mechanisation has prompted concerns that technology will make people redundant or alter society in unsettling ways.” Moreover, supporters ask us to consider the benefits that AI has already brought us, such as speedier fraud detection, which will continue to develop and revolutionise the way we live our lives. In the field of medicine, one commentator posits the increasingly plausible idea of a program that may in future distinguish cancerous tumours from healthy tissue far better than humans can, which would revolutionise healthcare.

Others also criticise arguments that advances in AI signal the end of humanity, arguing that: “After so much talking about the risks of super intelligent machines, it’s time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges.” Perhaps more profoundly, some question why we are so quick to underestimate our abilities as humans and to fear AI. Author Nicholas Carr observes that “Every day we are reminded of the superiority of computers…What we forget is that our machines are built by our own hands”, and that, in fact, “If computers had the ability to be amazed, they’d be amazed by us.”

In addition, fundamental to the pro-AI argument is the idea that technological progress is a good thing in and of itself. Futurist Dominic Basulto summarises this point when he speaks of ‘existential reward’, arguing that “humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential.” From the industrial revolution onwards, we have gradually made our everyday lives easier and safer through innovation, automation and technology. For instance, the advent of driverless vehicles is predicted to drastically reduce the number of road traffic incidents in the future, and: “Machines known as automobiles long ago made horses redundant in the developed world – except riding for a pure leisure pursuit or in sport”.

So, with all of these arguments in mind, are critics right to be wary of the proliferation of AI in our lives, and the ethical and practical problems that it may present humanity in the future? Or should we embrace the technological progress that AI represents, and all of the potential that it has to change our lives for the better?