The Malicious Uses of AI

A look at the troubling implications of artificial intelligence for global governance, and its potential to be used for harm.

In recent years, AI research and technology have expanded vastly and reached new heights. As AI capabilities become more powerful and widespread, so too does the landscape of potential threats and malicious uses of AI in areas such as digital, physical and political security. According to Matthew Jordan, an AI and History of Science instructor at McMaster University, artificial intelligence is what’s known as a “dual-use technology”, which simply means that it can be used for both civilian and military aims (Jordan, 2020). When it comes to the malicious uses of AI, many seem quite worrying to me: authentic-looking spam emails, deceptive ads containing malware, autonomous drones in military and non-military settings, and the targeted spread of misinformation (Jordan, 2020).


Even more frightening is the fact that the threat is not limited to high-tech devices we rarely come into contact with, such as drones used in a military setting. Plenty of devices in our own homes, such as smart fridges, microwaves, doorbells and house-cleaning robots, can be vulnerable to being hacked or held hostage by malware.


The Use of AI in Warfare

Furthermore, the AI arms race between various countries is likely to shift the power dynamics between nations and concentrate global governance in the hands of the countries that are quickest to adopt and develop stronger AI systems. While countries such as China and the United States are leading the AI weapons race, others oppose automated weaponry and raise serious ethical concerns. The question remains, however, whether these countries will nonetheless have to invest in and develop AI warfare, despite their stance, in order to defend themselves from the nations choosing to advance in this field. I strongly believe that active monitoring, ethical guidelines and international policies are needed to prevent AI warfare from expanding beyond its originally intended scope. In the United States, for example, an official Department of Defense directive sets out policy for the development and use of autonomy in weapons; but although such arms control and norm-development processes are critical, they are unlikely to stop motivated non-state actors from conducting attacks (Brundage et al., 2018).


What Can Be Done

It is comforting to know that there are people such as AI security researchers, cybersecurity experts and companies around the world working full-time on these issues. However, more needs to be done: to address the dilemmas surrounding the malicious use of AI, there needs to be more government intervention to establish policies, laws and shared values in the AI landscape.


I believe that raising awareness about the negative outcomes of artificial intelligence, and about how to protect oneself from them, is a solution that can take effect swiftly. Courses need to be taught at all types of institutions to give students the opportunity to learn about and reflect upon the history, ethics, and outcomes of artificial intelligence from all angles. This is already being done at McMaster through the History of AI (INNOVATE 1Z03) course, first introduced in the Winter 2020 term. However, education about AI also needs to be provided on a larger scale, with the help of government initiatives, so that it reaches vulnerable individuals who might not be aware of, or able to afford, the protections that others have. More aware users will be able to spot the telltale signs of certain attacks, such as poorly crafted phishing attempts, and practice better security habits, such as using diverse and complex passwords and two-factor authentication (Brundage et al., 2018). Lastly, there needs to be more transparency and accessible literature around artificial intelligence and machine learning, so that people can understand why an algorithm has made a particular decision, along with clear guidelines on who would be held accountable for undesirable effects.
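To make the idea of “telltale signs” concrete, below is a minimal, purely illustrative Python sketch of the kind of heuristic red-flag checks an aware reader might mentally apply to a suspicious email. The patterns and names here are my own toy assumptions rather than any real spam filter.

import re

# Toy red-flag patterns for spotting poorly crafted phishing emails.
# Purely illustrative: real filters use trained models and far richer signals.
RED_FLAGS = {
    "urgent language": re.compile(r"\b(urgent|immediately|act now|suspended)\b", re.IGNORECASE),
    "generic greeting": re.compile(r"\bdear (customer|user|account holder)\b", re.IGNORECASE),
    "credential request": re.compile(r"\b(verify|confirm) your (password|account|identity)\b", re.IGNORECASE),
    "suspicious link text": re.compile(r"\bclick here\b", re.IGNORECASE),
}

def phishing_red_flags(email_text):
    """Return the names of any red-flag patterns found in the email text."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(email_text)]

sample = ("Dear customer, your account has been suspended. "
          "Act now and click here to verify your password.")
print(phishing_red_flags(sample))
# ['urgent language', 'generic greeting', 'credential request', 'suspicious link text']

The point of the sketch is that the crude signs of a lazy phishing attempt are recognizable, which is exactly why AI-generated messages that carefully avoid them are so much more dangerous.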


While the advancements in AI can be described as nothing short of miraculous, making processes far quicker, more accurate and more reliable than when they were in human hands, we need to be aware of the undesirable consequences and know how to protect ourselves from negative outcomes.



Written by: Rija Asif



References:

Brundage, M., Avin, S., Clark, J., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv:1802.07228.


Jordan, M. (2020, May 23). AI Governance and Policy (No. 3) [Audio podcast episode]. In Innovate 1Z03 Lectures.