THE MISUSE OF ARTIFICIAL INTELLIGENCE
(AI) BY TERRORIST ORGANISATIONS AND NON-STATE ACTORS
Lead Researcher: Muhammad Afiq Ismaizam
Artificial Intelligence (AI) has emerged
as a pivotal force driving transformative change across various industries,
offering unparalleled potential for automation, data-driven decision-making,
and personalised user experiences. Across sectors such as finance, healthcare,
and security, AI’s ability to swiftly analyse vast datasets has revolutionised
operational efficiency and facilitated innovation. However, alongside its
promise, AI also presents a range of challenges and ethical concerns. Issues
such as job displacement, algorithmic bias, data privacy violations, and the
ethical implications of autonomous decision-making systems underscore the need
for responsible AI development and deployment.
Yet, a critical question arises: what
happens if AI falls into the hands of terrorist organisations?
The misuse of AI poses profound ethical,
societal, and security risks, as AI technologies can be weaponised for
nefarious purposes. One of the most concerning instances of AI exploitation
involves the manipulation of AI algorithms to spread misinformation, deceive
individuals, and manipulate public opinion. Malicious actors can leverage
advanced algorithms to generate deepfake videos and fabricate highly realistic
fake news articles, leading to widespread disinformation campaigns and eroding
trust in legitimate sources of information.
Additionally, AI can be employed to
enhance cyberattacks, with adversaries using AI-powered tools to launch more
sophisticated and targeted attacks, such as phishing scams, malware
distribution, and social engineering tactics. Furthermore, the integration of
AI into autonomous weapons systems raises significant ethical dilemmas,
particularly regarding the potential for indiscriminate and unethical use of
lethal force, exacerbating global security concerns.
Given the rapid advancement of AI
technology, tracking its misuse by terrorist organisations and non-state actors
remains a significant challenge. This research aims to address several
key questions on this issue.
By examining these critical questions,
this study seeks to provide a comprehensive understanding of the intersection
between AI and security threats, offering evidence-based recommendations for
policymakers, law enforcement agencies, and technology developers to mitigate
the risks associated with AI misuse in the context of terrorism.
Copyright © 2025 Southeast Asia Regional Centre for Counter-Terrorism (SEARCCT). All Rights Reserved.
Last updated: 20 April 2025