20 April 2025 1:29 am

Artificial Intelligence (AI) & Terrorism

THE MISUSE OF ARTIFICIAL INTELLIGENCE (AI) BY TERRORIST ORGANISATIONS AND NON-STATE ACTORS

Lead Researcher: Muhammad Afiq Ismaizam

 

Artificial Intelligence (AI) has emerged as a pivotal force driving transformative change across various industries, offering unparalleled potential for automation, data-driven decision-making, and personalised user experiences. Across sectors such as finance, healthcare, and security, AI’s ability to swiftly analyse vast datasets has revolutionised operational efficiency and facilitated innovation. However, alongside its promise, AI also presents a range of challenges and ethical concerns. Issues such as job displacement, algorithmic bias, data privacy violations, and the ethical implications of autonomous decision-making systems underscore the need for responsible AI development and deployment.

Yet, a critical question arises: what happens if AI falls into the hands of terrorist organisations?

The misuse of AI poses profound ethical, societal, and security risks, as AI technologies can be weaponised for nefarious purposes. Among the most concerning forms of exploitation is the manipulation of AI algorithms to spread misinformation, deceive individuals, and sway public opinion. Malicious actors can leverage advanced algorithms to generate deepfake videos and fabricate highly realistic fake news articles, fuelling widespread disinformation campaigns and eroding trust in legitimate sources of information.

Additionally, AI can be employed to enhance cyberattacks, with adversaries using AI-powered tools to launch more sophisticated and targeted attacks, such as phishing scams, malware distribution, and social engineering tactics. Furthermore, the integration of AI into autonomous weapons systems raises significant ethical dilemmas, particularly regarding the potential for indiscriminate and unethical use of lethal force, exacerbating global security concerns.

Given the rapid advancement of AI technology, tracking its misuse by terrorist organisations and non-state actors remains a significant challenge. This research aims to address the following key questions:

  1. How can the misuse of AI by terrorist organisations be tracked?
  2. Are there verifiable cases where AI has been specifically used by terrorist organisations to further their strategic or operational objectives?
  3. How do law enforcement and government agencies monitor AI-related threats?
  4. What policy responses are available to protect citizens against AI-driven attacks?

By examining these critical questions, this study seeks to provide a comprehensive understanding of the intersection between AI and security threats, offering evidence-based recommendations for policymakers, law enforcement agencies, and technology developers to mitigate the risks associated with AI misuse in the context of terrorism.

 

PUSAT SERANTAU ASIA TENGGARA BAGI MENCEGAH KEGANASAN (SEARCCT)

Ministry of Foreign Affairs, Malaysia

Copyright © 2025 Pusat Serantau Asia Tenggara Bagi Mencegah Keganasan (SEARCCT). All Rights Reserved.

