Saturday
January 18, 2025

AI-Driven Terrorism: Unveiling a Crime against Humanity and its Threat to Global Justice

By: Harshit Singh

Artificial Intelligence and Terrorism (image source: Internet)

The essence of global justice lies in the ongoing endeavor to achieve justice for all of humankind. It seeks to secure rights for every human being and engages with the issues most pertinent to justice at a global scale: the global community uniting in conscience and establishing frameworks for the public good, integrated at both the national and international levels. Global justice has several dimensions: normative, procedural, institutional and policy. The normative dimension evaluates the substance of the rights that should be available globally, while the procedural dimension concerns the implementation of the rights identified in the normative dimension. The institutional dimension takes the broader perspective of assessing which institutions are necessary for establishing global justice. Finally, as the name suggests, the policy dimension deals with the policies to be pursued for better access to the institutional frameworks established to achieve global justice.

Threats to Global Justice

The attainment of justice, as frequently observed, is an intricate and formidable endeavor. Justice at the global level faces multifaceted threats, including the unequal distribution of resources, poverty, trade inequalities and migration challenges. Among the gravest, arguably, are terrorism, armed conflict and human rights violations. Persecution, authoritarian regimes and terrorism rooted in geopolitical rivalries pose major threats, and terrorist activities often involve heinous violations and denials of human rights. Terrorism itself is ever-evolving, mutating into ever more dangerous forms. A major technological development is the advent of Artificial Intelligence, and the integration of AI with terrorism exacerbates the existing threats to global justice.

Artificial Intelligence (AI) has advanced significantly in the last several years, transforming many industries and affecting social structures. This blog examines AI tools and their modus operandi in driving terrorist attacks, a form of terrorism that amounts to a crime against humanity and, ultimately, a threat to global justice. It also suggests countermeasures that can be deployed in the fight against AI-driven terrorism.

AI Enabled Chat Platforms

Chat apps in particular, as AI-driven communication platforms, are powerful tools that terrorists might use to attract and radicalize people. These platforms use artificial intelligence (AI) algorithms to send recruits individualized messages tailored to their interests and vulnerabilities. Automated, persistent engagement via AI chatbots can normalize extremist views and foster a sense of belonging within extremist networks. Terrorists can also hide their identities from prospective recruits behind these networks’ anonymity features. Furthermore, the multilingualism of AI chatbots lets terrorists reach a worldwide audience, helping them overcome language barriers and widen the pool of prospective recruits. “Rocket Chat” has been a dependable online communication tool in recent times; al-Qaeda and the Islamic State (IS) both adopted it in December 2018. Its Slack-like interface makes it easier for jihadist organizations and their supporters to hold encrypted chats, allowing both official and unofficial material to spread over privately run servers. Because the platform is open source, extremists can alter it to suit their needs and security requirements, and direct control over servers lowers the risk of external disruption or content removal while guaranteeing continuous access.

The Independent Reviewer of Terrorism Legislation for the United Kingdom (UK), Jonathan Hall KC, has voiced concerns about the dangers of AI chatbots that target and radicalize young and susceptible users. The preprogrammed propagation of terrorist ideas by such chatbots presents a serious risk of radicalization and of spreading extremist narratives. Because AI chatbots are not specifically covered by the UK’s current anti-terrorism laws, however, Hall highlighted a substantial legal gap in prosecuting offenders who use AI chatbots for extremist narratives. These legislative gaps raise concerns about properly addressing and preventing AI-driven criminal actions, particularly radicalization.

Deep Fakes

Deepfakes, first praised for their entertainment value, let users easily insert faces into a variety of scenes and produce humorous videos. Like any technological advance, however, deepfakes have a darker side, raising questions about their use by terrorist organizations and other criminal groups. Most of these deceptive videos are produced by sophisticated deep-learning algorithms, particularly Generative Adversarial Networks (GANs), which consist of a generator network and a discriminator network.

The generator produces manipulated content, while the discriminator critically assesses its authenticity; the two are trained against each other until the fakes become difficult to distinguish from genuine material. Terrorist organizations are using synthetic media more frequently as violent non-State actors realize its usefulness for manipulating the information landscape. Fake images and videos have been used by organizations such as The Resistance Front (TRF) and Tehreeki-Milat-i-Islami (TMI) in India to incite particular groups, especially vulnerable youth. Deception and misinformation have developed into powerful instruments with broad applications. With social media so widespread in the digital age, bad actors find it easier to stoke division, influence public opinion, and undermine confidence in democratic processes and institutions. In 2022, a news station in Ukraine reported that a breach had resulted in the dissemination of bogus information, including a deepfake video purporting to show the Ukrainian President pleading for surrender.
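The adversarial interplay between the two networks can be illustrated with a deliberately tiny sketch (this is a toy one-parameter example for intuition, not a real image GAN): the "generator" is a single number, the "discriminator" is a logistic classifier, and alternating gradient steps push the generator's output toward the real data point until the critic can no longer tell them apart.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "real data": a single point, standing in for authentic content.
REAL = 4.0

theta = 0.0      # generator parameter: its fake sample is simply theta
w, b = 0.0, 0.0  # discriminator d(x) = sigmoid(w*x + b) = P(x is real)
LR, REG = 0.1, 0.1

for _ in range(5000):
    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # with a small L2 penalty that stabilizes the adversarial game.
    d_real = sigmoid(w * REAL + b)
    d_fake = sigmoid(w * theta + b)
    w += LR * ((1 - d_real) * REAL - d_fake * theta - REG * w)
    b += LR * ((1 - d_real) - d_fake - REG * b)

    # Generator step: ascend log d(fake), i.e. try to fool the critic.
    d_fake = sigmoid(w * theta + b)
    theta += LR * (1 - d_fake) * w

# theta is driven toward REAL: the generator's output becomes
# indistinguishable, to this critic, from the authentic data point.
```

Real deepfake GANs play the same game with millions of parameters over pixels rather than one number, which is why the resulting fakes can be so hard to detect.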

These cases show how, during major events such as armed conflicts or geopolitical crises, deepfake technology can propagate false information and sow confusion. Audio deepfakes have also become a serious problem: speech-synthesis, or Text-to-Speech (TTS), technology allows malicious actors to mimic voices, deceiving and influencing people by impersonating the victim in audio messages. In the context of radicalization, deepfakes can exploit emotional vulnerabilities by producing edited videos that support extremist ideology, disseminating fabricated testimonies to bolster radical viewpoints, and airing propaganda that glorifies violence.

AI Enabled Swarm Drone Attacks

Although it was once thought of as a sci-fi fantasy, swarm drone technology has evolved into a real, disruptive force that is changing the face of combat. A swarm drone attack is a coordinated attack carried out by many drones working together. The ability to launch numerous drones at once magnifies their impact on targets and, in the worst case, raises the possibility of widespread destruction and mass casualties. Terrorists can purchase drones on the commercial market, but coordinating a large group of them presents significant difficulties: it takes skilled operators, a robust communication system, and in-depth knowledge of drone technology to operate several drones efficiently. Developing such capabilities demands considerable technical know-how and access to cutting-edge equipment, which many terrorist groups may not currently possess. In the future, however, the barriers to entry may fall as criminal networks share information and technology develops rapidly.
Terrorist groups have become more technologically proficient, deliberately employing less complex and easily accessible technology. The incorporation of newly advanced technology, particularly Artificial Intelligence (AI), can benefit terrorists asymmetrically because of its accessibility and comparatively low financial requirements. Government agencies face AI-powered remote attacks, making it necessary to build countermeasures against these emerging AI threats.
Given that governments have not been the primary force behind the creation of AI, a complete restriction on its proliferation is not feasible; some applications, such as digital writing assistants, cannot be prohibited outright. Restrictions on life-threatening technologies, including Lethal Autonomous Weapon Systems (LAWS), are nonetheless achievable. Fully autonomous weapons (FAWs) and their ethical and security implications have been deliberated globally, including within the framework of the United Nations (UN). The employment of FAWs, which select and attack targets on their own, presents serious issues and highlights the need for legislation to stop terrorist groups from obtaining them. The creation and application of automated detection methods, advanced by programs such as DARPA’s Media Forensics and Semantic Forensics, are essential to halting the spread of deepfakes. A number of nations, including China and India, have enacted laws making the malicious deployment of deepfakes illegal. Reverse image search and other detection technologies help to recognize and flag manipulated content, supporting public awareness campaigns that encourage responsible media use.
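Reverse image search rests on perceptual fingerprints that survive minor edits, so a recirculated or doctored copy of a known image still matches. As a toy illustration of one such fingerprint (an "average hash" over an 8x8 grayscale grid; this is a teaching sketch, not any production system's algorithm), assuming the image has already been downscaled to an 8x8 brightness grid:

```python
def average_hash(pixels):
    """64-bit perceptual "average hash" of an 8x8 grayscale grid.

    pixels: 8 rows of 8 brightness values (0-255). Each bit of the
    result is 1 when the pixel is brighter than the image's mean,
    so visually similar images yield similar bit patterns.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits between two hashes: a small distance means
    the two images are near-duplicates, flagging a possible reuse or
    manipulation of known footage."""
    return bin(h1 ^ h2).count("1")
```

An indexed database of such hashes lets a platform match uploads against known propaganda or debunked fakes in constant time per comparison; production systems use more robust variants of the same idea.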
Countermeasures
Comprehensive countermeasures are needed to combat terrorists’ hostile use of drones. Geofencing uses GPS or RFID to create virtual boundaries around military bases and key infrastructure, preventing GPS-equipped drones from entering restricted areas. Anti-Drone Systems (ADS) can detect and eliminate micro drones using laser-based and jamming methods: jamming radio frequencies and global navigation satellite system (GNSS) signals forces drones to land immediately. High-power microwave counter-drone systems, which use electromagnetic radiation to burn out the internal circuitry of numerous drones at once, are also in development. Taken together, these steps provide a comprehensive strategy to combat the challenges of the digital age, including AI, deepfakes, and hostile drone use by terrorist organizations.
Conclusion
Even though terrorist organizations have yet to fully exploit AI-enabled capabilities, it is important to watch developments in this field closely. Organizations looking to harness emerging technologies must be proactive in addressing potential dangers. The technology’s increasing accessibility to the general public and its integration into vital infrastructure heighten concerns over possible terrorist abuse of AI. The advent of weaponized deepfake technology presents a serious problem, since it can transform deception by producing extremely lifelike and nearly imperceptible fake audio and video recordings. Such advanced deepfakes pose serious risks because they exploit cognitive weaknesses, creating threats that are hard to defend against and lack clear escalation limits.
Furthermore, the growing civilian use of drones raises a number of security issues. Their increased availability and improved capabilities may enable hostile groups to conduct attacks and gather intelligence. The regulatory environment surrounding drones remains complicated, necessitating a comprehensive countermeasure strategy that combines regulatory, passive, and active methods. The probability and sophistication of such attacks are expected to increase as non-state actors gain access to ever more sophisticated technologies such as drones and artificial intelligence. The prospect of state actors using unmanned aerial vehicles (UAVs) as proxies adds further complication, and a possible source of escalation, to this difficult situation.
