By: Harshit Singh
The essence of global justice lies in the ongoing, ever-continuing endeavour to achieve justice for all of humankind. It affirms rights that belong to every human being and takes into account the issues that bear on global justice as a whole. It envisages the global community coming together, uniting in conscience, and establishing frameworks for the public good that are integrated at both the national and international levels. Global justice has several dimensions: normative, procedural, institutional and policy. The normative dimension evaluates the substance of the rights that should be available globally, while the procedural dimension concerns how the rights identified in the normative dimension are to be implemented. The institutional dimension takes a broader perspective, assessing which institutions are necessary for establishing global justice. Finally, as the name suggests, the policy dimension deals with the policies to be channelled so that the institutional frameworks established for achieving global justice become more accessible.
Threats to Global Justice
The attainment of justice, as is frequently observed, is an intricate and formidable endeavour. Threats to justice at the global level are multifaceted: the unequal distribution of resources and poverty, trade inequalities, migration challenges and more. Arguably among the gravest are terrorism, armed conflict and human rights violations. Persecution, authoritarian regimes and terrorism rooted in geopolitical rivalries pose a major threat, and terrorist activities frequently involve heinous violations and denials of human rights. The terrorism landscape has been evolving continuously, taking ever more dangerous forms. A major technological development is the advent of Artificial Intelligence, and the integration of AI with terrorism exacerbates the existing threats to global justice.
Artificial Intelligence (AI) has advanced significantly in recent years, transforming many industries and reshaping social structures. This blog examines the AI tools terrorists employ and their modus operandi in driving terrorist attacks, which amount to crimes against humanity and, ultimately, a threat to global justice. It also suggests countermeasures that can be deployed in the fight against AI-driven terrorism.
AI-Enabled Chat Platforms
AI-driven communication platforms, chat apps in particular, are powerful tools that terrorists can use to attract and radicalize people. These platforms use artificial intelligence (AI) algorithms to deliver individualized messages to prospective recruits based on their interests and vulnerabilities. Automated, persistent engagement via AI chatbots can normalize extremist views and foster a sense of belonging within extremist networks. Terrorists can also hide their identities from prospective recruits by exploiting these platforms’ anonymity features. Furthermore, the multilingual capability of AI chatbots lets terrorists reach a worldwide audience, helping them overcome language barriers and enlarge the pool of prospective recruits. “Rocket Chat” has become a dependable online communication tool in recent years; al-Qaeda and the Islamic State (IS) both adopted it in December 2018. Its Slack-like interface makes it easier for jihadist organizations and their supporters to hold encrypted chats, which in turn allows official and unofficial material to spread over privately run servers. Because the platform is open source, extremists can alter it to suit their needs and security requirements, and direct control over servers lowers the risk of external disruption or content removal while guaranteeing continuous access.
The United Kingdom’s (UK) Independent Reviewer of Terrorism Legislation, Jonathan Hall KC, has voiced concerns about the dangers posed by AI chatbots that target and radicalize young and susceptible users. The pre-programmed propagation of terrorist ideas by these chatbots presents a serious risk of radicalization and of the spread of extremist narratives. However, because AI chatbots are not specifically covered by the UK’s current anti-terrorism laws, Hall highlighted a substantial legal gap in prosecuting offenders who use AI chatbots to push extremist narratives. These legislative gaps raise concerns about whether AI-driven criminal activity, particularly radicalization, can be properly addressed and prevented.
Deepfakes
Deepfakes were first praised for their entertainment value, letting users easily insert faces into a variety of scenes and produce humorous videos. Like any technological advancement, however, deepfakes have a darker side, raising questions about their use by terrorist organizations and other criminal groups. Most of these deceptive videos are produced by sophisticated deep learning models, particularly Generative Adversarial Networks (GANs), which pair a generator network with a discriminator network: the generator produces manipulated content, while the discriminator critically assesses its authenticity, and the two are trained against each other until the generated content becomes hard to distinguish from the real thing.
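For readers less familiar with the mechanics, the following is a minimal, illustrative PyTorch sketch of that generator/discriminator pairing. The network sizes, image dimensions and training details are simplified assumptions chosen for clarity, not the architecture of any real deepfake system.

```python
# Minimal GAN sketch (PyTorch). Sizes and hyperparameters are illustrative
# assumptions, not those of any real deepfake system.
import torch
import torch.nn as nn

LATENT_DIM = 100        # random noise vector fed to the generator
IMG_DIM = 28 * 28       # flattened image size (tiny, for illustration only)

# Generator: turns random noise into a synthetic ("fake") image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to separate real from
    generated images, then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: assess the authenticity of real vs. generated content.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: produce content the discriminator rates as "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# One illustrative step on a dummy batch of 16 random "images".
train_step(torch.rand(16, IMG_DIM) * 2 - 1)
```

It is this adversarial loop that makes the output progressively harder to tell apart from genuine footage, which is precisely what makes the technology attractive to the actors discussed below.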
Terrorist organizations are turning to synthetic data more frequently as violent non-State actors realize how useful it is for manipulating the information landscape to serve their schemes. Fake images and videos have been used by organizations such as The Resistance Front (TRF) and Tehreeki-Milat-i-Islami (TMI) in India to incite particular groups, especially vulnerable youth. Deception and misinformation have developed into powerful instruments with broad application. With social media so pervasive in the digital age, bad actors find it easier to stoke division, influence public opinion, and undermine confidence in democratic processes and institutions. In 2022, a Ukrainian news station reported that a breach had led to the dissemination of bogus information, including a deepfake video purporting to show the Ukrainian President calling for surrender.
These cases show how, during major events such as armed conflicts or geopolitical crises, deepfake technology can spread false information and sow confusion. Audio deepfakes have also become a serious problem. Speech synthesis technology, or Text-to-Speech (TTS), allows malicious actors to mimic a victim’s voice in audio messages in order to deceive and influence people. In the context of radicalization, deepfakes can exploit emotional vulnerabilities by producing edited videos that support extremist ideology, disseminating fabricated testimonies to bolster radical viewpoints, and airing propaganda that glorifies violence.
AI-Based Social Engineering Attacks