
Weaponization of Artificial Intelligence

What is Artificial Intelligence?

Artificial intelligence, in its most basic sense, means an artificial or computerised system that acts like a human brain, at times even more accurately, and more dangerously.
The evolution of artificial intelligence can be traced through our day-to-day life: chatbots, auto-complete, driverless cars. Life feels easy when an intelligent brain is doing all your work, but there is a dark side to it.

How does a Machine Learn?
A machine learns through programming. Very simple, isn't it? Actually, no. If we talk about programming, we must talk about bias: if we want the artificial brain to think like a human being, someone needs to put that thinking into the machine. And every human being is diverse, with different mindsets, ideologies, ways of thinking and perspectives.

So how can we be sure that a machine is not behaving like its programmer?
The answer is neural networks. Nowadays machines learn from experience, which means they are, in a sense, thinking like human beings, and which also means they can be just as biased and dangerous.
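The point above, that a machine simply absorbs whatever pattern its training data contains, bias included, can be illustrated with a minimal sketch. This is not any real weapons system, just a hypothetical single-neuron classifier trained by gradient descent on labels chosen by a (deliberately unfair) human labeller; the data and threshold are invented for illustration.

```python
import math

def sigmoid(z):
    # squashes any number into the range (0, 1), read as a probability
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training data: input x, label y supplied by a human.
# Suppose the labeller (unfairly) approves only inputs with x above 0.5.
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

w, b = 0.0, 0.0        # the model starts with no opinion at all
lr = 0.5               # learning rate
for _ in range(2000):  # "experience": repeated exposure to the examples
    for x, y in data:
        p = sigmoid(w * x + b)
        # gradient of the log-loss; nudge the weights toward the labels
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# The trained model now mirrors the labeller's rule, bias included.
print(sigmoid(w * 0.2 + b) > 0.5)  # rejected, as the labeller would
print(sigmoid(w * 0.7 + b) > 0.5)  # approved, as the labeller would
```

The machine was never told the rule "approve x above 0.5"; it inferred it from examples. If those examples encode prejudice, the model reproduces that prejudice with full confidence, which is exactly the worry when the decisions are lethal.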

The dark side
As of 2019, many countries are funding research and development programmes to understand and develop artificial intelligence, which has caused widespread worry among citizens about its use and future repercussions. With this in mind, let us talk about weaponization.

Countries such as Russia, the US and China are already using automated weapons in warfare to reduce human involvement and casualties. But the most important aspect of machine learning here is the law: who will people blame for an accident? The EU countries do have legislation to fix responsibility and liability. The German Road Traffic Act imposes responsibility for managing an automated or semi-automated vehicle on the owner and envisages partial involvement of the Federal Ministry of Transport and Digital Infrastructure. A more comprehensive and understandable approach to defining current and prospective legislation on robotics is presented in the EU resolution on robotics (European Parliament Resolution, 2017).

It defines types of AI use, covers issues of liability and ethics, and provides basic rules of conduct for developers, operators and manufacturers in the field of robotics; the rules are based on Asimov's three laws of robotics (1942). Russia also has a draft law, the Grishin law, which amends the Civil Code of the Russian Federation and fixes responsibility on manufacturers, owners and programmers. But when we talk about weaponization, we are talking about Lethal Autonomous Weapons, which can kill people, at times even civilians.

The main problem with the weaponization of artificial intelligence is learning and bias. A robot, or any system that makes decisions based on circumstances, is bound to make mistakes, and is designed to learn from them. That is exactly where the problem lies: it learns. When a human being wants to learn something, he makes a mistake and then learns from it. In a warfare situation, however, a decisional mistake might kill a civilian, or even worse, one's own soldiers. You cannot predict what an artificial brain might be thinking, because it is learning and then taking decisions, and that can have a devastating effect on lives.

Artificial intelligence may be far worse than a nuclear attack, because it is a super-artificial machine with the capacity to multiply itself and play a dominant role. If we talk about learning, a machine may do something like kill a civilian in order to learn from it, just as human beings must fail in order to learn something. But in this case the failure may be too great to recover from, and on top of that we cannot fix liability on anyone.

If we go back 100 or 200 years, when various experiments were being conducted in the fields of surgery and medicine, doctors would develop and test methods (for example, lobotomy) to cure patients, and in the process would kill them. They did learn from it, but thousands of lives, mostly prisoners', were lost. My point, therefore, is that we should not create human-like creatures that are far more capable than us and may at some point overtake us. A mistake on a battlefield will be remembered by all of us, but that does not stop people (or machines) from making another mistake, because there is always something to learn from it.

Written By: Trishit Kumar Satpati
