
Ethics of AI - Legal Rights

By LEONARD MCGAVIN
Published: June 2, 2008

The pursuit of Artificial Intelligence (AI) carries ethical problems inherent in any technological progression, and it also opens up a variety of new ethical problems that humanity will have little time to consider. Whether machines will reach human-level intelligence is not a question of "if" but of "when": current predictions place Strong AI within the next 10 to 40 years.

The progress of technology is accelerating exponentially, and obsolescence is occurring at unprecedented rates. Whether or not that obsolescence is purposely built into products, the trend is not only causing difficulty for consumers but also creating huge implications for the future of legal rights for machines.

To make the task of allowing for the ethical issues of AI even harder, mankind is busy at work in various AI fields despite having only vague definitions of what "intelligence" and "understanding" actually comprise, let alone any consideration of the ethical issues inherent in the field. When will we cross the line and say, "Yes, model 4040 has rights because it is intelligent and has emotional simulation. Model 4020, on the other hand, was only intelligent and doesn't deserve any rights"? One machine is given rights and another is not. What happens if a machine that has been given rights later becomes obsolete? Is the best way to handle the issue to bury it and never grant machines societal rights, even though they may be more intelligent than ourselves and have more feeling than we can comprehend? That certainly sounds dangerous, and sounds like a good way to piss off a new race.

Ethics as a field is based on creating and considering regions within "gray" areas that satisfy the societal norm, or, put more simply, on figuring out what is right and what is wrong. Despite our intentions of categorising everything into neat boxes, there is truly no hope of doing so in any field, let alone AI. "What is right and what is wrong" rests on arbitrary value systems carried through generations of society to help order the chaos that surrounds us. What a society currently defines as right and wrong is whatever is passed by governments and ruling parties, usually in the form of legislation. The problem is that technology is moving at break-neck speed, and definitions of what is right and wrong have no chance of keeping up.

When the time comes to decide on the legal rights of machines, the question itself will struggle to exist, as we will be facing more chaos than we can handle. There is, however, one ethical question we face right now: if technological progression is leading us into chaos, should we stop it, or is the point moot because the cat is already out of the bag?

Comments

1. Jonas on June 5, 2008

What is the point of this article? It doesn't take a position at all, just blows a few ethical questions around and then gives up. And who says "It's only a question of when?" Do you have any idea how complex the human brain is? Scientists haven't even come close to understanding it, and this idiot is ready to relegate it to the bottom shelf just because he watches too much Star Trek.

2. Anonymous on June 6, 2008

The 'questions' you propose have been contemplated since the beginning of the last century (and earlier, if you want to extend the metaphor to include monsters). Isaac Asimov solved this supposed riddle with his Three Laws of Robotics.

Add also that AIs are designed explicitly to serve, and as such will have that trait built into them. Compare this with existing species and races: none has an origin defined by being made to serve another.

In essence, the question of 'will machines have rights' is answered by 'only if we want them to.'

3. Cobalt on August 22, 2008

Any emotion or feelings that an AI has must be built into the system. If the programmer knows that the AI will, for example, work as a robot farmer 24/7 on a farm in the middle of nowhere for no pay, then he must choose not to program feelings into the system. The program would still be Artificially Intelligent, but it won't be Artificially Emotional. Since it doesn't care about working, the legal or ethical issue never arises, because it can't complain and it doesn't have any feelings.

If the programmer chose to give the program the ability to have emotions, then it certainly must be given some rights.

Since the actual programs can be custom built, it's possible to prevent any known issues associated with a program's duties before they arise.

4. Smitty on December 18, 2012

I think it sounds like something that could be a great contribution to society, as long as they aren't given any social rights and they abide by the 3 Laws of Robotics.



More...

» AI: Intelligence, Learning and Understanding

The Forever Web App Project

By LEONARD MCGAVIN
Published: November 15, 2009

The Forever Web App Project is an AI project to demonstrate a web app's ability to exist on the web unassisted (except by strangers) for as long as possible after a given date.

Sudoku & Artificial Intelligence

By LEONARD MCGAVIN
Published: March 16, 2009

Using AI on Sudoku could be considered overkill. Even so, any algorithm written to solve a Sudoku puzzle could be considered intelligent, judging by what it accomplishes.