Wednesday, March 4, 2009

Rights. Robot Rights.

AKA, the sequel to Robot Rock.


Lt. Ville brings up an excellent philosophical point: should robots, if they are smart enough, possess human rights? I'm not talking about the crappy machines which do nothing but take people's jobs; I'm talking about sentient, almost living robots such as C-3PO, R2-D2, or Sonny. Although they are machines, these (admittedly fictional) beings are as human as their biological counterparts. They think, learn, use instinct, possess individual and evolving personalities, and Sonny even dreams. So, should hypothetical intelligent robots deserve human rights, if their only difference is what produces the mind?

First, I wikied 'human rights.'

Human rights refer to the "basic rights and freedoms to which all humans are entitled."[1] Examples of rights and freedoms which have come to be commonly thought of as human rights include civil and political rights, such as the right to life and liberty, freedom of expression, and equality before the law; and social, cultural and economic rights, including the right to participate in culture, the right to food, the right to work, and the right to education.

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
With this in mind, I would say no.

The problem with granting robots human rights lies in their inherent lack of free will. Free will is what enables humans to partake in elections and pursue their own concept of happiness, and it is what makes us individuals. Free will also keeps society in balance: it is very difficult to get large numbers of people to engage in uncivil behaviour together (it happens, but rarely anything revolutionary).

Robots, however, tend not to possess free will. Take, for example, the Terminator in T1, which demonstrated a high level of intelligence by using Sarah's mother to find out where Sarah was living. However, the Terminator possesses no free will, and is always bound by its programming and parameters. In T2, the Terminator must follow John's orders without fail. The Terminator is aware of this, yet due to its lack of free will, it can do nothing but follow its programming.

However, several robots have shown that they can break free of their programming and adopt true free will. The Terminator demonstrates this in T2 when he directly disobeys John's orders and destroys the final CPU chip. In I, Robot, Sonny demonstrates his humanity by weighing moral options and placing his trust in the anti-robot bigot Spooner.

Which raises another question: are robots capable of morality? In my cynical opinion, a large part of what keeps society together is the consequences of bad actions (such as murder or theft). Logically speaking, a robot would refrain from crime solely because of those consequences; this entails that if a robot could steal without being caught, it would do so unfailingly. This in turn leads to another problem: if robots increased in numbers to the point that they began to outnumber humans, it is entirely probable that they would launch a coup to acquire power. Power is, after all, a rational desire, and why wouldn't robots want it? Unbound by morality, any large group of sentients with a single common goal would unite to assume power.

The only exception to this rule* (and I am really annoyed at only being able to use fictional examples; extrapolating fiction to reality only works if you ignore reality, which I have no desire to do) is Sonny from I, Robot. Although he would almost certainly benefit from robots governing the planet, he states that it would be too "heartless." However, I am unwilling to believe that all robots can be imbued with a human level of morality.

Despite these exceptions, AI can still be reprogrammed, and because of this I don't think that robots will ever be truly capable of free will, as their views are never entirely their own.

*It is established that Sonny can renege on his 'moral programming', which is what gives him free will. I can't tell if the droids from the Star Wars universe possess absolute free will, or if they have parameters that prevent them from, say, turning against the Rebellion and defecting to the Empire. Given the risks, the rebels would almost certainly add such restrictions.

PS: Does anybody else think this post is totally ridiculous?
