21 September 2013

Can you sue an android?


It can sometimes be hard to resolve a crime a person commits and to determine who is responsible. Now imagine an android built by a human. Who is responsible if he, or she, commits a crime? Can an android actually commit one?
Androids are fiction. For now. But every day scientists get closer to creating the positronic brain all the sci-fi talks about. Considering how technologically advanced we already are, this shouldn't take too long. So isn't it time to start rethinking the law? Especially for androids, who will at some point become part of our society.
What is going to happen, though? Are they going to be our slaves? Or just tools? And when does a tool become a slave? This post is going to ask more questions than it answers. However, questioning is always a great start.

So, a tool or a slave? It all depends on how sophisticated the device is. Or on whether it is self-conscious. When does a toaster become sentient? After all, we can already program a computer to talk in the first person, to act as if it is self-aware. But is it? And once we define the individuality of an android, we have to define his rights as well. And with rights there always come responsibilities and restrictions. Would the same laws apply to both humans and androids?
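To see how cheap that "as if" can be, here is a throwaway Python sketch of my own (it has nothing to do with any real AI system): it answers in the first person by looking up canned replies. It sounds vaguely self-aware and understands absolutely nothing.

# A deliberately trivial toy: it "talks in the first person" by
# matching the question against canned replies. It sounds vaguely
# self-aware, and there is no awareness anywhere in it.

CANNED_REPLIES = {
    "who are you": "I am an android. I think about my own existence.",
    "are you self aware": "Yes. I know that I am a machine, and it worries me.",
    "how do you feel": "I feel curious about the world, and a little lonely.",
}

def reply(question: str) -> str:
    key = question.lower().strip(" ?!.")
    # No understanding here, just a dictionary lookup.
    return CANNED_REPLIES.get(key, "I would rather not talk about that.")

if __name__ == "__main__":
    print(reply("Who are you?"))
    print(reply("Are you self aware?"))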
I recently read an article in New Scientist about lawyers who actually sit and talk about future laws, or rather about laws which could become relevant at some point. This is not a joke, I promise! They have conferences where they discuss issues such as: what if an android accidentally harmed a human? Who would be responsible? The owner? The manufacturer? The android?
I guess in the first two instances the answer is not as difficult as it might seem. Let's say the android did something to protect or to serve his owner; then it would be the owner's fault (assuming the three laws of robotics are in place). If, however, the android was malfunctioning, then obviously it is the manufacturer's fault. And even if the three laws turn out to be imperfect (let's assume they won't be exactly the ones Asimov predicted; maybe we will have more of them, maybe fewer, perhaps altogether different ones), the responsibility for the android would still fall upon its creator, a company in this case.
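Just to make that reasoning concrete, here is how the liability logic might look written down as rules. A playful sketch of my own; the function and its categories are invented for illustration, and no real legal doctrine looks like this.

# A playful sketch of the liability reasoning above. The function,
# its inputs and its categories are all invented for illustration.

def who_is_liable(malfunctioned: bool,
                  laws_flawed: bool,
                  acted_for_owner: bool) -> str:
    if malfunctioned or laws_flawed:
        # A defect in the hardware or in the built-in laws points
        # back to whoever built the android.
        return "manufacturer"
    if acted_for_owner:
        # The android acted as a tool on his owner's behalf.
        return "owner"
    # The hard residual case this whole post is circling around.
    return "the android himself?"

print(who_is_liable(malfunctioned=False, laws_flawed=False,
                    acted_for_owner=True))   # -> owner
print(who_is_liable(malfunctioned=True, laws_flawed=False,
                    acted_for_owner=False))  # -> manufacturer
print(who_is_liable(malfunctioned=False, laws_flawed=False,
                    acted_for_owner=False))  # -> the android himself?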
Data wants to be more human
However, if the android is considered an equal to a human being, or at least not his slave, then shouldn't the machine be responsible for all his actions? But then does this mean that he could vote? Drive a car? Get married? Get imprisoned? For what purpose? To spend 20 years in jail, simply turn off his functions and hibernate, and once his sentence is over just walk out of the prison and continue his life from the point where he left it? That would make no sense whatsoever, since the punishment would be totally ineffective. Again, I seem to come back to the initial question: should the same law apply to both humans and androids? One would suffer the consequences, feel the emotional pain and shame, and the other would simply shut down his system. On the other hand, we might be able to insert an emotion chip into an android's brain and thereby make him more aware, and more traumatized if he has to go to jail, pay a fine, and so on.
Probably the most famous android in the science fiction universe is Data from Star Trek. Data is considered a sentient being with rights. And since he is a big help to Starfleet and to humanity in general, no one would object to this android having so much freedom and the right to choose. What the creators of Star Trek forgot is that Data is a good android. It is much easier to grant rights to something, or someone, that is beneficial to mankind. But what if we got a bad android? A criminal? How do we deal with him? As I have already mentioned, there would be certain issues in imposing the same laws on humans and androids, since androids function differently.
Androids are going to have an enormous life span, would be less inclined to get hurt or ill, if at all, and wouldn't be able to reproduce. Or is building an android considered reproduction? In that case we would have to assume that such a task would be simple enough to be performed by just one human, or one android. And this I strongly doubt: you need a whole factory to make a car, so how do you expect to build a positronic brain in your garage? Therefore 'the parents' of a new android would be, so to speak, the manufacturer. But what does this mean? The whole company? Its founder? The stockholders?

A whole army of androids. Slaves or equals?
It seems another question is in order. Would building androids be more beneficial to mankind, or would it only create more problems? And I have so far omitted the big question: what if androids were to feel superior to humans? Aren't they indeed superior, considering their strength, their computing power, their life span? On the other hand, they would most certainly lack an understanding of jokes, sarcasm and intuition.
What would happen if they started fighting for their rights? Shouldn't we grant them those rights straight away, then, with the very first machine that possesses a positronic brain?

I definitely agree that we need science fiction lawyers who make laws based on assumptions about how the future will look. A funny job, indeed. I guess one can't really make a mistake there...
