It can sometimes be hard to solve a crime a person commits and to determine who is responsible. Now imagine an android built by a person. Who is responsible if he, or she, commits a crime? Can an android actually commit one?
Androids are fiction. For now. But with each day scientists are getting closer to creating that positronic brain all the sci-fi talks about. Considering how technologically advanced we already are, this shouldn't take too long. So isn't it time to start rethinking the law? Especially for androids, who will at some point become part of our society.
What is going to happen, though? Are they going to be our slaves? Or just tools? And when does a tool become a slave? My post is going to ask more questions than it answers. However, questioning is always a great start.
So, a tool or a slave? It all depends on how sophisticated the device is. Or on whether it is self-conscious. When does a toaster become sentient? After all, we can already program a computer to talk in the first person. To act as if it is self-aware. But is it? And once we define the individuality of an android, we have to define his rights as well. And with rights always come responsibilities and restrictions. Would the same laws apply to both humans and androids?
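To see how cheap first-person talk is, here is a minimal sketch in Python (entirely hypothetical; the triggers and replies are made up for illustration). It answers in the first person from a handful of canned patterns, which is enough to sound self-referential without anything resembling self-awareness behind it.

```python
# A toy "first-person" responder: canned trigger -> reply pairs.
# It talks about itself without any inner life whatsoever.
RULES = [
    ("who are you", "I am an android. At least, that is what I am told."),
    ("are you self-aware", "I say 'I', but I cannot prove anything stands behind it."),
    ("do you feel", "I report feelings because a rule tells me to report them."),
]

def reply(message: str) -> str:
    """Return the first canned reply whose trigger appears in the message."""
    text = message.lower()
    for trigger, answer in RULES:
        if trigger in text:
            return answer
    return "I do not understand, yet notice that I still speak in the first person."

if __name__ == "__main__":
    for question in ("Who are you?", "Are you self-aware?", "Do you feel pain?"):
        print(f"> {question}\n{reply(question)}")
```

The point of the sketch is the gap it exposes: grammatically the machine is a subject, yet nothing in the code corresponds to an experience of being one.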
I recently read an article in New Scientist about lawyers who actually sit and talk about future laws, or rather laws which could become relevant once androids exist: for instance, what happens when an android harms someone while serving its owner, or when it malfunctions?
I guess in the first two instances the answer is not as difficult as it might seem. Let's say the android did something to protect or to serve its owner; then it would be the owner's fault (assuming the three laws of robotics are in place). If, however, the android was malfunctioning, then obviously it is the manufacturer's fault. And even if the three laws turn out to be imperfect (let's assume they won't be the same ones Asimov predicted; maybe we will have more of them, maybe fewer, perhaps altogether different ones), the responsibility for the android would fall upon its creator, a company in this case.
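Spelled out, this reasoning is just a small decision tree. Here is a hedged sketch of it in Python; the categories, the function, and the rules are my own illustration of the post's argument, not actual law.

```python
# A toy decision tree for the liability question above.
# Hypothetical rules for illustration only, not legal doctrine.
def liable_party(serving_owner: bool, malfunctioning: bool, laws_flawed: bool) -> str:
    """Return who bears responsibility under the post's speculative rules."""
    if malfunctioning:
        return "manufacturer"   # a defect is the maker's problem
    if serving_owner:
        return "owner"          # the android acted to protect or serve its owner
    if laws_flawed:
        return "manufacturer"   # imperfect built-in laws also point to the creator
    return "undecided"          # the genuinely hard case this post circles around

print(liable_party(serving_owner=True, malfunctioning=False, laws_flawed=False))  # owner
print(liable_party(serving_owner=False, malfunctioning=True, laws_flawed=False))  # manufacturer
```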
Data wants to be more human
Probably the most famous android in the science fiction universe is Data from Star Trek. Data is considered a sentient being with rights. And since he is a big help to Starfleet and humanity in general, one would not object to this android having so much freedom and the right to choose. What the creators of Star Trek forgot is that Data is a good android. It is much easier to grant rights to something, or someone, which is beneficial for mankind. But what if we got a bad android? A criminal? How do we deal with him? As I have already mentioned, there would be certain issues in imposing the same laws on humans and androids, since androids function differently.
Androids are going to have an enormous lifespan, would be less inclined to get hurt or ill, if at all, and wouldn't be able to reproduce. Or is building an android considered reproduction? In that case we would have to assume that such a task would be simple enough to be performed by just one human, or one android. And this I strongly doubt, since you need a whole factory to make a car, so how do you expect to build a positronic brain in your garage? Therefore the 'parents' of a new android would be, so to speak, the manufacturer. But what does that mean: the whole company? Its founder? The stockholders?
A whole army of androids. Slaves or equals?
What would happen if they started fighting for their rights? So shouldn't we give them those rights straight away, with the first machine that possesses a positronic brain?
I definitely agree that we need science fiction lawyers who are going to make laws based on assumptions about how the future will look. A funny job, indeed. I guess one can't really make a mistake there...