I read on www.newscientist.com that more and more machines are being programmed so that they can learn, make decisions and take actions by themselves. But are we ready to give way to the so-called “autonomous age”?
One example is motor vehicles that can drive, steer and park themselves. Another is “smart homes” that monitor our health and boast friendly robots to provide companionship. With the rise of such machines, I think we should all be aware of the complex social, ethical and legal questions that such developments will bring.
For example, intelligent motor vehicles may be safer and more convenient for motorists in general, but what about the few times when such vehicles make mistakes? Some might think that technology is mostly infallible, but most of us know there will be instances when mistakes occur. Will the public be more forgiving because a machine made the mistake, or more outraged? What are the legal consequences when an “intelligent” motor vehicle runs someone down or causes damage to property? Who will ultimately be liable: the owner, the manufacturer or the company that developed the software?
On the other hand, knowing that such machines are all part of technological development, will we stop their advancement? Knowing that they will make our lives easier, more convenient and probably safer, will we delay the inevitable? I say it’s inevitable because one way or another we will reach the point where machines do “everything” for us. It is unstoppable for as long as there are curious minds and creative hands. How would we react to a computer that does our homework for us? How about software that writes our blogs for us? How about a robot that looks and acts exactly as we do? How about autonomous lawyers, doctors, politicians, etc.? With such a level of autonomy, will machines eventually become legally responsible for their actions? I know I’m getting ahead of myself here, but it is enjoyable to think about the possibilities our future holds. :)
We have all seen lots of movies where machines and robots replace human beings in almost all aspects of life. When something does go wrong, who becomes responsible? Will we always blame the “machine”? Or will we be liable for choosing the machine in the first place?
Let’s take another example: suppose a machine is designed to decide whether or not to save someone’s life. How do we judge whether the machine made the “right” decision? Sometimes the “right” decision is not the most rational one but one that is partly governed by our emotions.
How about you guys? What kind of autonomous machine do you want to see in the future? :)
1 comment:
The only non-action movie I can remember that involves artificial intelligence is Bicentennial Man (Robin Williams). The others are all action. :)
I think we should exploit artificial intelligence, but not for common daily tasks such as driving. The GPS and the engine are enough for me.