On the other hand, other practical advances in artificial intelligence and robotics have not been under the microscope as much, so most people are unaware of just how far we have come in automating work traditionally handled by either dumb software or actual human beings.
For example, in 2017 alone, artificial intelligence has already been successful at:
- Making innovative music video concepts
- Mastering some of the hardest video games of all time
- Writing original music
- Deducing recipes and ingredients from pictures of food (as a person would)
- Acting as an art historian
- Serving as an operating system for military weapon systems
- And a lot more; frankly, the list above comes only from articles I gathered at random over the past two months. A deliberate search would turn up a much, much longer list of achievements.
Frankly, most futurist experts who theorize about or work closely with artificial intelligence believe, as I do, that human-level artificial intelligence could appear as soon as a decade or so from now (around 2030). Ray Kurzweil and Elon Musk are two big names who believe exactly that, even though other futurists think it will happen further along; predictions, though, almost always fall within the 21st century.
Elon Musk, for one, has been quite vocal in educating and warning people about just how fast we are developing and deploying artificial intelligence. The warning, in a nutshell, is not along the lines of "we shouldn't develop A.I." but more along the lines of "we have to be proactive on regulation instead of reactive."
I tend to agree with Mr. Musk on this one, which is why I often write and record on the subject of A.I.
My deeper opinions on the subject can be seen in the video below:
Thankfully, on the regulation front, the European Union seems to be proactive, with general discussion and a report produced in February 2017 that focuses primarily on liability issues surrounding learning artificial intelligence and automation. It is a start, and a pleasantly surprising one, since it was organized and attended by government officials. The report isn't a legal document, however, just information and discussion notes, so it is far from something that could be enforced if a robot or A.I. does something unpredictable and undesirable.
Several very intelligent individuals, including Elon Musk and Stephen Hawking, have signed an open letter to the United Nations urging it to impose a ban on the development of A.I. military systems. It may take a while before anything significant comes of that, though, and I am not sure countries like Russia, China, and the U.S. would sign such a ban either.
The point is, A.I. development, like Moore's Law, is exponential, and companies are deploying more and more advanced A.I. everywhere to make our lives better. Regulatory agencies and governments need to move very quickly, or we may find ourselves in very difficult positions, trying to undo the damage if rogue agents are allowed to develop A.I. with too much access and without the proper barriers in place.
Researchers, meanwhile, alongside these advancements in A.I., are developing some very cool and much-needed improvements to Asimov's 75-year-old Three Laws of Robotics, which state:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.
Researchers at the University of Hertfordshire have come up with something a little more up to date, better suited to our modern understanding of evolving, learning artificial intelligence. They call it Empowerment: in general terms, they are working on a simple set of rules that allow an A.I. to truly understand its environment, as a human would, and then compel it to behave in ways that give itself and those around it as many options as possible. This is an improvement because there is no way we can predict how a learning A.I. will interpret a rigid set of rules, as science fiction writers have speculated over the years. Heck, the possible failings of the Three Laws of Robotics are well illustrated in the movie "I, Robot," where VIKI, the main A.I., decides on her own how to interpret the Three Laws, to the detriment of older-model robots and of the very humans she is trying to protect by forcing them into custody.
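To give a rough feel for the "as many options as possible" idea, here is a toy sketch of my own (an illustration of the general principle, not the Hertfordshire team's actual formulation): in a small deterministic gridworld, an agent's n-step empowerment can be measured as the log of the number of distinct states it can reach with n-step action sequences, so a corner hemmed in by walls scores lower than an open area.

```python
import math
from itertools import product

# Toy 5x5 gridworld with a short wall; moves are deterministic.
SIZE = 5
WALLS = {(2, 1), (2, 2), (2, 3)}
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay

def step(state, action):
    """Apply one move; bumping a wall or the border leaves the agent in place."""
    x, y = state
    nx, ny = x + action[0], y + action[1]
    if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in WALLS:
        return (nx, ny)
    return state

def empowerment(state, horizon):
    """For deterministic dynamics: log2 of the number of distinct end states
    reachable over all action sequences of the given length."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# An agent following this principle would prefer the open cell (4, 2)
# over the boxed-in corner (0, 0), keeping more futures available.
print(empowerment((0, 0), 2))  # corner: 6 reachable states
print(empowerment((4, 2), 2))  # open area: 8 reachable states
```

In the full formulation, empowerment is the channel capacity between the agent's action sequences and its future sensor states; the deterministic count above is the simplest special case of that idea.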
These regulation attempts are being pursued by many companies right now, primarily to avoid liability issues, but if governments and national legislative bodies get involved soon and properly to regulate the development and deployment of A.I. and automated systems, we can possibly avoid the dreaded "Terminator conundrum" and the other A.I.-powered catastrophes we have so far been exposed to only through clever movie scripts.
I, for one, will continue to encourage the development of A.I. and robotics without slowdown. At the same time, we absolutely have to stop, ban, and reject any attempt to develop or deploy A.I. designed to seek and destroy, and we must build a universal set of laws, such as the Empowerment principles, into all A.I. worldwide. With proper development of A.I., we can engineer a true paradise on Earth, which should be a goal for any peace-loving human being!