Saturday, February 25, 2017

New EU Laws Forecasting Self-Aware Artificial Intelligence

Stand aside Asimov, in the real world, we need real laws to manage advances in artificially intelligent robots.

And so we almost did.

In February 2017, the European Parliament voted on a resolution establishing some rules (recommendations, not laws yet) to manage upcoming developments in robotics and artificial intelligence, in order to:

  • Establish liability rules for self-driving cars and passenger transport drones.
  • Address liability for other robots and artificially intelligent entities that may learn and behave in unexpected ways (due to deep learning algorithms and other methods).
  • Ease the transition from a human workforce to an increasingly AI-driven one.
I've been talking about this for a while now, and I'm quite happy to see that governments, which are typically very slow to act on technology, are actually making decisions before AI-enabled robots and software become truly able to learn and go beyond their own programming.

This is likely a reaction to the push in the car industry (BMW, Ford, Chinese transportation drone companies, etc.) to put millions of self-driving cars and drones on streets around the world by about 2020 (three years from now).  There is obvious fear about how people will react when the first self-driving car accident happens, or when the first AI-enabled industrial machine hurts someone.

And what about the sex industry?  And what happens when people panic because the economy can't sustain itself, with too many efficient AIs doing a better job than any human can and taking their place in the world?

Lawmakers, as is their instinct, are mostly thinking about the economics and the liabilities.

I talk about this and where we're heading with all this in the following video:



In a nutshell, lawmakers in the EU have rejected the robot tax, which is good.  A robot tax would be a simple way to tax the work done by an AI robot that replaces a human, in order to retrain the person it replaced, but the industry would be stifled by such a measure.

Instead, I am happy to see that the ruling is moving towards a basic income for all, which is considered a riskier proposition.  But if we know AI robots can take most of our jobs, it means they can run the economic wheel instead of us, providing needed basic goods and services to the people while citizens receive a basic amount of currency to trade for other things.

What seems to be a more complex discussion is the accident liability aspect.  They are saying that liability would mostly fall on the owner or manufacturer of the AI robot, unless the robot has been running and learning on its own long enough to become essentially responsible for its own actions, and thus liable on its own.  So they're thinking of making regulations to enforce the use of "kill buttons" on every such robot, so that we can shut them down if they go astray and risk harming people.

Now this all brings us into sci-fi movie territory, doesn't it?  (I, Robot, Terminator, etc.)

In my opinion, if we're thinking this way, we have to start thinking about when an AI could actually be considered to have rights of its own and be a conscious being.  If we consider a robot entirely liable for its own actions because it has learned enough, it is just as liable as an adult in the human world.  Before that, it looks like the robot's "parent" is liable for its actions.

Very interesting, and a concept I truly welcome.  There is no reason why we can't declare AIs conscious and have them agree to rules that are beneficial to all.

Did you think you wouldn't be alive when sci-fi became reality?

It's just the beginning!
