But since life on Earth is way more complicated than what any science fiction writer can imagine, the chances of a distributed military computer guiding massive armies of Terminator robots to destroy humanity for its own good are extremely slim.
What we have to ask ourselves is this: what are we going to do with the amazing science and technology we are developing?
Are we going to use it to enhance our lives, and use it to promote health, contentment and happiness?
Or are we going to use these technologies to help satisfy our thirst for power and dominance over others?
I would argue that human beings are fundamentally good and, as individuals, want only good for themselves and others.
On the other hand, when powerful entities and the media manipulate us into believing there is constant danger around the corner, we can, as societies, be cruel, belligerent and callous.
What path will you choose for yourself?
I'll take the time to explore this further in the video below.
In truth, we are on the verge of something wonderful... or terrible. But it's not all black and white.
Right now, we're able to develop artificial intelligence capable of powerful things such as:
- Lip-reading: Oxford's lip-reading AI outperforms professional human lip-readers.
- Creating compelling video ads: a Japanese ad agency pitted an AI creative director against a seasoned human creative director, and audiences preferred the AI's ad.
- Natural language: Google and IBM both provide tools that let machines understand human speech (speech-to-text, for example) from context, not just individual words.
- And a lot more.
These are tasks generally associated with human understanding and creativity. This is art and language, not just ones and zeros anymore.
While we are making machines that can understand humans in a rudimentary way and analyse data somewhat effectively, the same techniques can also be used to analyse an "enemy" nation's weaknesses, soft spots, or even cracks in its digital security, far more efficiently than any human could.
Properly coded drone AIs can also analyse targets and threats, then be let loose to find and bomb targets on their own, with no human to guide them and likely no trace of who they belong to either.
Not only have we been using AI in our daily lives for a few years now, deploying more and more of it, we are also deploying actual robots to do jobs normally associated with human professionals.
Most prominent in robotics news lately are the "Robocop"-esque robotic officers deployed in Dubai, Singapore and China to monitor threats and assist civilians. This is helpful.
On the other hand, a Russian company has trained a robot to fire small arms, likely with the aim of eventually selling concepts, prototypes or fully functional robotic soldiers to whoever is willing to pay for them.
All in all, we, the people of today, will be the ones responsible for the robotic and AI accidents that are likely to occur if some of us agree to the purchase and use of such technology for military or aggressive purposes.
Or we could keep up the pressure so that our governments and non-governmental organizations develop and acquire only AI and robots that are fully defensive in nature or designed to improve our quality of life.
We are the People. We have the power of our votes and our voices.
Want to prevent harm? Make sure the learning AIs we deploy are peaceful and see us as friendly people they can live with in harmony.
The future of AI is already in development, and it is fascinating, with social repercussions that could let everyone on the planet live in total peace, in a paradise.
We just need to grow out of our medieval traditions to get there.