The Laws of A.I.

June 20, 2018

Google’s principles for artificial intelligence

The concept of artificial intelligence is not a new one. For as long as technology has been around, there have been sceptics, opponents, and outright doomsday theorists predicting what the future will look like for the human race and its reliance on machines.

One of the most famous warnings about the need to safeguard against technological backlash came from author Isaac Asimov back in the 1940s. Asimov coined the ‘Three Laws of Robotics’ in the short stories later collected in his famous I, Robot.

These laws read as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Back then it made for fantastic science fiction about the future of the world. Then, in 2004, it made for an action movie that saw Will Smith defending the world from a horde of killer robots. Of course we were all saved … and in style.

But it seems science fiction is becoming more science fact by the day—although thankfully we’re not quite at the killer-robots part yet.

Earlier this month, Google revealed its own version of the Three Laws concept, only these are not for a story: they are very much for the real world of A.I. Google’s principles read as follows:

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles

Google says that these principles are “dynamic and evolving,” and that it will approach them “with humility, a commitment to internal and external engagement, and a willingness to adapt [its] approach as [it] learns over time.”

This choice of language, and specifically the use of the word ‘principles’ rather than ‘laws’, is undoubtedly deliberate. Google is treading the fine line between people’s wariness about A.I. and their desire for life to be made easier and more automated.

But the similarity to a concept from a near-eighty-year-old work of science fiction is eerie. These principles are about improving the quality of life for humankind, but without compromising our security or safety.

Artificial intelligence is advancing at a rate of knots and—whether we want to or not—there is little we can do to stop it or even slow it down. The best thing we can do is to be prepared, and it would seem that this is what Google is doing by creating these principles.


And Google is by no means the only organisation beginning to think seriously about how A.I. will function as its capabilities advance. In our own backyard, the Australian National University has launched the 3A Institute, representing Autonomy, Agency and Assurance.

Established in conjunction with CSIRO, and with Professor Genevieve Bell as its Director, the institute is tasked “to build a new applied science around the management of artificial intelligence, data and technology and of their impact on humanity.”

While all of this preparation makes us sound like a bunch of paranoid androids (thanks, Hitchhiker’s Guide to the Galaxy), the fear is not just that we’re all going to be destroyed in a Jurassic Park-style extermination. There are, in fact, all kinds of concerns that come with the rise of A.I.

Of particular concern in Google’s philosophy are equality and ethics. The chief purpose of A.I. is to benefit society as a whole without jeopardising people’s right to privacy, and certainly without perpetuating existing biases or creating new ones.

Google has a strong history of doing things for the greater good, and its latest initiative appears to bear the same hallmarks as its previous work. Its principles are, like Asimov’s laws, designed to benefit everyone, albeit with a healthy dose of caution built into the mainframe.

Either that or the robots are already too smart and Google is trying to stop the takeover before it’s too late.

If that happens, it’s reassuring to know that we can call on Will Smith to save us all.

Does your brand’s mainframe need some tweaking?

Contact us on 1300 932 435 or helen@wearedando.com.
