
Satya Nadella Shares His Six Principles for a Good Human-A.I. Partnership

In an article for Slate Magazine, Microsoft CEO Satya Nadella writes about how a partnership between humans and artificial intelligence can help solve society's problems.


In the article, Nadella lays out in detail why he thinks advanced machine learning is moving toward an era in which humans and machines will work together in a more profound way.

According to Nadella, the moment for “greater coordination and collaboration on A.I.” has definitely arrived. He points to Saqib Shaikh, the blind Microsoft engineer featured at Build 2016, as an exemplary case.

As an engineer, Shaikh was key in developing Seeing AI, which uses computer vision and natural language processing to describe a person's surroundings, read text, answer questions, and even identify emotions on people's faces. This advanced form of A.I. can run on mini-computers and be worn like a pair of sunglasses, offering great help to blind people.

Nadella uses this example to argue that artificial intelligence should no longer be limited to games in which a human player competes with the computer. Neither, in his opinion, should people perceive computers as villains, the way they are portrayed in films like 2001: A Space Odyssey, or as mere personal assistants like Alexa, Siri, and Cortana.

That's one reason why he contends that “the debate should be about the values instilled in the people and institutions creating this technology.”

Nadella shares that Microsoft's approach to A.I. first involves wanting to “build intelligence that augments human abilities and experiences” where “it's not going to be about humans vs. machine.” As he points out, humans have insight, physicality, emotion, empathy, and creativity, which can be combined with “powerful A.I. computation—the ability to reason over large amounts of data and do pattern recognition more quickly—to help move society forward.”

Satya Nadella's six principles for a healthy human-A.I. “partnership”

Based on these assumptions, Nadella then lays out the following six “principles and goals” that he believes A.I. research must follow so that humans stay safe and benefit the most from it.

  • “A.I. must be designed to assist humanity: As we build more autonomous machines, we need to respect human autonomy. Collaborative robots, or co-bots, should do dangerous work like mining, thus creating a safety net and safeguards for human workers.
  • A.I. must be transparent: We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines. People should have an understanding of how the technology sees and analyzes the world. Ethics and design go hand in hand.
  • A.I. must maximize efficiencies without destroying the dignity of people: It should preserve cultural commitments, empowering diversity. We need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future.
  • A.I. must be designed for intelligent privacy—sophisticated protections that secure personal and group information in ways that earn trust.
  • A.I. must have algorithmic accountability so that humans can undo unintended harm. We must design these technologies for the expected and the unexpected.
  • A.I. must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate.”

In a nutshell, Microsoft's principles and goals hold that A.I. must be designed to assist humanity, must be transparent, and must maximize efficiencies without destroying people's dignity. In addition, A.I. must be “designed for intelligent privacy,” must have “algorithmic accountability,” and must guard against bias by ensuring proper, representative research.

Many software companies are now gearing up for machine learning and artificial intelligence, and Google researchers are even working on a “rescue button” against dangerous A.I. If ethical standards can be established for its development and use, it may indeed be possible to use artificial intelligence to alleviate many of society's problems.
