
AI is evolving at a rapid pace, and for some, that's cause for concern. Elon Musk, Stephen Hawking, and other public figures have been vocal about the dangers of the technology. Today, two more joined them – LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar.

Hoffman and the Omidyar Network have teamed up with the Knight Foundation to safeguard the development of AI. Each donated $10 million to the fund, while the Knight Foundation committed $5 million. Another $1 million came from Raptor Group founder Jim Pallotta and the William and Flora Hewlett Foundation.

“It is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers,” said a Knight Foundation spokesperson.

Harvard University Base

The initiative will be named the Ethics and Governance of Artificial Intelligence Fund. It will be based out of the MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard. Its main function is to support AI ethics projects and kickstart a public discussion.

“There’s an urgency to ensure that AI benefits society and minimizes harm,” said Hoffman. “AI decision-making can influence many aspects of our world – education, transportation, health care, criminal justice, and the economy – yet the data and code behind those decisions can be largely invisible.”

His philosophy is similar to that of Microsoft and other tech giants. The Redmond giant has previously announced plans to democratize AI, ensuring it is open to everyone. This new organization could augment the Partnership on AI formed by Google, Facebook, Amazon, and Microsoft. However, the EGAIF plans to go further by addressing the following issues:

  • “Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?”

The fund's first collaboration will be the upcoming AI Now event in New York. The conference will discuss the near-term social and economic implications of AI.

“A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it,” said Jonathan Zittrain, co-founder of the Berkman Klein Center.