Apple has announced that Ali Farhadi, one of its senior executives in charge of artificial intelligence and machine learning, has left the company to become the CEO of the Allen Institute for Artificial Intelligence (AI2), a non-profit organization that aims to ensure AI is aligned with human values and works for the common good of humanity.
Farhadi joined Apple in 2020, after selling his AI startup Xnor.ai to the tech giant for an estimated $200 million. Xnor.ai specialized in developing efficient and embedded deep learning technologies that enable ubiquitous AI. At Apple, Farhadi led several machine learning teams in Seattle and Cupertino, working on projects such as Siri, Core ML, and Neural Engine.
“As we face unprecedented changes in the development and usage of AI, I could not think of a better time to return to AI2 as CEO,” said Farhadi. “Today more than ever, the world needs truly open and transparent AI research that is grounded in science and a place where data, algorithms, and models are open and available to all.”
The Allen Institute for Artificial Intelligence (AI2) is a non-profit research institute founded in 2014 by the late Paul G. Allen, philanthropist and Microsoft co-founder. The institute's mission is to conduct high-impact AI research and engineering in service of the common good.
AI2 is headquartered in Seattle, Washington, and employs over 200 researchers and engineers. The institute's research spans a wide range of topics in AI, including natural language processing, computer vision, machine learning, and robotics. AI2 also maintains a number of open-source projects, including the AllenNLP library for natural language processing and the Aristo project for machine reading and reasoning.
AI2 has made significant contributions to the field of AI, including new methods for understanding and generating text in natural language processing, new techniques for recognizing objects in images in computer vision, and new algorithms for training and deploying machine learning models.
Apple Trailing in the Mainstream AI Industry
Farhadi's departure from Apple comes at a time when the company is facing increasing competition and scrutiny in the field of AI, as rivals like Google and Amazon invest heavily in developing and deploying advanced AI technologies across various domains. Apple has also been criticized for lagging behind in the innovation and quality of its AI products, such as Siri and Face ID.
However, Apple has also made some notable strides in AI in recent years, such as introducing the Neural Engine chip in its devices, launching the Research app to collect health data from users, acquiring several AI startups, and publishing more research papers and patents on AI. Apple has also emphasized its commitment to privacy and security as a differentiator for its AI offerings.
Last month, Apple reportedly banned its employees from using generative AI tools that could leak its confidential data or code to third-party developers. The report cites anonymous sources who said Apple fears these tools could transmit user conversations or code snippets back to their developers for training or improvement purposes, without the user's awareness or permission.
For instance, Apple is worried that GitHub Copilot could capture confidential Apple code and disclose what the company is working on, or replicate its products. Likewise, Apple is concerned that ChatGPT could divulge sensitive information such as product plans, customer data, or trade secrets during a chat session.
Apple is not the only company to take such measures. Samsung, JPMorgan Chase and Verizon have also banned the use of ChatGPT by their employees, while Amazon has reportedly warned its staff about using the AI tool. These companies are also concerned about the security and privacy risks posed by generative AI tools.