OpenAI, the AI research company behind the popular ChatGPT chatbot and the powerful GPT-4 language model, is reportedly working on an app store for AI software. According to Reuters, the store would let developers sell AI models built on top of OpenAI's technology.
The app store is still in the early stages of development, but it could change how AI is bought and sold. There is currently no centralized marketplace for AI software, which makes it difficult for developers to find customers for their models. An OpenAI app store would address this by providing a one-stop shop for AI developers and users.
OpenAI CEO Sam Altman revealed the company's marketplace plans at a developer meeting in London. Aquant and Khan Academy, two OpenAI customers, also signaled interest in selling their ChatGPT-powered AI models on the store.
The app store could also help ensure that AI models are used responsibly. OpenAI has publicly committed to responsible AI, and a marketplace it controls would give it a way to enforce that commitment. For example, the store could adopt policies prohibiting the sale of AI models intended for harmful purposes, such as generating hate speech or disinformation.
OpenAI is Trying to Create a Path for Generative AI Development
The OpenAI app store is still a long way off, but it has the potential to make a major impact on the way AI is used. By providing a centralized marketplace for AI software and enforcing responsible AI practices, the app store could help to make AI more accessible and beneficial to society.
Earlier this week, I reported that OpenAI has been lobbying European officials to water down the European Union's proposed AI Act. The Act, which is currently being negotiated by the European Parliament and the Council of the European Union, would impose stringent regulations on high-risk AI systems, such as those used for facial recognition or social scoring.
OpenAI has argued that its general-purpose AI systems, such as GPT-4, should not be considered “high risk” and should therefore be exempt from the Act's regulations. The company has also argued that the Act's requirements for transparency, traceability, and human oversight are too burdensome and would stifle innovation.
In other recent OpenAI news, the Wall Street Journal reports that OpenAI, the research organization that created GPT-4, warned Microsoft that the model was not ready for public deployment and could pose ethical and social risks.
Although GPT-4 is a highly capable language model, it is not without flaws. One significant limitation is its weak grasp of real-world context, which can lead it to generate inaccurate, misleading, or harmful information. It can also reflect and amplify biases present in its training data.
OpenAI has limited access to GPT-4 and added safeguards to encourage its responsible and ethical use. However, Microsoft, a major OpenAI partner, obtained access to the full model and integrated it into Bing Chat, which enables conversational search. Microsoft claims the feature improves the user experience and delivers engaging, informative responses — and it shipped the integration despite OpenAI's caution.