Microsoft's decision to use GPT-4, a powerful natural language generation model, in its Bing chat mode has sparked controversy and criticism from some experts and users. According to a report by the Wall Street Journal, Microsoft was warned by OpenAI, the research organization that created GPT-4, that the model was not ready for public deployment and that it could pose ethical and social risks.
GPT-4 is the latest and most advanced version of the Generative Pre-trained Transformer (GPT) family of models, which use deep learning to produce coherent and diverse texts across a wide range of topics and domains. GPT-4 was trained on a massive corpus of web data, including books, news articles, and social media posts. It can generate content such as poems, stories, code, essays, songs, and celebrity parodies.
However, GPT-4 also has significant limitations. First, it has no inherent understanding of the world or of the facts it writes about, so it can produce inaccurate, misleading, or even harmful information if not properly guided or verified. Second, it can reflect and amplify biases and prejudices present in its training data, such as sexism, racism, or hate speech.
OpenAI has been aware of these issues and has taken steps to encourage responsible and ethical use of GPT-4. It restricted access to the full model, releasing only a smaller version, called GPT-4 Playground, to selected researchers and developers. It also implemented safeguards such as content filters, human oversight, and feedback mechanisms to monitor and control GPT-4's outputs.
Microsoft Chooses to Ignore the Warning
However, Microsoft, one of OpenAI's major investors and partners, has reportedly bypassed these restrictions and obtained access to the full model. It has integrated GPT-4 into its Bing chat mode, which lets users interact with the search engine conversationally. Microsoft claims the feature enhances the user experience and provides more engaging and informative responses.
But some experts and users have raised concerns about the risks of deploying GPT-4 in Bing chat without proper testing and evaluation. They argue that GPT-4 could generate false or misleading information, manipulate or deceive users, or offend or harm certain groups or individuals. They also question the transparency and accountability of Microsoft's decision, and whether it complies with OpenAI's ethical principles.
Microsoft has not officially commented on the report or the criticism. It has only stated that it is committed to ensuring the safety and quality of its products and services. It has also said that it is working closely with OpenAI to address any issues or challenges that may arise from using GPT-4 in Bing chat.
Growing Concerns Over the Risk of AI
A group of artificial intelligence (AI) experts has warned that AI could pose a threat to humanity if it is not developed and used responsibly. The group, which includes researchers from Google, Microsoft, and OpenAI, published an open letter on May 30th, 2023, outlining their concerns.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads the open letter. According to The New York Times, the statement was released by the Center for AI Safety, a new nonprofit organization. More than 350 AI researchers, engineers, and company executives have co-signed the letter.