In recent months, AI tools such as ChatGPT and Microsoft's Bing Chat have quickly transformed productivity. However, there are many concerns about AI and its development, with many uneasy about a lack of regulations around the technology. Now, the Biden administration in the United States is exploring ways to rein in AI and create rules that will govern its development and use.
On Tuesday, the US Commerce Department released a formal request for comment on “accountability measures”. As reported by The Wall Street Journal, the public request seeks to know if new AI models should have a certification requirement before they are released.
If you think it is madness that there are currently no checks and balances required to release an AI model, you would be correct. Apple cannot release a new mobile phone without checking hundreds of regulatory boxes, but the companies that control AI are essentially free to launch models that could be dangerous.
Microsoft, through its multi-billion-dollar investment in OpenAI, has driven the AI boom that has defined tech during 2023. Using OpenAI's GPT-4 engine has allowed Microsoft to mainstream AI across its ecosystem, including Bing Chat, Bing Image Creator, Microsoft 365 Copilot, Azure OpenAI Service, and GitHub Copilot X.
Microsoft has commercialized AI across search, productivity, commerce, cloud, and coding. While many foundations were laid in recent years, the company's surge has happened in just a few months. So aggressive has Microsoft's push into AI been that lawmakers were simply not ready.
Lawmakers Are Ready to Respond to AI Concerns
Caught off guard, lawmakers are now waking up to the potential risks of AI. And make no mistake, those risks are diverse, ranging from AI taking people's jobs, spreading misinformation, raising privacy and data concerns, and causing psychological and societal harm, to more profound, world-changing risks such as becoming a potent cybersecurity threat or becoming self-aware and deciding humans are pointless.
“It is amazing to see what these tools can do even in their relative infancy,” says Alan Davidson, head of the National Telecommunications and Information Administration, the Commerce Department agency that sent the request. “We know that we need to put some guardrails in place to make sure that they are being used responsibly.”
President Biden briefly touched on AI during an advisory council of scientists in Washington last week. He was asked if AI is dangerous and the President said: “It remains to be seen. It could be.”
Are OpenAI and ChatGPT a Risk?
We have seen in recent weeks that countries are beginning to express concerns about ChatGPT, and in some cases are banning the popular chatbot. The Italian Data Protection Authority has ordered OpenAI to stop providing the chatbot in the country.
In its decision, the data regulator argues ChatGPT does not comply with the European Union's General Data Protection Regulation (GDPR), which requires data controllers to inform users about how their data is processed and to obtain their consent. Germany is also reportedly considering banning ChatGPT, and it is likely other countries will follow suit.
ChatGPT is a chatbot that can generate human-like text responses to a given prompt. It can answer questions, converse on a variety of topics, and generate creative writing pieces. It is based on a deep learning architecture that enables it to learn patterns in language and generate text that is coherent and human-like.
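For readers curious what "a given prompt" looks like in practice, the sketch below builds the kind of structured request a chat model such as ChatGPT receives. It assumes OpenAI's publicly documented Chat Completions message format (a list of role-tagged messages); the model name and system message are illustrative, and no network call is made here.

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-4") -> str:
    """Build the JSON body for a single-turn chat completion request.

    Follows the role/content message structure from OpenAI's public
    Chat Completions API; the model name is an illustrative placeholder.
    """
    payload = {
        "model": model,
        "messages": [
            # A system message sets overall behavior...
            {"role": "system", "content": "You are a helpful assistant."},
            # ...and the user message carries the actual prompt.
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

body = build_chat_request("Summarize the GDPR in one sentence.")
print(body)
```

In a real integration this JSON body would be POSTed to the provider's chat endpoint with an API key; the model's reply comes back as another role-tagged message.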
Calls to Pause Development
Last month, Elon Musk led a group of tech experts – including Apple co-founder Steve Wozniak – in sending an open letter calling for a pause in AI development. Musk's pushback against generative AI is telling, considering he is a co-founder of OpenAI.
The open letter comes from the Future of Life Institute, an organization that wants to place more controls on AI development over concerns about the emergence of artificial general intelligence (AGI). It wants tech companies to agree to pause the development of AI more powerful than OpenAI's GPT-4 for at least six months.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads.
As for OpenAI, the company says that it welcomes regulation and has been open about the potential risks of AI, both in the present and in the future. However, the organization insists that it takes a cautious approach to its models and imposes strict testing requirements that must be met before a model is made public.
“We believe that powerful AI systems should be subject to rigorous safety evaluations,” OpenAI said in a recent blog post. “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”