HomeWinBuzzer News OpenAI's Legal Framework Update: Implications for Military Use

[UPDATE] OpenAI’s Legal Framework Update: Implications for Military Use

OpenAI has updated its terms of service, consolidating formerly explicit restrictions on certain uses of its models into broader guideline principles.

-

[UPDATE 18.01.24 – 12:44 CET] has reached out to us to say that its policy update attempts to clarify how its AI interacts in potential military uses. According to a spokesperson, usage of OpenAI products for weapons development and surveillance remains prohibited:

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”


[15.01.24 – 13:23 CET]

OpenAI has updated its terms of service, consolidating formerly explicit restrictions on certain uses of its models into broader guideline principles. These changes have effectively removed the specific prohibition of “military and warfare” applications, along with other previously disallowed usages such as “the generation of malware” and “astroturfing.” A spokesperson from OpenAI, Niko Felix, told The Intercept that the new universal policies are designed to be more accessible and applicable to the diverse and global user base of OpenAI tools.

Implications of Policy Changes

Under the revised framework, OpenAI's four universal policies encompass a range of potential harms without directly mentioning specific applications. The policies include broad directives such as not using OpenAI's services to harm oneself or others and not repurposing output from the services to cause harm. OpenAI emphasizes that while not all military uses might be considered harmful, the use of its models for violent purposes, such as weapon development or property destruction, remains prohibited. It is noteworthy that OpenAI has not provided a definitive statement on the full scope of military applications under these policies.

“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” Felix said in an email to The Intercept. “A principle like ‘Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

The GPTs Felix mentions is a feature in the OpenAI API that allows developers to leverage some of the company's technology to create their own AI bots. Those GPTs can be found in the newly opened GPT Store, which made its debut last week. 

Reinforcing AI Model Security and State Legislation

The issues surrounding the use of AI and for harmful purposes extend beyond policy updates. Researchers from Anthropic have shown that standard behavioral training techniques offer insufficient defense against AI models that have been maliciously altered or poisoned with backdoors. Their study suggests that more robust methods, potentially borrowed from related fields or developed anew, are needed to neutralize these threats effectively.

As OpenAI's policy revisions prompt discussions on the ethical use of AI, the research community and legislators alike grapple with ensuring the responsible development and application of these influential technologies.

Last Updated on January 18, 2024 12:51 pm CET by Luke Jones

Luke Jones
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.

Recent News