In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technologies. While the policy previously prohibited the use of its products for “military and war” purposes, that language has now disappeared, and OpenAI did not deny that it is now open to military uses.
The Intercept first noticed the change, which appears to have gone live on January 10th.
Unannounced changes to policy wording happen fairly often in technology, as the products those policies govern evolve and change, and OpenAI is clearly no different. In fact, the company’s recent announcement that its user-customizable GPTs would go public, alongside a vaguely worded monetization policy, likely required some changes.
But the change to the no-military policy can hardly be a consequence of this particular new product. Nor can it be credibly argued that the exclusion of “military and war” merely makes the policy “clear” or “easier to read,” as an OpenAI statement on the update suggests. It is a substantive, consequential change of policy, not a restatement of the same policy.
You can read the current usage policy here and the old one here. Here are screenshots with the relevant sections highlighted:
Apparently the whole thing has been rewritten, though whether it’s more readable is largely a matter of taste. I happen to think a bulleted list of clearly prohibited practices is more readable than the more general guidelines that have replaced it. But the policy writers at OpenAI clearly think otherwise, and if the new wording also gives them more latitude to interpret a practice that was previously disallowed outright, that’s simply a pleasant side effect. “Do not harm others,” the company said in its statement, “is broad but easily understood and relevant in many contexts.” More flexible, too.
As OpenAI spokesperson Niko Felix explained, though, there is still a blanket ban on the development and use of weapons; you can see that it was originally listed separately from “military and war.” After all, the military does more than make weapons, and weapons are made by others besides the military.
And precisely where these categories do not overlap, I would guess that OpenAI is looking at new business opportunities. Not everything the Defense Department does is strictly war-related. As any academic, engineer, or politician knows, the military establishment is deeply involved in all kinds of basic research, investment, small business capital, and infrastructure support.
OpenAI’s GPT platforms could be very useful for, say, military engineers who want to summarize decades of documentation on a region’s water infrastructure. How to define and navigate a relationship with government and military money is a genuine conundrum for many companies. Google’s “Project Maven” famously took it a step too far, though few seemed all that bothered by the multibillion-dollar JEDI cloud contract. It may be OK for an academic researcher on an Air Force Research Laboratory grant to use GPT-4, but not for a researcher inside AFRL working on the same project. Where do you draw the line? Even a strict “no military” policy has to stop somewhere after a few degrees of removal.
That said, the complete removal of “military and war” from OpenAI’s prohibited uses suggests that the company is, at the very least, open to serving military customers. I asked the company to confirm or deny that this was the case, warning them that the language of the new policy made it clear that anything but a denial would be construed as an affirmation.
As of this writing, they have not responded. I will update this post if I hear back.
Update: OpenAI offered the same statement to The Intercept and did not dispute that it is open to military applications and customers.