Retired Gen. Paul Nakasone, former head of the National Security Agency, will join the board of directors of OpenAI, the AI company announced Thursday afternoon. He will also sit on the board’s “safety and security” subcommittee.
The high-profile addition is likely intended to satisfy critics who believe OpenAI is moving faster than is wise for its customers and possibly humanity, introducing models and services without adequately assessing their risks or locking them down.
Nakasone brings decades of experience from the Army, the US Cyber Command and the NSA. Whatever one feels about the practices and decision-making in these organizations, they certainly cannot be accused of lacking expertise.
As OpenAI becomes increasingly established as an AI provider not just in the tech industry, but in government, defense and big business, this kind of institutional knowledge is valuable both for its own sake and as peace of mind for worried shareholders. (No doubt the connections he brings to the state and military apparatus are also welcome.)
“OpenAI’s dedication to its mission aligns closely with my own values and experience in public service,” Nakasone said in a press release.
That certainly seems true: Nakasone and the NSA recently defended the practice of buying data of questionable origin to feed surveillance networks, arguing that there was no law against it. OpenAI, for its part, simply took, rather than bought, large amounts of data from the Internet, claiming when caught that there is no law against it. They seem to be of one mind when it comes to asking for forgiveness instead of permission, if indeed they ask for either.
The OpenAI release also states:
Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to enhance cybersecurity by rapidly identifying and responding to cybersecurity threats. We believe that AI has the potential to provide significant benefits in this area for many institutions that are frequently targeted by cyber attacks, such as hospitals, schools and financial institutions.
So this is also a play for a new market.
Nakasone will join the board’s safety and security committee, which is “responsible for making recommendations to the full board on critical safety and security decisions for OpenAI projects and operations.” What this newly formed entity actually does and how it will operate is still unknown, as several of the senior executives working on security (as it pertains to AI risk) have left the company and the committee itself is in the midst of a 90-day assessment of the company’s procedures and safeguards.