In Rubrik’s IPO filing this week — among the details about employee headcount and cost statements — was a nugget that reveals how the data management company thinks about generative AI and the risks that accompany the new technology: Rubrik has quietly created a governance committee to oversee how AI is applied in its business.
According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams. Together, they will assess the potential legal, security and business risks of using generative AI tools and consider “steps that can be taken to mitigate such risks,” the filing states.
To be clear, Rubrik is not an AI business at its core — its only AI product, a chatbot called Ruby released in November 2023, is built on Microsoft and OpenAI APIs. But like many others, Rubrik (and its current and future investors) envisions a future in which AI plays an increasing role in its business. Here’s why we should expect more moves like this going forward.
Increasing regulatory scrutiny
Some companies are adopting AI best practices to get ahead of the issue, but others will be forced to do so by regulations such as the EU AI Act.
Dubbed “the world’s first comprehensive AI law,” the landmark legislation — expected to become law across the bloc later this year — bans certain uses of AI deemed to pose “unacceptable risk” and defines other applications as “high risk.” The legislation also sets out governance rules aimed at reducing risks that may escalate harms such as bias and discrimination. This risk-assessment approach is likely to be widely adopted by companies looking for a reasonable way to adopt AI.
Privacy and data protection lawyer Eduardo Ustaran, a partner at Hogan Lovells International LLP, expects the EU AI Act and its myriad obligations to amplify the need for AI governance, which in turn will require committees. “In addition to its strategic role of designing and overseeing an AI governance program, from an operational perspective, AI governance committees are a key tool for addressing and minimizing risks,” he said. “This is because collectively, a properly resourced committee should be able to anticipate all areas of risk and work with the business to address them before they materialize. In a sense, an AI governance committee will serve as the foundation for all other governance efforts and provide the necessary assurance to avoid compliance gaps.”
In a recent policy paper on the EU AI Act’s implications for corporate governance, ESG and compliance consultant Katharina Miller concurred, recommending that companies establish AI governance committees as a compliance measure.
Legal audit
Compliance isn’t just about pleasing regulators. The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright notes.
Its scope also extends beyond Europe. “Companies operating outside the EU may be subject to the provisions of the AI Act if they carry out AI-related activities involving EU users or data,” the law firm warned. If it is anything like GDPR, the legislation will have an international impact, especially amid increased EU-U.S. cooperation on AI.
AI tools can get a company into trouble beyond AI legislation. Rubrik declined to comment to TechCrunch, likely because of its IPO quiet period, but the company’s filing says its AI governance committee assesses a wide range of risks.
The selection criteria and analysis include consideration of how the use of generative AI tools could raise issues related to confidential information, personal data and privacy, customer data and contractual obligations, open source software, copyright and other intellectual property rights, transparency, output accuracy and reliability, and security.
To be fair, Rubrik’s desire to cover its legal bases could stem from any number of reasons. It could, for example, be there to show that it is anticipating issues responsibly, which is critical given that Rubrik has previously faced not only a data leak and a hack, but also intellectual property litigation.
A matter of perspective
Companies won’t view AI solely through the lens of risk prevention. There will be opportunities that they and their clients don’t want to miss. That’s one reason generative AI tools are being implemented despite obvious flaws such as “hallucinations” (i.e., a tendency to fabricate information).
It will be a fine balance for companies to strike. On the one hand, touting their use of AI could boost their valuations, regardless of how real that use is or what difference it makes to their bottom line. On the other hand, they will have to remain mindful of the potential risks.
“We are at this key point in the evolution of AI, where the future of AI depends in large part on whether the public will trust AI systems and the companies that use them,” wrote Adomas Siudika, privacy counsel at privacy and security software provider OneTrust, in a blog post on the topic.
Establishing AI governance committees will likely be at least one way companies try to help on the trust front.