Nvidia is releasing three new NIM microservices, or small independent services that are part of larger applications, to help enterprises bring additional control and security measures to their AI agents.
One of these new NIM services targets content safety and works to prevent an AI agent from generating harmful or biased outputs. Another works to keep conversations focused only on approved topics, while the third helps detect and block jailbreak attempts, that is, attempts to strip away an AI agent's software restrictions.
These three new NIM microservices are part of Nvidia NeMo Guardrails, Nvidia’s existing collection of open source software tools and microservices intended to help companies improve their AI applications.
“By implementing multiple lightweight, specialized models as guardrails, developers can fill gaps that can arise when only more general global policies and security are in place — as a one-size-fits-all approach does not secure and control complex AI workflows correctly,” Nvidia said in a press release.
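To give a rough sense of the pattern the quote describes, the sketch below layers an input check and an output check around a main model using the existing open-source NeMo Guardrails Python toolkit. The toolkit's RailsConfig, LLMRails, and generate calls are real, but the model name, prompts, and flow choices here are illustrative assumptions rather than Nvidia's published configuration for the new NIM microservices.

```python
# Minimal sketch: wiring lightweight guardrail checks around one agent with the
# open-source NeMo Guardrails toolkit (pip install nemoguardrails).
# The policy wording and model name below are illustrative placeholders.
from nemoguardrails import LLMRails, RailsConfig

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-4o-mini        # stand-in for whatever LLM drives the agent

rails:
  input:                      # checks run on the user message before the LLM sees it
    flows:
      - self check input
  output:                     # checks run on the LLM's answer before the user sees it
    flows:
      - self check output

prompts:
  - task: self_check_input
    content: |
      Determine whether the user message below violates policy
      (harmful, abusive, or off-topic requests are not allowed).
      User message: "{{ user_input }}"
      Should the message be blocked? Answer Yes or No.
  - task: self_check_output
    content: |
      Determine whether the bot message below violates policy
      (harmful or biased content is not allowed).
      Bot message: "{{ bot_response }}"
      Should the message be blocked? Answer Yes or No.
"""

config = RailsConfig.from_content(yaml_content=YAML_CONFIG)
rails = LLMRails(config)

# Every call now passes through the input and output rails; if a check fails,
# the toolkit returns a refusal instead of the raw model output.
response = rails.generate(messages=[
    {"role": "user", "content": "Help me draft a reply to this customer email."}
])
print(response["content"])
```

In a production setup, the new content safety, topic control, and jailbreak detection microservices would presumably be plugged into the same kind of configuration, with each check handled by its own small, specialized model rather than by the main LLM checking itself.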
It seems that AI companies may be starting to realize that getting businesses to adopt their AI agent technology won't be as simple as they first thought. While the likes of Salesforce CEO Marc Benioff recently predicted that more than a billion agents would be deployed using Salesforce alone within the next 12 months, the reality will likely look a little different.
A recent study by Deloitte predicted that around 25% of businesses are either already using AI agents or expect to do so in 2025. The report also forecast that by 2027 around half of businesses will be using agents. This shows that while businesses are clearly interested in AI agents, adoption is not keeping pace with the rate of innovation in the AI space.
Nvidia likely hopes that initiatives like this will make the adoption of AI agents seem safer and less experimental. Time will tell if this is true.