Many have described 2023 as the year of artificial intelligence, and the term made several word-of-the-year lists. But while AI has positively impacted workplace productivity and efficiency, it has also presented a number of emerging risks for businesses.
For example, a recent Harris Poll commissioned by AuditBoard revealed that roughly half of working Americans (51%) currently use AI-powered tools for work, no doubt driven by ChatGPT and other generative AI solutions. At the same time, however, nearly half (48%) said they feed corporate data into AI tools not provided by their business to help them in their work.
This rapid adoption of generative AI tools at work presents ethical, legal, privacy, and practical challenges, creating the need for businesses to implement new, robust policies governing their use. As it stands, most haven't yet: a recent Gartner survey revealed that more than half of organizations lack an internal generative AI policy, and the Harris Poll found that only 37% of employed Americans have a formal policy on the use of AI tools not provided by their company.
While it may sound like a daunting task, developing a set of policies and standards now can save organizations from serious headaches down the road.
AI Use and Governance: Risks and Challenges
The rapid adoption of generative AI has made it difficult for businesses to keep pace with AI risk management and governance, and there is a clear disconnect between adoption and official policy. The Harris Poll mentioned earlier found that 64% of respondents perceive the use of AI tools as safe, indicating that many workers and organizations may be overlooking the risks.
These risks and challenges can vary, but three of the most common include:
- Overconfidence. The Dunning–Kruger effect is a bias that occurs when people overestimate their own knowledge or abilities, and we have seen it manifest in the use of AI. Many overestimate the capabilities of AI tools without understanding their limitations. This can produce relatively harmless problems, such as incomplete or inaccurate output, but it can also lead to much more serious situations, such as output that violates legal usage restrictions or creates copyright risk.
- Security and privacy. AI needs access to large amounts of data to be fully effective, but that data sometimes includes personal information or other sensitive material. There are inherent risks in using unvetted AI tools, so organizations need to ensure they are using tools that meet their data security standards.