48% of employees have entered organizational data into an AI-powered tool their company didn't provide for work.
- More organizations and employees are adopting AI in their work. At the same time, the technology's drawbacks necessitate risk management controls.
- So, where do organizations stand on AI adoption relative to the presence of risk management controls? A recent study by AuditBoard and The Harris Poll set out to find out.
As more organizations and employees realize the benefits of artificial intelligence (AI), early suspicion that the technology would replace human workers is giving way to rapid adoption in the workplace. That said, the technology's drawbacks require certain risk management controls.
AuditBoard recently commissioned The Harris Poll to survey American employees about their use of AI tools in relation to the presence of basic risk management controls. Several interesting trends, along with some potential concerns, emerged.
Here are a few insights from the study.
More Employees Are Adopting AI
The good news is that employees are becoming less suspicious of AI. As more of them recognize its benefits and seek more efficient ways of working, they are incorporating the technology into their daily work. According to the study, 51% of employees use AI-powered tools like Grammarly, ChatGPT, and DALL-E.
So, what are employees using these tools for? Research leads at 26%, followed by creating written materials (23%), content creation (22%), and design work (19%).
Few Companies Have a Formal Policy About Non-Company-Supplied AI Tools
While over half of respondents use such tools, only 37% said their organization has a formal policy on non-company-supplied AI-powered tools. From a risk management perspective, this statistic represents an unmitigated risk: workers can use AI however they wish with sensitive organizational information. The next statistic underscores this concern.
The study found that 48% of employees have entered organizational data into an AI-powered tool their company didn't provide for work. This highlights the data privacy, security, and reliability risks that arise when workers use AI tools that haven't been vetted by their organization's IT security team.
So, what type of organizational data are employees entering into AI-powered tools? Written material that needs editing leads at 24%, followed by reports or material to be summarized (21%), process documentation (18%), business results data (18%), software code (16%), and proprietary information (14%).
Many Employees Believe AI-Powered Tools Are Safe and Secure
According to the study, 64% of respondents believed using AI-powered tools in their work was safe and secure. This belief points to a more significant concern.
A major AI-related risk stems from a human cognitive bias known as the Dunning-Kruger effect: the tendency of people with limited knowledge of a domain to overestimate their competence in it. In the context of AI, this bias can lead workers to overestimate an AI tool's capabilities while lacking a real understanding of the technology. For example, a worker may use an unapproved AI-powered tool to analyze organizational data, receive flawed results, and take those results at face value out of misplaced trust in the tool's capabilities.
Balance AI Adoption With Risk Management Strategies
While it is encouraging that employees are shedding their inhibitions about adopting AI-powered tools, the study underscores the need for robust risk management strategies. A few basic controls include a clear policy on the use of AI-powered tools, guidelines for data handling, and employee education on AI's limitations. AI use in the workplace will only continue to expand, so a comprehensive approach to policy development and risk management is necessary.