By Jessica Barboza, OSCPA marketing and communications intern
With generative artificial intelligence (AI) on the rise, firms need to be aware of key risk factors, such as confidentiality and accuracy, when determining how best to use the technology.
“When using third-party software with generative AI functionality built into it, there is a risk of sharing firm or client confidential data that needs to be protected,” said Sarah Ference, risk control director for the Accountants Professional Liability Insurance Program at CNA, the underwriter for the AICPA Professional Liability Insurance Program. “Because once you have confidential data, you have an obligation to protect it.”
The second risk is accuracy: ensuring the response generative AI produces is something that can be relied upon, she said. Ask how the AI was trained and what information was used to train it. Understand what protocols the AI tool has in place to ensure the responses it generates are reasonable.
The smarter generative AI technologies get, the more likely professionals may be to take a generated response at face value, she said. However, it remains critically important for the CPA to review every response AI delivers. Referring to generative AI as a “tool” helps foster the mindset that users are still responsible for the services they deliver to their clients, Ference said. Generative AI is a tool to help the user deliver those services, not to deliver the services for them.
“Ultimately, the CPA is responsible for the service they’re delivering to the client,” she said. “The CPA may choose to use generative AI, subcontractors or third-party software tools to help deliver the service, but it’s the firm that’s responsible for the work product that is delivered by the firm to the client. The firm may have separate agreements with third parties regarding the quality of the work the third party provides to the CPA, but that’s between the third party and the CPA firm, not the firm and its clients.”
When using any third-party tool, including AI, a firm should perform due diligence to understand the policies in place to protect any confidential client information the firm might share, she said. It’s also important to understand how generative AI works.
Establishing a firm-wide policy on appropriate use is a good first step toward determining how generative AI can best be used, she said.
“I don't know if it's realistic to prohibit the use of AI, but if there are ways that a firm can roll it out responsibly, I think that's the way to go,” she said. “If there are certain tools that the firm absolutely does not want to be used, then turn off the access to those tools. Make it a prohibited website or something that restricts employees from using it.”
“Generative AI poses an opportunity for firms to get innovative and be creative as long as they are maintaining a good, healthy risk management mindset,” she said.