Generative AI has recently captured the world’s imagination. From human-sounding chatbots to fantastical image creators to sophisticated DNA-to-protein modelers, generative AI systems grow more capable with each model revision. The potential business uses for this technology are wide-ranging: content creation and ideation, on-demand customer service, research and development, virtual executive assistants, insurance claims estimates, and even legal assistance. These AI systems have the power to accelerate digital transformation and deliver better business outcomes in an evolving digital economy. But what about the myriad risks from bias, bad data, novel privacy breaches, intellectual property violations, disinformation, and more?
Responsibility and risk
The appetite to tap into the power of generative AI will likely continue to grow, with an estimated $15.7 trillion of potential contribution to the global economy by 2030, according to PwC’s Global Artificial Intelligence Study. As AI use cases expand, especially into more regulated spaces, AI governance frameworks need to be able to mitigate risks and build trust. According to the 2023 PwC Trust Survey, “Nearly all business leaders say their company is prioritizing at least one initiative related to AI systems in the near term.” But over the next 12 months, only 35% of executives say their company will focus on improving the governance of AI systems. This is a disconnect we need to solve, and it’s particularly important in highly regulated sectors like financial services that require data privacy and security, along with strict compliance with constantly changing rules and regulations.
“There’s a unique opportunity for the financial services sector to blaze the trail for using this technology with the proper guardrails for responsible use in place,” says Vikas Agarwal, Principal, Risk and Regulatory Financial Services Leader at PwC US—a firm that is leading the way in supporting the responsible and ethical use of AI technology and data. Over the next three years, PwC US is betting big on the possibilities of this exciting new technology and making a $1B investment to expand and scale its AI capabilities. Through an industry-leading relationship with Microsoft and offerings using OpenAI’s GPT-4/ChatGPT and Microsoft’s Azure OpenAI Service, they aim to help companies harness the power of AI securely.
“Financial services can set the standard for other, less regulated, spaces to adopt generative AI with the appropriate cautions,” Agarwal continues. For instance, in the financial services risk and compliance sector, generative AI might be able to support anti-money laundering (AML) investigations by providing insights based on analysis of transaction patterns, customer segmentation, data enrichment using third-party data sources, and media search results. These insights, along with entity resolution and unstructured data assessment, give compliance teams a starting place from which they can review, validate, and build on alert narratives more quickly.
Generative AI can also help businesses stay on top of the latest regulatory changes with tasks like helping to automate data collection. This kind of technology is meant to accelerate human judgment and make it more efficient, not to replace it with code. “Human skill and judgment will still be needed for critical thinking, ethical decision making, monitoring, evaluation—as well as the prompt engineering and model fine-tuning needed for interacting with generative AI,” says Agarwal. “Deep industry experience and human decision-making will be more important than ever.”
Given the power of such tools, bad actors are also likely to use generative AI to attempt to attack a company from the outside. They can manipulate AI systems to make incorrect predictions, write customized spam emails, deny service to customers, or create fake accounts that are then used in phishing attempts for personal information. Generative AI could also be used to ramp up innovative money laundering schemes that a traditional algorithm can’t detect.
Business leaders need to mitigate these risks, and others. PwC’s Responsible AI is a suite of customizable frameworks, tools, and processes that enables oversight and ongoing assessment, helping you confirm the safety, security, and robustness of your AI, reduce bias, stay compliant with relevant regulations, and more.
Another powerful tool to guide the responsible use of AI is “model governance”—a risk management process that can monitor the AI models built for specific purposes.
Why organizations should adopt model governance for AI
Generative AI is built atop vast datasets of words, imagery, financial records and other kinds of unstructured data. This data is then used to train and refine a specific AI model, but there’s not always transparency and governance around what data was used. This is especially true with models purchased from and trained by outside vendors. And even internal training processes should be carefully monitored to produce usable results from customer data without creating privacy and security risks.
Bias and discrimination may also pose significant legal and reputational risks for institutions. Robust AI model metrics should be adopted to enable “concept drift” monitoring, toxicity detection, bias detection, “deep fake” detection, and adversarial attack detection—to name only a few. The ongoing monitoring process will also need to include action plans on how to swiftly remove misinformation or malicious content fed into the model’s learning—before the model can generate similar content for new users.
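To make one of these monitoring metrics concrete, here is a minimal sketch of “concept drift” detection using the population stability index (PSI), a common model-risk measure that compares a model’s score distribution at validation time against its distribution in production. The function name, threshold, and data below are illustrative assumptions for this example, not part of any PwC or Model Edge interface.

```python
# Minimal sketch: population stability index (PSI) as one possible drift metric.
# Names, thresholds, and data here are illustrative assumptions, not a vendor API.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a model output (or feature) between a
    reference window and a live window. Larger values indicate more drift."""
    # Bin edges come from the reference window so both samples are bucketed identically.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log(0).
    expected_pct = np.clip(expected_counts / max(len(expected), 1), 1e-6, None)
    actual_pct = np.clip(actual_counts / max(len(actual), 1), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference_scores = rng.beta(2, 5, size=10_000)  # scores at validation time
    live_scores = rng.beta(2, 3, size=10_000)       # scores in production, shifted
    psi = population_stability_index(reference_scores, live_scores)
    # A common (but assumption-laden) rule of thumb: PSI above 0.25 warrants investigation.
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.25 else "-> stable")
```

In practice, a metric like this would run on a schedule against each monitored model, feeding the action plans described above when it crosses an agreed threshold.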
To avoid problems, constant tuning and testing are imperative. Financial institutions that plan to use generative AI should be prepared to run multiple test iterations in which human analysts can assess whether a particular model is creating accurate results and unbiased content. Analysts should consider how the model handles various use case requests and how it reacts to certain subsets of the population to avoid discriminatory bias. And model designers should add guardrails that direct a user toward human intervention when the model’s response to a request falls below a certain level of confidence. Taking these precautions can also help provide the feedback clients will need on model output results.
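A confidence guardrail of the kind just described might look roughly like the following sketch. The response structure, threshold value, and routing fields are hypothetical assumptions for illustration; a real deployment would calibrate the threshold during validation and log every escalation for review.

```python
# Minimal sketch of a confidence guardrail: low-confidence drafts go to a human.
# The ModelResponse shape and threshold are illustrative assumptions, not a vendor API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # illustrative value; set per use case during validation

@dataclass
class ModelResponse:
    text: str
    confidence: float  # 0.0-1.0, however the model or a downstream scorer defines it

def answer_or_escalate(response: ModelResponse) -> dict:
    """Return the model's draft when confidence is high enough;
    otherwise flag the request for human review."""
    if response.confidence >= CONFIDENCE_THRESHOLD:
        return {"route": "model", "text": response.text, "confidence": response.confidence}
    return {
        "route": "human_review",
        "text": None,
        "confidence": response.confidence,
        "note": "Confidence below threshold; routed to an analyst.",
    }

# Usage: a low-confidence draft gets escalated rather than shown to the user.
print(answer_or_escalate(ModelResponse(text="Draft claim estimate...", confidence=0.42)))
```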
Another potential risk—for organizations that rely on public data or that use cloud services—arises when organizations store generative AI data or models on servers belonging to other companies. In such cases, organizations may not always be able to retrieve or delete sensitive data on request or within the required timeframes, potentially making them non-compliant with data privacy laws like GDPR. And even in internal instances, it may become difficult to delete data or even certify that someone’s personal data isn’t being used within a model. Businesses need to know they can trust the generative AI tools they use and share data with. (Companies that build models only from internal data or that deploy tools only across internal networks are in a safer position here.)
Model governance can help address these problems. This overall process has become increasingly important in financial services and other industries as the use of models and broader analytics continues to expand rapidly. It helps organizations control access, implement policies, and provide oversight of their model systems. As models based on AI see broader use, the checks and balances governance provides are imperative. Model governance puts guardrails around generative AI use by monitoring the various models used by AI systems. It provides a means for auditing and testing to avoid inaccuracy and bias, and to enforce the standardization and transparency that regulators (and sound model management practitioners) look for. To do this, companies need a platform that enables complete end-to-end management, monitoring, facilitation, and governance of their generative AI models.
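One small but concrete piece of such a platform is a model inventory: a governed record of what each model is for, who owns it, what data trained it, and when it was last validated. The sketch below uses illustrative field names for such a record; it is an assumption for this example, not Model Edge’s actual schema.

```python
# Illustrative sketch of a model registry entry for governance purposes.
# Field names and values are assumptions for this example, not a real product's data model.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRegistryEntry:
    model_id: str
    owner: str                         # accountable business owner
    purpose: str                       # approved use case
    training_data_sources: list[str]   # lineage: where the training data came from
    contains_personal_data: bool       # flags privacy / GDPR review requirements
    last_validated: date               # most recent independent validation
    drift_threshold: float             # e.g. the PSI level that triggers re-review
    approved_for_production: bool = False
    open_findings: list[str] = field(default_factory=list)

# Usage: an entry a risk team could audit before the model is promoted.
entry = ModelRegistryEntry(
    model_id="aml-alert-narrative-v2",
    owner="Financial Crime Compliance",
    purpose="Draft AML alert narratives for analyst review",
    training_data_sources=["internal transaction history", "third-party media search"],
    contains_personal_data=True,
    last_validated=date(2023, 6, 1),
    drift_threshold=0.25,
)
print(entry.approved_for_production)  # stays False until validation findings are closed
```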
Building frameworks to guide responsible AI use can be daunting. Turning to a trusted resource with experience in this emerging tech can be invaluable when setting priorities and implementing generative AI strategies. Building in trust and ethics from the start, and considering tech-enabled solutions, can help companies execute on their plans. Model Edge, a PwC product, offers the transparency and governance needed to use generative AI more responsibly and with less risk. Model Edge combines industry-leading practices with PwC’s proven frameworks and methodologies to establish next-generation AI model governance and help organizations consider ethics and implications from the outset.
With its ongoing monitoring frameworks, Model Edge can continuously monitor models’ performance against industry standards in order to reveal biases that models may be prone to. Using these controls built directly into Model Edge, organizations can gain confidence in their modeling programs.
By visualizing the model’s operating processes, Model Edge makes the models more interpretable. And exhaustive documentation features help organizations demonstrate their use of AI/ML programs with confidence.
Model Edge’s advanced reporting and document automation capabilities further mean that key decisions are documented and that updates are seamlessly captured.
With Model Edge, PwC has collaborated with financial institutions to help safeguard their work and data; the framework was recently named a leader in the IDC MarketScape for Worldwide Responsible Artificial Intelligence for Integrated Financial Crime Management Platforms. Model Edge has already supported large banks and other financial institutions in creating a sustainable program that helps them grow and evolve.
Model governance and validation need to be nimble and sustainable; financial institutions shouldn’t have to move to new tools every few years. They need a solution that can grow with them and that can keep them ahead of possible problems.
“Fighting the sophistication of financial crime and fraud with responsible AI is table stakes for risk managers in today’s environment,” says Agarwal. “The ‘responsible’ piece is key. Mitigating AI risks requires transparency, accountability, bias mitigation, privacy, and data protection. You need to inspire confidence with customers and regulators that when it comes to something like lending decisions, they’ll be handled with fairness. Now more than ever, businesses should evaluate their practices around responsible AI to get ahead of future regulations. That means layering human oversight and control into your processes.”
Building and maintaining trust with stakeholders, from board members and customers to regulators, is key. Financial institutions and other businesses need to be ready to demonstrate ongoing governance over data and performance and to be responsive to emerging issues. It goes beyond compliance with regulation: building trust is also good for business and brands.
PwC’s Responsible AI is built into Model Edge to help guide organizations through ethical and fairness considerations that foster responsible and unbiased use of AI-based decisions.
Reaping the benefits
As organizations consider the use of generative AI, they should take precautionary steps now, implementing model governance to put the proper guardrails in place to mitigate the risks of which we’re already aware.
Using generative AI successfully involves building models more effectively, considering unintended consequences, appreciating potential risks, and identifying where model performance may fall short. Through responsible AI use and rigorous model governance, companies can be better prepared to reap the benefits of this exciting new technology while responsibly limiting their risk.
This story was produced by WIRED Brand Lab for PwC.