From Hindsight to Insight to Foresight: FAQ Guide on the Use of AI for Financial Crime Compliance

Terminology

1. What is artificial intelligence?

Artificial intelligence (AI) refers to any computer system or machine capable of mimicking human intelligence. In other words, it is the ability of a computer system to emulate human-like cognitive abilities such as learning and problem-solving.

Although AI has been receiving significant attention recently because of relatively new technologies like large language models (LLMs), AI as a scientific domain is not inherently new. The foundational concepts of AI began appearing in scientific literature in the 1940s, and the term “Artificial Intelligence” was coined in 1955 by Stanford Professor Emeritus John McCarthy. Elements of AI such as machine learning (the ability of algorithms and statistical models to learn and adapt without being explicitly programmed) have been deployed for decades across a variety of use cases and industries.

There are many forms of AI, often grouped by capabilities and functionalities.[1] Today, all AI is Narrow; General and Super AI remain theoretical. Refer to the Appendix for definitions of these capabilities and examples of some of the respective functionalities.

2. Is the use of AI new to financial services or e-commerce?

No. Both sectors have used AI for some time. For example, AI in the financial services sector traces its origins to the 1980s, when it was primarily employed to identify fraud.[2] Other examples of early adoption by the financial services industry include back-office automation, credit scoring/risk underwriting models, portfolio management, structured derivatives pricing and customer service chatbots. Advances in AI have continued to introduce additional functionality and complexity.

E-commerce businesses have also used AI for decades to, among other things, analyse customer data and make personalised product recommendations, respond to routine customer inquiries, and predict customer demand and drive dynamic pricing decisions.

3. What is the difference/relationship among AI, generative AI and large language models?

Generative AI, or Gen AI, is a subset of AI that focuses on “generating” new content such as images, audio, video, text, code or even 3D models that are original and not just a variation of existing data. Despite its increased functionality, Gen AI is considered Narrow AI because it operates under far more limitations than even the most basic human intelligence.

Large language models (LLMs) are a type of generative AI, trained on vast amounts of data with a large number of parameters, that generate novel text-based responses. Today there are a number of proprietary LLMs built by third parties with a conversational interface (e.g., ChatGPT, developed by OpenAI), accelerating user interactivity and ease of adoption.

4. What is financial crime?

Financial crime broadly refers to all crimes that involve taking money or other property belonging to someone else for financial or professional gain.
The specific activities included as financial crime are called predicate crimes or offenses and are generally determined by jurisdictional law. The extent to which a company is exposed to any of these financial crimes is a function of many variables, including the nature of its products and services, its customer base, its geographic footprint, and its control environment.

Predicate crimes or offenses include, but are not limited to:

- Bribery and Corruption
- Cyber Crime
- Drug Trafficking
- Environmental Crime
- Human Smuggling
- Human Trafficking
- Illegal Arms Trafficking
- Market Abuse
- Organised Crime and Racketeering
- Proliferation Financing
- Tax Evasion
- Terrorist Financing
- Trafficking in Arts and Antiquities
- Violations of Sanctions and Export Control Requirements

Application Risks

10. What are some of the risks associated with the use of AI?

The use of AI carries some of the same inherent risks that financial institutions already face, such as protecting the privacy and security of the data used and relying on incomplete or inaccurate data to form judgments, although it can be argued that the use/misuse of AI exacerbates these risks. AI also poses the following risks, among others:

- Ethical considerations/fairness/bias: If the data used to train an AI system contains embedded biases, the AI may, however unintentionally, replicate or even amplify those biases in its outputs, leading to inaccurate, unfair or discriminatory decisions. (A simple illustration of a fairness check appears after this list.)
- Lack of transparency, interpretability and explainability: Some AI models, particularly those based on deep learning, function as “black boxes” that provide little insight into how they make decisions. This raises questions about whether the developers and users of these models actually know what the models are doing, and such opacity also complicates efforts to validate and modify the models.
- Evolving regulatory frameworks: AI policy, including legal frameworks, ethical standards and principles, is being developed at the local, national and regional levels, and there is no global AI “policy.” The risk therefore exists that a company’s decision to deploy AI, while made in good faith based on available guidance, may subsequently be determined to fall outside of acceptable parameters, norms or guidance.
- Regulator acceptance: In part because of the lack of transparency, regulators may have concerns about the use of AI by financial institutions, putting a firm on the defensive to prove that AI produces better results than the more “traditional” tools and methodologies previously deployed.
- Trustworthiness: The mathematician George Box famously stated: “All models are wrong, but some are useful.”[5] While Gen AI models are generally trained on vast data sets, they may present incorrect or misleading results (such as “hallucinations”) as fact for a number of reasons, including bad training data and bad assumptions. In addition, given their inherent design, these models can produce different results with each iteration and are not subject to traditional methods of model validation, such as replication.
- Over-reliance on technology and lack of a “human-in-the-loop”: There may be a tendency to defer to the technology and overlook the continued importance of human oversight and supervision.
- Impact on the workforce: AI raises concerns about job loss from an employee perspective, and concerns about obtaining new competencies and upskilling the existing workforce from an employer perspective.
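A common first step in the fairness checks referenced above (and again in Question 11) is simply comparing how often a model flags different customer groups. The sketch below is a minimal, hypothetical illustration: the group labels, sample decisions and the disparate-impact-style ratio heuristic are assumptions for demonstration, not a prescribed methodology.

```python
# Minimal fairness check: compare the rate at which a model flags
# customers across groups. All data below is invented for illustration.
from collections import defaultdict

def flag_rates_by_group(decisions):
    """decisions: iterable of (group_label, was_flagged) pairs."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in decisions:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group flag rate; values well
    below 1.0 suggest one group is flagged far more often than another."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", False), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rates_by_group(decisions)
print(rates)                          # {'A': 0.25, 'B': 0.5}
print(disparate_impact_ratio(rates))  # 0.5 -- warrants investigation
```

A low ratio does not by itself prove bias, but it identifies where human review of the model and its training data should begin.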
On a more philosophical note, and not specific to the financial services industry, AI raises an existential risk — that artificial general intelligence (AGI) surpasses human intelligence.[6] Apart from existential risk, which cannot be managed by an individual company, the lynchpin of an institution’s AI risk management programme should be its own AI Use Policy. (See Question 12 below.)

Before You Start

11. What are some of the questions and considerations companies should ask before deploying AI?

Among the key questions institutions should ask when considering the deployment of AI are the following:

- Do we have a specific problem/use case we are trying to solve, and can AI solve it? Without a clearly articulated goal and objectives, the risk is high that AI deployment will be unsuccessful.
- Should we buy, build or “borrow” an AI solution? The decision should include due diligence on commercially available options and consideration of in-house expertise and resourcing, the extent of customisation needed, cost, data security and privacy, scalability, the opacity/transparency of the solution, and time to market. These are fundamentally the same factors that apply to any technology “buy or build” decision, although in the case of AI, “borrow” becomes a relevant approach: starting with foundational models and AI capabilities available through hyperscalers and tuning them to a specific use case.
- Do we have the skillsets, competencies and experience to manage the AI implementation internally, or do we need to engage outside assistance? In addition to basic problem-solving and change management skills, AI implementation, depending on its nature, may require a number of specialised skills, including programming languages, data modelling, data warehousing and data processing, understanding of machine learning, advanced analytics, data science, and knowledge of intelligent user interfaces (IUIs).
- Do we have the data we need to train AI models? Although some AI tools may use publicly sourced information, many need a sufficient amount of reliable and relevant internal data to learn. Without adequate data, the proverbial “garbage in, garbage out” still rings true. While many enterprises are “data rich,” the data still needs to be easily “consumed” (e.g., centralised, complete, in good hygiene) to be usable.
- Have we considered the challenges and costs of accessing the data we need? Data ingress/egress is expensive across multiple clouds; ideally, AI models are co-located architecturally with the data they need.
- Does the potential benefit of the AI justify the cost? Alongside the costs of AI implementation itself, it is important to identify the key performance indicators (KPIs) that will be used to measure results and assess the effectiveness of your AI initiatives. (See Question 20 below.)
- Does the planned implementation align with the organisation’s principles and guidance on AI governance and usage? Many organisations have implemented board-approved, enterprise-wide AI governance standards that delineate approved uses of AI; establish the information required to make an informed decision about an AI tool, including identification of all attendant risk; and prescribe monitoring requirements.
Implementing AI on an ad hoc basis, absent company governance standards, may expose the implementation decision to second-guessing internally, from senior management and the organisation’s board of directors, and by external parties, including regulators.
- Do we understand, and are we prepared to manage, the regulatory expectations for the use of AI? Understanding regulatory expectations is critically important for regulated institutions, especially financial institutions and e-commerce companies that operate in multiple jurisdictions where expectations may differ.
- Do we understand all of the downstream effects of the use of AI? The implementation of AI can be transformative and may require changes to policies, procedures, data management programmes, other technologies and internal training programmes, among other potential impacts.
- How will our organisation manage change during this transformation? Adopting AI may necessitate significant changes across processes, roles and cultures, which should be managed proactively through effective communication and training programmes.
- Have we identified all key risks that may arise from our use of AI (see Question 10) and developed appropriate risk mitigation plans? Managing AI risks effectively requires a solid understanding of specifically how the risks manifest in AI. That means the risk management, compliance and internal audit personnel responsible for designing and testing the AI control framework must have a solid understanding of AI, and should have a “seat at the table” as AI initiatives are launched and rolled out.
- Will the planned use of AI have a direct impact on customer engagement? If yes, does there need to be some advance communication with customers to help them understand and accept these changes?
- Do we have a plan for ensuring the continued reliability of the AI model? As with any other model, institutions need to ensure on an ongoing basis that an AI model is operating as intended and remains conceptually sound. This requires frequent testing/validation, performance monitoring and outcome analysis to assess the accuracy of the AI model’s output and whether it is operating per its prescribed/intended use; data drift monitoring to identify whether the nature of the data the AI model interacts with has shifted, thereby potentially requiring adjustments to the model (a simple drift check is sketched after Question 12 below); and bias and fairness checks.

12. What should be included in a company’s AI Use Policy?

An AI Use Policy should document the company’s responsible use of AI and should include the following content:

- Purpose and scope — goals of the use of AI, aligning with strategy, and any limitations on where in the institution AI may be used
- Roles and responsibilities — governance and oversight of the development, acquisition and use of AI
- AI development — standards for the in-house development of AI
- Due diligence of third-party providers — standards for performing initial and ongoing due diligence on third-party providers
- Authorised AI tools — permissible and prohibited AI tools
- Regulatory requirements — relevant laws, regulations and guidance applicable to the use of AI, including ethical considerations
- Monitoring — requirements for evaluating the ongoing integrity, reliability and suitability of AI tools
- Training and awareness — necessary training and upskilling for employees about responsible use of AI technology
- Exceptions to policy — the institution’s exception management policy and procedures
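Question 11 and the Monitoring item above both call for data drift monitoring. One widely used statistic for this is the Population Stability Index (PSI), which compares the distribution of a feature at training time with its live distribution. The sketch below is a minimal illustration for a single numeric feature; the synthetic data, bin count and threshold heuristic are assumptions, not a validated monitoring design.

```python
# Minimal data-drift check using the Population Stability Index (PSI):
# how far has live data shifted from the data the model was trained on?
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_cnt, _ = np.histogram(expected, bins=edges)
    # Clip live values into the baseline range so tail mass is counted.
    a_cnt, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    e_pct = np.clip(e_cnt / e_cnt.sum(), 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_cnt / a_cnt.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(100, 15, 10_000)  # e.g., training-time txn amounts
live = rng.normal(110, 18, 10_000)      # live amounts have drifted upward

# A widely cited heuristic: < 0.1 stable, 0.1-0.25 monitor, > 0.25 act.
print(f"PSI = {psi(baseline, live):.3f}")
```

Run on a schedule for each input feature, a rising PSI is an early signal that the model may need retraining or recalibration before its output quality degrades.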
13. There are many different types of financial crimes. Does each type use different AI tools?

AI capabilities such as chatbots, natural language processing (NLP), audio signal processing (ASP), computer vision, Gen AI, machine learning algorithms and more have potential benefits across different types of financial crime detection.

Examples

14. What are examples of AI use cases for AML/CFT?

By analysing vast amounts of data and identifying complex patterns, AI can significantly improve the accuracy of detecting illicit activity, resulting in fewer false positives (legitimate transactions flagged as potentially suspicious) and false negatives (suspicious transactions that are not identified).

By using AI to automate the process of capturing, documenting and organising the alert/case narrative in a standardised and traceable format, the alert review team of one bank was able to increase its productivity 5X.

Value: Efficiency, cost effectiveness

15. What is an example of an AI use case for sanctions and export controls?

A financial institution uses AI to risk-score its sanctions alerts, dispositioning those that pose little risk and directing higher-risk alerts to humans to resolve.

Value: Efficiency, regulatory compliance

16. What are examples of AI use cases for fraud?

Fingerprint scanners, facial recognition and voice recognition technologies can be used to offer an extra layer of security, making it more difficult for fraudsters to impersonate legitimate customers.

A payment processor uses time, location, device and GPS data to determine whether activity occurring in distant geographies may be fraudulent (a simplified sketch of such a check appears after Question 19 below). The company believes that AI will eventually learn to evaluate certain behaviours, including swiping speed and gestures, when assessing the likelihood of fraud.

Value: Effectiveness, reputational harm minimisation, customer protection, revenue leakage aversion

17. What is an example of an AI use case for market manipulation?

A broker-dealer uses AI to analyse large datasets from multiple sources, such as market data, transactional data, social media and news feeds, to identify deviant trading patterns or anomalies in real time. This enables firms to undertake real-time monitoring, detect and deter potential violations, and send out timely alerts for investigation across a series of use cases (e.g., rogue trading, insider information, market abuse, collusion, sales malpractice, elder abuse).

Value: Effectiveness, regulatory compliance, reputational harm minimisation

18. What is an example of an AI use case for anti-bribery and corruption?

An institution uses AI trained on historical data to analyse large data sets, flag transactions that deviate from established patterns and may indicate improper payments, and establish links between entities.

Value: Effectiveness, prudent risk detection, compliance

19. In addition to some of the use cases cited above, how else can AI be used to detect transaction laundering in e-commerce?

Computer algorithms can examine merchant sites electronically and spot indications of front companies that the human eye might not be able to detect.

Value: Effectiveness, risk and loss mitigation
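The payment-processor example in Question 16 can be illustrated with a basic “impossible travel” rule: flag two transactions whose implied travel speed between their GPS coordinates is physically implausible. This sketch is a hypothetical simplification, not any processor’s actual model; the speed threshold and transaction format are assumptions.

```python
# Simplified "impossible travel" check: flag two card transactions whose
# implied speed between GPS coordinates is physically implausible.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_KMH = 900.0  # roughly commercial-flight speed; illustrative

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def is_impossible_travel(txn_a, txn_b):
    """txn = (unix_time_seconds, latitude, longitude)."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([txn_a, txn_b])
    hours = max((t2 - t1) / 3600.0, 1e-9)  # guard against division by zero
    speed_kmh = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed_kmh > MAX_PLAUSIBLE_KMH

# A London swipe, then a New York swipe one hour later -> flag for review.
print(is_impossible_travel((0, 51.5, -0.1), (3600, 40.7, -74.0)))  # True
```

A production system would layer device fingerprints, behavioural signals and learned thresholds on top of a rule like this; the point here is only the shape of the logic.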
20. How would a company measure the impact of AI on its compliance effort?

Measuring ROI for AI investments can be complex, as many benefits are long-term and difficult to quantify precisely. Among the metrics institutions may consider are the following:

a. Improved efficiency, as evidenced by better productivity and/or reduction/reallocation of staff
b. Reduction of false positives/improved detection rates, i.e., more signal, less noise (see the sketch after this list)
c. Better regulatory outcomes, including better examination results and fewer violations of law and penalties
d. Reduced customer friction, such as faster client onboarding, quicker resolution of questions about customer transaction activity and less need to contact customers
e. Greater agility to manage new threats
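Metric (b) can be made concrete by tracking alert precision, the share of alerts that prove genuinely suspicious, before and after an AI-assisted rollout. The counts below are invented purely for illustration.

```python
# One way to quantify metric (b), "more signal, less noise": compare
# alert precision before and after an AI rollout. Figures are invented.

def alert_precision(true_positives, false_positives):
    """Share of generated alerts that turned out to be genuinely suspicious."""
    return true_positives / (true_positives + false_positives)

before = alert_precision(true_positives=120, false_positives=2_880)  # 4.0%
after = alert_precision(true_positives=115, false_positives=1_035)   # 10.0%

print(f"precision before: {before:.1%}, after: {after:.1%}")
print(f"false positives cut by {1 - 1_035 / 2_880:.0%}")  # ~64%
```

Tracking true positives alongside false positives matters: a tool that cuts noise by also suppressing genuine detections would look efficient while degrading effectiveness.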
21. What are the expectations and requirements for the use of AI?

Governments, regulators and standard-setting bodies are all developing guidelines and frameworks for the use of AI. As governments and regulators across the globe consider the transformative impact that AI will have, they are developing governing frameworks and communicating their expectations for the ethical and responsible use of AI. The EU AI Act is one significant example of a government framework. Other jurisdictions and regulators are still in an information-gathering phase and have not published final guidance, although most have at least signalled through speeches and industry forums what they are thinking.

Examples of emerging standards for AI governance issued by standard-setting bodies include:

- The NIST AI Risk Management Framework is designed to equip organisations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment and use of AI systems over time.
- The ISO/IEC AI framework provides guidance on managing risks associated with the development and use of AI, offering strategic guidance to organisations to assist in integrating risk management into significant activities and functions.

The policy paper published in August 2023 by the UK’s Office for Artificial Intelligence and Department for Science, Innovation and Technology succinctly offers five guiding principles for the responsible development and use of AI, which are common to most of the published and emerging guidance:

- Safety, security, resilience and robustness
- Appropriate transparency, interpretability and explainability
- Ethics and fairness
- Accountability and governance
- Contestability and redress

Lessons & Insights

22. What are some of the lessons learned by companies that have adopted AI tools?

Some companies that were early adopters of newer AI tools learned the hard way the importance of addressing all of the questions listed in the response to Question 11. Some early adopters overestimated the functionality and benefits of an AI tool; in some cases, this included misjudging time savings and the extent to which staff could be reduced.

A valuable lesson learned by early adopters was the benefit of starting small and scaling. This allowed them to prove their value proposition, gain necessary experience without being overwhelmed by the process, and test the technology before scaling to larger initiatives and ultimately industrialising newfound capabilities.

23. What impact will the use of AI likely have on the staffing of financial crime compliance departments?

Depending on the nature and extent of adoption, the use of AI may allow for staff reduction, principally among staff who perform routine tasks. This would leave compliance professionals more time to focus on what’s really important — the activities that require human judgement and experience. The use of AI will also prompt the need to add (or upskill) staff in more specialised roles, including individuals who understand how to use AI tools, can effectively evaluate AI outputs, and can monitor ongoing model performance.

24. What is the future potential for the use of AI to fight financial crime?

The use of AI to fight financial crime can be a game changer — achieving cost savings while driving efficiency and improving efficacy in ways the industry has been unable to achieve to date. Given the continued evolution of AI capabilities, potential use cases are limited only by our imagination, and institutions that don’t leverage AI will find themselves at a disadvantage.

25. In the race to achieve foresight, who will win — financial services or e-commerce?

In the battle to fight financial crime, we believe the collective efforts and lessons learned from all interested parties — both public and private sector — will drive the most progress. Both the financial services industry, with its extensive experience fighting financial crime in a highly regulated environment, and the e-commerce industry, which includes many tech-savvy digital natives, have much to contribute to the common goal of stopping the bad guys.

In one survey of 356 experts, half believe human-level AI will exist by 2061, and 90% said it will exist within the next 100 years.* But, for now at least, it is important to remember that AI is a tool, not a replacement for humans. It allows humans to focus on what’s really important — the things that require human experience, judgement and creativity.

* AI timelines: What do experts in artificial intelligence expect for the future? — Our World in Data