The Next Generation of AML: Tracking Users Based on Actual Behavior
The view of customer risk you have today does not reflect the actual risk those customers pose to your firm. Bold? Maybe. True? Absolutely. Historically, banks have calculated the risk associated with their customers based on what they voluntarily declare when first opening an account: background, employment, location, sources of money, counterparties, and so on. This is a terrible model for calculating risk because it rests on an assumed view of risk rather than on where the real risk is exposed: the customer's behavior.
For years I’ve been using the phrase: “transactions are trying to tell you a story…are you listening?” For most people the response is: “I know they are trying to tell me a story; however, I don’t speak the language and can’t understand what is being said.” As simple as this sounds, it’s a serious issue that our industry has yet to solve. Historically, we have done a poor job of efficiently and effectively recognizing our customers’ suspicious activity. At AyasdiAI we have applied several years of work by some of the brightest minds in AI to improve the accuracy and efficiency of existing systems, injecting artificial intelligence and automation where needed.
A typical onboarding process involves preset questions; based on their answers, customers are segmented into high-risk or low-risk groups. A transaction monitoring system (TMS) deals with high-risk clients by keeping alert thresholds low so that nothing suspicious slips through, which generates a lot of unnecessary alerts (false positives). Low-risk customers get high thresholds, so they have plenty of wiggle room to do fishy things before anything gets flagged. In other words, bad onboarding information (KYC) feeds ill-informed assumptions (rules and thresholds) in the TMS, which in turn produces a lot of noise (false positives) that investigators have to work through. The only way banks have been able to combat this problem is to throw more manpower at it, which is not a sustainable solution. Symphony AyasdiAI has created a solution specifically for this problem.
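To make the false-positive problem concrete, here is a minimal sketch of how a coarse, two-segment threshold rule behaves. All names (`SEGMENT_THRESHOLDS`, `screen`) and numbers are illustrative assumptions, not any vendor's actual rules:

```python
# Hypothetical sketch of a rules-based TMS with one fixed threshold
# per coarse risk segment. Thresholds and field names are invented
# for illustration only.

SEGMENT_THRESHOLDS = {
    "high_risk": 2_000,   # low threshold: catches more, but alerts constantly
    "low_risk": 25_000,   # high threshold: quiet, but leaves wiggle room
}

def screen(transactions, segment):
    """Flag every transaction above the segment's fixed threshold."""
    threshold = SEGMENT_THRESHOLDS[segment]
    return [t for t in transactions if t["amount"] > threshold]

# A legitimate customer labeled high-risk at onboarding makes routine
# payments that all exceed the low threshold, so every single one
# becomes an alert an investigator must clear (false positives):
routine = [{"amount": a} for a in (3_000, 4_500, 3_200)]
alerts = screen(routine, "high_risk")   # all three flagged
```

The same three payments screened under the low-risk threshold would produce zero alerts, which illustrates the flip side: a mislabeled low-risk customer gets room to move money unnoticed.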
Ayasdi AML creates a view of real customer behavior, and it does this in two ways. Behavior is never one-dimensional; it is an accumulation of many different factors that together point to actual suspicious activity rather than assumption-based results. Customer-provided KYC information can be inaccurate, whereas transactions offer a statistical, unbiased view of every customer’s behavior and how they use their accounts. Ayasdi AML works on top of existing TMS and surveillance systems, ingesting KYC, transaction, and historical data to build fine-grained segments of customers based on far more features than have traditionally been used. Better segmentation means thresholds can be set more accurately, which reduces false positives.
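The idea of deriving thresholds from fine-grained segments can be sketched in a few lines. This is not Ayasdi's actual algorithm; the feature choices (`channel`, transaction frequency) and the mean-plus-k-standard-deviations rule are simplified assumptions to show the principle — each segment's threshold comes from that segment's own transaction history, not from an onboarding label:

```python
# Illustrative sketch: per-segment thresholds fit from behavior.
# Segment keys and the threshold formula are assumptions for this
# example, not a production design.
from statistics import mean, stdev

def segment_key(profile):
    """Bucket customers by coarse behavioral features (toy example)."""
    return (profile["channel"], profile["avg_monthly_txns"] // 10)

def fit_thresholds(profiles, k=3.0):
    """Threshold = mean + k * stdev of amounts observed in each segment."""
    groups = {}
    for p in profiles:
        groups.setdefault(segment_key(p), []).extend(p["amounts"])
    return {seg: mean(a) + k * stdev(a)
            for seg, a in groups.items() if len(a) > 1}

profiles = [
    {"channel": "retail", "avg_monthly_txns": 12, "amounts": [100, 120, 110]},
    {"channel": "retail", "avg_monthly_txns": 15, "amounts": [90, 105]},
]
thresholds = fit_thresholds(profiles)
```

Because the threshold tracks what a segment actually does, a quiet segment gets a tight threshold and a high-volume segment gets a looser one, instead of one blunt high/low-risk split.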
The second feature of Ayasdi AML takes in all sources of data from the omnichannel customer experience (think web logs, call center data, transactions, KYC, and TMS alert data) to provide banks with a holistic view of their customers’ behavior and how that behavior is evolving over time. At times the changes in behavior will be subtle; at other times they will be drastic and worth noting. This second view can catch suspicious activity that is too complicated to find with a TMS alone.
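One simple way to picture "behavior evolving over time" is to compare a customer's recent activity against their own historical baseline. The sketch below is an assumption-laden toy (weekly wire counts, a z-score-style drift measure), not the product's method, but it shows how a drastic shift stands out against an established normal:

```python
# Hedged sketch of behavior-over-time monitoring: score how far a
# customer's recent activity drifts from their own rolling baseline.
# The data and the scoring rule are illustrative assumptions.
from statistics import mean, stdev

def drift_score(history, recent):
    """How many baseline standard deviations 'recent' sits from normal."""
    mu, sigma = mean(history), stdev(history)
    return abs(mean(recent) - mu) / sigma if sigma else 0.0

weekly_wires = [2, 3, 2, 3, 2, 3, 2]   # established normal: ~2-3 wires/week
this_month   = [9, 11, 10, 12]          # drastic change in volume
score = drift_score(weekly_wires, this_month)   # far above baseline
```

A subtle shift produces a small score and a drastic one a large score, so the same measure surfaces both kinds of change the paragraph above describes.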
The reality is that every entity on the planet establishes its own statistical normal behavior over time, and Ayasdi AML will not only provide that view of normal behavior but also notify you of any changes that may be of interest to the bank. The technology has also become far more transparent: gone are the days of black-box platforms that would not pass regulatory scrutiny. Even though our application performs complex functions, it provides clear documentation in the form of an audit trail explaining why a customer’s behavior qualifies as suspicious, so the investigator’s decision is based on facts and is defensible in the long run.
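An audit trail of this kind can be as simple as recording, per flagged feature, how far the current value sits from the customer's own normal. This sketch assumes invented field names and a z-score cutoff purely for illustration; the point is that the output is a human-readable reason an investigator can cite, not an opaque score:

```python
# Minimal sketch of an explainable alert: record *why* behavior was
# flagged so the decision is documented and defensible. Structure,
# field names, and the z-score cutoff are illustrative assumptions.
from statistics import mean, stdev

def explain_alert(customer_id, histories, current, z_cut=3.0):
    """Return a reason for every feature outside the customer's normal."""
    reasons = []
    for feature, history in histories.items():
        mu, sigma = mean(history), stdev(history)
        z = (current[feature] - mu) / sigma if sigma else 0.0
        if abs(z) >= z_cut:
            reasons.append(
                f"{feature}: {current[feature]} is {z:+.1f} std devs "
                f"from this customer's normal (~{mu:.0f})"
            )
    return {"customer": customer_id, "reasons": reasons}

histories = {"wire_amount": [100, 110, 90, 105, 95]}
alert = explain_alert("C-1", histories, {"wire_amount": 500})
# alert["reasons"] holds one plain-language justification
```

When the current value sits inside the customer's normal range, the reasons list comes back empty and no alert is raised.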