In 1970, the federal government came up with a novel idea to help fight crime. With the passage of the Bank Secrecy Act (BSA), Congress essentially deputized America’s banks, giving these institutions a mandate to provide law enforcement with actionable intelligence derived from the activity running through the banks’ accounts. This proved to be a fruitful source of information for law enforcement and, as a result, additional legislation expanding the BSA was passed in short order, including the Money Laundering Control Act (1986), the Annunzio-Wylie Anti-Money Laundering Act (1992), the PATRIOT Act (2001), and the Anti-Money Laundering Act (2020), among others. Each new piece of legislation added new parties, controls, or reporting requirements to the BSA, but all with the same goal in mind: getting information to law enforcement in a timely manner so as to identify and ultimately prosecute criminals.
While the BSA has proven valuable to law enforcement, for banks it has been something closer to a money pit, incurring significant costs in technology and labor while often conflicting with a bank’s mandate to return value to shareholders. Over the years, banks have attempted to manage BSA costs by introducing novel technologies to reduce the labor needed to comply not only with increasingly complex and expanding legislative requirements, but also with the ever-greater expectations set by regulators.
Early BSA compliance was accomplished first through manual transaction reviews, then by scanning spreadsheets of transactions for unusual patterns of activity. Today, BSA compliance is done largely through automated, rules-based transaction monitoring. But, even with the advent of automated monitoring, BSA teams still have staggeringly high false positive rates, typically hovering around 95 percent. In other words, an average of 95 percent of all alerts reviewed by BSA investigators never make it to law enforcement by way of a Suspicious Activity Report (SAR). Even in relatively small financial institutions, these systems can produce thousands of alerts per month, resulting in the need to hire armies of investigators to weed out the false positives. This is neither cost-effective for the bank, nor does it add to job satisfaction for the investigator; no one wants to do a job that is, essentially, 95% busywork.
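The staffing impact of a 95 percent false positive rate can be made concrete with some back-of-envelope math. The figures below (alert volume, minutes per review, hours per full-time investigator) are illustrative assumptions, not data from any particular institution:

```python
# Illustrative workload math: how a ~95% false positive rate turns alert
# volume into investigator headcount. All inputs are assumptions.
alerts_per_month = 3_000          # assumed volume for a mid-size institution
false_positive_rate = 0.95        # the industry figure cited above
minutes_per_review = 20           # assumed average time to disposition an alert
hours_per_investigator = 160      # roughly one full-time month

review_hours = alerts_per_month * minutes_per_review / 60
investigators = review_hours / hours_per_investigator
wasted_hours = review_hours * false_positive_rate

print(f"{review_hours:,.0f} review hours/month, "
      f"~{investigators:.1f} full-time investigators, "
      f"{wasted_hours:,.0f} hours spent clearing false positives")
```

Under these assumptions, roughly 950 of every 1,000 review hours are spent on alerts that will never become SARs, which is the "busywork" problem in miniature.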
In addition, there is no real way to know, short of reading about it in the news, whether the items that do make it into SARs actually helped law enforcement in their endeavors. It is common to hear that 95 percent of all SARs are never reviewed. Several people in law enforcement have told me this is not true, and that the majority of filings are in fact reviewed, but the BSA industry has no real way to validate what is actually useful.
Enter the machines. Machine learning (ML) is not a new technology, as Google or Netflix will attest, but it has not yet found full adoption in BSA compliance, and is only hesitantly being brought on to help teams manage the massive number of false positive alerts. This hesitancy is usually attributed to regulatory concerns, but that does not really seem to be the issue. The regulators have taken pains to encourage the use of innovative technologies, even issuing joint guidance in December 2018 urging financial institutions to adopt innovative technologies and processes in their fight against financial crime (OCC News Release 2018-130, “Agencies Issue A Joint Statement on Innovative Industry Approaches”). The real reason banks are hesitant to bring on ML technology may be the simple fact that BSA officers are typically not well versed in technology, nor are they expected to be. BSA officers should understand financial crime and regulatory expectations, not Python. Nor do BSA teams typically have dedicated data scientists who can understand and explain these technologies to both the BSA officers and the regulators. These are hurdles the industry needs to overcome, as ML technology may finally help banks be a true partner to law enforcement by providing timely, actionable intelligence while reducing false positives and, thereby, the financial burden on the banks.
As would be expected, most banks starting out with ML are targeting the reduction of false positive alerts. This can be done a few different ways. One is through better segmentation of the customer base. Essentially, when looking for suspect behavior, you have to first understand what normalcy looks like. A lot of cash coming in from a pizza place on the boardwalk is not necessarily suspicious, but cash coming in from an online retailer is. Present technology largely segments customers on static identifiers, such as NAICS codes and transactional volume. ML segmentation can identify significantly more intersections in the data, allowing for a better segmentation of the customer base. For example, a food truck parked at Washington Square in New York may produce more incoming cash than a rooftop restaurant on Park Avenue in the same city. At present, both would be classified simply as restaurants, so a large amount of cash from the rooftop establishment may not raise the alarms it should. By better segmenting customers, banks can better adjudicate normalcy and identify suspect behavior. This method does have limitations, including the need for a significant amount of well-formatted data to make the connections, as well as a customer base large and varied enough to make those connections meaningful.
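The restaurant example above can be sketched with a simple clustering model. This is a minimal, hypothetical illustration using synthetic data: the behavioral features (monthly cash deposits, average ticket size) and the two "restaurant" profiles are assumptions chosen to mirror the food-truck-versus-rooftop scenario, not a production segmentation model:

```python
# Hypothetical ML segmentation sketch: cluster customers on behavioral
# features rather than a static NAICS code. Synthetic data throughout.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Two behavioral profiles hiding under one "restaurant" NAICS code:
# cash-heavy food trucks vs. card-heavy fine dining.
# Columns: [monthly cash deposits ($), average ticket size ($)]
food_trucks = rng.normal(loc=[45_000, 18], scale=[5_000, 3], size=(50, 2))
fine_dining = rng.normal(loc=[5_000, 95], scale=[1_500, 10], size=(50, 2))
X = np.vstack([food_trucks, fine_dining])

# Scale the features so dollar volume doesn't dominate, then cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)

# Each cluster now defines its own "normal": cash volume that is routine
# for one segment can be flagged as unusual for the other.
for k in (0, 1):
    seg = X[labels == k]
    print(f"segment {k}: n={len(seg)}, mean monthly cash ${seg[:, 0].mean():,.0f}")
```

In practice, of course, the feature set would be far richer, and the number of segments would be learned rather than fixed at two; the point is simply that the clusters recover the distinction a static industry code collapses.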
Another prominent use case for ML is predicting the likelihood that an alert will be escalated. In this scenario, the machines are fed dispositioned alerts, along with all data available to the investigator at the time each alert was reviewed: customer information, other transactional data, counterparty information, and so on. The machines then ingest the information, identify connections within the massive pool of data, and produce a prediction as to how likely an alert, given the information available, is to result in an escalation. The system then hibernates alerts, subject to a series of business rules, based on the predictive score. This method can be useful in controlling labor costs, but it suffers from the garbage-in, garbage-out (GIGO) problem. For instance, if the team is filing on every person identified in the Panama Papers, then the machine will rank all such alerts as escalations, regardless of how suspect the activity might be, or how useful the filings will be to law enforcement.
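A minimal sketch of this escalation-scoring approach follows. Everything here is illustrative: the features (amount z-score, counterparty count, customer tenure, prior SAR count), the synthetic disposition labels, and the 0.10 hibernation threshold are all assumptions standing in for whatever a real alert history and business rules would supply:

```python
# Hypothetical alert-escalation scoring sketch: train a classifier on
# previously dispositioned alerts, score new alerts, hibernate low scores.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2_000

# Synthetic investigator-visible features per alert:
# [amount z-score, counterparty count, customer tenure (yrs), prior SARs]
X = np.column_stack([
    rng.normal(0, 1, n),
    rng.poisson(3, n),
    rng.uniform(0, 20, n),
    rng.poisson(0.2, n),
])
# Toy ground truth: escalation driven mostly by amount anomaly + prior SARs.
logit = 1.8 * X[:, 0] + 1.5 * X[:, 3] - 2.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]   # predicted P(escalation)
hibernate = scores < 0.10                  # assumed business-rule threshold
print(f"hibernated {hibernate.mean():.0%} of alerts in the test set")
```

Note that the GIGO problem lives in the `y` labels: the model learns whatever the historical disposition practice was, good or bad, which is exactly the Panama Papers caveat above.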
The above are only two examples of how ML can assist BSA teams. There are a number of other use cases, and as BSA officers become increasingly comfortable with the technology, these will multiply. But, most of the ML technologies being used today are aimed at reducing labor costs by either reducing false positives or making investigations more efficient. If the industry truly wants to fulfill the mandate of the BSA to provide timely intelligence to stop criminal activity, we will need a new generation of BSA monitoring: machine learning-based anomaly monitoring.
As noted above, current-state automated transaction monitoring is a rules-based technology that identifies unusual activity based on dollar amounts or statistical thresholds (i.e., standard deviations from historical activity in a set time frame), then kicks out an alert to be reviewed. The majority of ML technology on the market today works on top of, or in tandem with, these systems, making them better and more efficient, but still producing a large number of false positives. Further, these systems are fed by specific databases that are updated on a periodic basis. As such, the data being fed into the system is, by nature, historical and limited. Future-state machine learning-based BSA monitoring should be able to comb through all data, wherever it sits, to identify, in near real time, anomalous activity that may be indicative of suspicious or criminal activity. This anomalous activity would be identified through a refined, ML-based segmentation that compares actual activity to expected activity, along with other key indicators of risk and customer information. In such a scenario, false positive rates could be reduced to nearly zero, and SAR filings could occur in a matter of days or even hours, rather than weeks or months.
Of course, even such advanced technology needs human input to be effective. For any partnership to work, both parties need to contribute. ML may be able to identify unusual behavior, and allow BSA teams to file more quickly, but without feedback as to how meaningful those filings are, SAR filings will still suffer from the GIGO problem. The Anti-Money Laundering Act of 2020 addresses this in many ways, providing for more communications between law enforcement and BSA teams in the private sector, and requiring the Financial Crimes Enforcement Network (FinCEN) to set national AML priorities so banks may better direct resources to where law enforcement is seeing trends in criminal activity. It is far too early to see how well this will work in practice; while the legislation was passed, the rules have yet to be written. In the interim, BSA officers should not only embrace current ML technologies, as they have been urged by the regulators, but begin to wonder what the future state of BSA compliance could look like.