For many years, the fraud analyst held a highly esteemed position at the frontline of fraud detection. Rules were in place to flag dodgy-looking transactions, and the human expert decided whether to approve each one.
For most online businesses, those days are long gone. Large rulesets, while often accurate and useful at identifying fraud, are too complicated to manage. The window for manual review is shorter in a world of instant fulfilment. The huge shift to e-commerce means that hiring enough people to keep pace with growing volumes is neither affordable nor desirable.
To cope with the speed and scale of online commerce, new technologies have come to replace some of the functions of the fraud analyst. Machine learning models have come to the fore, and not just because they are cheap and efficient. The truth is that machines are better than humans at most transaction analysis tasks. They don’t get tired. They aren’t biased. They can simulate a thousand scenarios in the time it takes a human to look something up on a dashboard. But machines are far from perfect.
Where humans still beat machines
Machines excel at routinized work drawing on historical data. Humans are intuitive: we can spot that something is wrong even if we have never seen it before. It’s an instinct that we have not yet successfully trained into machines. Hard cases and new trends are much better pursued by an analyst, with the goal of then training a machine to stop future occurrences. The good news is that a well-implemented ML system frees up precious time for an analyst to perform these more productive tasks.
How to train the machines
A common concern with the introduction of a machine learning solution is how to train it to adapt to the fraud experience of a specific company. This is really a question of good system design. To a large extent machines will adapt automatically to the data they receive.
But an analyst may want the solution to adapt before it sees the data. A typical example is when an analyst knows of a fraud risk the model has not yet seen, but can’t simply allow the fraud to happen so that the model learns.
This is where rules come into play. A rule can be put in place to prevent a specific form of fraud, ensuring no losses occur while the machine learns. Manually blocked transactions can be labelled so the model learns they are bad, and the model can be retrained with synthetic data to stop similar fraud patterns in future.
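The interplay described above can be sketched in a few lines of code: an analyst-maintained rule runs before the model, and anything it blocks can later feed back into training as labelled fraud. This is only an illustrative sketch; the names (`Transaction`, `KNOWN_MULE_ACCOUNTS`, `model_score`) are hypothetical and not taken from any specific product.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    account_id: str

# Hypothetical blocklist an analyst maintains for a fraud pattern
# the model has not yet seen.
KNOWN_MULE_ACCOUNTS = {"acct_123", "acct_456"}

def model_score(txn: Transaction) -> float:
    """Stand-in for a trained model's fraud probability."""
    return 0.9 if txn.amount > 5000 else 0.1

def decide(txn: Transaction, threshold: float = 0.5) -> str:
    # The rule runs first: it blocks outright, and the blocked
    # transaction can later be labelled as fraud in training data
    # so the model learns the pattern without any losses occurring.
    if txn.account_id in KNOWN_MULE_ACCOUNTS:
        return "block"
    # Otherwise fall back to the model's score.
    return "block" if model_score(txn) >= threshold else "allow"
```

The key design point is ordering: the rule acts as a safety net in front of the model, so the business is protected immediately while the model catches up.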
Contrasting skill sets
So what about the individual analysts in a fraud team: has technology changed the skill sets required? Here are two excerpts from recent job postings on a recruitment website.
Job Description 1
This is a good example of an entry-level manual review and customer service role. The role description includes some activities which are either difficult or undesirable to automate. For example, filing chargeback disputes is frustratingly difficult to automate, although good services exist from the likes of Chargebacks 911 and The Chargebacks Company. On the other hand, calls to customers should never be automated; they are an example of where human soft skills provide a valuable personal touch.
The communication aspect of a fraud analyst role should not be underestimated. Communicating well with customers is critical, as is communication between teams. Being able to explain clearly to both colleagues and clients why a transaction was stopped is a key skill for an analyst.
Job Description 2
A very different job posting shows a different set of in-demand skills for today’s fraud teams.
This description is much more akin to what we do in our investigations team at Ravelin. It’s about conducting research outside of what is happening in an automated system. And this research is focused squarely on improving the automated decision making.
SQL skills are very valuable and although the role above doesn't demand advanced data science skills, it lays the groundwork for a career path in that direction.
The two roles neatly illustrate the mix of skills required. The first has fewer hard skills but shows the importance of communication in a successful fraud team. The second is a very good example of the type of analytical skills an analyst can use when a machine learning system is performing the core transaction analysis.
So long as fraudsters combine technology with social engineering, there will need to be a mix of both automation and human insight to prevent fraud. Additionally, because fraud victims are people, we need to excel at communication and embed empathy in our processes. As we recognise the valuable but separate roles of technology and people, we can build truly effective fraud teams.