
Are you spending too much time on manual review?

Whilst manual review of suspicious transactions will continue to play an important role in fraud prevention, chances are that it is costing your business more than it should.


There are a number of metrics you should look at when assessing the performance of your fraud prevention efforts. Key metrics obviously include chargeback rate and prevent rate (i.e. you need to monitor how much you are losing to chargebacks and keep false positives down as much as possible), but a cost that is often overlooked is manual review. You should keep track of both the number of manual reviews you do each month and the average manual review time. Combined with the cost of labour, these give you the total cost of manual review. This is a number you should track just as keenly as your chargeback rate, and reducing it is one of the easiest ways to cut the total cost of fraud to your organisation.
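As a rough illustration, this is a calculation simple enough to keep in a spreadsheet or a few lines of code. The figures below are hypothetical placeholders; substitute your own monthly numbers.

```python
# Rough sketch of the total-cost-of-manual-review calculation described above.
# All figures are hypothetical; substitute your own monthly numbers.
reviews_per_month = 3000          # orders sent to manual review each month
avg_review_minutes = 5.6          # average time per review (MRC 2015 figure)
hourly_labour_cost = 18.0         # fully loaded cost per analyst hour

review_hours = reviews_per_month * avg_review_minutes / 60
total_cost = review_hours * hourly_labour_cost

print(f"{review_hours:.0f} analyst-hours per month, costing {total_cost:,.2f}")
```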

Manual review: a costly last defence against fraud

According to the 2015 Merchant Risk Council (MRC) Global Fraud Survey, merchants typically manually review 10-15% of online orders. Other reports suggest that as many as 26% of eCommerce orders are manually reviewed. These figures understandably vary between verticals, as the benefits of manually reviewing most orders may outweigh the costs for luxury retailers, but not for food delivery apps.

While online merchants increasingly feel that they have fraud itself under control, the Cybersource 2016 UK eCommerce Fraud Report reveals that reducing manual review has become their greatest fraud challenge. Leaving an order ‘sitting’ in the queue for hours or even days results in a poor customer experience, potentially causing order cancellations and hurting key metrics such as customer lifetime value and net promoter score. Even if reviews take place post-transaction, as is common in the on-demand space, long manual review queues and an imbalance between customers accepted and declined after manual review are red flags for high operational costs and inefficiency.

Improving the way your orders are reviewed does not have to be difficult or costly. There is an abundance of advice available online on optimising the efficiency of your manual review process. For a start, you should ensure your agents can review orders efficiently by making sure they have one screen where they can access all available information on a customer. With large numbers of orders to manually review, it also makes sense to prioritise them based on time-sensitivity or customer status.

The Ravelin Dashboard shows each customer’s full order history and payment methods on the same screen, allowing your manual review team to make the right decision at a glance.

While basic operational improvements and investing in better review workflow tools can reduce the time and money spent internally on manual review, they address the symptoms rather than the cause of the problem, and therefore largely miss the point.

Although it only takes an average of 5.6 minutes to review a suspicious transaction (MRC Global Fraud Survey, 2015), time spent on manual reviews remains the fraud challenge of greatest concern to eCommerce merchants. Most often, the problem is not that you’re taking too long to review orders, it’s that you’re reviewing too many. The biggest gains in time and money saved can be made by cutting down the number of transactions sent through manual review in the first place.

Manual review is most effective as the last defence against fraud and you should only have to manually review orders that are difficult to decide on. The goal, therefore, is to have as close to a 50:50 split as possible between manually allowed and declined transactions. If your business has a 1% rate of chargeback fraud (and assuming you do not automatically decline any orders), you should not be reviewing more than 2% of your transactions. However, merchants often report allowing the vast majority of manually reviewed orders, a sign that operational resources are spent unnecessarily and genuine customers are forced to wait in the queue. The underlying cause of this is that your fraud detection and decisioning tool is not performing as well as it should.
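The arithmetic behind that ceiling is straightforward: if roughly half the orders you review turn out to be fraudulent, your review queue should be no larger than about twice your underlying fraud rate. A minimal sketch, using the 1% figure above:

```python
# Sketch of the review-volume ceiling implied by the 50:50 target above.
# Assumes no orders are auto-declined, so all fraud can reach the review queue.
fraud_rate = 0.01            # 1% of orders are chargeback fraud
target_decline_share = 0.5   # aim: roughly half of reviewed orders get declined

max_review_rate = fraud_rate / target_decline_share
print(f"Review no more than {max_review_rate:.0%} of orders")  # -> 2%
```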

Where rules fall short

Achieving the desired 50:50 split is a challenging task if you are relying on a rules-based scoring method. An effective rule base takes time and expertise to build and does not scale well, as constant rule changes are needed to deal with peak seasons and evolving patterns of fraud. We often hear how growth through new channels and markets is also held back by fraud risk and the time it takes to find the right rules to beat fraudsters.

What you need is a smarter scoring method, one that makes the right decisions in real time and evolves without the need for constant rule changes. This is one of the reasons why an increasing number of merchants are recognising the benefits of machine learning in decisioning - read more about this here. While machine learning has until recently only been available to large corporations with the budgets to hire teams of data scientists, Ravelin and other cutting-edge fraud prevention solutions have made this technology available to most merchants. A key benefit of tools like Ravelin is that they not only use your own historical data to train their machine learning models, but they also learn from similar companies across their network. This means that even if you are processing a relatively small number of transactions, you can take full advantage of their large data sets for your vertical and get accurate decisions from Day 1.

Ravelin first gives you a fraud score for each of your users the moment they sign up, then updates it with every interaction they have with your platform. This fraud score is simply the percentage probability that a customer with given attributes is fraudulent, based on the historical data of both your own customers and those across our wider network. With this knowledge, you have the power to set decision thresholds (allow/review/prevent) that make sense for your risk appetite and minimise the number of transactions that are unnecessarily reviewed. Ravelin’s probabilistic fraud scores give our clients the confidence to manually review fewer than 1% of their orders.
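In practice, acting on a probabilistic score comes down to comparing it against the thresholds you have chosen. The sketch below is purely illustrative; the threshold values are hypothetical, not Ravelin defaults.

```python
# Illustrative allow/review/prevent decisioning on a probabilistic fraud score.
# Threshold values are hypothetical; tune them to your own risk appetite.
def decide(fraud_score: float,
           review_threshold: float = 0.30,
           prevent_threshold: float = 0.80) -> str:
    """Map a 0-1 fraud probability to an action."""
    if fraud_score >= prevent_threshold:
        return "prevent"
    if fraud_score >= review_threshold:
        return "review"
    return "allow"

print(decide(0.05))  # allow
print(decide(0.45))  # review
print(decide(0.92))  # prevent
```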

For some businesses, especially those that see a high volume of low value transactions, scores generated by machine learning models might even be accurate enough to skip manual review completely. Here’s what Zalan Lima, Head of Fraud at global taxi app Easy Taxi, had to say:

“Before Ravelin, we had two people working full-time on manual reviews. Now everything is automatic and we only log into the system once a week. [...] We chose Ravelin because they have the experience to make these decisions very fast. Now that we have fraud under control, we can grow the number of people who pay in-app.”

However, for most merchants, manual review still has an important part to play in the fraud prevention process. Manual review can be invaluable when dealing with edge cases where there is no substitute for human insight, as well as helping machine learning models fine-tune their decisions and detect changing patterns of fraud. Ravelin’s models learn continuously from each decision you feed back to them, making them better prepared to deal with similar cases in the future, and automatically bringing you ever closer to the optimal 50:50 split between orders accepted and declined after manual review.
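As a rough illustration of that feedback loop (a generic sketch in scikit-learn, not Ravelin's actual pipeline or API), each verdict from your review team can be stored as a label and used to retrain a model, so comparable cases no longer need a human decision:

```python
# Minimal sketch (not Ravelin's actual API) of feeding manual review verdicts
# back into a model as labelled training data, so similar cases are handled
# automatically in future.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors for reviewed orders (e.g. order value, account
# age in days) and the analyst's verdicts: 1 = confirmed fraud, 0 = genuine.
reviewed_features = np.array([[250.0, 1], [20.0, 400], [480.0, 3], [35.0, 900]])
review_verdicts = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000)
model.fit(reviewed_features, review_verdicts)   # retrain on the new labels

# A new order with similar attributes now gets a score without human review.
new_order = np.array([[300.0, 2]])
print(model.predict_proba(new_order)[0, 1])     # estimated fraud probability
```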

Learn more about machine learning here.
