Rules were the foundations of old-school fraud solutions until machine learning came along and changed the game. Sleek, agile models made the overstuffed, creaking rulebooks seem outdated and a chore to maintain.
But this doesn’t mean rules are completely obsolete. There are still situations where fraud analysts need to directly intervene in prevention - and rules provide the means to do that. Rules are still a relevant part of the prevention toolkit that complement machine learning and other technologies. So what are the kinds of situations where rules can still be effective?
Acting fast to stop an attack
Fraud analysts can use rules to quickly stop a fraud attack whilst it’s happening. For example, if an attack can be traced to a specific location, a fraud analyst can use location blacklisting to prevent all orders from one address or a specific area. Unlike other customer data that can be faked (e.g. phone number, email address), location often remains constant for a fraudster.
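A location-blacklist rule of this kind could look something like the sketch below. The field names, blocked values and return conventions are all hypothetical, for illustration only, and are not Ravelin's actual rule format.

```python
# Hypothetical blacklist of delivery postcodes; example values only.
BLOCKED_POSTCODES = {"SW1A 1AA", "EC1A 1BB"}

def location_rule(order):
    """Block any order delivered to a blacklisted postcode."""
    if order.get("delivery_postcode") in BLOCKED_POSTCODES:
        return "block"
    return None  # no decision: defer to the machine learning model
```

Returning `None` when the rule doesn't match lets the decision fall through to the model, so the rule only overrides it for the targeted addresses.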
Proactively block new fraud trends
Machine learning systems are trained on historical data that is around three months old, because it can take up to 90 days for chargebacks to come through. Recent transactions haven’t yet had time to generate chargebacks, so a model trained only on the most recent data can’t always distinguish fraudsters using the latest attack vectors (who haven’t caused a chargeback yet) from recent genuine customers.
A fraud analyst could be aware of an emerging trend in fraudster behavior, but the machine learning model hasn’t yet adapted to this behavior, or their business has not been targeted yet. In this situation, the analyst can proactively use rules to prevent this type of fraud before it impacts their business. Specific rules that drill down into the known characteristics of fraud with more than one condition can allow fraud managers to target exactly the right behavior.
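A multi-condition rule of this kind might be sketched as below. The pattern it targets, the field names and the thresholds are all invented for illustration; the point is that requiring every condition to match keeps the rule narrow.

```python
def gift_card_trend_rule(order):
    """Flag a hypothetical emerging pattern: high-value gift-card
    orders from brand-new accounts. All three conditions must match,
    so genuine customers who share only one attribute are unaffected."""
    if (
        order.get("payment_method") == "gift_card"
        and order.get("account_age_days", 0) < 1
        and order.get("order_value", 0) > 200
    ):
        return "review"
    return None  # no match: leave the decision to the model
```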
Using rules to allow good customers
It’s important to remember that rules can be used to allow and not just to prevent. This can help to “smooth the edges” of a machine learning model when a business makes a change. For example, a retail business recently began sending us new data from their newly acquired brands. We used a combination of allow and prevent rules to help the machine learning model get the data it needed to learn new patterns, while safeguarding the business from significant fraud attacks. Using rules to allow customer behavior can also be useful when the fraud team is working with other business departments, for example marketing - where rules can be used to allow specific promotions to run.
With great rule-making power comes great responsibility
Although rules can be very useful in the ways outlined above, they can also be problematic if used in the wrong way. A single misconfigured rule can block all traffic, or allow every transaction, including all fraud; either outcome could be disastrous for a business.
We see quite a lot of our clients tweaking rules as part of their everyday role, so we’ve developed tools to make sure rules are used with caution, and to enable fraud analysts to learn more about the impact of potential rules before they impose them on transactions. How do we do this?
As mentioned, a misconfigured rule can damage a business by blocking a significant portion of its user base. This could happen if a fraud analyst is new to a fraud system, or makes a simple typing mistake. We enable safeguards to prevent any rule that could result in mass, potentially damaging changes.
Whenever a new rule is added, we enforce an impact test to see what the outcome of this rule would be. We calculate the impact of the particular rule combination based on the individual business’ user data (a sample of 10,000 customers from each of the previous 7 days).
This gives us a reasonable estimate of what percentage of the customers would have been allowed, reviewed or blocked due to the rule. If the rule has an impact of greater than 5%, the safeguard means the fraud analyst will not be able to do this independently and will need to ask their Ravelin investigator to enable the rule.
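The safeguard described above could be sketched roughly as follows. The sampling details and function names are assumptions for illustration, not Ravelin's implementation.

```python
def rule_impact(rule, sample):
    """Fraction of sampled customers the rule would act on
    (allow, review or block) rather than leaving to the model."""
    hits = sum(1 for customer in sample if rule(customer) is not None)
    return hits / len(sample)

# Above 5% impact, the analyst cannot enable the rule independently
# and must ask their investigator to enable it.
IMPACT_THRESHOLD = 0.05

def can_enable_independently(rule, sample):
    return rule_impact(rule, sample) <= IMPACT_THRESHOLD
```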
Our investigations team is able to understand the business goals and can work to find an alternative method for achieving the aim without impacting the rest of the userbase. Through working on a range of client businesses, our team has lots of experience in understanding which rule conditions work well together and how to determine the right combination.
As well as being on hand to help businesses work out the right combinations of rules, we also want to give fraud analysts the power to tinker with rules without actively impacting the user base. We’ve recently introduced test rules to make that possible.
Test rules allow you to create a new rule and assess its impact without turning it on. This means you can test different combinations and see which is most effective for your goal. For example, you can see a list of customers who would have been blocked if the rule were live. You can also see an aggregated view of how the rule would perform over time in Analytics.
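Conceptually, a test rule is a shadow evaluation: the rule runs against traffic but only records what it would have done. A minimal sketch, assuming simple dict-shaped orders (not Ravelin's data model):

```python
def dry_run(rule, recent_orders):
    """Evaluate a rule in shadow mode: return the IDs of orders it
    would have blocked, without changing any live decisions."""
    return [order["order_id"] for order in recent_orders
            if rule(order) == "block"]
```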
Rules are still relevant when used wisely
Using machine learning as the basis for fraud detection allows fraud analysts to get rid of extensive rule libraries and start with a clean slate. But although machine learning has delivered a huge upgrade to fraud detection systems, it doesn’t mean you should give up using rules completely. Rules can be used to stop attacks fast and to fine-tune your strategy if you have a specific goal. Safeguards and test rules give fraud analysts more power to assess the impact of potential rules, while making sure that the business isn’t impacted by a drastic change.