At Ravelin we hold two concepts in high regard as we build our company and our product. These concepts are “agency” and “trust”.
“Agency” means autonomy or control, capacity, and understanding. With our product, it means being in control of inputs and results.
In the same vein, “trust” means that the user believes the product works as expected, i.e., it correctly predicts fraud. More widely, the user believes the company making the product is trustworthy, and that the representatives they deal with are individually trustworthy.
But there is a problem. Machine learning is often about reducing user agency, and that can undermine trust. The classic example is that of a highly effective fraud manager being asked to trust a black box system over whose decisions she has no real control, and little understanding of how they are made.
This is the story of how Ravelin came to find balance between trust and agency. It tells of the tradeoffs and rewards, and what trust actually means to us.
The easiest way to demonstrate this is to talk about our product.
The Ravelin product is a prediction. Our fraud product predicts which customer accounts or orders are fraudulent and which are not.
We provide these predictions directly to clients, along with a platform to consume, observe, and manage these predictions.
Older fraud detection companies were in the business of selling a tool, often configured and managed by the client themselves, who ultimately bore responsibility for the results obtained.
We decided quickly not to do this. First of all, we wanted to use machine learning as a detection method in the pursuit of accuracy. Second, we figured that clients would rather spend effort on growing their businesses than building a data science team focused on fraud. Third, we knew we could get better results by pooling our expertise gleaned from many clients into one knowledge repository – both in human and machine terms.
But there was one big downside to our approach. You guessed it – “loss of agency”.
When a client configures their own tool, adds rules, and manages their own risk appetites, they have full agency. They may not have great accuracy (some do, by investing heavily in a data science team focused on fraud, though perhaps at the cost of other business goals), but there’s no doubt that agency is a powerful feeling.
The Ravelin approach initially took away this agency in exchange for huge gains in speed and accuracy, with all detection configured at Ravelin by our Detection team. For the client, although accuracy and speed are always welcome, this loss of agency could be very painful, especially for clients who had enjoyed near-full agency for a long time.
We came up with three ways to mitigate this loss of agency.
One, we started a trust initiative – a permanent product mission to give agency back in ways that don’t degrade accuracy. We call this initiative “Project Trust”, and such is its importance that it informs every element of product planning and delivery.
The fruits of Project Trust are most clearly shown in our dashboard – the place where users interact with our predictions. We are constantly adding features to our dashboard that meet this aim.
Example: we allow clients to add or remove any number of rules quickly and easily, and we show the number of accounts that would be affected, with pertinent statistics and insights. Another example: when viewing a customer account in our dashboard we use predictive modelling to show “similar” accounts, either by identity or by behaviour.
These (and many other examples) highlight our attitude to agency and data. It is not enough to simply present a client’s information back to them. We give them new agency and understanding in the form of investigative avenues to explore, or by explaining the downside of enacting a harsh rule. This leads directly to trust.
Two, we augmented our probabilistic machine learning system with superior link analysis (graph networks) and a user friendly rules system.
Crucially, we don’t rely on a probabilistic machine learning system alone to make predictions. These three systems contribute to a composite prediction. Link analysis paints a vivid picture of interaction between users, and is very highly predictive of certain behaviours. Features derived from this graph consistently rank among the most important for any given client.
Many people are surprised by the effectiveness of our link analysis. Born from experience in law enforcement, our own system is highly optimised for speed so it can be used in time-sensitive operations like checkout flows.
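To illustrate the kind of signal link analysis can surface, here is a simplified sketch. The account data, field names, and the `linked_group_sizes` function are hypothetical for illustration – not Ravelin’s actual implementation. The idea: connect accounts that share any identifier (email, device, card), and use the size of each connected group as a model feature, since large clusters of superficially unrelated accounts are often a strong fraud signal.

```python
from collections import defaultdict

def linked_group_sizes(accounts):
    """For each account, return the size of its connected group, where
    accounts are linked if they share any identifier value."""
    # Union-find over account ids
    parent = {a["id"]: a["id"] for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Link accounts that share an identifier
    seen = {}  # identifier value -> first account id seen with it
    for acct in accounts:
        for ident in acct["identifiers"]:
            if ident in seen:
                union(acct["id"], seen[ident])
            else:
                seen[ident] = acct["id"]

    # Count the members of each group
    sizes = defaultdict(int)
    for a in accounts:
        sizes[find(a["id"])] += 1
    return {a["id"]: sizes[find(a["id"])] for a in accounts}

accounts = [
    {"id": "a1", "identifiers": {"e:x@example.com", "d:dev1"}},
    {"id": "a2", "identifiers": {"d:dev1", "c:card9"}},
    {"id": "a3", "identifiers": {"c:card9"}},
    {"id": "a4", "identifiers": {"e:y@example.com"}},
]
print(linked_group_sizes(accounts))  # a1, a2, a3 form one group of 3
```

A production system does far more than this (weighting edge types, decaying old links, and answering these queries in milliseconds at checkout), but even this toy version shows why a graph-derived feature like group size can rank so highly.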
Rules, meanwhile, are highly predictable in operation, and offer a “lever” that thoughtful fraud managers can pull in emergencies. We allow clients to manage these themselves, and have spent significant efforts making them powerful but user-friendly. The product also provides an impact analysis of the rule, both in terms of fraudsters and false positives. This is real agency – decisions made with an informed understanding of the situation. The dividend is, again, trust.
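A rough sketch of what rule impact analysis means in practice (the rule, field names, and statistics here are invented for illustration and are much simpler than a real rule engine): before a rule goes live, replay it against labelled history and report how many accounts it would flag, split into fraudsters caught versus false positives.

```python
def rule_impact(rule, historical_accounts):
    """Estimate a proposed rule's impact against labelled history:
    how many accounts it would flag, and how many of those were
    genuine fraudsters vs. legitimate customers (false positives)."""
    flagged = [a for a in historical_accounts if rule(a)]
    fraud = sum(1 for a in flagged if a["is_fraud"])
    return {
        "flagged": len(flagged),
        "fraudsters_caught": fraud,
        "false_positives": len(flagged) - fraud,
        "precision": fraud / len(flagged) if flagged else 0.0,
    }

# A hypothetical emergency rule: block orders over 500 from brand-new accounts
rule = lambda a: a["order_value"] > 500 and a["account_age_days"] < 2

history = [
    {"order_value": 900, "account_age_days": 0, "is_fraud": True},
    {"order_value": 700, "account_age_days": 1, "is_fraud": False},
    {"order_value": 650, "account_age_days": 30, "is_fraud": False},
    {"order_value": 40,  "account_age_days": 0, "is_fraud": False},
]
print(rule_impact(rule, history))  # 2 flagged: 1 fraudster, 1 false positive
```

Seeing that a harsh emergency rule would catch one fraudster at the cost of one good customer is exactly the informed trade-off that turns a blunt lever into real agency.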
Three, we maintain a highly effective and driven Investigations and Intelligence team.
I am particularly proud of this team, which was founded with the explicit mission to give agency back to our clients in the form of people who genuinely care about the client’s data problem (be it fraud, or some other issue), and will investigate it fanatically to help identify and eliminate it.
They are a team of data specialists – two thirds data analysts and one third data scientists. This team also maintains our proactive intelligence gathering function to make the product better, and to inform our clients about emerging threats.
They also act as a crucial firewall between the client and the Detection team, which is responsible for modelling and high-level performance. This stops eager clients inadvertently putting their thumbs on the scales and degrading their own performance.
They perform deep-model analysis of emergent threats and attacks, or when clients find it hard to interpret a prediction. They conduct repeated analysis on client performance and help the Detection team hone models. They are our interface with the client users who use our product daily. They are thoughtful, patient data analysts and thorough investigators who really care about the problem.
The importance of being trusted
What happens without trust? This depends on how your machine learning product is applied.
In any case, you may lose a client if your product or company doesn’t generate trust. If used in business-critical decision making (example: fraud and payments), then the situation is more complex. It can be hard to quickly switch between providers, but the reputation damage from lack of trust can be immense. A reputation built on trust, however, increases the chance of sales, increases client happiness, and in my opinion, is far more motivational for employees than almost any other factor.
For B2B ML products in particular, it is really worth considering whether a Trust Initiative might be suitable for your product. This may require you to consider how you might provide new agency without degrading your product’s results. You might need to consider how dogmatic you are with respect to your product’s ML outputs. You might need to think about how you interact with clients.
In summary, we have maintained trust in our product even though, as a machine learning company, we have in some ways reduced client control. We have done this by offering great results, but also by giving back new agency.
1. We constantly consider ways to provide new agency and autonomy in our product.
2. We don’t rely on a simplistic machine learning system fed only by chargebacks or manual reviews to provide results.
3. And we formed an Investigations Team, composed entirely of data specialists, that addresses real client problems.
And as we start to serve predictions other than fraud, we will find new ways in which we reduce client agency. But by keeping our eyes on the dual principles of trust and agency, we hope to continue to build products that our clients love!