Why Knowledge Isn’t Enough: How Telematics Can Truly Change Driving Behavior – with Dan Ariely
Season 1 | Episode 14
In this episode, Harald sits down with behavioral economist Dan Ariely to explore why simply knowing that something is dangerous – like texting while driving – doesn’t stop us from doing it. Together, they dive into the psychology of risk perception, how telematics can change driver behavior, and what insurers and product managers can do to create lasting impact. From diabetes management to speedometers redesigned for better decision-making, this conversation uncovers how emotions, timing, and human weaknesses play a role in making our roads safer.
Why Information Doesn’t Equal Change
Dan explains that just telling people what’s risky rarely changes behavior.
- We know texting while driving is dangerous.
- We know that eating healthier, sleeping more, and exercising are good for us.
- Yet knowledge alone doesn’t close the gap between bad behavior and good behavior.
Research on financial literacy programs shows this clearly: despite hundreds of millions of dollars invested, behavioral changes like saving or budgeting barely shift.
The real issue? We don’t feel the danger.
The Negative Feedback Loop of Risk
Dan outlines how behavior reinforces itself:
- Every time you check your phone and nothing happens, you “learn” the wrong lesson – that it’s safer than you thought.
- Over time, this creates a false sense of security until eventually the rare accident does occur.
We’re also bad at handling delayed, probabilistic punishments (like fines or rare accidents). But if someone took €10 out of your wallet every time you touched your phone, you’d stop instantly.
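To make this mechanism concrete, here is a toy sketch in Python. It illustrates the idea rather than any model from the episode: the “felt” risk is simply nudged toward whatever was just experienced, with an assumed learning rate, so long runs of uneventful phone glances drag it toward zero even though the true risk never changes.

```python
# Toy sketch of the negative learning loop (illustrative only; the 2% starting
# estimate comes from Dan's example, the 0.05 learning rate is an assumption).
true_risk = 0.02        # hypothetical accident chance per phone glance
felt_risk = true_risk   # what the driver initially believes

for glance in range(1, 201):
    outcome = 0.0       # nothing bad happens, by far the most likely result
    felt_risk = 0.95 * felt_risk + 0.05 * outcome   # belief drifts toward experience
    if glance % 50 == 0:
        print(f"after {glance} uneventful glances, felt risk ~ {felt_risk:.4%}")

# The true per-glance risk never moved, yet the felt risk collapses toward zero:
# every "nothing happened" teaches the wrong lesson that texting is safe.
```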
The Role of Break Points
Drawing parallels to diabetes management, Dan highlights the concept of “break points.”
- People resist temptations throughout the day, but willpower depletes.
- When depleted, they give in – grabbing ice cream, skipping insulin checks.
- In driving, the same principle applies: accidents may cluster in extreme states (tired, distracted, stressed, or overly excited).
Telematics should therefore not only focus on average behavior but also on identifying and preventing these extremes.
What the Data Reveals
Harald and Dan discuss how data uncovers hidden risk patterns:
- April 15th in the US (tax day): more accidents due to stress.
- Halloween: more child-related accidents because of kids on the streets.
- Daylight saving changes: more wildlife collisions.
- First three minutes of a trip: higher risk due to overconfidence close to home.
Edge cases matter just as much as averages.
Designing for Human Weaknesses
Dan compares the human mind to a “vintage Swiss army knife”: decent at many things but built for survival on the savannah, not modern driving.
- Car design compensates for human limits (mirrors, blinkers).
- Social media, by contrast, exploits weaknesses (gossip, outrage).
- The challenge: redesign tools like the speedometer to show time saved per 10 miles rather than just speed. This reframes decisions and reduces speeding (see the sketch below).
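The arithmetic behind that reframing is easy to verify. Below is a minimal Python sketch (illustrative only, not part of any product) that reproduces the numbers Dan quotes later in the transcript:

```python
# Minutes-per-10-miles reframing of the speedometer (illustrative sketch).
def minutes_per_10_miles(speed_mph: float) -> float:
    """Time in minutes to cover 10 miles at a constant speed."""
    return 10 / speed_mph * 60

def minutes_saved(from_mph: float, to_mph: float) -> float:
    """Minutes saved per 10 miles by driving faster."""
    return minutes_per_10_miles(from_mph) - minutes_per_10_miles(to_mph)

for low, high in [(60, 70), (70, 80), (80, 90)]:
    print(f"{low} -> {high} mph saves {minutes_saved(low, high):.2f} min per 10 miles")

# 60 -> 70 mph saves 1.43 min per 10 miles
# 70 -> 80 mph saves 1.07 min per 10 miles
# 80 -> 90 mph saves 0.83 min per 10 miles
```

The diminishing returns are the point: displayed this way, the time bought by speeding looks as small as it really is.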
From Risk Pricing to Risk Prevention
Insurers traditionally priced risk, but now they can change risk through telematics.
Dan suggests:
- Stop giving drivers just a general score (e.g., 68/100). It’s too abstract.
- Instead, focus on one behavior at a time (braking, acceleration, signaling).
- Give feedback in real time, or right after a maneuver (sketched below).
- Reinforce positives – “you indicated well” – to build habits.
The goal is to make good driving automatic, not to overwhelm with vague scores.
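As a rough sketch of what “one behavior at a time, with feedback right after the maneuver” could look like, here is a hypothetical Python snippet. The event structure, severity scale, and threshold are assumptions made for illustration, not part of any real telematics API discussed in the episode:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    kind: str        # e.g. "braking", "acceleration", "signaling"
    severity: float  # 0.0 (smooth) .. 1.0 (harsh); assumed sensor output

FOCUS_BEHAVIOR = "braking"  # one behavior at a time, as Dan suggests
HARSH_THRESHOLD = 0.6       # assumed cut-off; a real system would calibrate this

def immediate_feedback(m: Maneuver) -> str | None:
    """Return feedback right after a maneuver, only for today's focus behavior."""
    if m.kind != FOCUS_BEHAVIOR:
        return None                                    # ignore the rest to avoid overload
    if m.severity < HARSH_THRESHOLD:
        return "Nice smooth braking."                  # reinforce the positive
    return "That stop was harsh; try easing off the brake earlier."

print(immediate_feedback(Maneuver("braking", 0.3)))    # positive reinforcement
print(immediate_feedback(Maneuver("braking", 0.8)))    # specific, immediate correction
print(immediate_feedback(Maneuver("signaling", 0.9)))  # None: not today's focus
```

The shape of the loop matters more than the numbers: feedback names a single behavior and arrives while the driver can still connect it to what just happened.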
Engagement Without Overload
Harald asks how to keep drivers engaged, given that most telematics app usage drops after a few weeks. Dan’s answer:
- Engagement isn’t the real goal; better driving is.
- Set clear expectations: tell users driving will improve, decline, and require monthly refresher sessions.
- Use short, focused practice days (e.g., braking on Monday, speeding on Tuesday).
- Provide immediate, specific feedback – like tennis – rather than delayed monthly scores.
Final Takeaways
- Delayed rewards don’t work; immediate, incident-specific feedback is key.
- Avoid reinforcing bad habits by breaking the “nothing happened, so it’s safe” cycle.
- Target extreme states – stress, fatigue, distraction – where most accidents likely occur.
- Redesign information tools (like speedometers) to make better decisions intuitive.
- Build loyalty by interacting with customers in positive ways, not just after accidents.
Conclusion
This episode sheds light on why telematics must go beyond risk scoring to truly change behavior. By focusing on feelings, timing, and feedback loops, insurers can reduce accidents, improve customer satisfaction, and design systems that align with how people really think and act.
Full Transcript of the Episode
Harald
Dan, I first saw you on a TED talk where you spoke about people using their phones while driving. I remember it was an old Nokia or a BlackBerry—something ridiculously old. At that time I thought: that absolutely makes sense. We use our phones even though we know it’s extremely dangerous. In insurance telematics, we basically build our products around the idea that people don’t know what is dangerous—making them aware—and then waiting for them to adapt. But it’s not always the case. Why?
Dan
First of all, it’s very rarely the case. I wish humanity would solve its problems if we just told people what’s not good for them: eat better, sleep, exercise, take medication, don’t hate others. The gap between bad behavior and good behavior is not knowledge. The best demonstration is financial literacy. There are short and long courses on how to manage money better, and meta-analyses ask: how much impact do they have? Do people remember? Yes, somewhat. Do they act differently—save, budget, spend less? The difference is between 0.1% and 0.2%. Not zero, but very close. In the US, we spend between $700 and $800 million on these courses—lots of money and good intentions.
My intuitive example is texting and driving. I ask: how many of you in the last month touched your phone while driving? Most admit it. How many know it’s dangerous? Most do. The problem is we don’t feel it’s dangerous. With telematics, the question is how to get people to feel the danger. It’s not knowledge—it’s feeling.
Whenever we think about behavior change, we shouldn’t just hammer people with “here’s the info” or “here’s money.” We should understand where the problem comes from. Two stories. First: the negative learning feedback loop. Imagine I think texting while driving is dangerous and the accident probability is 2%. My phone vibrates, I look at it briefly, nothing happens because the probability is small. What do I learn? Maybe it’s less than 2%. Then I do it more. Every time nothing bad happens, I learn the wrong lesson: that it’s safer than I thought. Not true, but that’s how it feels. We increase our feeling of safety until at some point we get into an accident. Every time someone picks up their phone and nothing bad happens, there’s still a cost.
Second: non-probabilistic punishment or reward. The penalty for texting and driving can be very high—killing someone, injuring yourself, getting a fine. But imagine I sit next to you and every time you touch the phone, I take 10 euros from your wallet. What happens?
Harald
You would stop very quickly.
Dan
Exactly. We are not good with delayed, probabilistic punishments. We act like our system is designed for short-term good, not long-term thinking.
Another study: I looked at a large group of people with type 2 diabetes using insulin. Who manages better and who doesn’t? Knowledge of diabetes? No. Knowledge of side effects? No. Understanding how to measure blood sugar? No. How to inject? No. General motivation? No. The number one predictor was “break points.” At some point we get fed up with life. We wake up resisting temptations—no cookie, don’t check Instagram, don’t respond to an angry email. Toward evening we get depleted. When we pass this break point—“screw it, I want something fun now”—the easiest reward is food, like ice cream in the freezer.
Managing diabetes is extra hard—more to resist, measure, and do. People reach break points more often. The break point was the main predictor of getting into a bad diabetes cycle. Why tell this? It made me wonder about telematics. In diabetes we don’t need to fix average behavior; we need to fix break-point behavior. For driving, are we trying to fix average behavior, or are accidents concentrated in outliers—extra happy and speeding, extra tired, distracted? Each of us has a distribution of driving. Do accidents happen anywhere, or at the extremes?
If it’s like diabetes, I wanted a device to warn people before a depleted state—a break point—so they could take a short walk and avoid the refrigerator moment. You probably have the data to analyze when people are prone to accidents. Is shifting average behavior enough, or is it about preventing extreme cases? I suspect it’s mostly the extremes. Sometimes someone just hits you and your state doesn’t matter, but can we predict those dangerous moments?
Harald
That’s a very interesting question. We ask our data that question. With data, you see what happens, but you need to talk to people to understand why. There’s a data Chinese wall between us and insurers: insurers know personal information, we know general information and data.
Dan
Sometimes you can interrogate the data. There’s a paper I love: in the US we pay taxes on April 15th—higher accident rates on that day. Stress matters. It doesn’t explain everything, but it’s a starting indicator. Even if you can’t ask people, there are ways to interrogate the data.
Harald
Absolutely. For example, we see more wildlife accidents around the daylight saving time change because the shift is disruptive for wild animals. We see more accidents with children on October 31st—Halloween—more kids on the road, dressed in dark costumes. We see peaks. We also see more accidents in the first three minutes of a trip because people know the environment and are probably less attentive.
Actuaries generally predict well, but if you look at the combined ratio in insurance—the profit margin—it hovers between 90 and 110, so insurers are earning or losing about 10% of premium income. We need to catch edge cases to help actuaries make better decisions. Age, location, bonus-malus in Europe—those tell you a lot, but it’s actually about edge cases.
Dan
Predicting that an average 18-year-old is higher risk is not the same as prevention. When are they at higher risk? Close to home? That’s every trip. Can we get better? If you just alert all the time or focus on shifting average behavior, you might not get there. But if you detect when people get into a car stressed, that’s a dangerous start. Or if they just had a fight.
I see the human mind as a vintage Swiss army knife. The Swiss army knife isn’t particularly great at any single task; it’s okay at many, and portable—like our mind. “Vintage” because our tools evolved for a different environment—roaming the savannah, small groups—not driving 120 km/h with a phone. We apply old tools to modern problems.
The car industry understands human weak points and compensates: rear-view mirrors, side mirrors, blinkers. Social media is the opposite: it exploits our weak points—love of gossip, bad news, hate. With cars, we’ve improved safety; with social media, often the reverse. It’s about understanding human weaknesses and designing forward.
One illustration: this speedometer. Inside is the usual miles per hour. Outside is minutes per 10 miles, showing time saved by speeding. Going from 60 to 70 mph changes from 10 to a bit less than 9 minutes per 10 miles—hardly any savings. From 80 to 90 saves less than a minute. Our brain isn’t good at this; minutes per distance makes the trade-off salient. Imagine if speedometers only showed minutes per 10 miles with legal markers. You’d ask: is speeding worthwhile? There are lab demonstrations; nobody has deployed it at scale. We should give people information compatible with better decisions, not just what engineers collect. For example, we’d change recommended speed when it’s raining or a stressful day, or at a trip’s start when overconfidence is higher. The speedometer should help people decide safely, not just report speed.
Harald
When we talk about behavior change, we have two stakeholders. Insurers want customers to be safer—car, life, other lines—and consumers want to be safe because human life has value. What should we do to change behavior? You already gave a speed example and discussed delayed gratification. What about giving people feedback immediately after a trip—“you did well,” “not so well”? What mechanics would you tell an insurance marketer or product manager to implement in telematics for actual behavior change?
Dan
You’re right: insurers used to price risk and not change it. They should start changing behavior. It brings long-term customer satisfaction and competitive advantage; similar prices plus reduced risk is better. It’s also a tool for loyalty and retention. If you only interact when bad things happen, that’s a poor relationship model. Interact when good things happen.
What would I do? Learning requires understanding cause and effect and getting rewards for doing the right thing. If you gave me a point total at the end of my drive—“you got 68 points”—am I going to review my drive line by line? Almost never. People don’t even comb through bank statements to optimize spending. If you got home safe and scored 68, you won’t take time now to evaluate. We’re all busy.
So decide what you’re going to work on. In the app I’d list seven focus areas: braking, speed, acceleration, signaling, attention, etc. Recommend working on all, let users deselect. Tell people before they drive: today we’ll work on slowing safely, acceleration, signaling. Give feedback on that, ideally in real time. Even if not real time, framing it per behavior is better than a global score. If I know the target behavior, I can focus on it and you can reinforce good behavior: “You indicated well.”
My assumption is people have a distribution of driving that includes very good driving. The question is how to make the good part habitual. Make it behavior-by-behavior, focus until it becomes routine, reinforce positives, and fine-tune when needed. A general score asking me to figure it out myself won’t work.
Harald
Summarizing: would it be smart to tell a person in the morning before the first trip, “Yesterday, because of your speeding habits, you saved only two and a half minutes, but you sped for 30 minutes—so it doesn’t pay off”?
Dan
In general yes, but I’d go a step further. When people are calm, they want to drive safe. I ask students: if laptops were allowed, how many would check unrelated things? All hands go up. How many would go too long and later regret missing class material? All hands up. I help them: laptops are not allowed.
Some paternalism is better if it aligns with stated goals. Ask people: how important is driving safely? Give them a lever: how much do you want alerts and suggestions? Set this up on the sofa, not in the car. Then say, “You asked me to do this; I’m doing it for you.”
Harald
That’s awesome. How can we keep people engaged? Insurers say app usage lasts three to four weeks, then deteriorates. We allow each customer a specific experience based on behavior, but it still needs a marketer or product manager to “play the piano.” What’s your game plan to keep people engaged?
Dan
We don’t necessarily need engagement; we need better driving. There’s a seven-exercise fitness app I used five times, learned the routine, and didn’t open it again. From the app’s perspective that looks like churn; from a behavior perspective it worked.
Still, two ideas. First, give a mental model for usage: tell people upfront that driving improves then deteriorates, and once a month we need five days of focused practice—one day on speeding, one on braking, etc. Don’t demand daily use; give a realistic recipe. Second, to increase engagement, occasionally ask people to experience the boundary conditions to calibrate. Legal constraints may prevent asking for risky behavior, but experiencing controlled “too fast braking” by a small margin and seeing the feedback teaches what “bad” feels like. That can be engaging.
Harald
Nice thought experiment, but legally impossible for an insurer to ask for riskier driving. What about letting people rate themselves after a trip and comparing it with the actual score—would that engage people?
Dan
After a trip is probably too broad; the score is too complex. Bring it to a single incident.
Harald
For example: “How was your smartphone usage in this trip?”
Dan
I’d go even more immediate. Things we find engaging give quick feedback—like tennis: every shot gives outcome feedback. That’s rewarding and teaches fast. Running doesn’t give the same loop; yoga does through body feedback. To create engagement, make it more like tennis. If possible, right after a maneuver ask: “You just took a turn—on a 0–100 scale, how risky was it?” Then show the objective rating. That resolution is more interesting and rewarding than an overall drive score.
Harald
When I lived in the US, I had a bank account and got a monthly FICO score. I didn’t do much, but I was curious: 800, then 805. I didn’t understand how it worked, but I cared. Could we use that for telematics—tell people once a month how they’re driving and then have a focused day working on, say, speeding?
Dan
You could. I’d send a monthly score and compare it to neighbors or an average, maybe break it down. But if you ask which approach improves driving more, the monthly score has much less impact than immediate, driving-proximal feedback. I want people to develop a muscle understanding of what they just did that wasn’t ideal—at the moment, not end of drive or end of month.
Harald
Awesome.
Dan
For example, if a turn wasn’t good, I’d say: let’s do the next one extra well. Also, we don’t have to abandon the “range” idea. Imagine a score from +100 (perfect) to −100 (terrible). Ask people to drive normally, then “best you can,” then “how you’d drive if slightly in a hurry.” You don’t have to ask for dangerous behavior, but you can help them experience a safe range and how to stay in it.
Harald
To wrap up: thank you so much. What I learned: delayed rewards don’t work well, so we must shorten the time between behavior and bonus. Every time I do something wrong and nothing happens, my brain marks it as “not dangerous,” reinforcing the behavior—we need to break that negative cycle. It’s probably about reacting to specific behaviors and giving specific instructions to change behavior over time. Anything to add?
Dan
These are all good. I’d also say we need to understand circumstances and where in the distribution to work. And remember the speedometer idea: provide information compatible with better decision-making.
Harald
Fantastic.
Dan
I have to run. We said we’d talk about health and didn’t get to it, but if you want to continue at some point, let me know.
Harald
We’ll do it next time. Dan, thank you so much. It was a pleasure and an honor. Have a great day.
Dan
Thank you. It was nice to talk to you. Bye.
Harald
Thank you, Dan. Bye.