Picture a web-based quote form for car insurance. One relatively low-priority option — which costs the customer more money — gets significantly higher take-up than industry benchmarks. The insurer is concerned because customers choosing the option will see a higher premium, and that’s likely to mean poorer overall conversion. But the question has been carefully designed and performed well in user testing. What’s behind this form design mystery?
A bit of background
In this online quote form, potential customers provide the details of their car and their situation, and the system tells them how much the cover will cost.
Figure 1: The option as originally presented to eligible users.
The cover has options, one of which is more complex than the others and therefore requires some explanation. Consequently the option gets a section of its own, and this section (see Figure 1) is shown dynamically when the customer meets the eligibility criteria.
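Showing a section only when the customer meets the eligibility criteria can be reduced to a simple predicate over the quote details. The sketch below is a minimal illustration, not the insurer's actual rules: the interface, the field names and the specific criteria (`noClaimsRating`, `yearsLicensed`) are all hypothetical.

```typescript
// Hypothetical quote details captured earlier in the form.
interface QuoteDetails {
  noClaimsRating: number; // 1 is the best rating in this sketch
  yearsLicensed: number;
}

// Decide whether the optional-cover section should be rendered at all.
// The form would call this whenever the relevant fields change, and
// show or hide the section accordingly.
function showOptionSection(details: QuoteDetails): boolean {
  return details.noClaimsRating === 1 && details.yearsLicensed >= 2;
}

console.log(showOptionSection({ noClaimsRating: 1, yearsLicensed: 5 })); // true
console.log(showOptionSection({ noClaimsRating: 3, yearsLicensed: 5 })); // false
```

Keeping the eligibility check in one pure function like this makes the "shown dynamically" behaviour easy to test independently of the page itself.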
As part of the standard design and development process, the form is tested with a representative random sample from the target population. There are no problems with this part of the design and in fact, probing by the test facilitator indicates that participants understand the option correctly. The online quoting form goes live.
Time passes, and large numbers of people use the form. Analytics start showing that the proportion of eligible customers choosing the option is much higher than industry benchmarks. It might seem like this would be a positive thing for the insurer, because the premium is higher. But a higher premium is only valuable if the policy is actually bought! Instead, there’s a greater number of customers potentially comparing more expensive, option-inclusive apples with less expensive, option-exclusive oranges (i.e. competitor quotes or their current policy). Insurance is quite price-sensitive, so this behaviour is likely to equate to lower sales. No insurer wants that.
What’s going on here? In short: we didn’t cater for Rushers.
Three types of form-fillers
In their great book, Forms that Work, Jarrett & Gaffney propose three types of form-fillers in the world:
- Readers carefully read all the content on the form.
- Rushers begin by completing fields, only reading when they think they have to in order to be able to answer.
- Refusers decline to fill out the form at all.
Readers are the form designer’s ideal, but we must cater for Rushers as well (Refusers are another story for another post). Rushers are not necessarily lazy or malicious. In fact, they are doing what all people do: satisficing.
Satisficing is a subconscious human behaviour used to conserve energy. It means expending only the amount of effort the individual believes is required to get something done to an appropriate standard. It makes a great deal of evolutionary sense, which is why we humans do it all the time.
In the case of forms, Rushers are intensely ‘field-focused’. So, when a Rusher comes to our option, they probably don’t read much more than the heading before jumping down to the question and answering it. Chunk of text beside the heading? They think, That’s probably some guff I don’t need to know, won’t bother with that. In response to the option being offered they conclude, You bet! After all, it sounds like you’re offering me something, and there’s no big mention of cost, so it’s probably free.
Yet the option does cost extra, and this is likely to be a key factor in deciding whether or not to take it up. It’s not possible to give a dollar value at this point in the interaction, so an approximate proportion is given instead. But this cost information sits in the explanatory text, which our Rusher probably didn’t read.
If they must read it, put it in the question
To solve this problem, we drew out the cost component by moving the cost information into the question itself.
We could have added it to the text of the question, e.g.:
“Would you like to protect your Rating 1 (adds 10% to your premium)?”
However, the same satisficing behaviour is likely to mean that the user stops reading the question as soon as they get to “Rating 1”, figuring that:
- they have enough to answer at that point; and
- what’s in parentheses is just ‘nice to know’.
Instead, we added the note to the answer options, appending “(adds 10% to your premium)” to the “Yes” option (see Figure 2). This way, in one glance, the user almost can’t help but see that taking up the option will cost them extra. Even if they don’t see “to your premium”, close proximity means that the “Yes” is intimately connected to “adds 10%”.
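The revision above amounts to building the cost note into the answer-option label itself. A minimal sketch of that idea, with a hypothetical helper name and parameter (the real form was not necessarily built this way):

```typescript
// Hypothetical helper: attach the cost note directly to the "Yes"
// option, so even a Rusher scanning only the answers will see it.
function optionLabels(premiumLoadingPercent: number): string[] {
  return [
    `Yes (adds ${premiumLoadingPercent}% to your premium)`,
    "No",
  ];
}

console.log(optionLabels(10)); // ["Yes (adds 10% to your premium)", "No"]
```

The design point is that the note travels with the answer the user clicks, rather than living in explanatory text they may skip.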
Figure 2: The option as presented to eligible users, after revision.
Outcomes improve markedly after catering better for Rushers
After making this modification and letting the revised form run live for a while, we reviewed the analytics (comparing the month immediately before the change to the month immediately after the change).
We observed a 32% drop in the proportion of eligible customers taking up the option, putting the rate right back into the typical range. Simultaneously, conversion almost tripled, with no other changes to the product or its marketing implemented during the period.
While we can’t be sure of cause and effect, it is entirely feasible that conversion improved because more people were now comparing apples with apples, thanks to a better understanding of the option.
How can you cater for Rushers more, and get similarly great increases in conversion?