Untangling importance

There's often a need to measure importance. For example:

  • An organisation wants to measure what parts of the website are most important to its users.
  • A government body wants to measure what services are most important to the public.
  • A company wants to measure which potential improvements to a product are the most important for customers.

Knowing the relative importance of different options helps with prioritising resource expenditure. Therefore, measuring importance accurately is, well, important!

So what's the problem?

If we need to know importance, why not just ask people how important things are to them?

History shows that if you ask people directly to rate the importance of a set of options, they often give every item the same "high" rating. On a scale from 1 to 10, where "10" means extremely important, having every item rated "9" doesn't really get you anywhere.

Answering questions

A model from the field of social psychology provides some insight into why people give importance questions a consistently high rating. The model, developed by Roger Tourangeau [1], proposes four stages that people go through when answering a question. The four stages are shown in the diagram below.

Figure 1: Roger Tourangeau's question answering model.

So, if I were to ask you what you did on New Year's Eve in 2000, you would need to:

  1. Try to understand the question (comprehend).
  2. Search your memory for "New Year's Eve in 2000" (retrieve).
  3. Make an assessment about whether what you remember is correct and is the information I was after (judge).
  4. Formulate and provide an answer to the question (answer).

When examining why importance questions fail, the third stage, "judgement", is key. The model proposes that the following occurs:

  • The respondent asks themselves "why is this question being asked?" as part of the judgement process.
  • It's logical to conclude that one reason importance is being measured is to determine what goods or services are no longer needed.
  • The thought of losing or missing out on a good or service is not appealing.
  • The respondent gives all items a high rating to reduce the chance that they will be taken away.

Alternatives to the direct question

One key way to improve importance measurement is to require respondents to make a choice - that is, give an indication of relative priority. There are many different ways this can be done:

Ranking

Ask respondents to put items in order from 1 to n (where n is the number of items).

  • Ideal when there are 3 or fewer items.
  • Not recommended when there are more than 5 items, because too many choices can be overwhelming.
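
To make this concrete, here's a minimal sketch (in Python) of how collected rankings might be summarised. The item names and responses are invented for illustration; the only real idea is that a lower mean rank indicates higher relative importance.

  from collections import defaultdict

  # Each respondent orders the items from 1 (most important) to n (least important).
  # These responses are hypothetical.
  responses = [
      {"Search": 1, "Contact form": 2, "News": 3},
      {"Search": 1, "News": 2, "Contact form": 3},
      {"Contact form": 1, "Search": 2, "News": 3},
  ]

  ranks = defaultdict(list)
  for ranking in responses:
      for item, rank in ranking.items():
          ranks[item].append(rank)

  # A lower mean rank indicates higher relative importance.
  for item, item_ranks in sorted(ranks.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
      print(f"{item}: mean rank {sum(item_ranks) / len(item_ranks):.2f}")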

Money spend

Ask respondents to allocate money to different choices (e.g. "spend" $100).

  • The "realness" of the task helps ground people's thinking.
  • The amount of "division" you allow influences the results. For example, providing only two $50 notes yields very different (and "lumpy") importance data than providing 20 $5 notes.
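
As a rough sketch of how the resulting allocations might be summarised (again in Python, with invented item names and amounts), the average spend per item can be read as a share of the total budget.

  BUDGET = 100  # each respondent "spends" $100 across the items

  # Hypothetical allocations from three respondents.
  responses = [
      {"Search": 50, "Contact form": 30, "News": 20},
      {"Search": 70, "Contact form": 20, "News": 10},
      {"Search": 40, "Contact form": 40, "News": 20},
  ]

  for item in responses[0]:
      mean_spend = sum(r[item] for r in responses) / len(responses)
      print(f"{item}: ${mean_spend:.2f} on average ({mean_spend / BUDGET:.0%} of the budget)")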

Likelihood of use

Ask respondents how likely they would be to use the item in the future, on some sort of scale.

The only problem is that research shows reported likelihood may not be a good indicator of actual behaviour later.

Frequency of use

Ask respondents how often they currently use the item.

Again, not a perfect measure because something may be very important to someone, but used infrequently, e.g. a first aid kit.

Impact of non-existence

Ask respondents what would happen if the item was no longer available.

The answers to this question are more likely to be word-based (i.e. "qualitative") than numerical (i.e. "quantitative"), making summaries and comparisons difficult.

Attitude

Ask respondents the extent to which they agree or disagree with a range of descriptive statements such as "I don't think I would miss <item> if it wasn't around".

Statements need to be diverse and carefully worded.
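
One practical wrinkle is that negatively worded statements (like the example above) need to be reverse-coded before scores are combined. Here's a minimal sketch, assuming a 1-to-5 agreement scale and made-up statements and responses.

  SCALE_MAX = 5  # 1 = strongly disagree, 5 = strongly agree

  # True marks a negatively worded statement that needs reverse-coding.
  statements = {
      "I rely on <item> to get things done": False,
      "I don't think I would miss <item> if it wasn't around": True,
  }

  # Hypothetical responses from two people.
  responses = [
      {"I rely on <item> to get things done": 4,
       "I don't think I would miss <item> if it wasn't around": 2},
      {"I rely on <item> to get things done": 5,
       "I don't think I would miss <item> if it wasn't around": 1},
  ]

  for statement, reverse in statements.items():
      scores = [(SCALE_MAX + 1 - r[statement]) if reverse else r[statement] for r in responses]
      print(f"{statement}: mean score {sum(scores) / len(scores):.2f}")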

Reporting importance

No matter how you measure importance, the resulting data needs to be carefully reported and interpreted, because the way importance is collected has a great deal of influence over the results. We'll be covering importance reporting in our next article.

References

[1] Tourangeau, R. (1984). "Cognitive sciences and survey methods". In Jabine, T.B., Straf, M.L., Tanur, J.M. & Tourangeau, R. (eds), Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines, pp. 73-100.
At the time of writing, Roger Tourangeau was based at the Population Studies Center at the University of Michigan.