Judging forms

First impressions and face value

There’s an old saying that advises: “don’t judge a book by its cover”. The idea behind this saying is that initial, outward impressions are not necessarily a good indicator of what’s on the inside.

There is solid reasoning behind this mantra: it is impossible to capture — in the front and back cover — the mood, style, ideas and imagery that a book contains. The cover of my copy of George Orwell’s 1984 has a blurred photo of a guy with a megaphone on it; it gives no indication of the undeniably brilliant writing that lies beyond it.

Still, we humans are only, well, human and so we persist in making judgements based on ‘covers’. We buy wine that has a nice label, trust restaurants with fancy interiors and decide how we feel about a person based on the clothes they’re wearing. Often, we lose out because of these judgements.

Joshua Porter beautifully described an equivalent problem when comparing the judgement of logos — where focusing on just the visual is OK — to the judgement of a website — where it’s not:

“You can’t appreciate a web site in the same way you appreciate a logo […] When a logo works, it makes you think certain things. Makes you think about the company, their influence, their reach. At this point, after you’ve thought these things, you’re done. There is nothing else to do […] When a web site works, on the other hand, you’re using it to do something […] In web design, we can’t pass such sophisticated judgment on a design without having an actual experience with the web application itself.”

The story isn’t any different for forms. When deciding whether a form is good or bad, we all — Formulate included — have a tendency to focus on how it looks. Does it run for lots of pages? Has the designer used a font that we like? Is it so lacking in white space and cramped with questions that it makes us break out in a cold sweat?

Unfortunately, making judgements about forms only at this immediate, visual level is fraught with problems just as with books and their covers. A form may look nice, but be full of ambiguous questions and confusing terms; maybe it’s only one page long, but lumps multiple concepts into one question.

Figure 1 gives an example of a question on a form that looks very nice, but would actually be rather difficult to answer. If someone lives in the Central Business District of Melbourne, do they put Metropolitan Melbourne or Inner Melbourne? What about a suburb like Richmond, which is only about 5km from the centre of Melbourne in a south-east direction?

Figure 1: How a form looks can be deceiving. This question looks simple (and maybe even ‘fun’) but is unexpectedly hard to answer.

Ways of judging

To paraphrase Porter, there are essentially two ways of judging: looking and doing. To assess the fitness for purpose of a form, we need to make sure we’re considering both. For example, we should try filling out the form as well as just viewing it.

Here’s another way to think about it. In a previous article, we presented a four-layer model for forms:

  • Questions and their answers
  • Flow
  • Layout
  • Process.

When we are judging a form just by looking at it, we can only ever hope to assess the Layout layer. It is not until we — or some willing users — try to complete the form that we discover its answer categories don’t cater for us, the flow is illogical or the process for submitting the form takes days. This is why the involvement of real users is so important in any good form design project.
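The point about what each way of judging can reach can be sketched in code. The following is purely illustrative — the function and layer names are our own invention, not part of any standard tool — but it makes the asymmetry concrete: ‘looking’ covers one layer, ‘doing’ covers all four.

```python
# Illustrative sketch only: modelling the four layers of a form and which
# assessment method ('looking' vs 'doing') can reach each of them.
# Names and structure are assumptions for this example, not a real API.

LAYERS = ("questions", "flow", "layout", "process")

def layers_assessable(method):
    """Return the layers a given assessment method can evaluate.

    Judging by appearance ('looking') reaches only the Layout layer;
    actually completing the form ('doing') is needed for the rest.
    """
    if method == "looking":
        return ("layout",)
    if method == "doing":
        return LAYERS
    raise ValueError(f"unknown assessment method: {method}")

# Appearance-based judgement reaches one of the four layers.
print(layers_assessable("looking"))  # ('layout',)
print(layers_assessable("doing"))    # all four layers
```

A real assessment checklist would, of course, hold questions and findings under each layer; the sketch only captures the coverage argument.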

False judgements and assumptions

Another reason to move beyond ‘face value’ is that our underlying assumptions about how a form should look could be wrong. For example, we might look at a form and decide the design is poor because the form is quite long (e.g. eight printed pages). The underlying assumption here is that the longer a form is, the worse the experience for the form-filler.

Recently, we decided to check a number of such assumptions and were surprised to find that they do not necessarily hold. In fact, in some cases, there is an argument for the opposite of what one typically believes.

Assumption 1: Long forms are always bad

First we wanted to explore whether there was evidence that people consistently react negatively to long forms. Surprisingly, the research on this subject yielded mixed results: some studies found that greater length lowered response rates, whereas others found that it did not (Bradburn, 1978).

What the studies did seem to suggest is that length is a much lower priority, from the respondent’s point of view, than things like the relevance of the questions to their situation, whether the questions seem reasonable to them and how much they want whatever completing the form will get them. This is consistent with Formulate’s previous recommendation to focus on clarity and conciseness over length, per se. Two questions are preferred over one if splitting the question up makes it easier for the respondent to understand and answer.

Assumption 2: Fewer response options are better than more

The second assumption we wanted to check related to the number of answer options given as part of a closed question (i.e. a question where the respondent has to choose between one or more pre-defined answers). Does having a large number of response options have a negative impact on usability?

The common answer to this question is that no more than 7±2 response options should be given (i.e. between 5 and 9 responses is optimal). This is because 7±2 is the ‘magic number’ of items that people can commonly hold in their short term memory (Miller, 1956). There are also strong arguments for using seven-point scales when the respondent is being asked to rate on a continuum (Cox, 1980).

However, this rule only gets us so far in deciding on the impact of the number of answer options. What if I’m deciding between 8 and 6 categorical response options? Many people would argue that 6 is the better approach, because there is less for the respondent to choose from plus the form will be shorter and look less intimidating.

The research, however, suggests that more response options are often better than fewer, as:

  • there is a greater chance that the respondent will find an answer option that suits them;
  • the presentation of options enhances recall (Belson & Duncan, 1962); and
  • respondents use answer options to help clarify the meaning of the question, so more options means greater clarification (e.g. Tourangeau, 1984, Schwarz & Hippler, 1991 and Clark & Schober, 1992).

Assumption 3: Short questions are better than long questions

Like the number of response options, we may be tempted to keep our questions short to make things ‘simpler’ and the form overall appear shorter. But also like the number of response options, research has shown that more words help, rather than hinder, completion. For example, additional words can allow framing boundaries (e.g. “since 2004”) to be included. This contextual information routinely aids the answering process (for example, see Loftus & Marburger, 1983).

Even if the words do not substantively add to the meaning of the questions, there is evidence that they yield better results. In one health-related experiment, respondents provided more symptoms when a longer version of a health question was asked (Hensen et al., 1979). Note that in this experiment, the additional words were essentially ‘filler’.

Visual is not enough

Hopefully we have demonstrated that when it comes to forms, relying on first impressions, which are focused on the visual layer, is not enough. The evidence comes from other domains, past experience and research showing that at least some aspects of the visual design don’t necessarily impact on usability in the way we would expect.

Assessments of the suitability or otherwise of a form therefore need to explicitly take into account all components of the form’s design. At a minimum, this can be done using Formulate’s model of the four layers of a form. Better still, test the form in a replicated real-world situation, with a representative sample of typical users.


Belson W.R. & Duncan J. (1962). “A comparison of the checklist and the open response questioning systems”. Applied Statistics, Vol. II, pp. 120-132.

Bradburn N.M. (1978). “Respondent burden”. Proceedings of the Survey Research Methods Section of the American Statistical Association, pp. 35-40.

Clark H.H. & Schober M.F. (1992). “Asking questions and influencing answers”. In Tanur J. (ed) Questions about questions, pp. 15-43.

Cox E.P. (1980). “The Optimal Number of Response Alternatives for a Scale: A Review”. Journal of Marketing Research, Vol. 17, No. 4, pp. 407-422.

Hensen R., Cannell C.F. & Lawson S.A. (1979). “An experiment in interviewer style and questionnaire form”. In Cannell et al. (eds) Experiments in interviewing techniques.

Loftus E.F. & Marburger W. (1983). “Since the eruption of Mount St. Helens, has anyone beaten you up? Improving the accuracy of retrospective reports with landmark events”. Memory & Cognition, Vol. 11, No. 2, pp. 114-120.

Miller G.A. (1956). “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information”. Originally published in the Psychological Review, Vol. 63, pp. 81-97.

Schwarz N. & Hippler, H-J. (1991). “Response alternatives: the impact of their choice and presentation order”. In Biemer P.P., Groves R.M., Lyberg L.E., Mathiowetz N.A. & Sudman S. (eds), Measurement error in surveys, pp. 41-56.

Tourangeau R. (1984). “Cognitive sciences and survey methods”. In Jabine T.B., Straf M.L., Tanur J.M. & Tourangeau R. (eds), Cognitive aspects of survey methodology: building a bridge between disciplines, pp. 73-100.