Making Sense of the Data, Part 3

This post is Part Three of a series adapted from my presentation at the 2011 HOW Interactive Design Conference in San Francisco (Part 1 is here and Part 2 is here). I included a variety of slide transitions in my talk, so in some cases below, I’ve consolidated slide groups into animated GIFs. Keep your eye out for those so you don’t miss any details.

Gathering Data from Real, Live Humans

Let’s talk about people, and what they see.

Remember Thing #2 about web measurement? (You know, from Part 1 and Part 2?) Anything can be a source of data. In the last part of this series, I gave you an overview of Google Analytics, one of the most important sources of data you have. But people are another source we should take just as seriously as the analytics bots. We can learn a lot from seeing what other people see. So, let’s talk about how to do that.


First and foremost, the kind of user testing I’m recommending isn’t the most scientific process you could pursue. Last time we spoke, David Baker mentioned to me that he thought people were much more interested in a “man off the street” approach to this sort of thing than a “laboratory” approach, and I couldn’t agree more. The work and cost required to mount an in-depth usability study, complete with heat-mapping technology and the like, are a very real barrier to entry for most of us. But a scaled-down version won’t necessarily produce diminished results. In fact, our experience has been that the simpler the process, the greater the insights.

There are two particular types of user testing that you should be doing on a regular basis. If you’ve been keeping up with my posts here at ImPrint, much of this will be review. If not, wake up. This is important stuff:

1. Goal-Focused Testing
2. Ten-Second Testing

I’ll explain what each type involves, but first, let me explain how to practically plan for user testing. Fortunately, the setup required for both types is exactly the same…

To do simple user testing, here’s a list of what you’ll need:

1. A quiet space. We’re not talking about a sound-proof chamber here, just a place free of distractions. Since we’re going to be recording our test sessions, it’s important to keep peripheral noise to a minimum both for the volunteers and our future selves who will watch the test footage.

2. A computer. Nothing fancy, just make sure you’re able to run some kind of screen-capture software, like Camtasia.

3. A webcam. Most laptops come with them, but if you’re running something older, you can probably pick up a USB-powered webcam for cheap.

4. A volunteer. Ideally, this person should know how to use the web, but shouldn’t be an expert in your field or have any deep familiarity with the information or concepts on the website you’re testing. The fresher, the better.

5. A moderator. This is you. You’re there to provide the context and technology, to guide the test, and, most importantly, to observe. Even though the software will capture the screen your volunteer is looking at, as well as their face in real time, take notes. The more information you have later, the better.

6. A test plan. This is a simple step-by-step procedure to guide your volunteer (see the sample skeleton after this list). The most important thing about these tests is that you’re not just putting a volunteer in front of a website and watching them try to use it. You’re guiding the volunteer through specific questions and tasks designed to prove or disprove the website’s effectiveness.
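
To make that last item concrete, here’s a minimal skeleton of a test plan. The structure and wording are just my suggestion, not a fixed standard, so adapt them freely:

1. Orientation. “Take a few minutes to look around this page. Scroll if you like, but please don’t click away from it.”

2. Comprehension. “In your own words, what is this website for?”

3. Tasks. One step per website goal, phrased as something to accomplish (“Subscribe to the newsletter”), not a feature to hunt for.

4. Wrap-up. “Was anything confusing or frustrating? What would you do next on this site?”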

Goal-Focused User Testing

In a goal-focused test, the first step is to have your volunteer orient themselves on the homepage. Give them 2-3 minutes to explore it. Scrolling and interacting with slideshows are fine, but ask them not to click any link that would take them away from the homepage. After 2-3 minutes, ask your volunteer to explain the purpose of the website. Their answer will help you evaluate the clarity, or lack thereof, of your website’s purpose to an outside user.

Next, the test plan’s steps should correspond directly with the goals of the website. If a goal of the website is to generate leads through various conversion points, like newsletter signups, webinar registrations, etc., then the test plan should include tasks that require the volunteer to do those very things. For example, one task may be to have the volunteer find a particular article on the website, which tests your site’s navigation and search tools, and then subscribe to that content, assuming a corresponding call to action exists.

These tests can be customized in any way that makes sense for the goals of your website, but they should be kept to under ten minutes. As soon as fatigue sets in, the quality of the results will drop considerably.
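
For illustration, here’s what a goal-focused plan might look like for a site whose goals are newsletter signups and webinar registrations. The tasks are invented for the example, so swap in your own goals:

1. Spend 2-3 minutes exploring the homepage. Scroll freely, but don’t click away from it.

2. In your own words, what is the purpose of this website?

3. Find an article on a topic this site covers, using whatever method you’d normally use.

4. Subscribe so that you’d receive more articles like that one.

5. Find an upcoming webinar and register for it.

6. In one sentence, how would you describe this site to a colleague?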

Here’s an example of what a resulting clip might look like:

Ten-Second User Testing

Ten-second tests condense this approach even further. I came upon this idea after reading an article by Jakob Nielsen in which he wrote that the majority of website users decide whether to stay on a page or leave it within ten seconds of opening it. While that may seem like a tiny window of opportunity, the reality is that we can perceive information and make judgments about it in far less time than that. So rather than using this simple testing format to evaluate an entire website, a ten-second test should focus on just one page.

The setup is exactly the same as before, but the procedure is slightly different.

First, give your volunteers ten seconds to view a page. Again, ask that they limit their interaction to scrolling and not click any links that would take them away from the page. After ten seconds, have them minimize the page. Then, ask them to explain what the page they viewed is about. As in the homepage orientation step of goal-focused testing, the volunteers’ answers will help you evaluate how clear your page is. You’ll also begin to see how visual design decisions can either help or hinder a user’s ability to evaluate a page in those first ten seconds. You might also ask what stood out most to the volunteer, whether they would return to the page and continue reading based on their short experience so far, and what they would search for in order to find the page again.
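
Put together, a ten-second test script might read something like this. Again, treat it as a starting point rather than a formula:

1. View this page for ten seconds. You may scroll, but please don’t click any links.

2. (Have them minimize the page.) What was that page about?

3. What stood out to you most?

4. Based on what you saw, would you come back and keep reading? Why or why not?

5. What would you type into a search engine to find that page again?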

Here’s an example of what a resulting clip might look like:

The real value of this kind of user testing is that you can be up and running with both of these procedures almost immediately. The system is structured enough to be reproducible, but flexible enough to be customized to a variety of needs. But the big point is that you’re getting exposure to users in a very natural way.

In Conclusion…

This whole system—breaking down measurement into basic questions, pursuing them through specific analytics procedures, and gaining quick and simple user insights—is simple for a reason: Rather than getting bogged down with data—and there is enough of it available to make that very possible—we need to be simplifying our approach to measurement so that we can do it on our own, do it more regularly, and derive meaning from it.

There is, of course, so much more detail you could pursue, whether in terms of analytics or user testing. Once you’re up to speed and successfully reproducing this approach on a regular basis, I urge you to go further. If you want more, stop back here and dig through the archives.

