As in qualitative analysis, vary the degree of granularity you're using; comparing behavior by age range, company, or region can be more informative than just totaling up what individuals do. Be aware that if you subdivide your data in ways you didn't anticipate when planning your study, the error and confidence interval you intended to use may no longer apply, since you're effectively reducing your sample size. (For example, a margin of error of roughly ±5 points for a sample of 400 doubles to roughly ±10 points if you compare subgroups of 100.) Don't let that stop you from making a comparison, though, because it's more important to learn from the data than it is to maintain a particular margin of error—you're trying to gain insight, not prove a hypothesis.

Graphs and charts

People often use line or bar graphs to represent aggregate data (such as the relationship between education and income, or number of customer support calls versus time spent with a product) because their visual nature makes it easier for some people to grasp trends. Scatterplots, which represent individual data points in a graphical fashion, may point to relationships that are particularly hard to see in textual format. If you label each point, you may realize that people of certain types are clustered together and may be related in some way you didn't anticipate.

In the example scatterplot shown in Figure 10.10, each respondent is mapped on a grid, with the y-axis being the number of photos taken per day and the x-axis being the frequency of sharing. By labeling each dot with a respondent name, the design team can see that certain people are clustered together, which makes it easier to ask what they may have in common that causes that clustering. In this case, Alice, Ben, and Jorge probably show similar behaviors because they all have small children; they take a few photos several times a week, and share them every week or two with out-of-town relatives. Dan, Li, and Hans are all serious photographers who are motivated by capturing perfect images; they take a lot of images at once, but aren't motivated by sharing. Carla, Ellen, and Peter all take a fair number of photos on trips a few times a year, and share those with friends and family.

[Scatterplot: photos taken (y-axis) versus times shared in last year (x-axis); the serious photographers cluster at the upper left, the trip sharers in the middle, and the parents of small children at the lower right.]

Figure 10.10. An example scatterplot.
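If your respondent data is already in a spreadsheet, producing a labeled scatterplot like Figure 10.10 takes only a few lines of code. Here's a minimal sketch using Python and matplotlib; the respondent names come from the figure, but the coordinate values are illustrative stand-ins, not the actual study data.

```python
# A minimal sketch of a labeled scatterplot like Figure 10.10, using
# matplotlib. Respondent names come from the figure; the coordinate
# values below are illustrative stand-ins, not the book's actual data.
import matplotlib.pyplot as plt

# (times shared in last year, photos taken) per respondent
respondents = {
    "Alice": (28, 40), "Ben": (32, 45), "Jorge": (36, 50),    # parents of small children
    "Dan": (4, 290), "Li": (6, 240), "Hans": (8, 210),        # serious photographers
    "Carla": (12, 90), "Ellen": (16, 75), "Peter": (14, 95),  # occasional trip sharers
}

fig, ax = plt.subplots()
for name, (shared, photos) in respondents.items():
    ax.scatter(shared, photos)
    # Label each point with the respondent's name so clusters are visible.
    ax.annotate(name, (shared, photos), xytext=(4, 4), textcoords="offset points")

ax.set_xlabel("Times shared in last year")
ax.set_ylabel("Photos taken")
plt.show()
```

Even this rough version is often enough to spot clusters worth discussing; swap in whatever pair of behavioral variables you want to compare.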
Exercise

See what insights you gain from your LocalGuide or RoomFinder data (or the LocalGuide data on the Web site) using quantitative techniques. For example, is there any relationship between mental models or frustrations and the frequency of certain behaviors?

When qualitative and quantitative findings conflict

Qualitative results that seem a bit off from quantitative ones are not necessarily wrong. The explanation for the difference is often that surveys rely on self-reporting and may ask speculative questions, whereas qualitative studies dig deeper into actual behavior and motivations. For example, one company's quantitative survey seemed to indicate that a large percentage of people would be interested in a particular service. The design team's qualitative study showed that this percentage was probably nowhere near as big as it seemed. When the design team and client did a careful review of both studies, it was clear that the quantitative study was asking people to speculate about their future behavior, whereas the qualitative one demonstrated that most people had no particular need that the new service could address. While there were clearly some people who would be interested, the indication that the percentage might be misleading led the client to reconsider the size of their investment in the new service.

Explanations and relationships

As you bounce back and forth between types of analysis, you'll begin formulating explanations for why your respondents behaved as they did and how various aspects of their attitudes, goals, and behaviors are related. Drawing meaning from your data is the whole point of analyzing it; mere summary won't accomplish much. Sometimes you can draw those explanations from within a single interview, but in most cases you'll gain more insight from comparing respondents to one another.

Of course, there is plenty of room for bias in any attempt to explain what you see. We humans like to believe the world is a systematic, rational place, so we tend to fill in gaps with assumptions and causal structures that may or may not be accurate. This tendency toward explanation exists in nearly every discipline and is in no way "unscientific." Everything you learned in physics class about the nature of gravity or the types and behavior of subatomic particles is simply a theory about causal structure; there is considerable evidence that makes that theory plausible, so until someone disproves it, we rely on it as truth. In any attempt at explanation, all we can do is use techniques that help us be as objective as possible, show our work to others who can tell us whether it also sounds like truth to them, and proceed based on the best theory we can assemble.

Any attempt at explanation should rely in part on the classic rules for determining causality (a small sketch for checking these against coded data follows the list):

1. A precedes B. If A is the cause, it exists or occurs before B. In the case of human behavior, this would mean that a goal, attitude, or condition exists before the demonstrated behavior exists.

2. When A, always B. If A causes B, in theory, B should occur any time A occurs. In reality, human behavior is very complex, so don't stop looking if it seems that B doesn't always follow A; sometimes it doesn't happen because some other condition exists. For example, it might seem that skilled photographers always throw out any photo that doesn't meet their standards, but one respondent kept some bad images of his son. Does that explode your theory? Not really; that photographer kept those awful photos because they were the only documentation of an important event in his child's life, so you can say, "When A, always B, except when C."

3. A plausible mechanism links A and B. If there is a believable connection between A and B and the other conditions are met, it's reasonable to assume that A causes B. "Enjoyment of shopping leads to more frequent shopping" makes sense, but "having blue eyes leads to more frequent shopping" doesn't, so it's probably a coincidence if your data shows that people with blue eyes shop more than others.
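When your interview notes are coded into a simple table, you can check rule 2 mechanically before reasoning about exceptions. The sketch below assumes hypothetical coded fields in a pandas DataFrame; the point is just to surface the "except when C" cases so you can return to your notes and look for condition C.

```python
# A rough check of rule 2 ("When A, always B") against coded interview data.
# The records and field names here are hypothetical examples.
import pandas as pd

df = pd.DataFrame([
    {"respondent": "Dan",  "skilled": True,  "deletes_bad_photos": True},
    {"respondent": "Hans", "skilled": True,  "deletes_bad_photos": True},
    {"respondent": "Li",   "skilled": True,  "deletes_bad_photos": False},
    {"respondent": "Ben",  "skilled": False, "deletes_bad_photos": False},
])

# Cross-tabulate A against B; anything in the (True, False) cell is an exception.
print(pd.crosstab(df["skilled"], df["deletes_bad_photos"]))

# List the exceptions so you can go back to your notes and look for condition C.
exceptions = df[df["skilled"] & ~df["deletes_bad_photos"]]
print(exceptions["respondent"].tolist())
```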
In practice, design teams aren't making such formalized arguments, but this is essentially the thought process behind the discussion of causes and relationships among different attitudes, goals, and behaviors.

When examining the evidence, you must also consider how much weight to give any statement or observation. If a respondent said he does things one way and you observed another, the accuracy of his self-reporting is questionable, so for the sake of your analysis you should rely more on his behavior than on his statement. This is especially important when considering anything that someone says he "wants" to do. If your respondent says a magic tool would let him tag every photo in multiple ways so he could find each one later, you should take that statement more seriously if he currently spends a lot of time filing copies in multiple folders than if he currently dumps all of his photos in a single folder. In the first case, he cares enough about finding specific photos that he's already investing a lot of effort; in the second, he's just speculating about something that might be nice, without much existing pain behind it.

Don't expect to identify and explain every relationship right away. You'll want to revisit this step once you have personas, but persona creation will go more smoothly if you've already begun identifying relationships among various goals, behaviors, and characteristics.

Risks and opportunities

By the end of your analysis time, you may find that you've uncovered previously unidentified risks or opportunities. This is one way in which design research can form a basis not only for product design, but also for product and business strategy.

Opportunities usually appear in the form of unexpected customer and user needs. Sometimes combining what you heard from stakeholders with what you've observed in user interviews can help you make a critical connection. On a health care project when the managed care model was just starting to sweep the United States, I recall an executive stakeholder saying that within a few years, every care provider's biggest problem would be figuring out how to deliver appropriate treatment within the complex reimbursement structures of managed care. Later, during user interviews, my team encountered one facility with mostly managed care patients that had created a role specifically to address that problem; we realized that creating tools for such people would be a huge opportunity for our client to jump ahead of the market.

Many of the risks are already evident to stakeholders, but it's not unusual for design research to reveal significant business problems. I've seen cases where stakeholders misunderstood who the potential users were, severely overestimated the demand for a particular type of product or service, or had already invested in technologies that would prove limiting. In each case, the stakeholders weren't happy to hear the bad news, but they were grateful to have intelligence that helped avert potentially costly mistakes. Before jumping to the conclusion that a particular course of action is bad, though, remember that stakeholders may still know something you don't. Pull your project owner aside and discuss anything you believe might be a major problem before taking it to the full product team.
It's best to present your findings and help stakeholders see the implications, but let them draw their own conclusions.

Preparing to communicate your user findings

Just as thinking through the trends and relationships in your data will prepare you to identify behavior patterns and turn them into personas, walking through your findings before you introduce the personas will help stakeholders understand how the persona set emerged from your data and why it makes sense. This also gives you a chance to address questions and concerns based on examples from the data, so that once you introduce the personas, nothing in them should be controversial. As with stakeholder findings, you can communicate user findings in a presentation, in a detailed written report, or both. Try to distill each issue to a simple statement with supporting commentary or text. Here's an example:

Buyers chose their automobiles in widely varying ways, but most decisions involved emotion to some degree. Some people selected a car because they felt it expressed something they wanted to convey about themselves, whether because they liked its design or identified with the attributes of the brand. These buyers tended to be decisive and not very research oriented; one said she purchased a Jaguar because she "liked the kitty on the hood." Most people said they chose their vehicles for various practical reasons, such as cargo capacity, safety, fuel efficiency, or budget. Some of these buyers said they may have started with a leaning toward one model for emotional reasons, but would not have bought it if it did not meet their other criteria; they seemed to feel that emotional reasons were not a valid basis for making such a decision. This group varied with respect to the amount of research they did; research orientation did not appear to be related to any particular factor. A small group, which mostly consisted of men under age 40, was very focused on performance characteristics, such as engine size and handling.

You must decide how to focus your findings, since a complete list of everything you learned will bore most stakeholders and cause critical points to get lost among trivial ones. It's fine to talk about how everyone was confused by the instructions on a specific screen if you're doing a constrained redesign. If your scope is more ambitious, though, it's best to focus your findings on things that will have the greatest effect on the product definition or accompanying business strategy, along with anything that will have major implications for design (such as major flaws in the current hardware form factor or software navigation). Common topics for a findings discussion include:

— User mental models, especially where they differ from current implementations
— An overview of existing processes and major points of pain within them
— Trends, behavior patterns, and the factors that influence them
— User skills or characteristics, especially if they differ from expectations
— Comparison of customer and user needs

See Chapter 13 for how the findings come together with personas and requirements in the User and Domain Analysis document.

Project Management during Modeling

Although you may wish to spend a lot of time working with your data, it's seldom worth more than a few days of effort. On a relatively tight project of, say, eight to twelve weeks total, you may need to limit the pre-persona modeling to no more than a day.
Make sure your approach is consistent with the amount of time you have; detailed coding of every interview, for example, could take you several days and is seldom worthwhile for small data sets. Set goals for what you want to be able to articulate at the end of each meeting, so you don't just wander aimlessly through your data. Have patience if big insights don't appear just yet, though; things may become much clearer once you have personas to help structure some of the other models. Move on to persona creation if you don't feel like you're getting anywhere, then revisit these techniques afterward.

Avoid including people from outside the design team in these activities, since it's exceedingly difficult to interpret what you didn't observe, and this type of analysis generally looks messy (and not very confidence inspiring) to an outsider. However, all design team members should ideally be present. It's often more efficient for the interaction designers to do most of the modeling by themselves, then get additional input from other design team members. Only do this when your budget requires it, though, because it's less effective in building consensus on the design team, and the other designers' perspectives can add value at every stage. Any writing during this time is generally shared among team members, though the visual designer may be primarily focused on creating polished process or concept visualizations.

It's wise to have an informal team check-in once you have some preliminary models and findings outlined. This may just be with the team lead if he or she has not participated much in developing the findings and models. It could also be with the project owner; it's fine to wait until you have draft personas if you don't want to have multiple meetings or feel that the project owner won't respond well to findings alone. Ask whoever is reviewing the work to give you feedback on the following:

— Do the findings and models cover everything stakeholders expect to see?
— Have you jumped to any conclusions that seem incorrect or are not supported by what you observed?
— Where the findings differ from stakeholder assumptions, have you provided sufficient evidence to be compelling, and have you been clear but diplomatic?
— Are your findings detailed enough to make sense, but at a high enough level that it won't take you two hours or 20 pages to explain them?

Summary

The whole point of modeling is to make sense of your data so you and the stakeholders can understand and use it to make informed decisions. Almost any approach that helps you gain insight will do, so long as it also helps ensure that conclusions come from your data and not from your imagination. Regardless of the techniques you choose, always make sure you have an in-depth understanding of each case before attempting to aggregate the data or draw any general conclusions. Examine your data from multiple points of view. Don't just summarize: Explain, and make sure your explanations account for any outliers. Finally, address any controversy head-on, because disagreements left unresolved at this stage will hinder your progress later on.