I lost my address book recently. It was one of those near-death computer experiences where you see your data pass before your eyes and start searching through the trash, then the Web, hoping to find the information you need right now. The experience made me think about blame—and trust.
Here’s what happened. I was running late for a meeting and plugged in my Palm for a quick HotSync. You know the drill: one hand on the mouse, the other stuffing things into my briefcase, all while shrugging on my coat. Then, I got an error message. Something about having too many records and suggesting that I delete a few and try again. Distracted, I tried removing old, completed tasks. A few quick clicks, and I was HotSyncing again. That’s when it all went wrong, and I lost all of the information in my address book.
Okay. Before we go any further, did any of the following thoughts pass, however fleetingly, through your mind?
“I bet you didn’t have a good backup.”
“Why would you do anything like that when you are in a hurry?”
“What did you really click? Maybe you made a mistake.”
“Are you sure you don’t have a recent backup?”
If they did, I’m sure you are not alone in having these thoughts. They certainly passed through my mind—along with a few choice curses, imprecations to higher powers, and condemnation of all things silicon.
Before we all get out our hankies, let me tell you how this ended. I did not junk my electronic organizer and go back to paper. And I managed to restore most of my contacts’ names and addresses from a backup, even if not one as recent as I might have liked. We could say that this little episode ended happily.
But did it?
Yes, I got my contacts data back, but two other things happened. First, I endured another episode of “blame the user.” Second, I was given another lesson in why electronic devices can’t be trusted.
Blaming the User
Look at those reactions again. Each and every one of those thoughts blames the user—me—for the problem. But I really didn’t do anything except try to use an expensive piece of electronics for the purpose for which it was intended: carrying information with me in a convenient package. What I didn’t do was make it the center of my attention, so some might attribute this problem to human error.
But why is it that the only humans who seemingly make errors are the people who are trying to use a product? As David Aragon of Voter March said, “All errors are human error.” Why not point the finger at all the other people who had a hand in the situation: programmers, designers, product managers, and quality testers? Whose human error is it, anyway?
My error: I put too much information into my mobile organizer.
In other words, I used the thing. A lot. If it were my old DayRunner notebook, it would have been bristling with slips of paper, addresses scribbled in margins, and directions pasted into the calendar. Instead of all that mess, my data is stored neatly on a chip. But where my DayRunner would start bursting at the seams when it got too full, there is no visible meter on the Palm to show me how full it is.
I’m sure that, somewhere in the documentation, there is a statement about how many records the device can hold. But what good does it do to have this information buried somewhere? Even if I had read it and remembered the number, what good would it have done me if I were not reminded of it in a timely manner?
Their error: The warning came too late.
The time to warn a user is before a problem happens, not after it occurs. The faster things happen, the earlier the warning needs to come. Think about how early you would need to warn someone approaching the edge of a cliff, depending on whether they are stepping carefully, walking, or running. If there are any hard-coded limits, warnings should appear while there is still room to spare.
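A minimal sketch of that principle, assuming a hypothetical record limit and warning threshold (the numbers and the check_capacity function are illustrative, not Palm’s actual design):

```python
# Hypothetical sketch: warn while there is still room to act,
# instead of failing only once a hard-coded limit is already hit.

MAX_RECORDS = 10_000  # assumed hard limit; the real number would come from the device
WARN_THRESHOLD = 0.9  # start warning at 90% full, well before the limit

def check_capacity(record_count: int) -> str | None:
    """Return a warning message if storage is nearly or completely full."""
    if record_count >= MAX_RECORDS:
        return (f"Storage is full ({record_count} of {MAX_RECORDS} records). "
                "Delete or archive some records before syncing.")
    if record_count >= WARN_THRESHOLD * MAX_RECORDS:
        return (f"Storage is {record_count / MAX_RECORDS:.0%} full. "
                "Consider archiving old records soon.")
    return None  # plenty of room: stay quiet
```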
My error: I didn’t pay enough attention to the error message.
I treated the message like an annoying child, giving it just enough attention to quiet it, but not enough to really understand what it was saying. But if I had received the message from a person, there would have been some easy-to-recognize change in tone to warn me that this was serious. Instead, the device showed me a little window that looks almost identical to the window that says it has completed its task successfully.
I barely read the message, of course. I just glanced at it and clicked OK in the same instant. So, by the time I realized that I had not really gotten the message, it was too late. The message was gone forever. If there was another way to find out what had happened, I missed it, too.
Their error: Important information that should be visible wasn’t.
If it’s important for users to be aware of a condition, it’s essential that they have a way to monitor it. We figured this out with battery status warnings. Why not with disk or memory space? This applies to messages as well. If an error message contains critical information, don’t make it look just like the message that says everything is okay.
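As a rough sketch of that last point, assuming hypothetical message types and wording:

```python
# Hypothetical sketch: make a destructive warning look and behave
# differently from a routine "everything is okay" message.

from enum import Enum

class Severity(Enum):
    INFO = "info"           # for example, "HotSync completed successfully."
    DESTRUCTIVE = "danger"  # for example, "Continuing will delete records."

def show_message(text: str, severity: Severity) -> bool:
    """Display a message; return True if the user confirmed a risky action."""
    if severity is Severity.DESTRUCTIVE:
        # A distinct presentation and an explicit, typed confirmation
        # keep this from being dismissed like an everyday OK dialog.
        print(f"*** WARNING ***\n{text}")
        return input("Type DELETE to continue: ").strip() == "DELETE"
    print(text)
    return True
```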
My error: I tried to fix their problem without really focusing on it.
I remembered “remove some records” and tried to find something I could remove quickly. The calendar offered me only the options of removing items from one week, two weeks, three weeks, or a month ago. Not far enough back. I decided to remove completed tasks. I think I did it right, but who knows. I didn’t spend much time on this decision and certainly didn’t look up any instructions.
Their error: Failure to protect user data.
I don’t really know what happened here, but I’ll bet it’s a bug. Perhaps the error was in not testing boundary conditions carefully enough—or in assuming something will never happen. But when it does happen, don’t punish the poor human who did nothing more than buy your product and try to use it.
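In testing terms, the remedy is unglamorous: exercise the boundaries explicitly. A sketch, reusing the hypothetical check_capacity function and MAX_RECORDS limit from the earlier example:

```python
# Hypothetical sketch: boundary-condition tests around the record limit,
# using the check_capacity function and MAX_RECORDS constant sketched above.

def test_record_limit_boundaries():
    assert check_capacity(0) is None                    # empty device: no warning
    assert check_capacity(MAX_RECORDS // 2) is None     # half full: no warning
    assert check_capacity(MAX_RECORDS - 1) is not None  # just under the limit: warn
    assert check_capacity(MAX_RECORDS) is not None      # exactly at the limit
    assert check_capacity(MAX_RECORDS + 1) is not None  # the "can never happen" case
```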
It will, of course, take a complete culture change to make the humans on the product-creation side take responsibility for their human errors and the product defects they cause. This change, however, is critical if we are going to create good user experiences rather than just user experiences that work okay as long as nothing bad happens.
This brings us to the second lesson: trust.
Learning (Not to) Trust
When a product—computer, mobile device, or whatever—is just a toy, it doesn’t matter so much if it works well. It might be annoying if your favorite game dumps your playing history just as you’ve reached the highest level, but not much more. However, if you’ve stored all of your financial data in an application, losing it has real consequences.
The more we rely on our electronic devices, the more we are trusting them to be there when we need them and to safeguard our information and our privacy. And the more we rely on them, the greater the consequences of any failure.
I don’t know anyone who has not been through at least one catastrophic failure. Some have long-lasting consequences; others are more short-lived. But the pattern of what follows is the same. After going through denial and anger, we make a bargain with ourselves: we will never let this happen again! During the depression phase that follows, we are more conscientious. We back up our data. We don’t push the system so hard. But, over time, the memory softens, we accept what happened, and we fall back into our old habits.
I’m not talking about “average” users, but people who work with computers regularly and understand them well enough to know their limitations. We have a strong affinity for these devices and start with a high degree of trust, so it takes a lot to whittle it away.
I just trusted that the Palm would not trash my data without warning me. After all, this is one of the few user interfaces that doesn’t ask me if I want to save the data I’ve just put effort into creating, but assumes that, of course, I want to keep my work. This incident left me a bit shaken, but in the end, I kept on using my Palm. I back up my data a bit more often, and I don’t trust HotSync not to destroy both sets of data, but I can already feel myself slipping into resigned acceptance.
There’s a saying: “Fool me once, shame on you. Fool me twice, shame on me.” How many times will people be fooled by technologies before they give up and decide that they can’t be trusted? Or will we make them trustworthy before that happens?
Comments

As a general rule, I don’t trust technology. I tried storing all my contacts, passwords, login info, etc. in electronic form, but then what to do when the power goes out? When the computer dies? When the hard drive fails? Little Post-it notes work for me. Among my peers, I stand out as the only one who doesn’t use a Palm Pilot/PDA, nor do I use my cellphone for anything other than making and receiving phone calls. The only drawback is that I have this 1.5” thick stack of Post-it notes lying around, and I have to keep moisture and the meddling hands of my one-year-old away from it.
Well, I am in a similar situation right now…but mine has lasted many months…maybe a year.
I stopped using Outlook and started using Thunderbird for email. I didn’t trust Microsoft…this is less true now, but Outlook is still a clunky beast…
Then, I stopped using my Palm Pilot, because I mainly wanted to be able to call my contacts directly from the device where I held all their information, and so I switched to a Treo for contacts, phone, all-in-one deal, etc., but its phone reception was too poor, so I switched again to a Blackberry…its keyboard and interface confuse me, so I never brought the contacts over or set up the mobile email…its alarm clock is very good though…
Still…I needed a reliable electronic calendar that I could share and that would notify me of things, so I use Yahoo for that…but I do not keep addresses there either…
This litany of partial data-trust has left me with a scattered string of personal information and no centralized personal information management.
My point is that we can trust much of the technology that we have - maybe 98% of each device never fails us, but just as we might see with user behavior on the Web, one single, small bad experience, and we’re ready to chuck it all in!
Time for the world’s most universally usable PIM I’d say…but nothing seems to be on the horizon…I mean half the requirements are listed in the preceding paragraphs…c’mon engineer-types…what do you say???
I’d welcome a silver bullet at this point, but I may well use it on one of my electronic servants to put it out of its misery.
I bought a secondhand Palm Pilot to see if it would be something for me. It was quite interesting. I used it mostly for taking notes. I never bothered to move them to the PC, because I could always do that later, and I wasn’t using it for syncing either. Well, the batteries died. And unlike my mobile phone, it didn’t come back after recharging: everything was gone. It is now someone else’s third-hand Palm Pilot.
Two interesting points about technology emerge from the comments above.
Leopold has tried to reduce his exposure to technology by minimizing the number of functions he combines in one place, while Gordon bemoans the loss of a single centralized repository for information. But both (and I) are intrigued with the possibilities of technology, just with slightly different levels of willingness to dive into the morass.
The other is that there’s an unstated assumption that paper is less durable, more easily damaged, and less organized. None of these is necessarily true: you have to keep moisture and the grubby hands of small children away from your Palm just as much as from any stack of paper. In fact, you are more likely to recover useful information from a pile of Post-its on which you have dumped a glass of water than from an electronic device.
Paper can be well organized (and e-data disorganized). Companies like DayRunner and Filofax did very well with their organizing systems. (And they still seem to be selling well in office supply stores in NYC’s downtown interactive corridor.)
What paper can’t do…and what I think keeps us all coming back to try again…is offer the possibilities of electronic information: interconnected usefulness, multiple uses, and easy revision.
Which brings us around to Gordon’s point that most of the things he sees as requirements are easy and well known. Clearly we are not being very successful at understanding and envisioning the future use of new ideas… or communicating this as engineering requirements. Why?
I don’t store attachments in Outlook, as it’s known that, if the PST file grows too big, Outlook becomes unstable. The question is: why should I have to change my saving habits because the software can’t cope? How many other habits do we change to allow for software limitations?
Whitney,
Enjoyed your story. You ultimately ask, “will we make them trustworthy before…” we give up? I think the answer is yes, we are constantly improving their trustworthiness. MS Word, though often the justifiable object of derision, is a model of several layers of backup (though few on earth intimately understand its timed autosave and manual recovery from it). For the vast majority of electronic devices, there will always be just one flaky, static-y, magnetic bit separating efficiency-nirvana from mission-critical, catastrophic failure… redundant systems notwithstanding. But when all backup functionality is commonplace, big losses will be rare. You’ve prompted me to flesh out all of those functions soon in my software function tree: http://usabilityinstitute.com/resources/functionTree.htm
Regards, Jack
About the Author

Whitney is an expert in user research, user experience, and usability, with a passion for clear communication. As Principal Consultant at Whitney Interactive Design, she works with large and small companies to develop usable Web sites and applications. She enjoys learning about people around the world and using those insights to design products where people matter. She also works on projects with the National Cancer Institute / National Institutes of Health, IEEE, The Open University, and others. Whitney has served as President of the Usability Professionals’ Association (UPA), on the Executive Council for UXnet, on the board of the Center for Plain Language, and as Director of the UPA Usability in Civic Life project. She has also served on two U.S. government advisory committees: the Advisory Committee to the U.S. Access Board (TEITAC), updating the Section 508 regulations, and, as Chair for Human Factors and Privacy, the Election Assistance Commission Advisory Committee (TGDC), creating requirements for voting systems for U.S. elections. Whitney is proud that one of her articles has won an STC Outstanding Journal Article award and that her chapter in Content and Complexity, “Dimensions of Usability,” appears on many course reading lists. She wrote about the use of stories in personas in the chapter “Storytelling and Narrative,” in The Personas Lifecycle, by Pruitt and Adlin. Recently, Rosenfeld Media published her book Storytelling in User Experience Design, which she coauthored with Kevin Brooks.