
ALLOCENTRIC PERCEPTIONS at the Moonspeaker

The Moonspeaker:
Where Some Ideas Are Stranger Than Others...

A COMMENTARY ON "INFORMATION TECHNOLOGY" BASED ON 15 PROMPTS FROM COOPER AND REIMANN'S ABOUT FACE 2.0

Introduction

In the course of a foray into a substantial section of used books in a thrift store much frequented by high school and undergraduate students, I found an unexpected trove of items probably from a serious office clean-out in a computer science department. The person who provided these books evidently specialized in user interface design and implementation, and the oldest books were fascinating, several of them old friends. Among these were the first two editions of Alan Cooper and Robert Reimann's book About Face: The Essentials of Interaction Design. For my purposes the first edition was uninteresting, but the second, which was part of my own foundational readings in the subject, was one I had to pick up and reread. It has joined my second-hand, still relevant, and much enjoyed copies of The Humane Interface by Jef Raskin and another of Alan Cooper's books, The Inmates Are Running the Asylum. Solid information stands the test of time, and what wouldn't be included in any of those books now usually wouldn't be because the points have been integrated into best practices, or because the approaches they related to have been dropped. The result is still very good for comparison and contrast, and I could see an instructor having a lot of fun coming up with short essay questions by providing selections of "what the experts said then" and "what is happening now" for students to think with. I found my reread of About Face 2.0 unexpectedly rewarding, because it was more than an indirect stroll down memory lane. It raised a great many intriguing questions to consider, especially in this absurd era of attempted "universal graphical user interfaces" that are supposed to be applicable on large screens, small screens, touch screens, and ordinary screens we navigate with a pointing device and/or the keyboard. In this essay I will take up just fifteen of them, using a selection from About Face 2.0 as the take-off point for each one.

1. "We're Ignorant About the User"

It's a sad truth that the digital technology industry doesn't have a good understanding of what it takes to make users happy. In fact, most technology products get built without much understanding of users... But does any [amount of marketing information] tell us how to make them happy? Does it tell us how they will actually use the product we're building? Does it tell us why they are doing whatever it is they might need our product for, why they might want to choose our product over our competitors', or how we can make sure they do? Unfortunately, it does not.

I suggest the truth is that this segment reflects a combination of category errors and the contradiction between what Cooper and Reimann understand about user interface design and their role as consultants to businesses claiming to sell "digital technology." It is an understandable fix to land in, and indeed the next point is headed "We have a conflict of interest." By "make users happy" they mean something like giving the people using the software a sense of satisfaction and the other positive feelings that go with that. A good example is the experience associated with a stable word processing program that includes a well-thought-out keyboard shortcut assignment and macro-recording facility. For a person who spends many hours using that program to draft and polish documents, the program is stable so it doesn't get in the way by crashing all the time, and many tasks can be done without taking their hands from the keyboard. The program serves as what it is supposed to be: a tool for helping compose and edit documents. The tool does not get in the way, and it facilitates developing a smooth workflow that allows the person to spend little time thinking about the tool after an initial training period. That's a very practical, measurable sort of happy. But "digital technology" sales are not about "digital technology," and the marketing push behind proprietary software depends upon the person who uses the software actually not being very happy at all, yet not too unhappy either. That person has to be kept hooked by the belief that there are no better options than to buy the "upgrade," or new version, or renew their subscription.

This last point is the biggest category error, because the way the proprietary software industry has developed is towards a model based on rent seeking. Even the most plausible and ethical option of providing subscription software support and bespoke plug-ins has fallen by the wayside because today's "digital technology companies" are run by people focussed on maximizing profits while skimping on everything else. Their ideal is to invest nothing in labour, tighten the screws, and drive up the costs by offering software that only runs remotely on their servers and/or fooling their customers into saving their data on those same servers instead of their own. After all, once dependent on proprietary software on a remote server and storage on a remote server, leaving that company's grasp is not so easy anymore. The changeover to this sort of extreme predatory model was picking up momentum in the early 2000s. The next category error is that even Cooper and Reimann keep talking about "technology products" and the like, which is simply a meaningless term. If technology is "the application of scientific knowledge for practical purposes," it is hard to see what mass-produced or mass-distributed product isn't a "technology product." They are striving to refer to a narrower category of course, that of proprietary software. Cooper and Reimann don't want to discuss, say, stainless steel kitchen utensils, refrigerators, or internal combustion engines.

No, what they are striving to discuss is software, and in light of their concern about user interfaces, especially software that serves as a tool. Maybe "software tools" would be a better designation, even though it would enforce recognition that there are at least two categories of software at all times, "software tools" and "software games." These two types of programs have very different user interface design paradigms, since the tools are supposed to fade into the background after a learning period, while the games are supposed to keep a person's attention. Therefore it seems more effective to begin with the premise that in order to create a product people will buy, it is necessary to make sure that the product either serves as an effective tool, or provides entertainment. I say "seems" because actually, the starting point goes back further. After all, at one time nobody expected there to be a market for computers beyond the military and the central offices of larger businesses. But once it turned out that there could be not just a niche hobbyist market, but a household market for computers, hopeful salesmen knew they needed to persuade people who did not work in academia, the military, or data processing that computers could be of use to them. Of course, they started with the "business market," and that is why the core programs available to this day that most of us use are for writing documents, filling out spreadsheets, and the like. So what professionals like Cooper and Reimann were ignorant about, even as late as the turn of the twenty-first century, was how to persuade "consumers" to see general purpose computers as capable of providing tools relevant to their lives, even if they did not work in jobs that included mass data processing, even if they still did not have a computer at home.

Truth be told, I am not convinced that professionals in software interface design are any more knowledgeable about these things today. In many ways the ongoing attempt to shove computers into everything and produce the ultimate surveillance and insecurity web marketed as the "internet of things," plus the proliferation of so-called "smartphones," has distracted from the key questions. Instead the focus has narrowed to striving to force people to have computers in their lives whether they want to or not, and to find ways to render a general purpose computer into a crippled device, so that to be able to use "more features" a person has to pay another fee or buy another subscription. In other words, Cooper and Reimann's constructive focus on respecting and meeting the requirements a person using software may have has now been replaced with a view that user interfaces should be hostile and disrespectful.

Excerpt from the cover of *About Face 2.0* by Alan Cooper and Robert Reimann, illustration by Anthony Bunyan, circa 2003.

2. "Implementation Models and Mental Models"

The computer industry makes frequent use of the term computer literacy. Pundits talk about how some people have it and some don't, how those who have it will succeed in the information economy, and those who lack it will inevitably fall between the socioeconomic cracks. Computer literacy, however, is nothing more than a euphemism for making the user stretch to understand an alien logic rather than having software-enabled products stretch to meet the user's way of thinking.

This brief paragraph distills most of what is wrong with the term computer literacy so well that it ought to be cited everywhere computers are discussed. The term is also a marketing ploy, one intended to generate "fear of missing out" or "fear of looking old-fashioned" and thereby add pressure to purchase a computer and find some use for it. There were, after all, only so many of us who bought a computer as soon as finances and good sense allowed because we were terrible typists and the availability of good electric typewriters for use at home had collapsed. The notion of an "information economy" sounds particularly strange now, since it has been literalized today. Massive corporations mine us for information and dump it into massive databases that they speculate on and sell copies of without any sense of concern about the people whose privacy and security are breached by the practice. The "information economy" today is just another speculative bubble, and in the meantime people cope with the insertion of computers into their work and home lives as best they can.

The "alien logic" of so much early software is not really surprising if we look into the actual origins of computers in the form that became a mass consumer product. Their earliest predecessors were devices intended to replace actual people, usually women, carrying out tedious computations to produce tables for astronomical, navigation, and military purposes. Women were supposed to be incapable of doing real mathematics but perfect for mindless, rote arithmetic. No one, woman, man, or child is going to flawlessly repeat a simple and therefore boring calculation for hours a day. This was the main concern of men like Charles Babbage, with little to no evidence of concern that such work could well drive a person half-crazed with boredom. The vast majority of mechanical and then electronic computer development happened via government funding of military research. Today we know that a significant portion of that investment went into finding ways to make codes and ciphers, and subsequently to break codes and ciphers used by others. In any case, these were machines developed for a small group of people whose access to the machines was closely monitored, and the uses of the machines strictly limited. Once a successful design for a general purpose computer was built, and opportunistic developers saw a chance to recreate the result into a marketable product, it must have seemed like the primary model for what to do next already existed. The IBM model of business machine leasing with profits deriving primarily from support and punch card sales probably seemed more than adequate. But to keep expanding computer sales, computer interfaces had to be redesigned so that people other than a specially trained group could run the machines, and it turned out that took more than adding a keyboard and better forms of data storage. Yes, the hardware needed to change, but so did the software. (Top)

3. "Mechanical-Age Representations Degrade User Interaction"

We encounter a problem when we bring our familiar mechanical-age representations over to the computer. Simply put, mechanical-age processes and representations tend to degrade user interactions in information-age products. Mechanical procedures are easier by hand than they are with computers. For example, typing an individual address on an envelope using a computer requires significant overhead compared to addressing the envelope with pen and ink (although the former might look neater). The situation improves only if the process is automated for a large number of instances in batch – 500 envelopes that you need to address.

Yes, how do we make a machine fundamentally implemented to automate a batch process intelligible and practical for daily tasks instead? There are at least three entangled problems to manage apart from the issue of scale. Cooper and Reimann's discussion of an address book rendered as software is a canonical example today, because it illustrates the difference between the affordances of the hard copy address book and those of the earlier software versions. Programmers got caught up in reproducing a virtual address book, something that today is probably being recreated again in one or another "virtual reality" world as a means to test haptic feedback that doesn't depend on mimicking pushing physical buttons or firing a gun. Today, in part due to books like About Face, we know that it is necessary to separate the task the physical item performs from the way it was implemented. The address book is designed to provide a means to make an ordered, easy to navigate, and personalized directory of contact information. There is no reason for software to try to recreate the affordances of paper, which doesn't make sense even on devices with touch screens. Cooper and Reimann thoroughly discuss these first two problems. The third one is left in an implicit form, and it seems worthwhile to make it explicit instead.

The third problem is resisting the pressure, whether it be from temptation or from management, to push certain forms of representation because they take advantage of the expanded graphical capacities of the computer. Programmers and designers who implemented the first software address books as partly animated representations of little books weren't just taking a bad shortcut to create what they thought the "general public" would be willing to learn to use. Even as some of the more experienced programmers belittled graphical user interfaces as exemplars of slobberware, equating non-programmer adults using computers with toddlers, the technical challenges of implementing those interfaces were still irresistible. Besides expressing a certain level of contempt for non-programmers, making an application that coordinated a brief animation with sound to simulate flipping a paper page was also a way to show off. As an application of multimedia it was a real technical achievement, but as a matter of day to day use, it was an example of something initially perhaps entertaining, but mostly just gratuitous. Hence even today many people end up searching out how to turn off the descendants of "mechanical-age representations." Yet there has also been an excessive reaction that falsely equated those representations with such items as the original "grab bars" and corners on windows in earlier versions of apple system software. Perhaps the naming was unfortunate, but these were far easier to use than the later "brushed metal" versions. The fundamental underlying metaphor of graphical user interfaces, as Bret Victor notes, is "pictures under glass." Removing or obscuring the interface elements that enable a person to manipulate the pictures does not remove a "mechanical age" representation, but often reflects a desire to somehow use a new whizz-bang graphics feature.

4. "Avoiding Blank Slates"

Just because we use the word think in conjunction with a program doesn't mean that the software needs to be intelligent (in the human sense) and try to determine the right thing to do by reasoning. Instead, it should simply do something that has a statistically good chance of being correct, then provide the user with powerful tools for shaping that first attempt, instead of merely giving the user a blank slate and challenging him to have at it. This way the program isn't asking for permission to act, but rather asking for forgiveness after the fact.

Today the dreaded utterly blank file, or the pop-up form demanding information and submission to get anything done, has mostly vanished, except for the evil and inappropriate subspecies found in web applications. Most of us will only encounter the occasional operating system installer that needs some explicitly entered information. Nowadays program installers usually pick up snippets such as the default user name, the proper time zone, and similar from the operating system. Unfortunately the implementation of powerful tools for making necessary changes is not so widespread or effective yet, and in some cases the implementation is strange or indicates an inappropriate decision. An example of a poor implementation of facilities to correct autofilled data that comes to mind is the terrible "options" tab in microsoft word. Unlike its libreoffice counterpart, this tab is poorly organized, with a now fossilized pseudo-three dimensional stack of sub-tabs with strange resorting behaviour. Multiple similarly named options, many of which seem to have no effect whether they are on or off, add insult to injury. Meanwhile, libreoffice provides a far more effective option that takes advantage of a now familiar interface based on a left-hand collapsible menu with links that summon a specific named screen with a group of related settings in it. The result brings together a massive number of options in a well-organized way, every option does something, and reversion to default settings is simple on a screen-by-screen basis.

The example of an inappropriate decision is one current in the GNOME 3.0 desktop I was setting up on a machine running a GNU/linux system based on debian. The trouble with living and working in a north american country other than the united states is that software defaults for date, time, and unit formats are always for what people and businesses in the united states generally use. However, canada switched to the metric system in the 1970s, including changes to number separators. Date formats are more fraught, because most federal, provincial, and local governments persist in using the ambiguous short format that lists the digits of the year, then the day, then the month. This creates enough day to day confusion that I and many of my colleagues change this setting by hand to a short date format that runs either year-month-day or day-month-year. Such an adjustment is usually made through desktop or system-wide settings, but it turned out that this was not possible under GNOME 3.0, which surprised me. (Yes, I did test whether the issue could be resolved using dconf.) The obvious workaround was to change this format as a display and auto-insert feature on a program by program basis. I started with a program where the short date format is notably visible, the local email client. On this specific debian variant, thunderbird is the default email client, and there is a way to change the short date format when displaying email to the desired setting in its about:config tab. Imagine my surprise when it turned out that changing this setting in thunderbird also altered it in GNOME 3.0 generally. So in this case while yes, the slate was not blank, neither was it accessible in the usually defined way, nor was it alterable in the ways we see so often.
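To make the formatting stakes concrete, here is a minimal Python sketch, using only the standard library and nothing specific to GNOME or thunderbird, that prints the same date under a few of the short formats discussed above; the unambiguous ISO-style year-month-day rendering is the one I end up configuring by hand.

```python
from datetime import date

# A date where the day and month are easy to confuse in short formats.
d = date(2003, 2, 10)  # 10 February 2003

formats = {
    "US short (month/day/year)": "%m/%d/%y",   # renders as 02/10/03
    "day-month-year": "%d-%m-%Y",              # renders as 10-02-2003
    "ISO 8601 year-month-day": "%Y-%m-%d",     # renders as 2003-02-10, unambiguous
}

for label, fmt in formats.items():
    print(f"{label:28} {d.strftime(fmt)}")
```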

5. "Hiding Ejector Seat Levers"

The program must adapt, but such adaptation can be considered a one-time procedure, or something done only by the corporate IT staff on rare occasion. In other words, ejector seat levers may need to be used, but they won't be used very often.

There are times when this recommendation is emphatically honoured more in the breach than anything else. Cooper and Reimann are focussed in this section on actions that may trigger confusing layout changes or cause an irreversible action. The most recent example of this I encountered was triggered by a keyboard shortcut in one application, after a recent "update" produced an unexpected change in the shortcut's behaviour. Prior to this update, the "ctrl-W" shortcut worked as it should, closing the current active tab when in tabbed browsing mode, or closing the active window when in window browsing mode. After the update, "ctrl-W" became equivalent to "ctrl-Q" instead. I did find a way to restore the correct behaviour, but only by disabling the "ctrl-W" shortcut for closing active tabs or windows in the general desktop manager settings. Of course this is not a true ejector seat lever in the Cooper-Reimann sense, but a bug that happens to mimic an inappropriately camouflaged ejector seat lever. Yet it produced a level of surprise and disruption that is typical of actual ejector seat levers when they don't cause data loss. Thankfully today there are few irreversible options in software, due to robust undo implementations and because it is now common to learn early on from program documentation to save right before attempting an operation that can't be undone, or to work on a copy. Then if things go wrong, at least it is possible to revert to the last saved version or delete the copy.

The worst breaches of Cooper and Reimann's ejector seat lever recommendation I am aware of today are embedded – of course – in the default behaviour of microsoft word. In this case the trouble resides in the display of tracked changes and comments. For decades, microsoft had quite reasonable default behaviour for these (which must be admitted), and turning their display on or off was a simple one-click operation. The settings applied came from the default template on the local machine. I have only used this program on corporate work stations, where this default template is locked down and centrally managed. If perchance working on a document somehow made changes to the default template, microsoft word would ask to save the changes. The first time this question popped up, I shrugged my shoulders and clicked the equivalent of "save changes." No changes were visible to me, but I interpreted this as a mysterious microsoft word behaviour and assumed it had updated something that needed updating. The result was an error message to the effect that the changes were not saved because the default template was locked. This was a great relief the first time a shared document with a number of idiosyncratic and, to me, confusing display settings triggered the "changes to default template" dialogue, because I could be confident the strange settings and whatever other invisible goings on would not propagate into other documents I edited. These were settings I neither wanted for myself nor wanted to accidentally impose on others. However, the advent of subscription-only versions of microsoft office with the programs all run remotely on microsoft servers has apparently changed how this all works. While I still habitually cancel attempts to save changes to the default template, unusual or confusing alterations to tracked changes and comment settings now propagate silently into my individual settings. This is a nightmare when working with co-edited documents. I am not sure whether this is a major bug allowed to go into the production version or if microsoft claims it is an interface improvement.

6. "A New Name for the File Menu"

Now that we are presenting a unified model of storage instead of the bifurcated implementation model of disk and RAM, we no longer need to call the left-most application menu the File menu – a reflection on the implementation model, not the user's model. There are two reasonable alternatives.

We could label the menu after the type of documents the application processes....

Alternatively, we can give the left-most menu a more generic label such as Document....

This particular recommendation has always caught my attention, because I am not convinced by the original premise behind it. The starting point is how a programmer has to implement creating, editing, and saving documents on a computer. A person who has spent time working with emacs will find this model more familiar even if they don't program, because in emacs the user edits a buffer and then saves to disk. The buffer is a copy of the document in RAM that can be edited. It is not possible to directly edit the more permanent version on disk. Cooper and Reimann are of the view that the "File" menu is named and populated in a manner that reflects this sequence of "load buffer, edit buffer, save to disk." It never occurred to me that there was more than one copy of the file until I began learning how to program. I simply could not fathom what problem Cooper and Reimann were seeking to solve with this one, even though the idea of having the "File" menu actually refer to documents when the active program is a document editor rather than a file browser did make sense to me. Now I know that this was the very point they were making: it is the fact that the name of the menu does not change with program type that lets slip the relationship between the editing buffer and the stored file. Except, no it doesn't, unless you're a programmer or use emacs as a text editor. What smoothed over the issue then and still does now, is the desktop metaphor enacted in the graphical user interface itself, and it turns out that the "two copy" model is not so alien to non-computer users as Cooper and Reimann seemed to believe in 2003.
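For anyone who, like my earlier self, has never had reason to notice the two copies, here is a minimal Python sketch of the "load buffer, edit buffer, save to disk" cycle; the function names and the minutes.txt file are my own invention for illustration, not anything from About Face 2.0.

```python
# A minimal sketch of the "two copy" model behind a typical File menu:
# the working copy lives in RAM (the buffer), the durable copy lives on disk.

def open_document(path):
    """Load the disk copy into an in-memory buffer, the copy that gets edited."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def save_document(path, buffer):
    """Write the in-memory buffer back over the disk copy."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(buffer)

if __name__ == "__main__":
    path = "minutes.txt"
    save_document(path, "Draft minutes\n")            # create a disk copy to work with
    buffer = open_document(path)                      # 1. load the buffer
    buffer += "Attendees confirmed the summary.\n"    # 2. edit the buffer; the disk copy is untouched
    save_document(path, buffer)                       # 3. save the buffer to disk
```

Until that last call, the disk copy and the buffer really are two different documents, which is exactly the relationship the unchanging File menu quietly papers over.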

Across operating systems that I have used, now even including the dubious versions on "smart phones" and tablets, files are stored in folders. The folders are visually represented by the familiar icon shaped like an office manila file folder, usually coloured yellow, or blue by default. Open the folder, and there are files in it, each with their own icon. I am old enough that my early office work days included working with the firmspace foundations of this metaphor, including the common experience of storing multiple types of document in the same folder. To file a document is to place it in order in the right folder, usually inside a larger cabinet of some kind. On checking my trusty OED, it turns out that "file" comes from the french word for "thread," because papers were originally kept in order by stringing them on a thread or wire. In typical english fashion, the noun has been repurposed as a verb. But let's take a moment and parse out a physical example from before computers were part of the sequence: taking minutes without electronic assistance at a short meeting. The basic steps are:

  1. Set up a sheet of paper to write the minutes on.
  2. Take point form minutes.
  3. Verify basic contents with attendees before the meeting ends.
  4. Type (with a typewriter!) a final copy of the minutes on a fresh sheet of paper.
  5. File the minutes.

I selected this example because it is one of the simplest tasks in an office environment that generates a document that will eventually be filed. In my professional experience, staff were not allowed to file original handwritten notes, we were required to make a typed version. As it turns out, there would temporarily be two copies of the minutes, the handwritten draft and the typed final copy that would go into a file folder for longterm storage. The draft would go into the recycling bin, as paper recycling was already an established practice in the office when I entered the office workforce.

On one hand, the constant File menu can be read as a giveaway that the programmer had to implement electronic file management on the computer in a certain way. On the other, the paper-based equivalent is very similar, and it too involves two copies of a document, one temporary and the other more permanent. It is easy to miss all these parallels once a document workflow is mostly or wholly computer-based. I missed them myself before sitting down to describe this example. Thinking the matter through in this way, it is no longer surprising that in general no one has renamed the File menu. It's a pity that so many of Cooper and Reimann's suggestions for revising the contents of the File menu remain unapplied though.

7. "The Computer Does the Work, and the User Does the Thinking"

There is some confusion about smart software. Some naive observers think that smart software is actually capable of behaving intelligently, but what the term really means is that these programs are capable of working hard even when conditions are difficult and even when the user isn't busy. Regardless of our dreams of thinking computers, there is a much greater and more immediate opportunity in simply getting our computers to work harder.

While I am very suspicious of "dreams of thinking computers," the notion of making the best use of the capabilities of the computers we have makes excellent sense. Cooper and Reimann note how much it is possible for the computer to do just in the pauses a person naturally makes while typing, besides the rather longer pauses to look up bits of information. In the early 2000s, RAM was still extremely expensive, and having more than one central processing unit was an unheard of luxury in day to day computers. For example, the laptop I had around that time was a mid-level item in terms of price and hardware, one of the last powerPC apple powerbooks, acquired on heavy discount as the new intel machines were just hitting the market. It was still a reasonably up to date machine for the time, with 512 MB of RAM, a 60 GB hard drive, and a single 1.5 GHz powerPC processor, and it could run macosx-tiger plus the "classic" environment. Overall it had plenty of zip, even though it was heavy, the network card struggled with the higher speed ethernet networks, and, as I recall, support for secure wi-fi access died long before the laptop did. For all its quirks, the new operating system was fast despite the fancier graphics, and a single program crashing no longer stopped everything. Due to its capacity for creating at least the appearance of multi-tasking, the new operating system also felt faster and more convenient to use, as I could run more applications concurrently. Those "wasted cycles" were no longer quite so numerous. It was even possible to do once impossible things like burn a cd in the background while working on something else, and the burn operation itself now took ten to fifteen minutes, not thirty to forty-five. My colleagues observed similar performance improvements in their newer machines running microsoft windows – at that stage I did not know anyone personally who ran GNU/linux. These were impressive developments, and perhaps incremental improvements on using the available capacity could have continued much further than they did.

Instead, as the world wide web was hijacked for corporate advertising, and people began to protest both the way the advertising slowed down their computers and the way the burgeoning niche of online audio and video distribution was inaccessible for the same reason, the data throughput demands outran what steady code optimization could do quickly. Soon hardware makers turned to adding central processing units, graphics cards, and more and more and more RAM. Now we have the surreal situation of having the equivalent of a supercomputer on our desks, hardware optimized at least for the minimum wants of online advertisers and the latest video streaming services, therefore for graphics and internet access, while other programs perform no better than they did between roughly 2003 and 2010. Even with two central processing units, 8 GB of RAM, and nothing else running, as soon as an application like libreoffice has to cope with a text document approaching a megabyte in size, it can take up to three minutes to save. While it is saving, it is impossible to do anything else in libreoffice. This is madness. While it may necessarily take longer to save a larger openoffice document format file than a plain text one, it should not be a problem for the save operation to run in the background, since the program has a background autosave function already. On a much newer machine with more RAM and more processing power this issue will certainly seem to disappear, but the underlying problem remains. Meanwhile, on the same machine, audio and video played locally or viewed via the web browser all run smoothly with only a few minor hiccups. For the online case, the only video that remains unplayable now is anything "high definition," which my internet speed and data plan can't cope with either. Overall this strikes me as an annoying cousin to what Maciej Ceglowski referred to in 2015 as the Web Obesity Crisis.
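To illustrate why a blocking save is a choice rather than a necessity, here is a minimal Python sketch of handing a slow save to a background thread; it is my own toy example of the general technique, not a description of how libreoffice is actually implemented.

```python
import threading
import time

def slow_save(path, data):
    """Stand-in for an expensive save operation; simulates the delay, then writes."""
    time.sleep(2)  # pretend serializing a large document takes a while
    with open(path, "w", encoding="utf-8") as f:
        f.write(data)
    print(f"finished saving {path}")

document = "A large document body...\n" * 1000

# Run the save in a worker thread so the caller is not blocked while it happens.
worker = threading.Thread(target=slow_save, args=("draft_copy.txt", document))
worker.start()

print("the rest of the program keeps responding while the save runs")
worker.join()  # at shutdown, wait so the write definitely completes
```

The real thing needs care around edits made to the buffer while a save is in flight, which is presumably part of why the blocking behaviour persists, but the autosave facility shows the groundwork already exists.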

8. "Giving Software a Memory"

You might think that bothering with memory isn't necessary; it's easier to just ask the user each time. Programmers are quick to pop up a dialog box to request any information that isn't lying conveniently around. But as we discussed in Chapter 9, people don't like to be asked questions. Continually interrogating users is not only a form of excise, but from a psychological perspective, it is a subtle way of expressing doubt about their authority.

There are many paragraphs in About Face 2.0 where Cooper and Reimann write with wry humour and striking diplomacy. This is one of those paragraphs. From what I have read on technical fora and heard during in person interactions with professional programmers, they do tend to doubt the authority of users. I don't think this doubt is as prevalent among programmers as it used to be, though. The old hacker term luser is a fossil rather than in active use, for example, and error messages are generally of far better quality and tone than they were when I wrote a thoughtpiece titled Programmers, Have You Considered What Your Error Messages Say About You? in mid 2015. Programmers may have come around, but management has not. If anything, the management team with their eyes fixed on maximizing profits seem to have a bipolar and deeply disrespectful view of anybody who would be so foolish as to use their products. Such persons are treated as enemies one moment, complete fools the next. Hence all the obsession with imposing digital restrictions in hopes of constantly extracting more money, and with criminalizing reasonable actions like format shifting and sharing, while also striving to infantilize users. There are many ways to do the latter, from reproducing the grotesqueries of "social media" websites in the desktop interface, including advertisement insertions, to making personalization almost completely impossible even for non-centrally managed machines. I have always despised the "are you sure?" tag question when it appears in a pop up warning box, and am always pleased to see a button for "continue" and a button for "cancel" instead. Error messages and warnings are precisely where nobody wants to encounter a real-life, though thankfully underpowered, version of HAL from the movie version of 2001: A Space Odyssey. Today the big question about general purpose computers is who is in charge of the many home computers out there. The policies and behaviours of corporations and governments strongly suggest that in the view of high level military and corporate officials, they are the proper authorities over all computers, because they claim no one else is responsible or knowledgeable enough to prevent computers being used to cause harm. Errant and self-serving nonsense has a terrible half-life.
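As a small illustration of the kind of memory Cooper and Reimann mean, here is a minimal Python sketch that records a person's last choices in a preferences file instead of asking for them again on every run; the file name and keys are hypothetical, invented purely for the example.

```python
import json
from pathlib import Path

PREFS_PATH = Path("app_prefs.json")  # hypothetical per-user preferences file
DEFAULTS = {"last_folder": str(Path.home()), "paper_size": "letter"}

def load_prefs():
    """Start from remembered choices when they exist, otherwise from the defaults."""
    if PREFS_PATH.exists():
        return {**DEFAULTS, **json.loads(PREFS_PATH.read_text(encoding="utf-8"))}
    return dict(DEFAULTS)

def save_prefs(prefs):
    """Record the latest choices so the next run can begin from them."""
    PREFS_PATH.write_text(json.dumps(prefs, indent=2), encoding="utf-8")

prefs = load_prefs()
# The program opens its file dialog at prefs["last_folder"] instead of interrogating the user...
prefs["last_folder"] = "/home/example/documents"  # ...and remembers whatever they picked this time.
save_prefs(prefs)
```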

9. "Data Integrity Versus Data Immunity"

Underneath the rhetoric of data integrity – an objective imperative of protecting the user and computer with sanctified data – there is a disturbing subtext: that humans are ill-intentioned screw-ups and that users will, given the chance, enter the most bizarre garbage possible in a deliberate attempt to bring the system to its knees. This is not true. Users will inadvertently enter erroneous data, but that is different from implying that they do it intentionally. Users are very sensitive to subtext; they will quickly perceive that a program doesn't trust them. Data integrity not only hampers the system from serving the user for the dubious benefit of easing the programmer's burden, but it also offends the user with its high-handed attitude. It's another case of requiring users to adapt to the needs of the computer, rather than the computer meeting the needs of users.

Speaking of authority, the implied model of the user, and a diplomatic mode of expression, it is fascinating that this quote is from a caption, a category of text I am not sure people always read. Skipping over captions is not necessarily a habit to be faulted, as all too often captions only echo the main text immediately around them, and readers understandably resist spending time on such redundancies. This is a meaty caption though, and I have often wondered whether the decision to anthropomorphize the computer program wasn't a mistake, diplomatic though it is in tone. Computer programs are not independent beings, but they are expressions of the programmers who work on them. Programmers are prone to mistrusting users who are not themselves, and this leads to programs that present interfaces and handle data in ways that reflect that mistrust. Yet programs written for programmers are as notorious for unfortunate interface design and hostility to the general user as they are famous for their openness to modification and scripting. Emacs has a 695-page manual for itself and a 1 375-page manual on programming emacs with lisp. It is reasonable to ask just how programmers can be expected to better serve a very different group of people, a portion of whom may well become programmers themselves to a modest degree. The primary method is for the programmers to be part of the design team for the program at hand, applying the methods described by Cooper and Reimann, of course. It may also be helpful to encourage programmers to reframe the people who will use the program as members of a different culture of similar complexity and creativity to their own, albeit with that complexity and creativity devoted to other pursuits.

Scene from the 'Mr. Boole Comes to Tea' episode of *The Thrilling Adventures of Lovelace and Babbage* by Sydney Padua, published in 2015. To actually read the episode, visit Padua's site 2dgoggles.

By this I don't mean only such extreme programming challenges as handling name entry and storage via dialogue boxes, nor such shrieking horrors as trying to use online international order forms. Boolean logic is wonderful, and it is excellent for specific tasks in logic and programming, but it is not a daily life tool, for good reason, as Sydney Padua illustrated beautifully in Mr. Boole Comes to Tea. A key difference between many programmer cultures and their many non-programmer counterparts is the root expectation of software and computers. Programmers expect to either adjust themselves to the computer, as Alan Turing did when he learnt to calculate in hexadecimal numbers to ease certain programming tasks, or adjust the computer by modifying its software. The level to which programmers are willing to adjust to the computer before finally making the computer do the right thing is often far higher than a non-programmer would ever tolerate. For the majority of non-programmers, computers are simply tools, and a tool isn't supposed to need much adjustment to get things done with it. For many programmers, computers are not just tools, they are in fact instruments. To make a more prosaic analogy, the pencil I pick up to make a list before going to the grocery store is a tool. But that pencil can also be used for all manner of creative work, from composing essays or fiction or music to drawing or designing building plans. Similarly, with additional basic training and deeper interest, a person may apply a computer to far more than watching online videos and writing emails. Software designed to be started and used without getting into programming or scripting needs to have an interface that honours the higher priority that just getting the job done has for the majority of users. That people do not see computers as instruments does not mean they have contempt for software and seek to wreck it and crash the computer, any more than using a pencil only for basic writing tasks means a person would break their pencils because they aren't making elaborate artworks or composing novels.

10. "What About Lost Data?"

It is obviously counter-productive to treat all your workers like idiots to protect against the few who are. It lowers everybody's productivity, encourages rapid, expensive, and error-causing turnover, and it decreases morale, which increases the unintentional error rate of the clerks who want to do well. It is a self-fulfilling prophecy to assume that your information workers are untrustworthy.

Under the present conditions in so-called "western" economies, it seems a great many people who consider themselves leaders and skilled managers are relearning this point the hard way. I have direct experience of an organization in which the top level of management imposed massive policy changes because this was supposed to fix the "cultural" reasons that the organization kept catching flak in public. As is typical, the actual boots on the ground staff knew good and damned well what the problems were, and had constructive ideas and proven techniques to improve in a measurable and effective way. According to words, everybody, management or not, had the same goals. However, according to deeds, what management wanted was to continue on in the same manner for the most part in terms of external action, while treating staff as stupid and willfully destructive children who should be put under constant supervision. What came of the policy changes once in effect? More public flak, combined with a complete collapse in quality of output as staff dealt with being told not to spend time making sure the work was done accurately and well, but just to get it out the door faster because it was "taking too long." Except the delays were not from staff obsessively copyediting themselves into a stupor after which somebody else could finally kick the work item out of the door. The delays came from a new layer of extra management scrutiny imposed before work went out the door, because management did not trust established means of correcting either honest errors or deliberate misbehaviour by staff. As the pendulum lurched between allowing staff to apply their expertise, demonstrate their continued adherence to reasonable criteria for work quality, and hence meet service standards the majority of the time, and hogtying staff and missing practically every deadline, a horrible positive feedback loop began to set in.

I'm not sure if "western" countries have much leeway left to rediscover that technocracies don't work because technocrats by nature are prone to complaining that everything would work out perfectly if the real world behaved like their textbooks and computer models demanded. There is no way to torture the world to fit the model, and no way for a minuscule number of self-proclaimed leaders to absolutely control and succeed at completing a complex project. We see this is even more true of societies than software. In software, one of the best interface adjustments to help with catching and correcting errors ever devised is a sensible combination of highlighting plus tool tip on hovering over the highlighted item with a pointer or on navigating the insertion point back to the item. Best of all is the version that includes support for having an alert sound play in lieu of highlighting or as well as highlighting. Personally I can't stand sound plus visual indicators where I type, others love to have both, and more importantly people who have vision problems are already provided for. The hardest part is the tool tip, because it needs to say something more informative than "invalid entry," otherwise the person working with the program gets stuck throwing whatever they can at it to see what sticks, or giving up. In some cases, we need to accept less than perfect data to continue working, another tough case to program solutions for. I do appreciate that the siren call of perfection in software or in life is loud and difficult to resist, especially if it seems so very close and so very possible. Alas, it probably seems closest when the people doing the practical work and the people in management roles are perhaps saying the same thing, but in fact acting on entirely different motives. (Top)

11. "Good Idioms Must Be Learned Only Once"

We are inclined to think that learning interfaces is hard because of our conditioning based on experience with implementation-centric software. These interfaces are very hard to learn because you need to understand how the software works internally to use them effectively. Most of what we know we learn without understanding: things like faces, social interactions, attitudes, melodies, brand names, the arrangement of rooms, and furniture in our houses and offices. We don't understand why someone's face is composed the way it is, but we know that face. We recognize it because we have looked at it and automatically (and easily) memorized it.

Here is another great illustration of a common difference between the mindset typical among programmers and that of everyone else. At least when it comes to computers and software, programmers often cannot accept either the hardware or the software as a mere black box. They insist on understanding how it all works, because that supports fixing hardware if it breaks and software if it crashes or otherwise misbehaves. I think it is fair to consider this a core value for many programmers, even those who resist the free/libre conception of software. Programmers who think proprietary software is better, or hold some similar position, seem to feel it is just fine to impose black boxes on others rather than on themselves. Computer programmers may extend this desire to understand to other things or they may not, and they seem to find it hard to believe that anyone would want to treat computers or software as things that need not be understood in their internals and principles. There are genuine reasons to wish that everyone who has to deal with computers and software in their lives had at least enough grounding in the basics to better protect themselves against surveillance, security breaches, and data loss. That is a goal to be sought after, but beyond that it is not respectful or constructive to try to force everyone to think like a programmer or understand computers and software to the level of detail a programmer does.

There are certainly a lot of great idioms implemented in software and hardware alike, some of them truly and unfairly under-appreciated. As brilliant as the famous scene from Star Trek IV where Scotty tries to talk into a computer mouse is, both for showing how non-obvious its form factor is and how easy it is to learn to use, it distracts us from better examples. Certainly the desktop-file-folder idiom discussed above is one, as well as pictures-under-glass. How many of us appreciate that when we scroll through a list of hyperlinks or a list view of files and folders we are actually using a card catalogue idiom, or the clever way the responsive scroll box finesses one of the problems with the scroll idiom? One of my favourite hardware idioms is the familiar keyboard and number pad, both reuses of the ordered key concept from musical instruments. Hence it is common to refer to combinations of keys pressed together as chords. For all the mixed feelings about apple computer's marketing over the years, they got it right when they chose to market their laptops as "something-books" rather than "something-pads" or by model numbers. They wanted an idiom from which potential customers could readily surmise that they were looking at something meant to be portable, unfolded for use, used with absorbed attention, and kept away from such dangers as water and dust. Like it or not, that was a great decision, even if the real intention was to present the machines as approachable items versatile enough for use at home or the office.

12. "Ballon Help: A First Attempt"

Balloon help doesn't work for a good reason. It is founded on the misconception that it is acceptable to discomfit daily users for the benefit of first-timers. The balloons were too big, too obtrusive, and too condescending. They were very much in the way. Most users find them so annoyingly in-your-face that they keep them turned off. Then, when they have forgotten what some object is, they have to go up to the menu, pull it down, turn balloon help on, point to the unknown object, read the balloon, go back to the menu, and turn balloon help off. Whew, what a pain!

Thankfully the "first attempt" referenced here is not an attempt to create the rightfully despised "Clippy," one of the most astonishingly terrible interface decisions somebody at microsoft ever made, but an attempt to adequately label toolbar controls. Now that toolbars are so commonly used and so consistently have tooltips enabled, I am at a loss why there is still so commonly a massive space wasting option to have both toolbar buttons and labels. Then again, it seems to be low hanging fruit to maintain the option, so it is embedded in the way toolbar options are implemented today. Looking back on balloon help today, it seems rather tragic that not only was it so bad for its intended purpose, there was apparently no beta-testing or adequate analysis of beta-testing results to catch on to the fact it was not going to work. It still took the "Clippy" debacle to improve beta-testing unfortunately, and in the meantime toolbar handling seems to be going backwards as programmers implement versions of microsoft's damnable ribbon.

I admit to not liking having to concede that the designers and coders at microsoft did the right thing in the way they developed the toolbar idiom in the office suite. But I do admit it, and am glad their superior design has been widely adopted. It is a shame that rather than learn from the libreoffice team how to make it easier to adjust the toolbars and save personalized sets, they decided everyone must have giant screens now and created the ribbon. A shame, even if it is consistent with microsoft corporate behaviour. In my first encounters with the ribbon, it was set to "smart mode," meaning it hid items according to how recently they were used, resulting in a confusing and constantly disorienting mess. At least the "smart" behaviour could be turned off. After some experimenting I found, at least for myself, that it was best for the ribbon to stay put, and had to admit that its organization by tabs was good, although only necessary because of abandoning the more space efficient toolbar buttons. At some point microsoft got the feedback that the ribbon was taking up too much room. Rather than making it easier to switch back to toolbars if desired, they decided to make the ribbon collapse into its tab bar as soon as the user clicked away from it. This could be almost bearable if, say, pressing the "windows" key forced the ribbon to show itself. It doesn't; the user has to use the mouse to go back and activate a tab to reveal the ribbon again. At least the tabs don't change order, that would be even worse. Kudos to the libreoffice developers, who make it relatively simple to switch between ribbon and toolbar layouts.

13. "Whose Mistake is it, Anyway?"

Again, squeals of protest: "But the area code field is only big enough for three digits! I can't fit 213/310 into it!" Gee, that's too bad. You mean that rendering the user interface of your program in terms of the underlying implementation model – a rigidly fixed field width – forces you to reject natural human behavior in favor of obnoxious, computer-like inflexibility supplemented with demeaning error messages? Not to put too fine a point on this, but error message boxes come from a failure of the program to behave reasonably, not from any failure of the user.

At this point in About Face 2.0 there are fewer than a hundred pages left to go, and it is hard not to imagine Cooper and Reimann speaking in utterly exasperated tones here. And who can blame them, when nobody would settle for a paper address book actively designed to prevent a person from leaving an area code blank, or from noting two options in pencil so that the right one could be confirmed and filled in with ink later. To be sure, it is more work to code an entry field for phone numbers with enough flexibility to accept no area code, or a couple of alternate area codes. Automatically validated entry fields are a hard problem, even when they are selected and defined in a graphical database application, and indeed novice database makers using such applications may fall back on a text field in such cases. Novice such a fallback may be, but it is better than a bone-headed, three-character-wide entry field. Yet I could see another person protesting that it is a mistake by the user to insist on trying to enter something other than three digits in such a field when a north american telephone area code is supposed to be there. If it isn't all the same, and it isn't something to be explained away as programmer obtuseness or laziness, it is still necessary to say what it is.

Cooper and Reimann are making a point about how there is more to a graphical user interface than what it looks like, and about how error messages don't help users avoid making mistakes, not least because they are only triggered after something the program can't handle has already happened. Here I am thinking more about what it says about the way a programmer is thinking, if they are so sure a given entry field can only be three characters long, or apply some other restriction allowing none of the flexibility real life forces humans to have. To me it says the programmer in question has little or no experience of any culture besides their own, and/or little to no experience of actually doing the task the program is meant to support. It wouldn't take too many hours of data entry for a programmer to discover that there are many good reasons for data to be less than perfect on entry, and there are more of them than there are human errors. Astonishing really, but true. Important as it is to carry out observational studies of people actually using software, preferably beta-testing before the software goes into the wild, I think it is more important that programmers and designers actually attempt to perform the task by hand using physical examples. Many of the tasks that companies selling software hope to turn into things we do with a computer actually entail a great deal of knowledge learnt by doing, or by watching somebody else do the task. This is why so much of the best software started out as a frustrated non-programmer's effort to build the program they needed that didn't exist, or existed only in such hopelessly bad forms they had to do something about it.
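For a sense of what the more forgiving alternative can look like, here is a minimal Python sketch, my own rather than anything from the book, of an area code field that accepts ambiguous input such as "213/310" and keeps it for later confirmation instead of rejecting it outright.

```python
import re

def parse_area_code(text):
    """Accept whatever the person knows right now, rather than demanding exactly three digits.

    Returns the candidate codes, any free-form note, and whether follow-up is needed."""
    candidates = re.findall(r"\d{3}", text)
    if not candidates:
        # Keep the raw note (perhaps "ask reception") instead of discarding it.
        return {"codes": [], "note": text.strip(), "needs_confirmation": True}
    return {
        "codes": candidates,
        "note": "",
        "needs_confirmation": len(candidates) > 1,  # e.g. "213/310" gets confirmed later
    }

for entry in ["213/310", "604", "ask reception"]:
    print(entry, "->", parse_area_code(entry))
```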

14. "Confirmations"

As a program's code grows during development, programmers detect numerous situations where they don't feel that they can resolve issues adequately. Programmers will unilaterally insert buck-passing code in these places, almost without noticing it. This tendency needs to be closely watched, because programmers have been known to insert dialog boxes into the code even after the user interface specification has been agreed upon. Programmers often don't consider confirmation dialogs as part of the user interface, but they are.

Perhaps the most convincing evidence that programmers are not as notoriously literal minded as they claim, and as they have tried to encourage us all to believe because, let's face it, the claim allows them to tease and prank everybody else mercilessly, is this very tendency and what it reveals about their understanding of the term "interface". By definition, an interface is after all where two systems meet and interact. If a computer program produces something that a user is expected to respond to, then it is producing an interface element. The dialog box example cited by Cooper and Reimann is interesting, because it immediately reminds me of a hilarious example once hidden away in the applescript editor under macosx. Presented with a ridiculously long variable name during the syntax checking sequence, applescript editor would throw an error that generated a warning pop up instead of a blasé report in the "Results" tab of the editing window. The warning declared, "Way too long, dude." This example is popularly considered an easter egg, and therefore constitutes the exception that proves the rule. A case so wildly unlikely as to tempt a programmer to have it trigger buck-passing code should most of the time be reduced to its most practical equivalent. For the ridiculously long variable name in the applescript editor, that would mean combining the standard failure to compile indicator with a warning printed in the result, something like "Line n: Maximum variable name length is 252 characters." Not as fun as the easter egg, but that would be the buttoned up equivalent.
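As a way of making that buttoned up equivalent concrete, here is a small sketch in Python of a checker that reports diagnostics alongside its results instead of passing the buck to a pop up. The 252 character limit simply echoes the hypothetical wording above rather than any documented applescript limit, and the function and regular expression are my own illustration, not anything taken from the applescript editor.

    # A minimal sketch: collect diagnostics and report them with the results,
    # rather than interrupting the person with a modal warning box.
    import re

    MAX_NAME_LENGTH = 252  # echoes the hypothetical message above, not a documented limit

    def check_variable_names(source: str):
        """Return a list of diagnostic strings, one per offending line."""
        diagnostics = []
        for number, line in enumerate(source.splitlines(), start=1):
            for name in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", line):
                if len(name) > MAX_NAME_LENGTH:
                    diagnostics.append(
                        f"Line {number}: maximum variable name length is {MAX_NAME_LENGTH} characters."
                    )
        return diagnostics

    # The caller decides how to show the result; nothing here opens a dialog box.
    sample = "set " + ("x" * 300) + " to 42"
    for message in check_variable_names(sample):
        print(message)

The caller can print the diagnostics in a results tab, a status line, or a log; the decision about presentation stays in the interface layer instead of being made unilaterally deep in the checking code.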

Now this leads me to think again about the warning pop up in microsoft word about unsaved alterations to the default template file. I actually left some detail out in describing this warning, because it is not quite so forthright in its wording. It actually refers to the "default.dot" file, which mystifies anyone encountering it for the first time who has never created templates in that program. In the case of most people using microsoft products on a corporate-managed workstation, the likelihood is that they never create templates, instead selecting from a few mandatory ones displayed in a gallery or menu that may hide the extension. The warning in question pops up, not in the context of a person actively opening the "default.dot" file and altering it, but on opening and sometimes editing a file, saving any changes, closing open files, then attempting to shut down microsoft word. What is actually going on is baffling if the user is not a serious microsoft word or microsoft windows expert, and I am not convinced it isn't a mystery even then. It would seem most reasonable for a programmer to back up and figure out under what conditions the "default.dot" template is changed in the background and put a stop to that. Second best would be, if the user has neither opened and edited the "default.dot" file nor made a new template and tried to save it under that name, to simply throw away the changes and shut down the program without complaint. It is right for the program to catch that something has changed and not been saved, but the follow up is poorly done.
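That second best option can be put into a small decision sketch, written here in Python purely for illustration. Nothing in it reflects how microsoft word actually tracks its templates; the two flags and the function are my own assumptions, meant only to show how a program could separate changes the person actually made from changes it made behind their back.

    # A minimal sketch of the shutdown decision argued for above. The two flags
    # are assumptions for illustration; real template handling is more involved.

    def on_shutdown(template_changed: bool, user_edited_template: bool) -> str:
        """Decide what to do about an altered default template at shutdown."""
        if not template_changed:
            return "quit quietly"
        if user_edited_template:
            # The person deliberately changed the template, so asking is fair.
            return "ask whether to save the changed template"
        # The program altered the template in the background; that is the
        # program's problem to clean up quietly, not the person's.
        return "discard the background change and quit quietly"

    print(on_shutdown(template_changed=True, user_edited_template=False))
    # prints: discard the background change and quit quietly

Trivial as the sketch is, it makes plain what is poorly done in the real case: the program asks the person to account for a change the person never made. (Top)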

15. " 'Intelligent' Agents"

A significant issue with "intelligent" animated agents is that by employing animated anthropomorphism, the software is upping the ante on user expectations of the agent's intelligence. If it can't deliver on those expectations, users will quickly become furious, just as they would with a sales clerk in a department store who claims to be an expert on his products, but who, after a few simple questions, proves himself to be clueless.

Here Cooper and Reimann have described a best case scenario, their focus being a broader risk of trying to use animated agents at all. There were a few more years to go before examples of mechanical turk abuse would run rampant online as corporations sought to maximize job precarity and employee exploitation while creating an appearance of "artificially intelligent" programs. By best case scenario, I mean that the user is not already furious because the animated agent is designed in the mode of a cartoon character, implying that only a developmentally delayed child would be using the program. This has the double-edged quality of expressing contempt for both the program and the user, making the encounter with such interface elements even worse. Unfortunately, even the animated agents made up of dialog boxes that start with a declaration that "It looks like you're writing a letter!..." engender annoyance and a quick resort to the "cancel" button. In my experience, animated agents are what drive even the person otherwise most determined to learn the least possible about doing their tasks on a computer to learn the powers of the "escape" key and to dig furiously in options and settings windows to make the animated agent stop, go away, and never come back. Rather than insist on such an awful solution to the laudable desire to provide at least situationally aware assistance, there are at least three tried and true techniques I am aware of that are far less likely to infuriate or alienate users. There are probably many others more specific to the type of program and the tasks it is meant to facilitate.

First, of course, is the old-fashioned help menu. Good help documentation is hard to write and keep up to date, but when done well it is not only a boon to the user but also one of the few ways to persuade a user to actually recommend the program to others. The help documentation should always be locally stored, which means it needs to be simple to update. The programs that do this best in my experience use the same mechanism that applies incremental program updates on the fly to update the help documentation as well. In fact, the help documentation is systematically updated every time the program is. For local help documentation provided in html or xml format in a specialized viewer in the system software, this approach does not necessarily work well, and sometimes the viewer is grievously slow starting up and rendering pages. The most effective, maintainable, and cross-platform approach provides basic help in that form, and more detailed and thoroughly updated information in a pdf user manual. Second, which I have seen implemented only a few times even though it is so promising, is to integrate documentation via a reference or help pane that can be toggled on and off, as in the sketch after this paragraph. That pane could allow ready access to look up and read the html/xml (or even markdown formatted) documentation, which can then be part of the standard program resources and therefore more easily updated. For this approach pdfs are an ill-suited option, as their text cannot resize and reflow sensibly to fit a resizable pane. Third is the tutorial-based approach, which generally can't stand alone; it must be accompanied by at least one of the other two methods. Tutorials are difficult, because they must either be written for a narrow subset of very simple tasks, or they expand wildly as the developers find themselves trying to implement a sort of "choose your own adventure" approach to develop the modules. All three have in common that they never start up on their own: the user decides to invoke the help, selects which option to use if there is more than one, and then picks the topic or topics they need. Like it or not, the program and all its parts are supposed to stand in the role of helpful servants, not busybodies who try to be helpful by interrupting. The only "good" interruption by a program is the one that says, "This program needs to restart due to an unrecoverable error. Emergency saving all open documents."
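For the toggleable help pane, here is a minimal sketch using tkinter from the Python standard library, chosen only because it keeps the example self-contained and runnable. The file name, the F1 key binding, and the layout are my assumptions for illustration; the point is that the pane loads locally stored documentation from the program's own resources and appears only when the person asks for it.

    # A minimal sketch of the "toggleable help pane" idea. The help file path and
    # widget layout are assumptions for illustration only.
    import tkinter as tk
    from pathlib import Path

    HELP_FILE = Path("help") / "editing.md"  # hypothetical local help resource

    class MainWindow(tk.Tk):
        def __init__(self):
            super().__init__()
            self.title("Editor with toggleable help pane")
            # The main work area: an ordinary text widget standing in for the program.
            self.editor = tk.Text(self, wrap="word")
            self.editor.pack(side="left", fill="both", expand=True)
            # The help pane is built once but not shown until the user asks for it.
            self.help_pane = tk.Text(self, wrap="word", width=40, state="disabled")
            self.help_visible = False
            # The user invokes help deliberately; it never pops up on its own.
            self.bind("<F1>", self.toggle_help)

        def toggle_help(self, event=None):
            if self.help_visible:
                self.help_pane.pack_forget()
            else:
                self.load_help()
                self.help_pane.pack(side="right", fill="both")
            self.help_visible = not self.help_visible

        def load_help(self):
            try:
                text = HELP_FILE.read_text(encoding="utf-8")
            except OSError:
                text = "Help file not found; see the user manual instead."
            self.help_pane.configure(state="normal")
            self.help_pane.delete("1.0", "end")
            self.help_pane.insert("1.0", text)
            self.help_pane.configure(state="disabled")

    if __name__ == "__main__":
        MainWindow().mainloop()

A real program would render the html/xml or markdown properly rather than dumping plain text into a widget, but the essential behaviours are already here: the documentation ships with the program, the pane reflows with the window, and nothing interrupts the person who never presses the key. (Top)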

Conclusion

The latest edition of About Face came out in 2014, and it has four authors now: Alan Cooper, Robert Reimann, David Cronin, and Christopher Noessel. By a quirk of circumstances, although I have read the third edition I have not had a chance to read the most recent one, and it has been long enough that it is already due for either a fifth edition or retirement in favour of other books. While many others who write and blog about user interface design would simply declare the older editions obsolete, it seems to me that is not strictly true. Rereading an edition that is five or more years old is well worth it as a means of developing a broader perspective and a critical consideration of newer trends. While the few illustrations and diagrams may reveal the age of the book, they also reveal what has become standard practice and what seemed like it should have been a hit, adopted by everyone, and yet never took off. One chapter I found myself wishing for was an explicit discussion of help systems and documentation, because even though they are important, they are also not trivial to design, build, and maintain. Help materials are in many ways a parallel interface to the main program interface, and it is no small task to make sure that the two work together and meet the user's need for assistance with getting a task done, rather than the programmers' need to get their own task done. (Top)

  1. Cooper, Alan and Robert Reimann. About Face 2.0: The Essentials of Interaction Design (Indianapolis: Wiley Publishing Inc., 2003), page 8, paragraph 2.
  2. Stevenson, Angus and Christine A. Lindberg, Eds. New Oxford American Dictionary, Third Edition. (Oxford: Oxford University Press, 2010), technology, n.
  3. Cooper and Reimann 2003, page 21, paragraph 1.
  4. Cooper and Reimann 2003, page 28, paragraph 5.
  5. Cooper and Reimann 2003, page 130, paragraph 2.
  6. Cooper and Reimann 2003, page 133, paragraph 3.
  7. Cooper and Reimann 2003, page 177, paragraph 2.
  8. Stevenson and Lindberg, file, n.
  9. Cooper and Reimann 2003, page 191, paragraph 3.
  10. Ceglowski, Maciej. The Web Obesity Crisis. Web Directions conference. Sydney, australia, 20 october 2015. This page is a transcript and includes a link to the video of the talk.
  11. Cooper and Reimann 2003, page 193, paragraph 2.
  12. Stanley Kubrick's 1968 version of Arthur C. Clarke's novel still stands alone, even in this era of obsessive remakes and sequels.
  13. Cooper and Reimann 2003, page 208, caption 17-1.
  14. If you have not yet read Sydney Padua's brilliant graphic novel The Thrilling Adventures of Lovelace and Babbage, it is worth at least checking the episode cited and exploring her website.
  15. Cooper and Reimann 2003, page 210, paragraph 1.
  16. Cooper and Reimann 2003, page 251, paragraph 3.
  17. The "ithing" series does not have the same qualities, and there is a strong argument that the egoistic connotations of those names have long overrun the "internet" ones.
  18. Cooper and Reimann 2003, page 384, paragraph 3.
  19. According to the IT department at my day job, it is possible to revert to the toolbar layout and make other changes to the ribbon besides making it "dumb" again. I am sure they are right. Perhaps one day they will realize they have locked down the settings so that staff can't make that type of personalization on an individualized workstation or for their login profile.
  20. Cooper and Reimann 2003, page 438, paragraph 5.
  21. Cooper and Reimann 2003, page 448, paragraph 3.
  22. Often to genuinely funny and entertaining effect, even to non-programmers.
  23. Stevenson and Lindberg, interface, n.
  24. To see an independent proof including screen shot, see AppleScript Easter Egg: Way too long, dude at the automated workflows blog.
  25. Cooper and Reimann 2003, page 464, paragraph 4.
  26. The 2018 thoughtpiece Increasing Job Precarity and Hiding the Bots reflects on this phenomenon and includes some more recent references in updates.
  27. There is an excellent list of books, webpages, and other resources on user interface design curated at uxdesigneducation.com.
Copyright © C. Osborne 2024
Last Modified: Monday, January 01, 2024 01:25:33