On an otherwise ordinary, sunny Saturday morning, at 8:07 a.m. on January 13, 2018, Hawaii residents received the shock of their lives.
An alert, sent by an employee at the Hawaii Emergency Management Agency, went out to residents going about their usual routines. It read: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”
Given the political climate and recent, well-publicized tensions with North Korea, residents understandably flew into a panic. People rushed to shelters, crammed the highways, and scrambled to get in touch with their families.
“We fully felt like we were about to die. I drove to try to get to my kids even though I knew I probably wouldn’t make it, and I fully was visualizing what was happening while I was on the road. It was awful,” resident Allyson Niven, who lives in Kailua-Kona, was quoted as saying by the New York Times.
It was all a mistake. About 30 minutes later, communications about a false alarm were issued:
Pressured to make a statement, Richard Rapoza, a spokesperson for the agency, said that the alert was issued mistakenly as a result of human error. “Someone clicked the wrong thing on the computer,” he said.
. . .
“Human error” is something we hear about a lot when testing software. It’s an all-too-easy excuse that puts no onus on the designer of the system, whose job it is to anticipate and prevent these errors from happening.
Good system design and usability are especially important in high-leverage situations such as this one, and can save users and system stakeholders from potentially far-reaching consequences.
Three words can reframe how we think about preventing this kind of accident down the road: BLAME THE DESIGN.
Now, it’s easy to throw stones at the Hawaii Emergency Management Agency from a distance. We don’t have all the details. However, mock-ups of the offending interface were tweeted by Honolulu Civil Beat:
Both of the shared screenshots are facsimiles of the actual system screen (the real interface has been withheld from the public for security reasons), but if either of them is even close to the real thing, they reveal a shocking lack of attention to good, or even adequate, design.
Several UI-related problems jump out immediately from these screenshots:
- Poor labelling: the labels of the text links are confusing and overly esoteric. This is especially problematic when a system user needs to make a selection during a real crisis, under duress. The extra second or two required to decipher the options could mean the difference between life and death — literally.
- Disorganization: non-drill alerts are included within the list alongside drill alerts, seemingly in an arbitrary sequence. Again, this means that under duress the operator needs a few precious moments to even locate the correct operation.
- Non-differentiated visual treatments: Non-drill alerts aren’t differentiated and are given the same visual prominence as drill alerts. Typically, more serious operations with a high recovery cost are highlighted as big red buttons that scream “only press me if you’re really sure.”
- Weak error prevention: apparently, the system did show a confirmation message upon clicking a link — the classic “Are you sure? Yes/Cancel” interruptive dialog — which is the right idea. However, this simplistic implementation lacks crucial context, rendering it nearly useless. The most important job of the dialog is to make clear to the user the ramifications of what is about to happen if the operation is executed, while giving them an obvious choice to back out.
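The last point above can be made concrete with a small sketch. This is hypothetical code, not the agency's actual system: it shows a confirmation step that states exactly what will happen (alert name, mode, audience) and, for live alerts, requires the operator to retype a confirmation phrase instead of clicking a one-word “Yes.” The phrase and field names are assumptions for illustration.

```python
# Hypothetical confirmation flow for a high-consequence operation.
# Drills accept a simple acknowledgement; live alerts require the
# operator to retype an explicit phrase, forcing a deliberate pause.

LIVE_CONFIRM_PHRASE = "SEND LIVE ALERT"  # assumed phrase, for illustration


def build_confirmation_prompt(alert_name: str, is_drill: bool, recipients: int) -> str:
    """Spell out the ramifications of proceeding, in plain language."""
    mode = "DRILL (no public messages will be sent)" if is_drill else "LIVE"
    prompt = (
        f"You are about to send: {alert_name}\n"
        f"Mode: {mode}\n"
        f"Estimated recipients: {recipients:,}\n"
    )
    if not is_drill:
        prompt += f"Type '{LIVE_CONFIRM_PHRASE}' to continue, or press Cancel.\n"
    return prompt


def confirm_send(is_drill: bool, typed_response: str) -> bool:
    """A casual 'yes' is enough for a drill, but never for a live alert."""
    if is_drill:
        return typed_response.strip().lower() in ("yes", "ok")
    return typed_response.strip() == LIVE_CONFIRM_PHRASE
```

With this shape, the reflexive click-through that caused the false alarm simply doesn't work: answering “yes” to a live alert is rejected, and the prompt itself tells the operator they are about to message the public.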
Some people might categorize this as a UI-only (user interface) design problem; however, I see it more broadly. The system design must take into account the user’s mindset when interacting with this screen, whether in a run-of-the-mill drill or a real-life emergency where the operator is likely under an extreme amount of stress. Being empathetic to the user in both scenarios helps guide critical UI design decisions. This is a great example, by the way, of how UI design is a subset of the broader practice of UX (User Experience).
Here’s a mock-up of an improved interface that could have prevented the erroneous issuing of real alerts. Note the “mode” toggle between Test Mode and Live Mode:
You’ll notice many of my gripes are fixed with the above design approach. Obviously these mock-ups were done in a vacuum, without the full context of system constraints or organizational considerations, but it’s a strong start.
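To show why the mode toggle matters, here is a minimal sketch, with all names and behavior assumed for illustration rather than taken from any real system. The key idea: the test/live distinction is system-wide state, not a property of which link the operator happens to click, so in Test Mode no click can reach the public.

```python
# Hypothetical sketch of a Test Mode / Live Mode toggle.
# Routing depends on the console's current mode, so a mis-click
# in Test Mode can only ever reach the internal drill channel.

from enum import Enum


class Mode(Enum):
    TEST = "test"
    LIVE = "live"


class AlertConsole:
    def __init__(self) -> None:
        self.mode = Mode.TEST  # default to the safe mode

    def set_mode(self, mode: Mode) -> None:
        self.mode = mode

    def dispatch(self, alert_name: str) -> str:
        """Route the alert based on mode, not on which link was clicked."""
        if self.mode is Mode.TEST:
            return f"[TEST ONLY] '{alert_name}' sent to internal drill channel"
        return f"[LIVE] '{alert_name}' broadcast to the public"
```

Switching into Live Mode would then be the deliberate, confirmed act — rather than every individual click carrying that risk.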
This may be an extreme example (at Klick we’re fortunate enough not to have to deal with missile alerts), but you can see how design issues like these easily cross over into the health sector. For instance, there are studies devoted to how poor design can impact electronic health record systems, leading to unintended errors that can seriously jeopardize patient safety and severely hinder effectiveness of care.
Implementing the correct design requires an investment in time and effort, but getting it wrong is exponentially more costly.
Heck, even the Miss Universe pageant suffered from the same ailment.
Again, it’s easy to blame users when mistakes like this happen. We’re all human. This is exactly why the burden should be shifted to designers. It’s our job to anticipate and prevent errors, an especially important responsibility in high-risk situations.
An excellent User Experience is rooted not just in quality design outputs, but in the process as well. This means investing in the proper discovery and research activities that provide user context, goals, expectations, and organizational framing, setting the stage for interface designers to succeed.