Single User Interface
Originally posted on my blog.
If you’re interested in UX, you know that you should keep your products simple. It seems that every time this topic is brought up, someone feels the need to use the iPhone or iPad as an example. “It’s so simple, a 3-year-old can use it.” Certainly this is a compelling argument, but it is rarely backed up by real data. I have a 3-year-old and an iPad, and I must say emphatically that iPads are not simple enough for a 3-year-old. Granted, my 3-year-old doesn’t get to play with the iPad very often, but when she does, I always have to be nearby because she is constantly exiting the app she wants to use, or accidentally bringing up the keyboard, or tapping on some ad. The shape-sorting apps are the worst (drag the green triangle into the triangle hole), because 3-year-olds do not understand multitouch.
“It’s not WORKING!”
“That’s because you’re[1] holding the iPad with your thumb on the screen, and the app doesn’t know which touch event to use to …”
The iPad seems simple enough for a 3-year-old because you don’t have to type to use some of the functionality, and though 3-year-olds can type, it usually doesn’t make any sense[2].
Less Interface is Better
I’ve had a notion for over a year that the future of the User Interface, and perhaps of computing in general, is less User Interface. In a previous (fictional) post, I hint at a future where user interfaces are essentially eliminated. In my book on learning to program I talk about the power of the command line, a single user interface that handles a shockingly large set of computer interactions. Learning to do something new on your computer does not require learning a new interface; all you have to do is learn a new command. The command line has some serious UX issues, but the fact that it has survived so long is a testament to its usefulness and power as a user interface.
At the beginning of this year I discovered a device that really is simple enough for a 3-year-old. My in-laws got us an Amazon Echo for Christmas. I was hesitant at first, having experienced a few Amazon Fire devices. I was wrong to doubt. The Echo is amazing. I have used (and used to enjoy[3]) Google Now’s voice commands. I’ve seen people use Siri with varying degrees of success. But none of those has felt more natural than talking to Alexa (Amazon Echo’s human-like persona). She is always there, and she’s ready to help. She doesn’t always understand, but the accuracy is impressive. Most impressive, though, is how my kids have interacted with her. The other day they spent 20 minutes having Alexa tell them jokes. She understands them, and they understand her in a way they have never understood an iPad.
Single User Interface
Like the command line, Alexa has a single user interface. The only way to interact with her is by saying “Alexa” then some command[4]. In that sense, Alexa and the command line are shockingly similar. They both accept commands as input (in the form of text or the spoken word) and they return some output in the same form (text or spoken word). The command line is much more powerful than Alexa, but Alexa is much more forgiving if your command isn’t exactly right. As improvements in artificial intelligence (especially natural language processing) converge with User Interface design, I think we will see more devices and applications with a single user interface.
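The parallel is tight enough that the Alexa pattern can be sketched in a few lines of shell: a single entry point that takes a free-form command string and returns a spoken-style response. (The responses here are hypothetical, and the loose pattern matching is a stand-in for the natural language processing a real assistant does.)

```shell
# Toy sketch of the shared "single user interface" pattern:
# command string in, response string out.
handle() {
  case "$*" in
    *joke*) echo "Why did the developer go broke? He used up all his cache." ;;
    *time*) date "+It is %H:%M." ;;
    *)      echo "Sorry, I don't know how to help with that." ;;
  esac
}

handle "Alexa, tell me a joke"
handle "what time is it"
handle "order a pizza"
```

That this fits so naturally in a shell script is the point: strip away the microphone and the speaker, and Alexa and the command line are the same shape of interface.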
Comments

Staff Software Engineer @ LinkedIn (CoreAI, Agents Platform)
I really like thinking about what "UX" will mean in the far future. I enjoyed this post a lot! The investments into NLP and AI made by large companies (Apple, Google) are a testament to the value they see in the "Single UI" you're describing here.
User Experience Research
I absolutely agree that having a single entry point or paradigm in your UX is preferable to having multiple - people get extremely caught up on having multiple ways to do something and tend to get mixed up very easily. The comparison of voice-command tools to a command line is not one I've heard before, and is very intriguing. And I think as you touched on in the post, the room for error with the Echo is what really makes it shine. I would push back on the thought that command lines have serious UX issues, because I would actually say they have no UX at all! The power is amazing, but entering commands that operate in the same syntax as the product is itself written in lacks the abstraction that constitutes an actual interface.

To extend your thoughts on Alexa being an excellent Single User Interface, I would imagine the output is as crucial here as the input. My biggest challenge / frustration when using command lines (other than when I forget the proper syntax) is that the output is often difficult to interpret at best, and a worthless jumble of codes and alerts at worst. I think this speaks to a major challenge for Single User Interfaces as you describe them - the simplicity of the input must be balanced against the robustness of the output. It works well with Alexa and other voice-activated devices because often they are answering a question (as far as I know, I do not own one myself). What's the weather like? Where can I find Chinese food? Tell me a joke? And so on.

It's when the task becomes more complex and necessitates more intensive action on the part of the end user that the limited input starts to become a problem. For instance, if it were a search tool (something we both know all too well), a Single paradigm for voice could work if the input and output were simple enough, but if it were anything more complex or outside of NLP (a query with symbols, for instance) it'd break down.
A major challenge is communicating the limits of a system like that up front to the user. A great real-world example of Single Interfaces that DON'T work is the telephone voice menu system that every company uses for their customer lines - by the time the voice has finished reading the menu of complex responses the user has forgotten the first options. Really interesting take on the merits of single user interfaces, Stephen - thanks for taking the time to write out your thoughts.