My awesome advisor Dr. Ann Hill Duin brought to my attention this afternoon a quick entry from Penn State’s Teaching with Technology blog that discusses how wearables might change web design. While the “One thing, in one moment” observation about Google Glass’s affordances is indubitably intriguing, I was struck by the opening of the piece:
Sometimes when you try something new, you see some immediate use for it. Kindergartners, when handed a hammer, will intuitively start to bang on things. I’d love to say the Glass was like that, but it wasn’t.
The author was pointing out something with which most Glass users would concur: the device is not at all intuitive.
Given my months of exploration and experimentation with Google Glass, I have experienced both excitement and frustration in using such an extravagant device. While Glass certainly retains a cool factor (it is a slick technology with a novel design and user interface), it disrupts our conceptual model of a mobile device and of the smart computing experience.
A conceptual model is, according to usability design expert Donald Norman, the mental model that people have of themselves, others, the environment, and the things they interact with, one that allows them to map the relationship between a device's operating controls and its functions. In other words, conceptual models are the go-to mental simulations we run when we encounter an object or system, a way to figure out how it works before we even operate or use it.
Here’s an example from Norman’s The Design of Everyday Things that illustrates the principle of conceptual models:
As users of everyday designed objects, we take clues from the appearance of an object, and from the information it presents, to work out how to operate it. We rely on a psychology of causality that tells us whether we are doing something right or wrong. For example, if we happen to launch a program just before the computer freezes, we are apt to believe that running the program caused the failure, even though the action and the failure may be related only by coincidence. The way we cope with everyday designed objects is based on the feedback we receive from our actions as well as the visual cues we get from the object’s appearance. These cues guide our actions and ultimately shape our habits in using the object.
When conceptual models are unclear, or conflict with existing models, users become confused or unsure of the operations and are thus more likely to make errors, which eventually leads to dissatisfaction and abandonment of the object or system.
This is the problem with the existing design of Google Glass. Many of us new Glass explorers and users have no idea how to appropriately handle Glass because we don’t have a good conceptual model to consult. Specifically, we are misled by the verbal association with glasses, forced to learn a new set of commands, and unsure of the designer’s model from the user’s perspective. I elaborate on these three factors below, based on my personal experience.
1. Misleading Verbal Association
Given the verbal association with the word “Glass,” our first model of Google Glass might be a pair of glasses or spectacles that someone with a visual impairment wears to improve vision or gain clarity through corrective lenses. We know Google did not design a pair of corrective glasses but rather a computer with an overhead optical display mounted on a spectacle-like frame. The glasses model fails to give users an accurate image of the purpose of Glass; instead, it sways them away from the smart, hands-free computing purpose of the device.
Other wearables, such as smartwatches and health trackers like FitBit, have less trouble with their conceptual modeling because they build on the existing mechanisms of watches and wristbands, making it easier for users to conceptualize their design and operations.
2. All-New Operative Requirements
Because Glass requires the user to learn a new set of gestural commands, users may feel alienated when first learning to control the device and use its features. While first-time Glass wearers see a brief instructional video on how to tap, swipe, and issue voice commands, the video doesn’t reveal the complexity of the Glass operating system (OS). Unlike Apple’s iOS or Android, the Glass OS is layered, featuring a timeline and menus that look identical to one another. When deploying Glass in my first-year writing course, I had to map out these layers to help students understand the dimensionality of the Glass OS.
Without prior models to reference, the Glass OS is confusing. It was not uncommon to see first-time users give up within the first 10 minutes of wearing Glass because they couldn’t map the relationships between the controls and their outcomes.
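To give a sense of what I mean by layers, here is a minimal sketch of a layered, timeline-style interface like the one described above. The card names, the “swipe down to back out” behavior, and the structure are illustrative assumptions for this sketch, not a description of Glass’s actual OS internals:

```python
class Card:
    """A single screen in the timeline; may open a deeper submenu layer."""
    def __init__(self, title, submenu=None):
        self.title = title
        self.submenu = submenu or []  # deeper layer, visually identical to the timeline


class Timeline:
    """A horizontal strip of cards navigated by swipes, taps, and swipe-downs."""
    def __init__(self, cards):
        self.cards = cards
        self.index = 0
        self.stack = []  # remembers where we were before tapping into a layer

    def swipe(self, direction):
        # "forward" / "back" move along the current layer, clamped at the ends.
        step = 1 if direction == "forward" else -1
        self.index = max(0, min(len(self.cards) - 1, self.index + step))
        return self.cards[self.index].title

    def tap(self):
        # Tapping descends into the current card's submenu, if it has one.
        current = self.cards[self.index]
        if current.submenu:
            self.stack.append((self.cards, self.index))
            self.cards, self.index = current.submenu, 0
        return self.cards[self.index].title

    def swipe_down(self):
        # Swiping down backs out one layer, returning to where we left off.
        if self.stack:
            self.cards, self.index = self.stack.pop()
        return self.cards[self.index].title


# Hypothetical cards for illustration only.
home = Card("Home clock")
photo = Card("Photo", submenu=[Card("Share"), Card("Delete")])
tl = Timeline([home, photo])

print(tl.swipe("forward"))  # "Photo"
print(tl.tap())             # "Share" (now one layer deeper)
print(tl.swipe_down())      # "Photo" (back on the main timeline)
```

The confusion Norman’s framework predicts shows up here: because each layer looks the same, nothing in the system image tells a user whether a swipe will move along the timeline or within a menu, so the control-to-outcome mapping stays invisible.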
3. Disconnect Between the Designer’s and the User’s Models
Despite the futuristic depiction of Glass as a potential everyday technology on Glass’s (now shut down) promo website, there is little to no connection between the Glass designers’ model of the device and the user’s conceptual model; the two meet only through the system image, which results from the physical structure that has been built. The following framework illustrates the relationship between the designer’s and user’s models through the system image:
What successful tech companies like Apple and Microsoft do well is define any new system model as clearly as possible for their typical users. This is especially important when the appearance of a system or object doesn’t make the operative procedure clear and intuitive, as is the case with Google Glass (a wholly new kind of device). For instance, when Apple first rolled out Siri, its voice-activated intelligent assistant, Apple invested a tremendous amount of time in demos to make sure users understood the operative model of a natural-language knowledge navigator. Making the operative model visible to users who have no prior experience, training, or instruction in the new design is crucial to the sustainability of the device.
So, What Now, Google?
Due to the lack of a clear conceptual model, we end up asking ourselves: Is Google Glass a pair of reality-augmenting glasses, a miniature computer, or a smartphone on the head? No doubt Glass strives to achieve all of these affordances. Yet there is still work to be done to establish a discernible system image and operative model. This doesn’t mean we should dismiss such an innovative endeavor, though. As I acknowledge in my upcoming STC Intercom article, wearable technology is not just a popular fad but the next step in personal and smart computing. As a purveyor of futuristic technology, Google is in the best position to define what it means to reconstruct our computing experience and our relationship with machines. To achieve this goal, the folks at Google need first to identify the complexities in human-computer interaction and devise new inventions that provide increased benefits to our everyday lives.