When we were developing Digital Directories at Mappedin, we faced a unique design challenge. It’s what I like to call a problem of “fuzzy constraints” (more on that later).
Over the years, we had built several custom touchscreens in shopping centres to help visitors find what they were looking for on a map. These screens had various sizes, heights, and resolutions, and we designed each interface so that it would work under those unique conditions.
The first generation of these interfaces came at a time when touchscreens were rare in malls. Visitors required a great deal of walkthrough and explanation. Later, as touch directories became commonplace, our designs took advantage of visitors’ familiarity with frequent interactions like search.
We knew we had to update our first-generation designs to keep them modern, but developing a bespoke new interface for every screen while meeting the demands of new business would be impossible. The solution would have to be a single format and set of components that could serve them all.
That was easier said than done. These interfaces were installed on many types of hardware at different heights, sizes, and resolutions. Each piece of hardware had its own quirks, and the existing designs reflected that. It seemed impossible that one format and set of components could serve so many unique design challenges.
I like to think of this type of challenge as a problem of fuzzy constraints. There were no clear rules to follow that ensured a solution would work in every scenario. With so much of our effort driven by intuition, it was hard to communicate why a design would work (or not work).
A designer might spend hours on a new feature only to realize in testing that an essential element was out of reach or less visible on one obscure device in some particular venue. This approach wasn’t going to scale, and we were quickly reaching our breaking point.
Finding the Edges
When the constraints you’re working with are too “fuzzy,” the goal is to define them.
In many cases, we can bound constraints in just one direction. For example, there is a minimum scale at which letters remain legible for visually impaired users, but there is no maximum. The larger the text, the more readable it is (in most settings).
But, when designing a big interface, you’re managing many different types of users and displays. What’s out of view in one scenario could be just right in another. The ideal guidelines exist within a fuzzy cloud of boundaries and dependencies, and it’s often difficult to know where to start defining them.
As with most design problems, the best place to start was in the field. We studied a range of users across the many setups we had, and on careful study a theme emerged from our research. There was one constant limitation in large interface design: visibility.
When an interface is scaled up, you can only focus on a small portion of it at one time. The sensation of using a screen so large that it extends beyond your central area of vision can feel jarring and unfamiliar, and a change or alert in one area of the screen can go entirely unnoticed by a user depending on their size and position.
As designers, we had to think more like choreographers: attracting the user’s focus and directing it fluidly around the screen. With this in mind, we set out to define meaningful boundaries that resolved the issue at the core of our research: visibility.
Human vision seems complex, but there’s an easy way to think about it. The visible region comes in two parts: one resembles a pair of binoculars, the other a target.
Central(ish) Vision — The two rings in the centre of the target account for your “central” to “near peripheral” vision, where elements tend to be in focus.
Peripheral Vision — The outer binocular shape is your “far peripheral” vision. This area is not great at detecting changes in colour, but it’s fantastic for detecting motion.
Side Note — I heard this referenced in a talk once in a way that’s made it simple to remember: think in terms of evolution — it’s more important to see that something is coming at you than to know whether it’s a lion or a tiger.
We’re used to designing interfaces contained entirely by the central area of vision. For example, if you view your phone from a comfortable distance, the whole screen will be in your central vision.
On large screens, we want to present essential elements in the user’s central vision. We can still make use of the other space, but if we have to direct a user’s attention to a new area of the screen, we should use motion to introduce it.
Building Our Model
To build our model, we projected these two main fields of view onto many of our different screens from standard viewing angles and distances. You’ll see two examples above. Still, we considered many other users, including those with limited mobility (e.g., users in wheelchairs) and users who would be standing closer to the screen due to vision impairment. This process revealed some of the most extreme cases that we’d have to consider in our model.
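The projection step above can be sketched with a little trigonometry. The sketch below is a minimal illustration, not Mappedin’s actual tooling: the function name, the viewer parameters, and the angle values are all assumptions chosen for the example. Given a viewer’s eye height and distance from the screen, it projects a field-of-view cone onto the screen plane to find the vertical band that falls inside it.

```typescript
interface Viewer {
  eyeHeightM: number; // eye height above the floor, in metres
  distanceM: number;  // horizontal distance from the screen, in metres
}

// Returns the [bottom, top] heights (metres above the floor) on the screen
// plane covered by a cone of the given half-angle, centred on the viewer's
// horizontal line of sight.
function visibleBand(viewer: Viewer, halfAngleDeg: number): [number, number] {
  const halfAngleRad = (halfAngleDeg * Math.PI) / 180;
  const spread = viewer.distanceM * Math.tan(halfAngleRad);
  return [viewer.eyeHeightM - spread, viewer.eyeHeightM + spread];
}

// Rough half-angles for the two fields described earlier (assumed values):
const CENTRAL_DEG = 30;    // central to near-peripheral vision
const PERIPHERAL_DEG = 60; // far-peripheral vision

// Example: a standing viewer and a seated viewer at the same distance.
const standing: Viewer = { eyeHeightM: 1.6, distanceM: 1.0 };
const seated: Viewer = { eyeHeightM: 1.2, distanceM: 1.0 };

console.log(visibleBand(standing, CENTRAL_DEG)); // band in comfortable focus
console.log(visibleBand(seated, CENTRAL_DEG));
console.log(visibleBand(standing, PERIPHERAL_DEG));
```

Repeating this calculation across eye heights and distances is what surfaces the extreme cases: a short or seated viewer standing close to a tall screen leaves the top of the display far outside even peripheral vision.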
When we mapped out all of the possible visibility boundaries, we started to notice a pattern: some areas of the screen were particularly risky for displaying information and functionality. For example, the very top of the screen was almost entirely out of sight for some users viewing the screen at a comfortable distance — especially those wearing baseball caps!
With that, we had the constraints we needed to build a model. Inspecting the many projections, we sectioned the screen vertically into six equal zones and identified each zone’s limitations in two scenarios: one for standing users and another for seated users. These zones would become our “Visibility Model” and would inform all of our design exploration and reviews.
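One way to picture the resulting model is as a small table of zones with a visibility rating per scenario. The encoding below is a hypothetical sketch — the zone ratings, names, and the idea of a three-level rating scale are illustrative assumptions, not the published model:

```typescript
type Visibility = "central" | "peripheral" | "out-of-view";

interface Zone {
  index: number;      // 0 = top of screen, 5 = bottom
  standing: Visibility;
  seated: Visibility;
}

// Example ratings: the top zone sits out of view for users up close, while
// the middle zones stay in comfortable focus for both groups.
const visibilityModel: Zone[] = [
  { index: 0, standing: "out-of-view", seated: "out-of-view" },
  { index: 1, standing: "peripheral",  seated: "out-of-view" },
  { index: 2, standing: "central",     seated: "peripheral" },
  { index: 3, standing: "central",     seated: "central" },
  { index: 4, standing: "peripheral",  seated: "central" },
  { index: 5, standing: "out-of-view", seated: "peripheral" },
];

// A zone is safe for essential elements only if every scenario keeps it
// in central vision.
function safeForEssentials(zone: Zone): boolean {
  return zone.standing === "central" && zone.seated === "central";
}

console.log(visibilityModel.filter(safeForEssentials).map(z => z.index));
// → [3]: with these example ratings, only one zone is safe in both scenarios
```

Encoding the model this way makes the trade-off explicit: the band of screen that is safe for every user is surprisingly narrow, which is exactly why the remaining zones need their own rules.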
As we established our guidelines, we thought carefully about how each section of the screen would operate under different conditions. For example, while the top of the screen was out of sight during regular operation, it turned out to be a great place to put information like the time and opening hours, which could be viewed by many visitors from a distance.
Out of this process came a simple set of guidelines that would keep our interfaces in check without an overload of working knowledge.
By considering the diversity of visitors and screens, we developed a set of components that could easily be re-arranged to suit everyone’s needs. When we encountered a new screen format, we could simply check it against our model and adjust to fit if needed.
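The “check it against our model” step can be sketched as a small helper. This is a self-contained illustration under assumed conventions (six equal vertical zones, pixel coordinates measured from the top of the screen); the function name and example values are not part of any real tooling:

```typescript
// Given a screen's pixel height and a component's vertical bounds, report
// which of the six equal zones the component occupies. A reviewer can then
// look those zones up in the visibility model for the target scenario.
function zonesOccupied(
  screenHeightPx: number,
  topPx: number,
  bottomPx: number
): number[] {
  const zoneHeight = screenHeightPx / 6;
  const first = Math.max(0, Math.floor(topPx / zoneHeight));
  const last = Math.min(5, Math.floor((bottomPx - 1) / zoneHeight));
  const zones: number[] = [];
  for (let z = first; z <= last; z++) zones.push(z);
  return zones;
}

// Example: a search bar near the top of a 1920-pixel-tall portrait screen.
console.log(zonesOccupied(1920, 100, 400)); // → [0, 1]: spans the top two zones
```

A check like this turns a subjective review question (“will anyone miss this?”) into a lookup: find the zones a component occupies, then confirm the model rates them acceptably for every scenario the screen serves.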
Finally, we could use a simple pair of overlays in all of our interface design projects, representing each section of the screen and its purpose. This tool made the model simple to implement, review, and communicate. We could ensure that directories kept important cues and elements comfortably within view for any user; elements outside these areas would be larger, introduced with movement, and not required for interaction.
It’s amazing what a good set of constraints can do for your design collaboration and reviews. Knowing the boundaries of a problem can be the difference between a design that fails in testing, and one that passes with clear and meaningful justification.