10 Interface Design Things You Should Know

1. Know your user

“Obsess over customers: when given the choice between obsessing over competitors or customers, always obsess over customers. Start with customers and work backward.” – Jeff Bezos

Your user’s goals are your goals, so learn them. Restate them, repeat them. Then, learn about your user’s skills and experience, and what they need. Find out what interfaces they like and sit down and watch how they use them. Do not get carried away trying to keep up with the competition by mimicking trendy design styles or adding new features. By focusing on your user first, you will be able to create an interface that lets them achieve their goals.

2. Pay attention to patterns

Users spend the majority of their time on interfaces other than your own (Facebook, MySpace, Blogger, Bank of America, school or university sites, news websites, etc.). There is no need to reinvent the wheel: those interfaces may already solve some of the same problems users will encounter in the one you are creating. By using familiar UI patterns, you will help your users feel at home.

[Graphic comparing an email inbox with CoTweet’s inbox]
CoTweet uses a familiar UI pattern found in email applications.

3. Stay consistent

“The more users’ expectations prove right, the more they will feel in control of the system and the more they will like it.” – Jakob Nielsen

Your users need consistency. They need to know that once they learn to do something, they will be able to do it again. Language, layout, and design are just a few interface elements that need consistency. A consistent interface enables your users to have a better understanding of how things will work, increasing their efficiency.

4. Use visual hierarchy

“Designers can create normalcy out of chaos; they can clearly communicate ideas through the organizing and manipulating of words and pictures.” – Jeffrey Veen, The Art and Science of Web Design

Design your interface in a way that allows the user to focus on what is most important. The size, color, and placement of each element work together, creating a clear path to understanding your interface. A clear hierarchy will go a long way toward reducing the appearance of complexity (even when the actions themselves are complex).

5. Provide feedback

Your interface should speak to your user at all times, whether his or her actions are right, wrong, or misunderstood. Always inform your users of actions, changes in state, and any errors or exceptions that occur. Visual cues or simple messaging can show the user whether his or her actions have led to the expected result.

[Screenshot of BantamLive’s interface showing a loading action]
BantamLive provides inline loading indicators for most actions within its interface.
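
To make this concrete, here is a minimal browser sketch of inline feedback around an asynchronous action. The `#save` button, `#status` element, and `/api/save` endpoint are hypothetical placeholders, not taken from any of the interfaces above:

```typescript
// Minimal sketch: acknowledge the user's action immediately, then
// report the outcome, whether it succeeded or failed.
async function saveWithFeedback(): Promise<void> {
  const button = document.querySelector<HTMLButtonElement>("#save")!;
  const status = document.querySelector<HTMLElement>("#status")!;

  button.disabled = true;          // prevent duplicate submissions
  status.textContent = "Saving…";  // immediate cue that the click registered

  try {
    const response = await fetch("/api/save", { method: "POST" });
    status.textContent = response.ok ? "Saved." : "Save failed. Please try again.";
  } catch {
    status.textContent = "Network error. Your changes were not saved.";
  } finally {
    button.disabled = false;
  }
}
```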

6. Be forgiving

No matter how clear your design is, people will make mistakes. Your UI should allow for and tolerate user error. Design ways for users to undo actions, and be forgiving of varied inputs (no one likes to start over because they entered their birth date in the wrong format). And when the user does cause an error, treat your messaging as a teachable moment: show what went wrong, and make sure they know how to prevent the error from occurring again.

A great example can be seen in How to increase signups with easier captchas.
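
To make the birth-date example concrete, here is a minimal sketch of forgiving input handling. The accepted formats are illustrative assumptions; a real form would also need locale awareness:

```typescript
// Minimal sketch of forgiving input: accept several common birth-date
// formats instead of rejecting everything but one.
function parseBirthDate(input: string): Date | null {
  const trimmed = input.trim();
  const patterns: Array<[RegExp, (m: RegExpMatchArray) => [number, number, number]]> = [
    // 1980-07-04 (ISO: year, month, day)
    [/^(\d{4})-(\d{1,2})-(\d{1,2})$/, m => [+m[1], +m[2], +m[3]]],
    // 04/07/1980 or 4.7.1980 (day first, as in the UK)
    [/^(\d{1,2})[\/.](\d{1,2})[\/.](\d{4})$/, m => [+m[3], +m[2], +m[1]]],
  ];
  for (const [pattern, pick] of patterns) {
    const m = trimmed.match(pattern);
    if (!m) continue;
    const [year, month, day] = pick(m);
    const date = new Date(year, month - 1, day);
    // Reject impossible dates like 31/02/1980, which Date would silently roll over.
    if (date.getFullYear() === year && date.getMonth() === month - 1 && date.getDate() === day) {
      return date;
    }
  }
  return null; // still unreadable: ask again, but explain the accepted formats
}
```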

7. Empower your user

Once a user has become experienced with your interface, reward them and take off the training wheels. The breakdown of complex tasks into simple steps becomes cumbersome and distracting for experts. Providing more abstract ways to accomplish tasks, like keyboard shortcuts, will allow your design to get out of the way.
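
As a sketch of that kind of shortcut layer in the browser (the bindings here are my own assumptions, and a real implementation should also ignore keystrokes while a text field has focus):

```typescript
// Minimal sketch: keyboard shortcuts layered over actions that remain
// reachable through visible buttons and menus, so novices lose nothing.
const shortcuts: Record<string, () => void> = {
  "ctrl+s": () => console.log("save"),
  "ctrl+z": () => console.log("undo"),
  "/": () => console.log("focus search"),
};

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const key = `${event.ctrlKey ? "ctrl+" : ""}${event.key.toLowerCase()}`;
  const action = shortcuts[key];
  if (action) {
    event.preventDefault(); // keep the browser's own Ctrl+S dialog out of the way
    action();
  }
});
```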

8. Speak their language

“If you think every pixel, every icon, every typeface matters, then you also need to believe every letter matters.” – Getting Real

All interfaces require some level of copywriting. Keep things conversational, not sensational. Provide clear and concise labels for actions and keep your messaging simple. Your users will appreciate it, because they won’t hear you – they will hear themselves and/or their peers.

9. Keep it simple

“A modern paradox is that it’s simpler to create complex interfaces because it’s so complex to simplify them.” – Pär Almqvist

The best interface designs are invisible. They contain no UI bling or unnecessary elements. Instead, the necessary elements are succinct and make sense. Whenever you are thinking about adding a new feature or element to your interface, ask, “Does the user really need this?” or “Why does the user want this very clever animated GIF?” Are you adding things because you like or want them? Never let your UI ego steal the show.

10. Keep moving forward

Grandpa Bud: If I gave up every time I failed, I would never have invented my fireproof pants!
[Pants burn up, revealing his underwear]
Grandpa Bud: Still working the kinks out a bit.

from Meet the Robinsons

Meet the Robinsons is one of my all-time favorite movies. Throughout the movie, Lewis, the protagonist, is challenged to “keep moving forward.” This is a key principle in UI design.

It is often said that when developing interfaces you need to fail fast and iterate often. When creating a UI, you will make mistakes. Just keep moving forward, and remember to keep your UI out of the way.

User Learning and Performance with Bezel Menus

Abstract

Touch-screen phones tend to require constant visual attention, thus not allowing eyes-free interaction. For users with visual impairment, or when occupied with another task that requires a user’s visual attention, these phones can be difficult to use. Recently, marks initiating from the bezel, the physical touch-insensitive frame surrounding a touch screen display, have been proposed as a method for eyes-free interaction. Due to the physical form factor of the mobile device, it is possible to access different parts of the bezel eyes-free. In this paper, we first studied the performance of different bezel menu layouts. Based on the results, we designed a bezel-based text entry application to gain insights into how bezel menus perform in a real-world application. From a longitudinal study, we found that the participants achieved 9.2 words per minute in situations requiring minimal visual attention to the screen. After only one hour of practice, the participants transitioned from novice to expert users. This shows that bezel menus can be adopted for realistic applications.

Conclusion

Bezel menus enable interaction with a touch-screen phone with minimal visual attention, along with solving the occlusion and mode-switching problems. They ameliorate the fat-finger problem. Marks do not have to be very precise. Bezel menus can work under direct sunlight, when it is difficult to access the on-screen controls. They can make the display icon-free, resulting in more screen space for the actual content. Complex realistic applications such as video editors, word processors, and text entry, which require numerous controls along with a large content-viewing area, can take advantage of bezel menus. One of the demerits is that the number of menu items is limited to 64, and only 32 for best performance, but we believe that 32 menu items is a reasonable upper limit for most mobile applications. Also, users would need to learn different command sets for different applications, but with regular practice, accessing frequently-used items eyes-free would be achievable.

The study shows that highly accurate eyes-free interaction is achievable with the L8x4 layout. To gain insight into the performance of a bezel-based system we developed a bezel-based text entry technique. We found it to be competitive with existing techniques in terms of speed, accuracy, and ease of learning and usage. This shows that bezel-initiated marks can be used to interact with realistic touchscreen applications, while paying minimal visual attention to the screen. While encouraging, these results must be interpreted with caution. The small sample size and the use of non-native speakers as participants limited our analyses. More participants are required to make a stronger claim.

As the accuracy of originating the mark from the correct bezel is very high, different variations of the bezel menu, such as (a) both level-1 and level-2 marks starting from the bezel, similar to simple marking menus [38], and (b) marks starting and ending at the bezels, are worth exploring. Bezel menus can provide a 2-layer interaction on a touch-screen phone, as the first layer can be on-screen controls, and the second layer of menus can be pulled out from the bezel. The obtained results are not limited to text entry, and can be readily applied to other applications. We hope that our work will inform future designers in creating better bezel-based interaction techniques.

My thoughts

This study, although based upon the use of an iOS device, actually describes the workings of a BlackBerry PlayBook, which has these bezel features built in. Swiping from off-screen to gain on-screen menus and actions does have its merits, and this research fails to point out the sense of reward a user can gain by simply swatting an application back to the menu or swiping to reveal new controls. Bezel menus are an interesting concept for GUI design and, although not directly relevant to my research area, one that has some implications for the design phase of a new interface or feature.
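
To make the mechanics concrete, here is a rough sketch of how a bezel-initiated mark might be mapped to one of 32 menu items, assuming 8 bezel segments and 4 mark directions as in the L8x4 layout above. The geometry and thresholds are my own illustrative assumptions, not the paper’s implementation:

```typescript
type Point = { x: number; y: number };

// Bezel segments: 4 sides x 2 halves each.
// 0-1 top, 2-3 right, 4-5 bottom, 6-7 left.
function bezelSegment(start: Point, width: number, height: number, edgeZone = 24): number | null {
  if (start.y < edgeZone) return start.x < width / 2 ? 0 : 1;          // top
  if (start.x > width - edgeZone) return start.y < height / 2 ? 2 : 3; // right
  if (start.y > height - edgeZone) return start.x < width / 2 ? 4 : 5; // bottom
  if (start.x < edgeZone) return start.y < height / 2 ? 6 : 7;         // left
  return null; // did not start at the bezel: treat as an ordinary touch
}

// Mark direction, quantised to 4: 0 right, 1 down, 2 left, 3 up.
function markDirection(start: Point, end: Point): number {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? 0 : 2;
  return dy > 0 ? 1 : 3;
}

// Menu item index in 0..31 (8 segments x 4 directions),
// or null if the stroke was not a bezel mark.
function bezelMenuItem(start: Point, end: Point, width: number, height: number): number | null {
  const segment = bezelSegment(start, width, height);
  return segment === null ? null : segment * 4 + markDirection(start, end);
}
```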

Imaginary Interfaces: Touchscreen-like Interaction without the Screen

Abstract

Screenless mobile devices achieve maximum mobility, but at the expense of the visual feedback that is generally assumed to be necessary for spatial interaction. With Imaginary Interfaces we re-enable spatial interaction on screenless devices. Users point and draw in the empty space in front of them or on the palm of their hands. While they cannot see the results of their interaction, they do obtain some visual feedback by watching their hands move. Our user studies show that Imaginary Interfaces allow users to create simple drawings, to annotate with them and to operate interfaces, as long as their layout mimics a physical device they have used before.

Conclusion

While our main goal is to create and explore ultra-mobile devices, Imaginary Interfaces and interfaces designed for the visually impaired have interesting similarities and differences worth exploring. In particular, we plan to explore the value derived from the extra feedback users obtain from watching their hands interact. Exploring this and related questions will help us better understand Imaginary Interfaces and at the same time it will allow us to discover which aspects of our technology can inform the design of interfaces for the visually impaired.

My thoughts

Wearable computing has often been predicted to be the next big thing in computing, and yet users seem reluctant to adapt to its requirements. This research proposes that users can create and adapt their own interfaces using gestures captured by a chest-worn camera pendant. Wearing a pendant is much less obtrusive than, say, a coat or other piece of clothing. Translating hand gestures into GUI commands could have many widespread uses, many of which relate to my area of concern: audio production.

Designing for Low-Latency Direct-Touch Input

Abstract

Software designed for direct-touch interfaces often utilizes a metaphor of direct physical manipulation of pseudo “real-world” objects. However, current touch systems typically take 50-200 ms to update the display in response to a physical touch action. Utilizing a high-performance touch demonstrator, subjects were able to experience touch latencies ranging from current levels down to about 1 ms. Our tests show that users greatly prefer lower latencies, and noticeable improvement continued well below 10 ms. This level of performance is difficult to achieve in commercial computing systems using current technologies. As an alternative, we propose a hybrid system that provides low-fidelity visual feedback immediately, followed by high-fidelity visuals at standard levels of latency.

Conclusion

In this paper, we have described sources of latency, and demonstrated how several of these can be eliminated in building a demonstrator system capable of 1 ms touch latency. Further, we have described the results of tests which showed that users were able to perceive order-of-magnitude improvements in latency over current-generation hardware. Our results suggest that performance beyond 1 ms may still yield improvement that is perceptible to users.

We have constructed a prototype Accelerated Touch system, wherein a traditional direct-touch layer is paired with a low-latency layer that displays nearly immediate visual feedback on user interaction, independent of application logic, but visually tied to the underlying UI widget. We have further described the design of this visual language to satisfy the various constraints of a dedicated low-latency touch processor, and we have described a potential architecture for a direct-touch system that pairs Accelerated Touch with more traditional touch interaction.

A common complaint that we heard from people who used our system extensively was that it “broke” them – that they now find the latency of current-generation devices completely unacceptable. The implication is that improving latency might be an effective competitive strategy for device vendors. It is our hope that this paper will spark innovation in the design of hardware and software capable of lower latency of response to user input.

We see a wealth of future work in further investigating the limits of human perception of touch-screen computer systems, and better understanding the effect of performance parameters such as latency on the usage of touch-screens. For example, are there performance benefits for input under reduced latency? Further, we have conflated latency and frame rate; future devices may decouple these two parameters and could optimize one or the other. Further investigation is needed into the effects of such a change.

My thoughts

Reducing latency requires that all three elements in a system (the sensor, the software, and the display) are addressed. Obviously only the software can be looked at in the case of user design, as attempting to redesign capacitive screens and operating systems is well beyond the scope of my research. Interestingly, this research touches upon Card, Mackinlay, and Robertson’s earlier work, which suggests 10 MHz and 100 ms are the limits of acceptability. With advances in technology, and with my focus on designing interfaces for use with audio applications, I would suggest that the 100 ms limit be at least halved to be seen as acceptable. This is definitely an area which requires more research and is of interest to me.
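
The hybrid approach the authors propose can be sketched in a few lines. Below is a toy browser version, where the `#surface` canvas and the simulated 100 ms application delay are my own assumptions: a crude mark appears the instant a pointer event arrives, and the high-fidelity rendering catches up later on the application’s schedule:

```typescript
const canvas = document.querySelector<HTMLCanvasElement>("#surface")!;
const ctx = canvas.getContext("2d")!;

canvas.addEventListener("pointermove", (event: PointerEvent) => {
  if (event.buttons === 0) return; // only while touching/dragging

  // Immediate low-fidelity echo: a plain grey dot under the finger.
  ctx.fillStyle = "grey";
  ctx.fillRect(event.offsetX - 2, event.offsetY - 2, 4, 4);

  // High-fidelity application rendering arrives later; simulated here
  // with a delay standing in for application and toolkit latency.
  const { offsetX, offsetY } = event;
  setTimeout(() => {
    ctx.fillStyle = "black";
    ctx.beginPath();
    ctx.arc(offsetX, offsetY, 6, 0, Math.PI * 2);
    ctx.fill();
  }, 100);
});
```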

Patterns for the design of musical interaction with everyday mobile devices

Abstract

The growing popularity of mobile devices gave birth to a still emergent research field, called Mobile Music (music with mobile devices). Our particular research investigates such re-purposing of ordinary mobile devices for use in musical activities. In this paper we propose the use of patterns in the design of musical interaction with these devices. We introduce the musical interaction patterns that came out of our investigation so far, and describe the exploratory prototypes which served as inspiration and, at the same time, as test-bed for these proposed interaction patterns.

Conclusion

In our work we identified four musical interaction patterns that can be implemented in common mobile devices. This small, initial set of patterns is obviously not meant to be a thorough taxonomy of musical interaction in general. We are also still in the process of compiling other related pattern sets: for interactions made possible by musical ubiquitous computing environments (i.e., involving cooperation, emergence, location awareness, awareness of contextual sound/music resources, etc.) and for musical interfaces (which instantiate musical interaction patterns, possibly using existing UIDPs). Nevertheless, the four patterns listed here already account for musical interaction in ubiquitous environments when a single mobile device is the user interface, plus they suit designs that need to ensure that music can still be made with a mobile device even with no access to pervasive musical resources (in case those are not available or are unreachable, e.g., due to connectivity limitations).

We have also been conducting preliminary tests of pattern comprehensibility, to observe whether the proposed patterns can be learned quickly by designers from outside the CM area. Some other tests are being made to confirm the independence of patterns in relation to different types of musical activities, e.g. by comparing user performance and quality of use when carrying out the same musical activity following two different interaction patterns. These tests and their results will be the subject of forthcoming papers.

A pattern-oriented approach for interaction design in mobile music is an effort towards a necessary switch from the current technology-oriented perspective to a more user-centred perspective of CM as a whole, and this paper is just a step towards this goal. However, much work is still needed in order to extend the scope of current CM research to cope with many well-known HCI concerns. We are convinced that a better understanding of HCI issues in CM research and development is a good starting point, not only to identify the capabilities and limitations of future work, but mainly to establish a common ground for discussing several interesting questions that are still open.

My thoughts

Main Argument

Interaction patterns should be developed to allow conceptual designs to be implemented on mobile devices in the field of mobile music. These patterns allow for metaphor construction better suited for musical interaction in the context of ubiquitous musical activities. An interaction pattern language can demonstrate how design problems may be solved according to sound user-centred design principles.

Secondary arguments

Interfaces for musical performance or interaction should be designed around human-centred input rather than traditional computer-centred design. Interfaces for musical performance should fit our physical, embodied natures, rather than operating only in the realm of symbolic processing.

Interim conclusion

The need exists to apply HCI methods for interaction design of these solutions or as a way for improving user experience.

Main conclusion

Four main patterns of musical interaction have been established for ubiquitous environments in which a single mobile device is the user interface, forming an initial taxonomy of musical interaction. Further work has been proposed to observe patterns outside these areas, and to compare user performance and quality of use when the same musical activity is carried out following two different interaction patterns.

My research aims to further Flores’s work by establishing which situations specific interaction patterns can be used in when applied as a human-centred input design element (e.g., the traditional mixer element of a MIDI setup).

Gaze-supported multi-modal interactions

Abstract

While eye tracking is becoming more and more relevant as a promising input channel, diverse applications using gaze control in a more natural way are still rather limited. Though several researchers have indicated the particularly high potential of gaze-based interaction for pointing tasks, often gaze-only approaches are investigated.

Conclusion

This paper presented a detailed description of a user-centred design process for gaze-supported interaction techniques for the exploration of large image collections. For this purpose, gaze input was combined with additional input modalities: (1) a keyboard and (2) a mobile tilt-enabled multi-touch screen. The integration of user feedback at such an early stage of the design process allowed for the development of novel and more natural gaze-supported interaction techniques. While gaze acted as a pointing modality, the touch and tilt actions complemented the interaction for a multifaceted interaction. Based on user-elicited interaction techniques we developed an extended multimedia retrieval system, Gaze Galaxy, that can be controlled via gaze and touch-and-tilt input to explore large image collections. First user impressions on the implemented interaction techniques were gathered and discussed. Results indicate that gaze input may serve as a natural input channel as long as certain design considerations are taken into account. First, gaze data is inherently inaccurate and thus interaction should not rely on precise positions. Using the gaze positions for setting a fish-eye lens and zooming in at the point-of-regard were described as intuitive. Secondly, users should be able to confirm actions with additional explicit commands to prevent unintentional actions.

My thoughts

This research could easily be adapted for use in other media applications and situations: browsing and selecting sound files in a DAW, editing parameters in a DAW, selecting from a list of users or contacts, and many other possibilities. Technologies still in their infancy, like gaze tracking, will require a lot of research and development to bring to market (so to speak), but high risk brings high reward in such cases. This is definitely an area of research that interests me, and I will be actively looking for demos or applications that can be brought to a small control group.
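
The two design considerations the authors highlight (inherently inaccurate gaze data, and explicit confirmation to prevent unintended actions) can be sketched as follows. The gaze-sample source, smoothing factor, and confirm key are all my assumptions:

```typescript
type Gaze = { x: number; y: number };

let smoothed: Gaze = { x: 0, y: 0 };
const alpha = 0.2; // exponential smoothing: lower = steadier but laggier cursor

// Called for each raw sample from some gaze-tracking source.
function onGazeSample(sample: Gaze): void {
  smoothed = {
    x: alpha * sample.x + (1 - alpha) * smoothed.x,
    y: alpha * sample.y + (1 - alpha) * smoothed.y,
  };
  // e.g. move a fish-eye lens or highlight to (smoothed.x, smoothed.y)
}

document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "Enter") {
    // Selection happens only on an explicit command, never on gaze alone,
    // so that merely looking at something cannot trigger an action.
    select(smoothed.x, smoothed.y);
  }
});

function select(x: number, y: number): void {
  console.log(`selected item under gaze at (${Math.round(x)}, ${Math.round(y)})`);
}
```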

New Website

Welcome to the new Audioedge website. This is the place where I will post things relating to what I find interesting in life, namely decent dance music, interesting mountain bike trails, and interface design. It’s also where I will upload any images, as I would prefer to keep control over that sort of thing. I’m still on Facebook, as you can see on my contact page, but it’s mostly for direct messaging people, which is handy. Anyway, enjoy the website!