Eye Tracking the Future – Mixed Reality

Following our previous blog post on Virtual Reality, we are taking a step further to explore Mixed Reality (MR, sometimes also known as Hybrid Reality).

What is Mixed Reality (MR)?

Essentially, MR refers to the merging of real and virtual worlds to create an environment in which physical and virtual objects co-exist and interact in real time. Traditional MR has been the main driver for Simulation-based Learning (s-learning), where it is used to train apprentices in technical domains that often involve high real-world risk, such as medical procedures, pilot training and military training. MR allows substantial replication of certain aspects of the real world, thus providing a safe, yet realistic, environment in which to acquire skills that would otherwise be difficult to acquire in real-world settings. If you are confused about the differences between VR, AR (Augmented Reality) and MR, click here (or here) to untangle yourself from the technology jargon.

Current state and the future of MR

Thanks to the publicity and accessibility of current VR technology (notably cheaper and lighter VR Head Mounted Displays (HMDs) such as the Oculus Rift), MR has been gaining more public attention, as it can provide a more realistic immersive virtual environment than VR alone. Building on the advantages of using MR in skills learning and on findings from scientific research, technology companies have been building “mixed reality classroom” systems (see this article also) to penetrate the rapidly growing EdTech market. Beyond learning, MR would also inevitably be the next big platform in the video-gaming industry, which Goldman Sachs projected to be the largest market for MR over the next 10 years. The more interesting potential for MR actually lies in the workplace, where Microsoft and Object Theory are working together to build an MR system with the HoloLens for business-related remote collaboration. This marks not only the start of a new form of communication, but also a new form of workplace in the future.

Eye Tracking & MR

Eye tracking is one of the most important research tools used in researching driving behaviours and identifying potential hazards that affect driving safety. Coupled with driving simulators, an eye tracking study can help researchers study driving behaviours (e.g., visual scan patterns, hazard perception) in risky situations that would be impossible to assess safely in real-world driving studies. Eye tracking in driving simulator studies can also serve as an objective point of comparison with real-world driving, enabling designers and engineers of the simulator system to assess whether driving in the simulator indeed resembles real-world driving, and whether simulator training translates to real-world benefits.

Likewise, eye tracking can easily be incorporated into other forms of MR simulators to help study human behaviour in other potentially hazardous situations. Below are some other examples where eye tracking is used in simulators across various domains.

Pilot Training Simulator

Flight Control Simulator

Retail Environment Simulator

With the advent of new MR technology and systems, eye tracking is a powerful tool that can easily be incorporated not just for scientific and market research, but also to offer insights into system improvements for a better experience.

As you may know from reading the articles on our blog, eye tracking can be used in a wide variety of research. Check out Tobii Pro’s YouTube channel, or continue reading our articles on this blog for more ways you can put your eye trackers to good use!

If you are interested in how eye tracking can help you and your business, drop us a line at infosg@objectiveexperience.com or +65 67374511. The Future is Now.

 

Designing for universal accessibility – Colour Blindness

Ever wonder whether your website or app is accessible to those who are colour blind? No? Then most probably it isn’t.

Colours are one of the most fundamental design elements that every designer relies on. From colour theory to colour psychology, most designers use colours to create the biggest impact on their website or app, based on how colour affects users’ attitudes and behaviours. However, many do not consider the fact that approximately 8% of men and 0.5% of women are affected by some form of colour blindness. That could mean losing up to 8% of your male visitors’ conversions if your website or app is not accessible to them!

People who have colour blindness see the world a little differently from the majority of us: they cannot perceive the differences between some of the colours that people with normal colour vision can distinguish. That said, most people with colour blindness are not as “blind” as the term suggests. People with complete colour blindness (monochromacy or achromatopsia), who can only see the world in shades of grey, are exceedingly rare.

The most common type of colour blindness is dichromacy, in which dichromats have only two cone photopigments, compared to the three that the rest of us have. This means that while most of us can distinguish colours based on a mixture of three primary colours, dichromats can rely on only two. The picture below shows the 3 subtypes of dichromacy.

It is important to note that red/green colour blindness is the most common (notice that colours look very similar for both deuteranopes and protanopes). People with red/green blindness have trouble distinguishing reds, greens and yellows of similar value, so they might not be able to make out words or images rendered in these colours with low value contrast.

So how does knowing how colour blindness affects users inform design considerations for this group of people? Here are the main concepts that you need to know:

  • Don’t rely on colours to convey important information

[Images: the original Singapore MRT map (left) and the same map as seen with red/green colour blindness (right)]

Since people with colour blindness have difficulty distinguishing between colours, they also have trouble making out information that is conveyed through comparisons between colours. A salient example is the Singapore MRT map (above left), which colour-codes its different routes. If you suffer from red/green blindness, you will see something like the right picture above.

Someone with colour blindness might still learn to read the MRT map after a while, but just imagine the cognitive effort it takes! Psychology and UX principles tell us that cognitive strain creates negative feelings and kills conversion rates.

Colours should never be the sole or primary means of communicating information. It is better to provide additional means (e.g., patterns, symbols, text) of obtaining the same information conveyed by colours, especially when that information is important.

  • If you really need to use colours to convey important information, increase the contrast between colours to improve visual accessibility

This article outlines various ways to maximize contrast in order to ensure accessibility for people with colour blindness. In general, the advice is to use colours with sufficient hue, value (lightness) and chroma (colour purity) contrast, so that all essential elements are visually distinguishable and readable.

The easiest way is to lighten light colours and darken dark colours. Otherwise, staying away from reds and greens might help, since red/green blindness is the most common.
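If you want a quick programmatic check on whether two colours have enough contrast, the WCAG 2.0 relative-luminance and contrast-ratio formulas used by most accessibility checkers are easy to compute. Below is a minimal Python sketch; the colour values are illustrative:

```python
# Minimal sketch: WCAG 2.0 relative luminance and contrast ratio.
def srgb_channel_to_linear(c8):
    """Convert one 8-bit sRGB channel (0-255) to linear light."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (srgb_channel_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Pure red text on a mid-green background: similar value (lightness),
# so the ratio falls far below the WCAG AA minimum of 4.5:1 for body text.
print(round(contrast_ratio((255, 0, 0), (0, 128, 0)), 2))  # ~1.28
```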

 

  • Use a Colour Blindness Simulator to see how your webpage or app looks from the perspective of people with colour blindness

Design Thinking is all about empathy. Without first putting yourself in other people’s shoes, you will not understand how differently others perceive the world. Using a Colour Blindness Simulator like Coblis is the closest you can come to experiencing how your website or app appears to people with colour blindness.
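If you just want a rough programmatic preview rather than a full simulator, one deliberately crude approach is to collapse the red-green axis by averaging the two channels. This is a simplification for illustration only, not the physiological model behind a tool like Coblis:

```python
# Crude red/green-blindness preview: average the red and green channels
# so that any colour pair distinguished only along the red-green axis
# collapses to the same colour. Illustrative only -- use a validated
# simulator such as Coblis for real accessibility audits.
def crude_red_green_preview(rgb):
    r, g, b = rgb
    mixed = round(0.5 * r + 0.5 * g)
    return (mixed, mixed, b)

# The MRT map's red and green lines collapse to the same olive tone:
print(crude_red_green_preview((255, 0, 0)))  # -> (128, 128, 0)
print(crude_red_green_preview((0, 255, 0)))  # -> (128, 128, 0)
```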

Ultimately, listening to your users is still key. Having the goal of designing for accessibility is good, but you can never be sure you are doing it in a way that does not hurt the user experience of your main target audience. Using an eye tracker like the Tobii Pro X2-60 is a powerful way to get feedback from both your main target users and users with colour blindness. By analyzing the differences in how both populations look at your website or app, we can arrive at a design that works for both and maximizes conversion rates.

Keep accessibility in mind, and with design thinking we can help those at a disadvantage navigate the world more easily. Let’s design for everyone and make the world a better place to live in for all!

 

The Emergence of a New Medium – VR and its UX considerations

A very Happy New Year from everyone here at Objective Experience! Hope you guys had a wonderful 2015, and continue to stay awesome. Let’s share the joy and love with everyone by making the world a better place every day, and aim for an even better 2016!

In 2015 we saw some interesting new trends in the technology and UX scene. Notably, we witnessed the emergence of a medium we are so used to seeing in science fiction movies: Virtual Reality (VR). Although VR has been around for quite some time now, it was a niche technology mostly used as a research tool, as it had been far too expensive and bulky to enter the mainstream market.

In the consumer market, Samsung has already announced its new Samsung Gear VR and a corresponding lineup of games and apps, yet many people remain unaware that VR even exists. VR is predicted to reach widespread adoption via the gaming industry, with Sony spearheading the charge. Facebook’s acquisition of Oculus in 2014 could also mark the start of an era in which VR replaces many of our real-world interactions. VR could very well be the next computing platform, akin to how smartphones have started to replace our desktop and laptop computers, and have changed how we live our lives over the last decade.

With the possibility that VR will become the next computing platform, increasingly prevalent and integrated into our lives, the UX community needs to get acquainted with this new medium, because it is our job and passion to make interactive experiences pleasurable. 2016 is poised to be an exciting year for VR, and UX designers can expect projects on the VR medium.


Source: Back To The Future Part 2 (1989)

As the medium is still relatively new, it can be quite difficult to find good resources on the internet. But fret not: GitHub user omgmog (Max Glenister) has compiled a really comprehensive list of resources (to date) on UI/UX design considerations for VR. Below are the 3 fundamental UX considerations that are important for a pleasurable VR experience:

1. Immersion / Presence – Perhaps the most important concept associated with the UX of VR is “immersion” (or “presence”), so much so that design for the VR platform has been coined “immersive design”. Basically, “immersion” is the extent to which the virtual environment faithfully reproduces experiences such that users believe the virtual environment is physically real. Many factors can “break” immersion: for example, if interaction with a virtual object does not produce any effect, it violates our mental model of object interaction and hence breaks immersion. Unrealistic positional sound effects and model details also make object interactions seem less realistic.

 

2. Spatial Disorientation / Virtual Reality Sickness – Research has shown that virtual reality sickness is a major barrier to using VR. Its cause is still not fully known, but sensory conflict during movement seems to be the primary one. In natural navigation, we use several of our senses in tandem to make sense of the environment, especially the eyes and the ears. In VR, however, this job is primarily subserved by the eyes. The mismatch between the information coming into your eyes and that from your other senses creates discomfort and symptoms similar to motion sickness. However, the solution to this apparently inherent problem of the VR platform can be as simple as adding virtual noise or tweaking virtual-reality motion parameters.

 

3. Comfort – Although comfort depends mainly on hardware design, the design of the software application contributes to comfort as well. For example, physical movements should be consistent with human ergonomics. If a particular action forces an unnatural twist of the body (e.g., craning your head around while sitting still), it is uncomfortable and potentially dangerous. Illegible text (which is pretty common in VR) and overly bright scenes also impose additional stress on the eyes, causing eye fatigue.

Beyond assessing the UX of VR applications, VR can also be a useful tool for general UX research. As mentioned above, VR technology started out mainly as a research tool, thanks to the fact that it can support research requiring ecological validity in a controlled environment. Before VR existed, much research was conducted in lab-based settings whose findings could not really be generalized to the “real world”. With VR, you can satisfy both criteria by constructing an artificial environment resembling the real world within a controlled setting. With this in mind, VR can undoubtedly also be useful for UX and market research, specifically in assessing user experience in an unbiased, controlled setting.

VR can also be combined with eye tracking technology to provide more ecologically valid insights for UX research. For example, Tobii Pro offers VR integration with the Tobii Pro Glasses 2, providing an easy way to combine VR and eye tracking into a powerful research tool.

Click here for more details!

 

Emotional UX – Techniques for measuring users’ emotions

Emotion: the very spark of feeling that makes our hearts flutter, our eyes tear up and our hands clench in fear. No doubt, we are all controlled by emotions. They are the primary instinct that drives us to feel and act. In UX, people are paying more and more attention to the skill of empathy, and to emotion. But, as far as we can see, no one has defined a standard for how emotion in UX/usability should be measured. Mostly, a designer’s gut feel, previous mistakes and experience do the job. Trial and error within an agile process is OK, but can it be measured?

Since emotions are by nature intangible, there isn’t yet a definitive method to measure them. We have written this summary to work out what would be best for us to do as a consultancy right now.

Neurometrics, Biometrics and Eye Tracking

Andrew Schall (principal researcher and senior director at Key Lime Interactive) has written a comprehensive article suggesting various new methods by which emotions can be measured more accurately and objectively, along with their pros and cons. We briefly review some of the methods mentioned in his article below, then focus on some techniques you can adopt right now in your UX practice.

BIOMETRICS

Facial response analysis 

Traditional facial response analysis involves a few researchers observing participants and coming to an agreement on which emotions the participants are expressing. In recent years, software and algorithms have been developed to recognize facial expressions of different emotions with just a simple webcam set-up. However, the current state of this technology recognizes only a limited set of emotions (e.g., anger, fear, joy), and is only accurate when the emotions are overtly expressed. An example of such software is AFFDEX by Affectiva. You can also check out this TED Talk by Affectiva’s Chief Strategy and Science Officer, Rana el Kaliouby. Other similar software includes Noldus’ FaceReader and ZFace. Despite the limitations, deeper and more precise algorithms are rapidly being developed to raise the accuracy of the analysis.

Electromyography (EMG)

EMG can accurately measure more subtly expressed emotions by picking up signals from specific muscles known to react to specific emotions (check out this Scholarpedia article for a simple introduction). However, EMG is obtrusive and only works if you know beforehand which facial muscles to measure. Covering a participant’s entire face with electrodes is impractical, and in any case far too intrusive for everyday usability testing.

Another limitation of facial response analysis and EMG is that they can only measure overt emotions, which are often under conscious control. As such, these emotions can be highly influenced by social settings. For example, people tend to show stronger facial expressions if they believe they are being observed.

One of our UX consultants trying out the Empatica E3 Wristband

GSR (Galvanic Skin Response)

GSR technology has traditionally been used to measure physiological arousal. It can accurately measure intensity (e.g., arousal, stress), but not emotional valence (positive or negative). Although computational algorithms can be applied to GSR data to estimate valence (Monajati, Abbasi, Shabaninia, & Shamekhi, 2012), the technique is still far from measuring specific emotions.

Other limitations include a delay of 1–3 seconds (maybe more, depending on the equipment used) and sensitivity to external conditions (e.g., temperature, humidity) as well as internal bodily conditions (e.g., medications). We have a GSR unit and have experimented with it, but we found it rather difficult to correlate spikes in GSR with UI interactions. The temporal resolution of GSR is too crude to measure emotional responses to individual events.
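For illustration, here is a minimal sketch of how one might try to line up GSR peaks with logged UI events while allowing for that 1–3 second lag. The 4 Hz sample rate, event names and thresholds are our own illustrative assumptions, not properties of any particular GSR unit:

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative, synthetic data: GSR sampled at 4 Hz (an assumption --
# check your own hardware), plus timestamps of logged UI events.
FS = 4.0                        # samples per second
gsr = np.full(240, 2.0)         # 60 s of flat baseline (microsiemens)
gsr[99:102] = [2.2, 2.5, 2.2]   # synthetic arousal spike peaking at t = 25 s

ui_events = {"checkout_error": 23.0, "page_loaded": 40.0}  # seconds, hypothetical

# Find conductance peaks, then attribute each to any UI event that
# happened 1-3 s earlier, matching the typical skin-conductance lag.
peaks, _ = find_peaks(gsr, prominence=0.3)
for p in peaks:
    t_peak = p / FS
    for name, t_event in ui_events.items():
        if 1.0 <= t_peak - t_event <= 3.0:
            print(f"GSR peak at {t_peak:.1f}s may follow '{name}' at {t_event}s")
```

Even with this kind of alignment, our experience matches the caveat above: several events often fall within the same lag window, which is exactly why GSR’s temporal resolution is too crude for event-level attribution.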

NEUROMETRICS

Electroencephalography (EEG)

EEG is a neuroimaging method used to measure real-time changes in voltage caused by brain activity. Because it measures brain activity, it has a much larger arsenal of measures for emotional responses than biometrics. Its excellent temporal resolution also means it can potentially measure real-time changes in emotional responses, which would be very useful for UX research. However, just like physiological response patterns, brain activity patterns are affected by many external and internal factors. Well-designed computational methods and trained algorithms are needed to extract information from the “noisy” EEG data; movement, for example, can produce swathes of artifacts unrelated to the experienced emotions. Research into EEG as a measure of emotion is still in its early stages, but it has shown more promising results than GSR in measuring emotional states.
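As one concrete example of the kind of measure this research explores, frontal alpha asymmetry is a commonly studied EEG proxy for emotional valence. Below is a minimal sketch on synthetic data; the electrode names, sample rate and sign convention are assumptions for illustration, and a real pipeline would need artifact rejection first:

```python
import numpy as np
from scipy.signal import welch

# Sketch: frontal alpha asymmetry, one EEG-based proxy for emotional
# valence in the research literature. Synthetic two-channel data stands
# in for left/right frontal electrodes (e.g., F3/F4); real pipelines
# need artifact rejection (e.g., for the movement artifacts noted above).
FS = 256                                    # Hz, assumed sample rate
t = np.arange(0, 10, 1 / FS)
rng = np.random.default_rng(2)
f3 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # left
f4 = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # right

def alpha_power(x):
    freqs, psd = welch(x, fs=FS, nperseg=FS * 2)
    band = (freqs >= 8) & (freqs <= 13)     # alpha band
    return np.trapz(psd[band], freqs[band])

# Higher relative left-alpha power (less left activity) is often read
# as withdrawal-related (negative) affect; conventions vary by paper.
asymmetry = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
print(f"frontal alpha asymmetry: {asymmetry:+.2f}")
```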

EEG technology is becoming increasingly accessible (check out this list on Wikipedia), and companies like Emotiv are already producing lightweight, wireless EEG equipment for a simpler and less obtrusive set-up. Fewer electrodes, however, make it harder to transform the data precisely and reliably into meaningful insights. It is a trade-off between obtrusiveness and data sensitivity.

EYE-TRACKING

Eye-tracking is unobtrusive and can measure arousal from blink activity, pupil size and dwell times; however, pupillometry suffers from the same problem of being affected by many external and internal factors. The environment must therefore be well controlled to avoid disturbances that could contaminate participants’ pupillometry data.
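To make the pupillometry idea concrete, here is a minimal, hypothetical sketch of the common baseline-correction approach: arousal is inferred from the change in pupil diameter relative to a pre-stimulus baseline, which is exactly why uncontrolled changes in lighting contaminate the measure. The sample rate and window lengths are illustrative assumptions:

```python
import numpy as np

# Hypothetical pupil-diameter samples (mm) around one stimulus onset,
# at an assumed 60 Hz. Baseline correction: compare the post-stimulus
# mean against the mean of the 1 s window just before onset.
FS = 60
onset = 120                               # stimulus shown 2 s into the trial
rng = np.random.default_rng(1)
pupil = 3.0 + rng.normal(0, 0.02, 300)    # 5 s of noisy baseline diameter
pupil[onset:] += 0.15                     # simulated arousal-linked dilation

baseline = pupil[onset - FS:onset].mean()     # 1 s pre-stimulus window
response = pupil[onset:onset + FS].mean()     # 1 s post-stimulus window
print(f"Pupil dilation: {response - baseline:+.2f} mm relative to baseline")
# A luminance change at onset would produce a similar dilation or
# constriction, which is why the environment must be tightly controlled.
```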

With eye-tracking we can measure people’s unconscious gaze responses to an interface they are using. Specific emotions, however, cannot be measured by eye-tracking alone; they are typically uncovered only in the Retrospective Think Aloud (RTA) interview afterwards, which is susceptible to suggestibility effects.

Despite eye-tracking’s inability to measure emotional states meaningfully on its own, its main advantage lies in its flexibility to combine with other research methods and measurements to gather powerful insights. Eye-tracking helps us determine the user’s attention, focus and other mental states. Combined with other devices, it can potentially pinpoint the specific events or touchpoints that cause a change in emotional state during testing sessions. Lightweight eye tracking equipment, such as our Tobii Glasses 2, also gives researchers the flexibility to test in users’ own environments when more ecological validity is required.

How do we use all this?

One important piece of advice from Andrew Schall’s article is that EEG and GSR are not for everyone, as there is potential for misinterpretation and misuse of the data. We believe you need to understand the science behind these complex technologies before using them, to avoid misusing them. This also applies to eye-tracking technology, even if you are using it as a complementary research method to pinpoint specific events, as mentioned above.

Andrew also warned that it is often insufficient to measure emotions with a single technique, as the neurometric and biometric measurements described above are not fully mature yet. Using a variety of methods that complement each other yields better accuracy in identifying users’ specific emotional experiences. There are, however, still significant challenges to implementing a standard for measuring emotions with these technologies, especially in terms of economy and practicality. Given that neurometric and biometric measurements still have some way to go, is there any other way to measure emotions more economically and practically?

What else can we do to measure emotion?

We believe the answer to this question could be good old self-report questionnaires.

Questionnaires, unlike user interviews, are more objective and standardized, so results can be compared across different contexts and projects. Our clients always want to compare scores like NPS or SUS against other projects across their organization. Although questionnaires still rely on users’ recall (which can be mitigated by the eye-tracking + RTA research methodology), they are simple to implement, and you do not need to be a neuroscientist to analyze the results. There are countless questionnaires available online, but fret not: we have done a little research to identify the following instruments, which are designed and empirically tested to measure aspects related to emotions.

1. Geneva Emotion Wheel

This is an empirically tested instrument for measuring emotional reactions to objects, events and situations, based on Scherer’s Component Process Model. It assesses 20 emotions and can be used in 3 different ways, depending on your objective. You can download a standard template from their website, provided it is for non-commercial research purposes.

2. Plutchik’s Wheel of Emotions


Source: Author/Copyright holder: Machine Elf 1735. Copyright terms and licence: Public Domain.

Plutchik’s wheel of emotions is an early model of an emotion wheel, constructed from 8 “basic” emotions and their “opposite” emotions. It was later expanded to include more complex emotions composed of 2 basic ones. Even though this model lacks empirical testing, some UX designers and researchers use it from time to time to map out user journeys, because it provides an organisational structure (e.g., intensity, complexity) for measuring emotions.

3. Self Assessment Manikin (SAM)

This questionnaire uses pictorial scales to measure 3 dimensions of experienced emotion: pleasure, arousal and dominance. It has often been used in the evaluation of advertisements, and increasingly in product evaluation. Because it is pictorial, it works with a wider range of populations (children, or participants from different language and cultural backgrounds).

4. PrEmo

This questionnaire also uses pictorial scales, but it is designed to measure more specific emotions for product evaluation purposes. It uses a set of 7 positive and 7 negative emotions to measure a product’s emotional impact on users. Like eye tracking, PrEmo can be used either as a quantitative tool on its own or as a qualitative tool to complement user interviews. Although PrEmo is free of charge for academic (non-commercial) use, there is a fee for commercial use.

5. AttrakDiff

AttrakDiff does not measure specific emotions, but it includes an assessment of emotional impact in product evaluation. It measures the attractiveness of a product on 2 sets of scales:

  • Pragmatic scale – basically usability, e.g., usefulness of a product
  • Hedonic scale – this measures emotional reactions. It does not measure the distinct emotions themselves, but the user’s needs and behaviours arising from those emotions, e.g., curiosity, identification, joy, enthusiasm

Their website offers a pretty comprehensive overview of what it is about, and you can have a go at the demo there too.

 

6. youxemotions


Source: http://emotraktool.com/en/why

youxemotions offers a simple, easy-to-use solution for measuring emotions. Users choose what they felt from 9 emotions and 5 levels of intensity. Turning the results into charts for presentation is extremely easy as well. It is currently in beta, and is free to use until the end of the beta period.

Even though there are various ways of measuring emotions in UX, it is important to understand the benefits and limitations of each method. After all, a research method is only useful if it helps you answer your research question or meet your design objective.

If specific emotions are too complicated for your needs, perhaps an analysis of how users move their mouse would be a good enough tool for inferring negative emotions while they browse websites.
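As a taste of what such an analysis might involve, here is a small, hypothetical sketch computing two simple cursor features that have been linked to frustration: movement speed and long hesitation pauses. The log format and the pause threshold are illustrative assumptions, not validated cut-offs:

```python
import math

# Hypothetical cursor log: (timestamp_seconds, x, y) samples.
cursor_log = [
    (0.00, 100, 100), (0.10, 180, 105), (0.20, 260, 110),
    (1.90, 262, 111),  # long gap: the user hesitated for ~1.7 s
    (2.00, 300, 300),
]

PAUSE_THRESHOLD = 1.0  # seconds; illustrative, not a validated cut-off

speeds, pauses = [], 0
for (t0, x0, y0), (t1, x1, y1) in zip(cursor_log, cursor_log[1:]):
    dt = t1 - t0
    if dt > PAUSE_THRESHOLD:
        pauses += 1      # candidate hesitation event
    else:
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)

print(f"mean speed: {sum(speeds) / len(speeds):.0f} px/s, pauses: {pauses}")
```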

-Ying Ki, Shermaine & James

References

Monajati, M., Abbasi, S. H., Shabaninia, F., & Shamekhi, S. (2012). Emotions States Recognition Based on Physiological Parameters by Employing of Fuzzy-Adaptive Resonance Theory. International Journal of Intelligence Science, 2, 166–175.

What is Agile User Research?

User research on a tight timeline and budget is not impossible. In fact, it is already happening. All you need are the quality voices of a handful of customers to test and validate your work, using an agile user research method.

So what is the core difference between agile and a full user research method? Fewer participants are tested in agile than in the full method. But does that mean lower quality data? No.

Research by Jakob Nielsen, one of the early usability gurus, suggests that only 5 users can uncover around 85% of usability problems; with 12 users, almost 99% of the problems can be found. For those who think user research is too costly and elaborate, a small, agile user research method with frequent rounds of testing (as many as the budget allows) is a better fit.
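Those figures follow from Nielsen and Landauer’s well-known problem-discovery curve, which assumes each additional user independently uncovers about 31% of the usability problems. A quick back-of-the-envelope check in Python:

```python
# Nielsen's problem-discovery curve: the share of usability problems
# found by n users, assuming each user independently uncovers about
# L = 31% of them (Nielsen & Landauer's widely cited average rate).

def problems_found(n, discovery_rate=0.31):
    return 1 - (1 - discovery_rate) ** n

for n in (1, 5, 12):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
# 1 user -> 31%, 5 users -> 84%, 12 users -> 99%, roughly matching
# the 85% and 99% figures quoted above.
```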

The other difference between agile and full user research is that fewer tasks are covered during testing. To overcome this, test and iterate the product’s features and functions in smaller chunks until the bigger goal is achieved, in the spirit of the agile manifesto.

Planning and communication are the keys to conducting great agile user research. Early strategizing during the previous development cycle helps. All of the information and ideas from this early planning phase should be communicated frequently to the user research team, so that any issues can be ironed out quickly and resources can be managed efficiently.

Here at Objective Experience, the entire testing-to-reporting phase takes only 2 days. The planning beforehand, starting from the kick-off workshop, takes another 2 days. Ideally, everything happens within 4 days, as illustrated below.

Agile user testing in Singapore

For agile user research, there is no need to test a large number of users; doing so would defeat the purpose of the word ‘agile’, which means quick. Testing 5 carefully selected and thoroughly screened participants from the targeted user segment works well. Each testing session covers around 2-3 main tasks or user flows within 45 minutes.

It is compulsory for the product design and development team members to sit in and observe the testing sessions as they happen. Why? To immediately get a sense of what users actually need, and to iterate on the spot or the next day.

In our agile user research sessions, we also use eye tracking as a way to gain direct insight into how the product is used and what users struggle with. Eye tracking allows observers of the testing to see users’ unconscious behavior in real time, and enables stakeholders to make instant decisions about solutions to interface problems.

At Objective Experience, we have the facilities for team members and other stakeholders to observe the live sessions in person at our viewing room or remotely via a web link. The remote viewing link is great if you have overseas members interested in observing what goes on during the user research. We’ve got a really comfortable space complete with refreshments too!


Take a peek at our viewing room!

After all the testing sessions are done, a brief workshop is held between the research moderator and the observing team members to discuss the key findings, brainstorm solutions together and turn the results into actions for the next development cycle. The next day, a report presenting the top 10 most impactful findings with actionable design recommendations is produced.

Let us help you make incremental improvements to the user experience of your products, thus driving business growth. Drop us a line at infosg@objectiveexperience.com or call +65 67374511 to discuss your needs now.

User Interviews Demystified

Often, user research is associated with a whole lot of complex-sounding methodologies, like

Anecdote circle
Behavioral mapping
Body storming
Cognitive walkthrough
Context mapping
Ethnographic interview
Focus Group interview

But what does this jargon mean at the end of the day? Steve Portigal, author of “Interviewing Users”, emphasizes that, no matter where you are in the design process, there really is just one “methodology” you ought to follow. Speak to users.

When should I speak to users?
Newsflash! You can speak to potential users anytime during the design process! There have been times when our clients came over with a bunch of wireframes, wanting to test an idea with those wires. In other instances, we’ve had designers over with a few unrelated screenshots of proposed mobile applications. The user interviews that followed resulted in design decisions that shaped products.

How do I speak to users?
So, do you need to have a formal education in design or psychology to interview users? Nada. Anyone, be you a designer, developer, a business user or a product owner can be an interviewer. Good interviewing skills come from experience.  To help you along the way, I would recommend the TED Talk by Julian Treasure on how to listen better. Julian speaks about the acronym RASA, to use as a guide in communication. In Sanskrit, the ancient Indian language, RASA means “Essence”. It is completely relevant to interviewing users. So, here’s my take on the acronym.

Receive:  When you are an interviewer, watch AND listen to what the participant is doing and saying. Don’t assume you know all the answers.
Appreciate: Respond to the participant with “hmm”, “ok”, “Interesting” to show that you ARE listening. Be genuinely interested.
Summarize: Summarize themes from the research. Build a story so your stakeholders can empathize with users. Use short clips of the interview to demonstrate your themes.
Ask: Take on the role of a student rather than that of an interviewer. Ask the participant questions like, “Would you explain to me what you meant?”

What do I do with all the insights after the interviews?

Getting insights across is just as important as the interviews themselves. The onus lies on you to get the word across to the stakeholders. YOU are the voice of the user, so SPEAK.

There is however no such thing as an ideal report. A good report identifies issues and provides actionable recommendations.

Summarize: Make the connections. Tell a story. Summarize your research so that it tells a compelling story. Managers and VPs seldom have the patience to go through pages and pages of your report, no matter how passionately you wrote it.

Include videos: Nothing is more impactful than seeing the users speak about pain points themselves. Use the videos to substantiate your story.

Include quotes: Participant quotes are more colourful and poignant than a researcher’s description of an issue. They usually drive a point home and sometimes even provide comic relief in a conference room full of grim faced executives.

Deliver the results in person: Always present your findings to your stakeholders personally.  If you spoke to participants, you will be able to effectively become the voice of the user to stakeholders.

Well, like I mentioned earlier, good user interviewing comes with experience, so don’t hesitate to get out there and ask the right questions.

Gowri Penkar