From the series “Managing Your UX Research Remotely”. This is the third article in our series on transforming UX research workflows into digital processes. After explaining why and how we work with Dovetail and Miro in the first two articles*, this article is all about how to turn research insights into frameworks such as personas when working in a completely digital and remote setup.
Even though Miro helped us a lot with managing the synthesis phase and conducting interviews, it couldn’t help us with usability tests. Of course, it is probably somehow possible to integrate prototypes in Miro and have participants use them. However, Miro is not well suited to displaying click events, especially because the scenarios in usability tests should be as realistic as possible. That means a usability test for an app should be conducted on a smartphone, so that participants have the same features and possibilities, but also the same disadvantages and restrictions, they would experience in real-life situations.
Therefore, usability tests are usually set up in an environment where you have a specific test device on which the software is tested — a laptop, a smartphone, or even some specific product with a user interface if we’re talking about embedded software — and the participants are given some common tasks to perform with the device.
Meanwhile, the researcher takes a close look at how participants interact with these prototypes and where they experience problems. To do so, it is not only relevant to observe which elements participants (try to) click or how they navigate through the product; it’s also important to see how participants react and what their facial expressions look like. This gives researchers additional information about how participants are experiencing a product. In a physical usability testing setup, in which we sit right next to the participants, this additional information is quite easy to capture. In a remote situation, however, you not only need to see the screen of the prototype the participants are interacting with, but also their faces.
This is where Lookback (https://lookback.io) stepped in. Lookback is a user testing tool that captures all screen interactions, plus the voice and even the face of the participants while they are interacting with a prototype, and it works not only for prototypes that run on a computer but also for those on mobile devices. In addition, it stores all recordings online, so you can always re-watch them later.
A great benefit of Lookback is that additional observers can join the interview without being able to speak with the participants, and the participants won’t notice them at all. That means multiple observers can join a usability test without distracting or intimidating the participants. As the interviewer, you get a shared view showing both the prototype and the participants’ interactions with it, as well as their faces and reactions. Moreover, you can communicate with other observers in the integrated chat area, or leave time-stamped notes for each other.
Participants only see the interviewer at the beginning of the interview. Once the interaction with the prototype has started, participants only see the prototype and can use it without any distractions. For mobile scenarios, Lookback comes with an app (called Participate) that participants need to install on their devices; it allows prototypes to be tested on mobile devices while the participants’ faces are recorded with the front camera.
Lookback is a super cool tool, and the great thing about it is that it focuses entirely on being a user research tool, so its usability is well thought out. The tool works so well that we also use it when conducting usability tests in person, because it allows us to include additional observers and to record both the prototype interactions and the facial expressions of participants.
However, there are also some disadvantages, especially when it comes to mobile testing: participants need to be technically experienced to set up the app correctly. We also experienced some major issues related to internet connection quality, since the app seems to require an even stronger internet connection than normal video calls. In total, however, the benefits of Lookback outweigh the disadvantages, and we will definitely continue using the tool after COVID.
So, as this article series is coming to an end, it’s time to draw a conclusion on what the COVID situation has taught us about remote UX research.
All in all, Corona put us in a place where we needed to rethink a lot of our work and processes. And even though Corona itself obviously isn’t something to be thankful for, I am thankful for the change that came with it, or, to be more precise, for the speed at which we were able to achieve this change. I’m sure that sooner or later we would have found ways to improve or enhance our working process anyway, either because we would have been confronted with situations that required such solutions even without COVID, or simply because we at COBE think of ourselves and our working processes as a constant beta setup and always look for ways to improve.
However, the situation that we and everyone else are currently facing has accelerated this process of improvement a lot, and I’m very thankful for the things we’ve learned along the way. I’m curious to see how the tools we now use will develop and how we will further integrate their various features, and I’m sure the next tool is just around the corner, waiting for us to discover the many opportunities it has to offer.