Usability testing: a question of priority

We are now at the stage of adapting the KnowledgePoint platform and our processes to support the pilot studies that are taking place.
As the KnowledgePoint platform gets better, both our list of bugs and our list of new ideas are growing. This is all healthy stuff – the result of being tested by our early adopters. But several problems have begun to arise:
- The list of ideas for new features has grown from a manageable 10 or 20 to 50 and beyond. We are aware that it’s too easy to get trapped in a bug-hunt, and risk creating a perfectly working platform that no one is using.
- It’s increasingly difficult to be sure, particularly in a collaborative programme, that everyone’s needs are being fairly met.
- The question of what gets prioritised carries added weight: with the pressure of live pilots, we can’t afford to be addressing the wrong priorities. Priorities for improvement might be pretty obvious at the early prototype stage, but now we need to make sure the platform is behaving as our users need it to.
So what can we do to see the wood for the trees, and separate the must-have features from the nice-to-haves?
There are a number of empirical methodologies we are looking at to support decision-making in the development of the site. The process we used recently to identify issues and set priorities is known as usability testing.
We began by identifying a list of tasks that are important for users to perform (e.g. a user can search previously answered questions).
Having agreed the tasks, we then developed these into scenarios that we would describe to a test participant (e.g. “Say you want to find out more information about water quality testing. How would you do this?”).
Finally we found two willing participants who were not familiar with the site to sit down for half an hour and carry out the tasks described in these scenarios.
Observers from the other organisations could watch and hear what was happening through a screencast, and a voice recording meant we could check afterwards anything we were unsure the participants had actually said, without interrupting the test.
The participants were encouraged to say what they were thinking, such as: “This page looks good, but it’s a bit busy” or “I want to change the order recent questions are listed in, but I’m not sure how”.
Each of us observing wrote down what we saw as the top three priority issues emerging from the session, and we then condensed these into a single top three for the whole test.
It was fascinating how many things we thought might be problematic went unnoticed, while certain things we thought would be clear turned out not to be sufficiently intuitive.
At the end, we were left with a clear top three things to do, distilled from the myriad of possible alternatives and ways of prioritising.
The experience was a good reminder not to invest in tackling problems before you know what they really are.
Something else these tests highlighted is the changing nature of our group’s relationship with the user, as we move from being authors of change to interpreters of it.
We will be repeating these tests with different types of users and different organisations, and, as mentioned, we also have plans to use other methodologies to make KnowledgePoint deliver the best experience for the user. We’ll bring updates as soon as we can on how we are getting on with these, and on our plans for incorporating more data into our development decisions.