
Usability Testing

St. Lawrence College is the client

The goal was to conduct usability testing on the St. Lawrence College BlackBoard menu for students, iterate a new design based on the findings, and validate it with another round of testing.


Roles
UX researcher & UX designer

Team size
2

Time frame
2 weeks

Tools
Figma, PowerPoint


Figure: Defining the scope of the project 

Scope

The aim was to observe how users currently use the BlackBoard (BB) menu, noting both the challenges they face and the areas where they are comfortable. Based on these observations, we would propose a new BB menu and test it with users against the same set of metrics, then compare the two datasets and propose design recommendations. The scope of this usability testing was primarily the BB course menu labels, their information architecture (IA), and user navigation through these menu options.

Scenario and persona

A persona and a test scenario were created to frame the tasks in a realistic student context, and participants were identified based on them.


Figure: Participants were identified based on the persona and scenario created

Overview

Test metrics

To conduct a successful test, the most important step is defining the metrics, because all evaluation of the user testing is done against them. The metrics were divided into qualitative and quantitative: the qualitative metrics focus on participants' reactions and emotional responses during task completion, while the quantitative metrics focus on data points that can be counted and measured.

Qualitative metrics

  • Pre-test questions
  • Post-test questions

Quantitative metrics

  • Single ease questions
  • Number of errors
  • Time on task
  • System usability scale
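Of these, the System Usability Scale has a fixed scoring formula: odd items contribute (response − 1), even items contribute (5 − response), and the sum is scaled by 2.5 to yield a 0–100 score. A minimal sketch (the example responses are invented for illustration):

```python
# Sketch: standard SUS scoring for one participant's ten responses
# (each a 1-5 Likert rating for SUS items 1-10).

def sus_score(responses):
    """Return the 0-100 SUS score for a list of ten 1-5 ratings."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # odd items: response - 1; even items: 5 - response
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive participant
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # → 80.0
```

Scores above roughly 68 are conventionally read as above-average usability, which makes the metric easy to compare across the two testing phases.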

Test script

A script was created for the test sessions. During each test the script was read exactly as written, with no changes at all. Following the script, the participant performs the tasks while observations are made. The script introduces the participant to the test details and how to proceed, which puts participants at ease and helps elicit more natural responses. It comprises the introduction, the persona and scenario, the tasks, and all the pre-determined metric questions.


Figure (from left): Participants were introduced to the persona, the terms of the test, the metrics and the tasks.

Data template

The script was based on the areas of the BB menu identified as candidates for user testing. Next, to make data collection more organized and easier to compute, we designed a template in Excel. The template had pre-determined areas for the qualitative and quantitative metrics, laid out so that note-taking would feel organic and smooth for the tester.
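A lightweight stand-in for such a template can be generated programmatically. The sketch below writes a blank note-taking grid as a CSV; the column names and task list are hypothetical, not the actual ones used in the study:

```python
# Sketch: generating a blank per-participant, per-task note-taking grid.
import csv

columns = ["Participant", "Task", "Time on task (s)", "Errors",
           "SEQ (1-7)", "Observations"]
tasks = ["Find syllabus", "Open grades", "Submit work"]  # hypothetical tasks

with open("data_template.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    for p in range(1, 4):            # three participants, P1-P3
        for task in tasks:
            writer.writerow([f"P{p}", task, "", "", "", ""])
```

Pre-creating one row per participant-task pair is what keeps note-taking smooth during a live session: the tester only fills cells in, never restructures the sheet.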


Figure (from left): Areas to be tested were identified, and the note-taking template.

Testing Phase I

Qualitative analysis

Analyzing the qualitative metrics is more complicated. Based on the participants' answers to the pre-test and post-test questions, pain points were identified and highlights were noted. Sometimes participants say one thing but their actions say otherwise, and that is where analyzing the dataset becomes tricky. Even so, the participants were open about their opinions of their experience with the BB menu, and their feedback was collected and analyzed in full.

Quantitative analysis

Unlike the qualitative data, the quantitative metrics are easier to compute and measure: there are established formulas for satisfaction scores, and they were applied here. In addition, to convey the findings more effectively and to identify trends in user behavior, each quantitative metric was plotted on a graph. This representation makes it easy for anyone to scan the data and follow the ebbs and flows of user behavior.
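As a sketch of this kind of plotting (the task names and per-participant timings are invented for illustration), mean time on task could be charted with matplotlib:

```python
# Sketch: bar chart of mean time on task per task, from hand-collected data.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

time_on_task = {                    # hypothetical timings (seconds)
    "Find syllabus": [42, 55, 38],
    "Open grades":   [95, 120, 88],
    "Submit work":   [60, 72, 66],
}

means = {task: sum(t) / len(t) for task, t in time_on_task.items()}

plt.bar(list(means.keys()), list(means.values()))
plt.ylabel("Mean time on task (s)")
plt.title("Phase I: time on task")
plt.savefig("time_on_task.png")
```

A bar per task makes outliers obvious at a glance, which is exactly the "trend spotting" the graphs were for.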


Figure: Various metrics represented on graphs and tables to analyze the collected data.

Analysis Phase I

Proposed design

Based on the analysis of the phase I testing, we designed a new BB menu. The redesign targeted the pain points that phase I revealed: tasks with more errors, longer completion times, and higher difficulty ratings from users.
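The selection logic described above can be sketched as a simple filter over the three signals; the task data and thresholds below are assumptions for illustration, not the study's actual numbers:

```python
# Sketch: flag tasks for redesign when errors are high, time is long,
# or perceived ease (SEQ, 1-7, higher = easier) is low.
tasks = [
    # (name, mean errors, mean time (s), mean SEQ)
    ("Find syllabus", 0.3, 45, 6.2),
    ("Open grades",   2.1, 101, 3.4),
    ("Submit work",   0.8, 66, 5.5),
]

def needs_redesign(errors, time_s, seq,
                   max_errors=1.0, max_time=90, min_seq=4.0):
    """Illustrative thresholds; any one bad signal flags the task."""
    return errors > max_errors or time_s > max_time or seq < min_seq

flagged = [name for name, e, t, s in tasks if needs_redesign(e, t, s)]
print(flagged)  # → ['Open grades']
```

Using "any one signal" rather than requiring all three keeps borderline tasks visible, at the cost of a few false alarms.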


Figure (from left): The new proposed design to be tested and the sample data template for phase II.

Test metrics

As a rule of thumb in usability testing, a new proposed design is tested against the same metrics as the original; only then can we tell whether the new design has improved the user experience. Accordingly, there was little to deliberate, and the same metrics were reused from testing phase I.

Qualitative metrics

  • Pre-test questions
  • Post-test questions

Quantitative metrics

  • Single ease questions
  • Number of errors
  • Time on task
  • System usability scale

Test script

Because the tasks and test structure were unchanged, the script was left as-is. Since some participants from phase I would be tested again, it was interesting to observe how they responded to the same script applied to a different design.

Data template

Since the same metrics were being recorded against the same script, it made sense to reuse the same data template.

Testing Phase II

Qualitative analysis

As in phase I, analyzing the qualitative metrics proved more complicated. Based on the participants' answers to the pre-test and post-test questions, pain points were identified and highlights were noted. As before, participants sometimes say one thing while their actions say otherwise, which makes the dataset tricky to analyze. Even so, the participants were open about their opinions of the new BB menu, and their feedback was collected and analyzed in full.

Quantitative analysis

As in phase I, the quantitative metrics were straightforward to compute using the same established formulas, and each metric was again plotted on a graph to convey the findings effectively and expose trends in user behavior.


Figure: Various metrics represented on graphs and tables to analyze the collected data.

Analysis Phase II

Comparing two sets of data collected

After both testing phases were complete, it was time to compare the two datasets. This was a straightforward exercise in determining whether our design changes had improved the user experience. Plotting the data points side by side made it clear where the redesign succeeded and where it failed: whether efficiency had improved, and how participants' satisfaction levels had changed. The comparison gave us insight into user behavior toward the changes made in the BB menu, validating the decisions behind the changes that worked and flagging the ones that did not. Note that this is strictly a comparison of the quantitative data, not the qualitative responses: comparing emotional responses is not a valid way to identify a change in user behavior unless the same participant is involved in both phases.
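A minimal sketch of such a comparison, reporting percentage change in time on task between the two phases (the per-task times are illustrative, not the study's data):

```python
# Sketch: per-task percentage change between phase I and phase II.
# For time on task, a negative change means the redesign got faster.
phase1 = {"Find syllabus": 45.0, "Open grades": 101.0, "Submit work": 66.0}
phase2 = {"Find syllabus": 40.0, "Open grades": 62.0, "Submit work": 70.0}

changes = {t: (phase2[t] - phase1[t]) / phase1[t] * 100 for t in phase1}

for task, change in changes.items():
    verdict = "improved" if change < 0 else "regressed"
    print(f"{task}: {change:+.1f}% ({verdict})")
```

The same percentage-change view works for error counts; for SEQ and SUS the sign flips, since higher scores are better.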


Figure: All quantitative metrics were compared on graphs and tables

Comparing Datasets

Final design recommendations

Having conducted two phases of user testing, analyzed each dataset independently, and compared the two to identify the successes and failures of the modified BB menu, we delivered our final design recommendations to the client. These covered which areas of the BB menu should be retained from the original and which should be adopted from the new proposed menu. Because this was a small-sample test, we also recommended that the client run the usability test with a larger group of participants to confirm these design recommendations.


Figure: Final design recommendations presented to the client.

Design Solutions

Thank you
