[Please complete the assignment on page 337 of the Set Book]
- Reconsider the HutchWorld design and evaluation case study and note what was evaluated, why and when, and what was learned at each stage.
- How was the design advanced after each round of evaluation?
- What were the main constraints that influenced the evaluation?
- How did the stages and choice of techniques build on and complement each other (ie triangulate)?
- Which parts of the evaluation were directed at usability goals and which at user experience goals? Which additional goals not mentioned in the study could the evaluations have focussed on?
Reconsider the HutchWorld design and evaluation case study and note what was evaluated, why and when, and what was learned at each stage.
There were several rounds of evaluation. The design was evaluated in terms of both usability and scope. Evaluation of the first two prototypes, including the first series of usability tests, showed the team that the initial scope of the project was flawed, so they changed it: the 3D virtual world was limited to just the Reception area, and support was added for asynchronous messaging, games, and locating approved medical sites. When the application was redesigned as a portal, it was re-tested and further refined, allowing the team to uncover more specific usability issues. One goal of the new testing was to ensure that the system supported multi-user interactions.
In general, evaluation started with requirements and scope and then moved on to usability and user experience; in other words, it moved from whether the project had the right aims and goals to whether those goals were being met.
How was the design advanced after each round of evaluation?
There were two main rounds of testing. The first round led to a fairly radical revision of the scope of the design, from an immersive 3D virtual-reality experience to a portal with support for email, games, medical queries and rather limited 3D functionality. The second round was more focussed on detailed usability testing and led to incremental improvements in the design.
What were the main constraints that influenced the evaluation?
The team found it difficult to recruit testers quickly, for reasons inherent in their choice of user group: the potential users were sick and had limited energy and availability.
How did the stages and choice of techniques build on and complement each other (ie triangulate)?
The evaluation techniques ranged from quite open and general (interviews, focus groups) to very specific (scripted usability tests). The more open techniques yielded the aims and scenarios that set the context and goals for the more specific ones.
Which parts of the evaluation were directed at usability goals and which at user experience goals? Which additional goals, not mentioned in the study, could the evaluations have focussed on?
The initial interviews and resulting scope analysis were largely expressed in terms of user experience: one over-arching requirement, for example, was to reduce the social isolation of patients and care-givers. The formal tests were more focussed on usability, but some of the results are again best described in terms of user experience. For example, the failure of the purely synchronous early prototype to reach critical mass is most simply explained by the fact that it is no fun being in a chat room on your own. Similarly, there are user-experience explanations for the patients' preference for online games (fun), the search for recommended medical sites (helpful) and email (emotionally fulfilling). In addition to the carefully scripted usability tests in each round, there was a short questionnaire which asked both usability and user-experience questions.
I think that the evaluations could also have focussed on the user-experience goals of being motivating and satisfying, since these would be highly relevant to socially isolated users suffering from energy-sapping conditions.