Describe the approaches taken to user involvement in the Tokairo case study and discuss these using the issues you identified in Review Question 5 [List the (eight) issues you need to consider when deciding on the appropriate level of user involvement]. What alternative approaches might they have taken? You should refer to the case studies in Section 9.2.1 of the Set Book as examples. What advantages and disadvantages might these approaches have had?
The set book lists the following issues that needed to be considered when deciding the appropriate level of user involvement:
Can you identify the users, or are they the open market?
In this case the users were already identified: the drivers.
How many users are there? Tens or thousands?
We don't know exactly how many users there are, though common sense suggests tens or perhaps the low hundreds. In any case, given that the users are, by the nature of their job, never in the same place at the same time, there are too many for them all to be consulted easily or cheaply.
How long is the project expected to take?
Do you want a major contribution from users, or just advice and guidance?
There seems to have been a clear assumption, initially from the client but accepted by Tokairo, that the real users' main contribution should be to the requirements process, to alpha-testing the form and to beta-testing the entire system. Rachel, the systems analyst, acted as a proxy user for reviewing the design, but her contribution (apart from suggesting that the buttons be colour-coded) seems to have been mainly evaluation.
How many users do you want involved with the project?
The team didn't want all users to be involved but consulted user and stakeholder representatives widely during the requirements phase. For the main design options and for design and evaluation activities, they relied mainly on Rachel, the user proxy. The notable exception is that they tested the form design on one driver area before beta-testing the entire solution, presumably having identified this as being at higher risk of failure (e.g. due to the drivers' environment when filling in the forms) than the kiosk design.
Is consistency of user input important?
There was no continuous user involvement apart from that of Rachel, the user proxy, so consistency of user input does not seem to have arisen as a question involving the drivers. The consistency of Rachel's involvement is likely to have been helpful to the design process.
How important is familiarity with the system?
Given the general non-involvement of the drivers, familiarity with the system under development does not seem to have been a significant issue. Rachel, as the sole proxy user, was familiar with the system as it was designed, and this would have helped her contribution.
How important is it for involved users to be in contact with the user group they represent?
As there does not seem to have been any change in the drivers' environment or practices during the project, this was probably not an issue for Rachel with respect to the drivers.
Comparing this approach with the Microsoft or OU case studies in the Set Book, Tokairo could have co-opted a driver to work with the team, presumably part time. This would have been useful if the team had had worries about the correctness of their scope and requirements (as the Open University appears to have had), but it would have involved disrupting the availability and working practices of the driver in question, and would have risked inappropriate feedback due to lack of appropriate user motivation. In fact the team seem to have regarded this as a tightly scoped exercise with well-understood requirements (especially given the comprehensive requirements phase), so the advantages of this step would have been low.
They could also have conducted workshops and prototyping sessions, as the OU did. This would have had the advantage of reducing the risk of "requiring more changes during the prototype ... and even [the] pre-live stage", but it would have been disruptive to operations, the drivers and their management, and premature exposure would have risked "destructive feedback".
They could also have performed lab-based usability testing, as Microsoft does. This would have reduced the risk of the overall system proving unacceptable during beta-testing, but the team seem to have felt that the risk of this failure was not sufficiently high to warrant the required level of disruption and expense.