Wednesday, December 09, 2009

Saturday, October 17, 2009

M364 - lessons learnt

So, I did my M364 Fundamentals of Interaction Design exam yesterday. I've got to the end of the course, and it's a good moment to look back and take stock.

First, I discovered that I can actually study something properly. I remember my academic career as a process of scrabbling through tests and exams despite having dreamed my way through classes and prep, until coming a cropper at uni. For this course I sat down and read the scheduled materials and scribbled away in the margins and attended the tutorials and got good marks on the assignments. That was very satisfying (to use a User Experience goal).

Second, I can do something about my wandering attention. I've taken to using the Pomodoro Technique, and I feel I've come a long way, even if I still have a long way to go.

Third, I've developed a good way of organising my notes for something like this - I used a free online mindmapping tool. The good news is that this is a very effective way of reviewing material, which forces you to resolve all those niggling little questions (what's the difference between ubiquitous computing and wearable computing?) that you can so easily skate over when merely reading. The bad news is that I didn't take up mindmapping until the revision phase, and didn't complete this activity in time to move on to the next, which would have been to extract hand-written revision cards from the mindmaps. So, next time: mindmap as I go through the materials for the first time!

So, what next? I hope to find ways of using what I've learnt about Interaction Design at work - my employer was supportive of the course. No luck so far, but having made this investment I also hope to find a mentor, perhaps through the IxDA, or, more informally, through the UX book club. I'm also considering whether I can do some IxD on the side, getting involved with friends or an open source project.

I'd like to build on what I've learned. Now I've finished the course I'd like to use some of the time this releases to do some Google analytics, and also to do some Arduino hacking - very different directions, I know!

Further down the line, I'm considering taking the Open University's Design Diploma. It seems pretty practical, and I suspect that the various digital design labels (Interaction Design, User Experience and Information Architecture) are going to end up merging with an updated version of Product Design, many of whose wheels we are probably re-inventing. And, above all, I like designing things. This is probably what took me into software development, and as the process became more and more productionised, and involved less and less design, it may be what took me out of it too.

Enough reflection, now for action...

Tuesday, August 18, 2009

A thin line between triumph and tribulation

Johnny needed a bath yesterday morning - but he wasn't in the mood. At all. So I (ready to catch my train) got called in, and found him sitting on the sofa, uninterested in either persuasion or authority.

My efforts simply make him whine and wave his fists, but he holds off from hitting me (good start). I resist the probably futile and counterproductive temptation to drag him upstairs and offer him a hand to thump instead, providing some helpful martial arts grunts to add to the action. Several thwacks later and I see a smile, then hear a little "OK", so light I almost miss it.

Parenting, eh? Memo to self - rule 1: be there. Autopilot won't hack it.

And let's celebrate the occasional triumph - the tribulations can look after themselves...

Wednesday, April 15, 2009

M364 Block 1, Unit 4, Activity 1

[Please complete the assignment on page 337 of the Set Book]
  1. Reconsider the HutchWorld design and evaluation case study and note what was evaluated, why and when, and what was learned at each stage?
  2. How was the design advanced after each round of evaluation?
  3. What were the main constraints that influenced the evaluation?
  4. How did the stages and choice of techniques build on and complement each other (ie triangulate)?
  5. Which parts of the evaluation were directed at usability goals and which at user experience goals? Which additional goals not mentioned in the study could the evaluations have focussed on?
Reconsider the HutchWorld design and evaluation case study and note what was evaluated, why and when, and what was learned at each stage?
There were several rounds of evaluation. The design was evaluated in terms of usability and scope. Evaluation of the first two prototypes, including the first series of usability tests, led the team to learn that there were problems with the initial scope of the project, so they changed the scope, limiting the 3D virtual world to just Reception, but adding support for asynchronous messaging, games, and locating approved medical sites. When the application was redesigned as a portal it was re-tested and further refined, allowing the team to learn about more specific usability issues. One goal of the new testing was to ensure that the system supported multi-user interactions. 

In general, evaluation seems to have started with requirements and scope, then moved on to usability and user experience - in other words, from whether the project had the right aims and goals to whether those goals were being reached.

How was the design advanced after each round of testing?
There were two main rounds of testing. The first round led to a fairly radical advancement of the scope of the design, from an intensively 3D virtual reality experience to a portal with support for email, games, medical queries and rather limited 3D functionality. The second round was more focussed on detailed usability testing and led to incremental improvements in the design.

What were the main constraints that influenced the evaluation?
The team found it difficult to arrange testers rapidly for reasons inherent in their choice of user group - the potential users were sick and had limited energy and availability.

How did the stages and choice of techniques build on and complement each other (ie triangulate)?
The evaluation techniques ranged in nature from quite open and general (interviews, focus groups) to very specific (scripted usability tests). The more open techniques supplied aims and scenarios, which in turn provided the context and goals for the more specific evaluation techniques.

Which parts of the evaluation were directed at usability goals and which at user experience goals? Which additional goals, not mentioned in the study, could the evaluations have focussed on?
The initial interviews and resulting scope analysis were largely expressed in terms of user experience - one over-arching requirement, for example, being to reduce the social isolation of patients and care-givers. The formal tests were more focussed on usability, but some of the results are again best described in terms of user experience - for example, the problem with the purely synchronous early prototype never reaching critical mass is most simply explained by the fact that it is no fun being in a chat room on your own. Similarly there are UX explanations for the patients' preference for online games (fun), the search for recommended medical sites (helpful) and email (emotionally fulfilling). In addition to the carefully scripted usability tests in each round, there was a short questionnaire which asked both usability and user-experience type questions.

I think that the project could have focussed on the user-experience goals of motivating and satisfying, since these would be highly relevant to socially isolated users suffering from energy-sapping conditions.

M364 Block 1, Unit 3, Activity 9

How user-centred was the approach taken by Tokairo? Start by listing the main stakeholders - beneficiaries, decision makers, gatekeepers and workers - and discussing their roles in the development process.

Then analyse Tokairo's approach against each of the five principles of user-centered development given on page 286 of the Set Book
The main stakeholders in the project were, at a corporate level, Excel plc as parent of Tankfreight and Shell, the project's client. Shell were responsible for allowing Tankfreight and Tokairo access to their depot. Tankfreight were probably the main beneficiary since they commissioned the project, and automating the delivery reporting process should reduce costs and increase accuracy.

The more immediate stakeholders were Tankfreight project management (Hugh Rowntree and Rachel Darley), Tankfreight's account manager for Shell, the driver foremen, trade union representative and the drivers themselves.

Hugh and Rachel were gatekeepers to the drivers and Hugh was presumably a decision maker in commissioning the project, and, together with Rachel, signing-off design options and prototypes.

Tankfreight's account manager may have been a gatekeeper for access to the Shell depot, and would have had some involvement in high-level project decisions.

The driver foremen and union representatives are not only gatekeepers to the drivers but possibly themselves users of the system, as workers, and possibly beneficiaries, either as users or because it makes the drivers happier.  

The drivers themselves are the primary users of the system, possible beneficiaries (if the forms are more accurate and they get fewer forms being returned to them) and are involved as workers.

Presumably others are also involved, either as workers or beneficiaries - those who used to enter the drivers' forms manually, those who manufacture and sell the kiosks, and those who service them.

Now let's evaluate Tokairo's approach against the five principles of user-centered development.

Users' tasks and goals are the driving force behind the development
Tokairo's initial input comes from Hugh Rowntree and Rachel Darley, who know the drivers, their tasks and their environment well, and Tokairo had already had access to the users before the project even started. They appear to have a good understanding of users' tasks and goals. But satisfying the drivers' tasks and goals is probably a necessary rather than a sufficient condition for the project's success, the final criterion presumably being that the system reduces costs and increases accuracy. 

Users' behaviour and context of use are studied and the system is designed to support them
As part of the system audit, "Treve actually went initially to the oil terminals and depots. Treve met the drivers and the driver foremen". The team appear to have a detailed understanding of what the drivers do, and of where they do it, in the cab and at depot reception, and it is clear that design decisions directly reflect these factors.

Users' characteristics are captured and designed for
Once again the team appear to have a detailed picture of their user group, of what they have in common (being literate, non-academic males, short of time, primarily interested in earning a living and going home) and where they diverge (level of interest in computers). And again, this is clearly reflected in the team's account of how they made design decisions. 

Users are consulted throughout development from earliest phases to the latest, and their input is seriously taken into account
This principle was definitely observed during the requirements phase of this project. It is clear that Tokairo would normally prefer to consult users during the design and implementation phases, but felt that there were specific reasons they shouldn't do so in this case, although the form did get some early testing, which is a kind of user consultation.

All design decisions are taken within the context of the users, their work, and their environment
This principle seems to have been observed very thoroughly. In fact it seems that the team ascribe the success of the project to this factor.

Given that the project followed at least four of the five principles, and that the remaining principle was partially missed for specific reasons rather than lack of interest or lack of user focus, I would say that the approach to this project was quite highly user-centered.

M364 Block 1, Unit 3, Activity 8

Describe the approaches taken to user involvement in the Tokairo case study and discuss these using the issues you identified in Review Question 5 [List the (eight) issues you need to consider when deciding on the appropriate level of user involvement]. What alternative approaches might they have taken? You should refer to the case studies in Section 9.2.1 of the Set Book as examples.

What advantages and disadvantages might these approaches have had?

The set book lists the following issues that needed to be considered when deciding the appropriate level of user involvement:

Can you identify the users, or are they the open market?
In this case the users were already identified, as being the drivers.

How many users are there? Tens or Thousands?
We don't know exactly how many users there are, though common sense suggests it might be tens or maybe low hundreds. In any case, given that the users are, by the nature of their job, never in the same place at the same time, it's too many people for them all to be consulted easily or cheaply.

How long is the project expected to take?
Again, we don't know how long the project was estimated to take or how long it actually took, but in terms of user involvement the answer is probably that it was going to take too long to consider co-opting a real user for the duration, but not so long that any significant user practices or requirements would change before the project was completed.

Do you want a major contribution from users, or just advice and guidance?
There seems to have been a clear assumption, initially from the client but accepted by Tokairo, that the real users' main contribution should be to the requirements process, to alpha-testing the form and to beta-testing the entire system. Rachel, the systems analyst, acted as a proxy user for reviewing the design, but her contribution (apart from suggesting that the buttons be colour-coded) seems to have been mainly evaluation. 

How many users do you want involved with the project?
The team didn't want all users to be involved but consulted user and stakeholder representatives widely during the requirements phase. For the main design options, design and evaluation activities they mainly used Rachel, the user proxy. The notable exception is that they tested the form design on one driver area before beta-testing the entire solution, presumably having identified this as being at higher risk of failure (eg due to the driver's environment when filling in the forms) than the kiosk design.

Is consistency of user input important?
There was no continuous user involvement apart from that of Rachel, the user proxy, so consistency of user input does not seem to have arisen as a question involving the drivers. The consistency of Rachel's involvement is likely to have been helpful to the design process.

How important is familiarity with the system?
Given the general non-involvement of the drivers, familiarity with the system under development does not seem to have been a significant issue. Rachel, as the sole proxy user, was familiar with the system as it was designed, and this would have helped her contribution.

How important is it for involved users to be in contact with the user group they represent?
As there does not seem to have been any change in the drivers' environment or practices during the project, this was probably not an issue for Rachel with respect to the drivers. 

Comparing this approach with the Microsoft or OU case studies in the Set Book, Tokairo could have co-opted a driver to work with the team, presumably part time. This would have been useful if the team had worries about the correctness of their scope and requirements (as the Open University appears to have had) but would have involved disrupting the availability and working practices of the driver in question, and would have risked inappropriate feedback due to lack of appropriate user motivation. In fact the team seem to have regarded this as a tightly scoped exercise with well-understood requirements (especially given the comprehensive requirements phase), so the advantages of this step would have been low.

They could also have conducted workshops and prototyping sessions, as the OU did. This would have had the advantage of reducing the risk of "requiring more changes during the prototype ... and even [the] pre-live stage", but it would have been disruptive to operations, the drivers and their management, and premature exposure would have risked "destructive feedback".

They could also have performed lab-based usability testing, as Microsoft does. This would have reduced the risk of the overall system proving unacceptable during beta-testing but the team seem to have felt that the risk of this failure was not sufficiently high to warrant the required level of disruption and expense. 

Monday, April 13, 2009

M364 Block 1, Unit 3, activity 7

Look back through the previous sections. Describe Tokairo's approach to design. How do these map onto the ID activities and characteristics of the ID process?
Having established the user requirements, Tokairo brought their own experience into the design process. Some of the major decisions - such as the choice of touch-screen input and the choice of a "big lottery ticket" form - were made rapidly, after consideration of alternatives, but not necessarily after much iteration. In the case of the basic form design, rather than the ID activity of "Developing alternative designs" they evaluated alternative existing designs, to similar effect but presumably at rather lower cost. The use of the team's professional experience seems to have provided a similar short-cut for the kiosk design, and might be seen as a greater difference from the ID method.

The form design was tested in the wild (in "the most militant driver area they could find") and maps to the ID activity "Building interactive versions of the design".

There was explicit evaluation of both the Kiosk and Form designs and this corresponds to the ID "Evaluating designs" activity.

The ID characteristic "focus on users" can be seen in the whole requirements stage, and in the pre-beta field testing of the form, though less so in the implementation of the kiosk design, where the client preferred to provide a user proxy.

The "Specific usability and user-experience goals" ID characteristic is arguably "absent but unnecessary" due to the short lines of communication and clear implicit focus in this area.

And the ID characteristic of "Iteration" can be seen mainly within activities, in the refinement of the kiosk and form designs based on feedback from users, user proxies and field testing.

Saturday, April 11, 2009

M364 Block 1, Unit 3, Activity 6

Look back through the previous section and list the characteristics of the approach that Tokairo took to the requirement activity. How do these map onto the ID activities and the characteristics of the ID process (as discussed in Section 6.2.1 starting on page 168 of the Set Book)? 
What is their attitude to stakeholders?
Tokairo have a methodology with a first stage being the Site and System Audit. They try to identify "logical group[s]" of users by business function, who have characteristic concerns and requirements. Having already talked to the drivers and driver foremen, they went on to talk to the systems manager and systems analyst, and union representatives. This maps to the ID "needs and requirements" activity, and the ID characteristic of being "focussed on users".

The ID method has the characteristic of being "iterative". The Tokairo approach in this case appears to involve more iteration within the requirements activity than between activities when compared to the ID method, but they describe this as being due to the success of the requirements activity, and both are in fact present.

The results of the initial requirements activity were communicated verbally rather than as written deliverables, and the usability or user experience goals were probably set implicitly, in terms specific to the project (eg "suitable for well-motivated, literate drivers with big fingers who are in a bit of a hurry") rather than in the more general terms of the ID method, which may represent a departure from the ID characteristic of "Specific usability and User Experience goals".

The Tokairo attitude to stakeholders is open and respectful - because "if you just talk to the managers ... you will have a whole lot of surprises".

Friday, April 10, 2009

M364 Block 1, Unit 3, Activity 5

This activity builds upon Activity 6.4 on page 182 of the Set Book. Compare the two electronic calendar designs using the following usability and user experience goals:
  • Efficiency: In particular, which design enables the user to find a given date most quickly?
  • Learnability: which design will be easiest to learn?
  • Aesthetically pleasing.
  • Enjoyable.
Which design do you prefer, and why?
Efficiency: The desktop design appears to be very inefficient for finding dates since you have to "leaf through" the diary, so the time required to locate a date page is proportional to its distance from the currently opened date. The phone version requires keystrokes instead of mouse clicks, but since you're entering a date the number of keystrokes is not large, and does not increase with calendar distance. The desktop diary is probably more efficient for viewing dates within the same week; the phone diary is probably more efficient for navigating to more distant dates.
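To make the efficiency comparison concrete, here's a toy cost model in Python. The assumptions (one click per page leafed, a fixed six-keystroke date entry such as 251209) are mine, purely for illustration:

```python
from datetime import date

def desktop_clicks(current: date, target: date) -> int:
    # Page-per-day diary: leafing costs one click per day of distance.
    return abs((target - current).days)

def phone_keystrokes(target: date) -> int:
    # Typing the date directly (e.g. 251209): a constant six keystrokes.
    return 6

today = date(2009, 4, 11)
near, far = date(2009, 4, 13), date(2009, 12, 25)

assert desktop_clicks(today, near) < phone_keystrokes(near)   # nearby date: desktop wins
assert desktop_clicks(today, far) > phone_keystrokes(far)     # distant date: phone wins
```

The crossover sits wherever the leafing cost exceeds six interactions, which matches the intuition above.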

Learnability: The desktop diary is probably easier to learn, especially given its limited date navigation and graphical rendering, but the simplicity of the phone diary makes it no harder to learn than necessary.

Aesthetically pleasing: The desktop diary is able to use a clean, helpful graphical rendering which offers hints about its further functionality (tabs for the notes and address book sections). The phone diary is written for a visually restricted environment, which means that any comparison is almost as much a comparison of the environments as of the designs. So I would say that the desktop design is more aesthetically pleasing, while noting that this more helpfully addresses the question "Which design would a user prefer?" than "Could the phone design be made more pleasing?"

Enjoyable: The cross-referencing possibilities of the desktop design might generate some enjoyment. Assuming we can click on "John" in an appointment, get taken to his address page, and then see all his appointments (in the same way that we can see all people listed for an appointment), this cross-referencing, together with the ability to read notes from previous meetings, might give a certain bookish enjoyment. The phone design offers less prospect of discovery, so I think scores even closer to a neutral zero.

Assuming that both designs are equally available (for example, that I have gone out and bought an iPhone) I would prefer the desktop design for its greater efficiency when dealing with nearby dates, its slightly greater enjoyability and its better aesthetics, though I might find its lower efficiency for distant dates frustrating.

Sunday, March 15, 2009

M364 Block 1, Unit 3, Activity 4

I now want you to draw a design for an electronic calendar system that is radically different to the outline sketch in Figure 6.1. [which shows a page and book model] You only need to draw a single sketch in order to illustrate your design. Try asking yourself questions such as: how could a mobile telephone, with a small screen, be used to access an electronic calendar? What are the characteristics of a magnifying glass, and how might these characteristics be of benefit when designing an electronic calendar?
[click on thumbnail for fullsize illustration - with apologies to Scott McCloud whose Understanding Comics I've just been reading for UX Bookclub London]

M364 Block 1, Unit 3, Activity 3

Draw a stakeholder diagram for the supermarket check-out system from Activity 6.2 on page 172 of the Set Book.
Without further ado...

M364 Block 1, Unit 3, Activity 2

Who are the stakeholders in the At Home website project?
All groups shown in the associated organisation chart have some kind of stake in the project. The front-line, logistics and support, and customer support staff will all be affected by changes in the way the company communicates with customers, and the training group will be affected by changes in the job-description and turnover of front-line staff. Development will be judged by its success or failure, as will, to some extent, Head Office.

Obviously customers are the other major stakeholders - possibly the most important group, and the hardest to communicate with.

M364 Block 1, Unit 3, Activity 1

You are being employed as a novice interaction designer on a project to develop a public kiosk providing information about the exhibits available in a science museum (this is one of the examples used in Activity 1.2 on Page 10 of the Set Book).

Consider how you might implement each of the four ID activities. For example, you might observe users as a part of establishing requirements. As it is still early in the course you will need to use your imagination and experience when answering this question, because we have not yet covered the various approaches in any detail.
Stage 1 - Identify Needs
To identify needs I would start with the relevant stakeholders. I might end up with a list like this:
  • Visitors
  • Visitor-facing staff - around the displays and at the shop
  • Curators / educators - whoever determines the purpose and manner of museum communications
  • Display builders / designers - a museum probably has permanent staff for this
  • Management - line management and, if possible, the project sponsor
I would start by reviewing the business goals of the project with the project manager. As a novice I probably won't have access to the project sponsor but a smart PM should be able to answer success-criteria questions like: does the museum need more visitors (eg if funding is related to number of visitors), more satisfied visitors (if funding is related to visitor satisfaction ratings), more families (perhaps to compensate for spending cuts on sports fields and leisure centres), more research visitors (to compete for academic funding), more out-of-town or foreign visitors (to increase municipal tourism) or maybe higher-spending visitors (if the museum is self-funding)? 

Some of these have mutually exclusive implications - maybe the way to raise average visitor satisfaction is to reduce the number of visitors, along with the volume of noise and the length of queues. I would also like to have some idea of the human and technical resources available for the project.

I would ask the museum staff to help me draw up an informal profile of typical visitors, broken down into about a dozen types by (for instance) age, group composition (families, couples, schools?), educational or social background, and distance travelled. I would then select perhaps the five most important types and interview some visitors from each, on arrival and on departure, to see why they came and what they thought about the museum. I would then identify any common themes and factors.

I would then talk to the customer facing staff about what they have to help visitors with, and their observations over time of what pleases and displeases visitors (some of this information may relate to times when the museum was different, allowing us to learn from the past).

I would ask the curators / educators about their goals for the kiosk - what they wanted to display, how they wanted to explain things, and what they wanted to draw to the attention of the various kinds of visitors.

I would talk to the display designers and builders about the physical and technical communication options, including any means for visitors to interact with the kiosk. I would try to build a particularly good relationship with them because, as a junior interaction designer, there is a significant risk that they will simply ignore me, thus hindering my ambitions to become a senior interaction designer.

Stage 2 Create alternative designs
I would develop and sketch different kiosk designs, showing how they related to policy, pedagogical and practical goals identified in Stage 1, any differences in the resources required for each design, and what other trade-offs the options involved. I would present the designs to representative members of each group of stakeholders to get feedback on the goals related to that group, leaving management to last in order to prevent premature option selection.

Stage 3 Build alternative designs
Depending on the time available I would build one or more interactive prototypes, using further testing and consultation to determine the final choice. The interactive prototype might be acceptable as a design specification for this option; if not, I would add whatever documentation or visual design details were required by the rest of the delivery team.

Stage 4 Evaluate designs
I would evaluate designs at all stages, since some problems and issues will invariably become visible to the affected stakeholders only at later stages.

Wednesday, March 11, 2009

M364 Block 1, Unit 2, Activity 2

Complete the assignment on page 28 of the Set Book. [...]

Find a handheld device and examine how it has been designed, paying careful attention to how the user is meant to interact with it.

(a) From your first impressions, write down what first comes to mind as to what is good and bad about how it has been designed, paying particular attention to how the user is meant to interact with it. Then list (i) its functionality and (ii) the range of tasks a typical user would want to do using it. Is the functionality greater than, equal to or less than what the user wants to do?

(b) Based on your reading of this chapter and any other material you have come across, compile your own set of usability and user experience goals that you think will be most useful in evaluating the device. Decide which are the most important ones and explain why.

(c) Translate the core usability and user experience goals you have selected into two or three questions. Then use them to assess how well your device fares (e.g. Usability goals: What specific mechanisms have been used to ensure safety? How easy is it to learn? User experience goals: Is it fun to use? Does the user get frustrated easily? If so, why?)

(d) Repeat (b) and (c) for design principles and usability principles (again, choose a relevant set)

(e) Finally, discuss possible improvements to the interface based on your usability evaluation.
I recently bought a Livescribe Pulse smart pen. With no previous experience of smart pens, my ability to use this device is highly dependent on the quality of its design, so I have chosen this as my subject.

[Since smart pens may be less familiar to others than remote controls and mobile phones, it might be helpful to explain the basic functionality here: 
  • You can set it to record sounds, then start taking notes - words, diagrams, doodles, whatever - at a lecture or meeting, and later play back what was recorded at the time you wrote or drew anything.
  • You can then upload both sound and image to a computer where you can browse the page images and continue to use anything written there as an index into the sound recording.
There's more, but this is the core functionality]

[a] The first good thing about the way the device works is the power and simplicity of its core functions. Indexing sound (or video) recordings so that relevant content is immediately and intuitively available is a hard task, and this device solves it brilliantly.

The second good thing is the simplicity of the device's mechanical user interface. The pen exposes the following features:
  • The ballpoint or stylus; then, running along the top,
  • A small speaker grille
  • A small microphone
  • A small rectangular OLED display strip
  • A flush on/off button
  • At the blunt end, a 2.5mm socket for stereo headphones which double as a stereo microphone; then, running along the bottom,
  • Some flush electrical contact strips for docking with the device's USB cradle / recharger
  • And, returning to the writing end, a small infra-red camera which can see where the pen is writing.
Of these, only two (the on/off button and the ballpoint) are a direct part of the user interface. This simplicity has been achieved by moving much of the interface that might normally be found on a device on to specially printed paper.

Each page in the pre-printed pads has a set of record / playback / volume "control" icons printed along the bottom edge. The pen uses its camera to recognise these controls, so that if you tap the "Record" icon, it starts recording. 

Each page is printed with a fine set of microdots, using the proprietary Anoto dot pattern, which uniquely identifies every location on every page. This pattern is also picked up by the camera, allowing the pen to record and locate every stroke on every page. So if you wrote a note while recording, you can come back and tap on that note and the pen will start to play back the recording, starting from 5 seconds before you began the note.
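
The tap-to-replay behaviour amounts to a simple time index: each pen stroke is stamped with its offset into the audio recording, and a tap near a stroke seeks the audio to a few seconds before that stroke was made. Here is a minimal sketch of the idea - the class, field names and data layout are my own illustration, not Livescribe's actual format:

```python
# Hypothetical sketch of tap-to-replay indexing; all names and structures
# are my own invention, not Livescribe's real data format.

class Session:
    """One recording session: pen strokes time-stamped against the audio clock."""

    def __init__(self):
        # Each stroke: (seconds into the recording, page number, x, y)
        self.strokes = []

    def add_stroke(self, t, page, x, y):
        self.strokes.append((t, page, x, y))

    def playback_offset(self, page, x, y, tolerance=5.0, lead_in=5.0):
        """Map a tap at (x, y) to an audio offset, starting lead_in seconds
        before the nearest matching stroke; None if nothing was written there."""
        for t, p, sx, sy in self.strokes:
            if p == page and abs(sx - x) <= tolerance and abs(sy - y) <= tolerance:
                return max(0.0, t - lead_in)
        return None

session = Session()
session.add_stroke(12.0, page=1, x=100, y=200)  # a note written 12 s into the talk
print(session.playback_offset(page=1, x=101, y=199))  # -> 7.0 (12 s minus the lead-in)
```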

Similarly, the inside front page of the notebook has a more extensive set of controls allowing you to use the pen as a calculator, change its settings, or view its status.

The single most frustrating thing about the way it works is that it requires the specially printed paper for its core functions. There is a menu function viewable through the built-in OLED display, operated using a navigation cross (referred to as "NavPlus") with four arrowheads and a central location, but the most useful thing you can do with this and no pre-printed paper is to start and stop sound recording, totally unconnected to what you write or draw.

(i) Functionality provided
  1. Write (by which I include drawing)
  2. Record (sound)
  3. Link recording to writing, so that you can
  4. Tap on anything you wrote earlier and playback what was being said at the time
  5. Copy writing and sound up to a computer
  6. View saved pages, and click with a mouse on any part of the writing to hear what was being said at the time
  7. Play back micro-movies on the tiny display
  8. Use the pen to play the piano given a hand-drawn keyboard
  9. Translate a short list of words between various languages
(ii) What a user might expect
  1. Write
  2. Record
  3. Link recording to writing
  4. "Tap" playback
  5. Upload
  6. "Click" playback
  7. Handwriting recognition
I think the first six provided functions are those that a user would hope for; the remainder are basically gimmicks which are good for entertainment or showing the device off.

As the Pulse smart pen is likely to be the first smart pen that most users come across, they may not have very clear expectations. My experience when explaining it to others is that the core functionality is more than they expect, but given that functionality they then ask eagerly whether it can also read the user's handwriting (to make it clear - it can't, it stores text and graphics alike as lines on pages), so in that respect it may do less than users want.

[b] I think the most important usability goals for this product would be
  • Utility
  • Effectiveness
  • Ease of Learning
The Livescribe Pulse offers functionality that will be new to most users. In order to maximise take-up, its designers need to ensure that users can easily see (in prospect) and feel (in practice) the benefits of this functionality. This means the functions must be valuable to the user (utility), they must be well-executed (effectiveness) and they must be easily available to the new user (ease of learning).

Take-up will also be influenced by the achievement of user experience goals. I would suggest that the most relevant UX goals are:
  • Satisfying
  • Aesthetically pleasing
  • Motivating
Potential users are unlikely to buy this device unless reviewers and/or word of mouth suggests that they will find it satisfying. The iPod generation of consumers will also expect an iPod-priced smart pen to look good, especially since it will primarily be used in a public context (lectures and meetings) and possibly in a high-profile way if the user has to ask permission to start recording. Finally, the experience has to be motivating - taking notes is not an inherently exciting activity, and if users rapidly lose their initial enthusiasm this will limit the viral word-of-mouth so necessary for such a new product from a start-up company.

[c] How can we express the usability goals as questions that would apply to a real Pulse user?
  • Utility - Do its note-taking and recording functions add up? Is there anything missing that I need in order to achieve my personal goals?
  • Effectiveness - Is the Pulse good at what it's supposed to do?
  • Ease of learning - Are there functions or features which are important to me which I have difficulty executing because it's not obvious how, or even that they're supported?
Utility "Do its note-taking and recording functions add up? Is there anything missing that I need in order to achieve my personal goals?"

My experience with the Pulse is that the core note-taking and recording functions are well chosen and well-integrated for the purposes of someone attending lectures or business meetings. 

This view seems to be shared almost unanimously by the reviewers that I found on the web - I ascribed some of the enthusiasm to the fact that it would also make a good tool for journalists and broadcasters, which was, naturally, the profession of most of the non-"geek" reviewers.

So I give it a high rating on basic utility, apart from the lack of handwriting recognition (which is in any case available as a third-party extra).

Effectiveness "Is the Pulse good at what it's supposed to do?"

The Pulse is designed to record its own writing, record sound, and index the sound recording using the writing recording, both on the original paper page and, once saved to a PC, on screen. It performs all these functions simply and effectively.

Some of this effectiveness comes from hidden functionality - I have heard attempts at recording presentations (university debates) using an ordinary cassette recorder, and voices were normally too loud or too quiet, but the Pulse, presumably using digital processing, quietly sorts out the voice volume and clarity. So I rate it high on effectiveness.

Ease of learning "Are there functions or features which are important to me which I have difficulty executing because it's not obvious how, or even that they're supported?"

A new purchaser can demonstrate the on-paper functionality to curious family members within minutes and, given a short break for installing the software, the basic on-screen functionality too. This is despite the fact that several features of the user interface are novel, namely tapping on controls to control the pen, and tapping on writing (on the page or on the screen) to replay the associated sound recording. So I would say that it rates very high on ease of learning.

Let's translate the user experience goals into specific questions that would apply to a Pulse user.
  • Satisfying - Does the user feel that the pen allows her to take better notes as simply and unobtrusively as possible?
  • Aesthetically pleasing - Does it give pleasure to look at, and pride of ownership?
  • Motivating - Does using my Pulse make me want to keep on using it?
Satisfying "Does the user feel that the pen allows her to take better notes as simply and unobtrusively as possible?"

Taking better notes is not necessarily a matter with a simple technical fix. Looking at some of my saved pages I can see that I will get more out of this device by interspersing my occasional summary paragraphs and diagrams with short notes that will act as bookmarks into the sound recording. 

I also find it frustrating that I have to take special notebooks with me to make the most of it - after all, one key benefit of the pen as a recording device is its extreme portability and ability to work with any writable medium.

But, even without making adjustments in my technique, I still get a great deal of added "note-taking value" out of it, so I say that it is quite satisfying.

Aesthetically Pleasing "Does it give me pleasure to look at, and pride of ownership?"

The design is fairly minimalist: the Pulse has a body shell in the by now standard high-tech anodised black with a discreet logo. The body is about 1.5cm wide, presumably to accommodate the inner technology. This is at the limits of visual acceptability. So I would rate it as medium-high - not below expectations, but not in itself a reason for showing off the device to all and sundry.

Motivating "Does using my Pulse make me want to keep on using it?"

It's hard to get enthusiastic about taking notes as a general activity. Livescribe have tried to address this in many ways, some of them part of the Pulse's context and infrastructure rather than relating directly to the device as such - for example, having saved notes and recordings to your computer, you can then upload them to a central catalogued location on the web, and view other people's notes and pictures by category. 

When viewing a saved document you can have it play back the pen strokes along with the sound, giving it similar presentational functionality to a webcast - thus Livescribe call it a pencast.

Like pencasting, sketching is a more creative activity than note-taking, and if you combine the natural motivation for these two activities with the ease of using the Pulse as a self-scanning drawing device, I would say it does make me want to keep on using it.

[d] I would choose the following Design Principles as particularly relevant to the Pulse pen:
  • Visibility - are the controls and the state of the device easily visible to the user?
  • Feedback - does the Pulse pen give feedback on what the user is doing and has done?
  • Affordance - do the controls of the device give the user a clue about how they should be used?
This device is typically used in situations like lectures and meetings where the user can't ask everybody to stop talking while they look for a control or check the device status (or at least not without severe loss of face), so visibility is clearly key. 

These same usage scenarios (lectures and meetings) also dictate that a user cannot easily ask speakers to repeat their last however-many minutes of talk, so feedback on whether the user has successfully set the Pulse recording or not recording is essential.

Finally, good affordance is important both for making the pen easy to learn and for making it harder to make operational mistakes while using it, which ties in with the unforgiving requirement to record things which, if missed, may be unrepeatable.

Visibility "Are the controls and the state of the device easily visible to the user?"

The main controls for the Pulse pen are the set of icons across the bottom of every page of pre-printed Pulse notepaper. These are highly visible, as long as you have some pre-printed Pulse paper at hand. 

The state of the device is visible through the built-in OLED display. When switched on, the display shows the current time and a battery level graphic. When recording, this changes to an incrementing timer display. After the recording, when the pen is docked with its USB cradle, this changes to an animated upload graphic while any unsaved sessions get copied over to the PC.

Other system status levels can be seen on the Pulse desktop application, or on the OLED display by tapping on specific icons printed on the inside front cover of each pre-printed Pulse notebook. 

I would say that the design principle of visibility has been well implemented and prioritised, given the constraints of the device. The market that has grown used to the luxurious visibility options of the PDA or mobile phone is largely the same market that buys - or is at least familiar with - the iPod shuffle. So this level of visibility is probably acceptable.

Feedback "Does the Pulse pen give feedback on what the user is doing and has done?"

The primary feedback mechanism for this device is sound. In particular it plays different beeps to indicate the start and end of recording. However there's no feedback on whether penstrokes are being recorded, which would be useful both in the case of very light penstrokes, which it sometimes misses, and as a reminder of whether the pen is or is not recording.

The mouse pointer turns from an arrow to a finger when, using the desktop application, you move over part of the image that is clickable, i.e. you have something written there that will trigger sound playback.

I would rate feedback on this device as good.

Affordance "Do the controls of the device give the user a clue about how they should be used?"

The main mechanical inputs are the on/off button and the pen itself. The on/off button could have been implemented with greater affordance, as it is "D"-shaped and visually integrated as one rounded end of the OLED display, but this was clearly a trade-off with the user experience goal of achieving an aesthetically pleasing, minimalist design. The fact that the device is at one level obviously a pen, and that is how it should be used, seems to me to count as successful affordance for something this novel.

This leaves the pre-printed control icons - do they invite one to click on them with the pen? The answer to this question varies so rapidly along the first few seconds of familiarisation that it's hard to provide a globally valid answer, but I would say that once users have any idea at all of how the Pulse pen works, these controls provide good perceived affordance.

I would choose the following Usability Principles as appropriate to this device:
  • Recognition rather than recall - does it force the user to recall things in order to use it, instead of allowing them to recognise them?
  • Match between system and real world - does the pen speak the user's language and concepts or does it force them to learn new terms and meanings? 
  • Error prevention - does it prevent errors occurring?
As an innovative device offering unfamiliar functionality, failing to follow either the "Recognition rather than Recall" or the "Match between system and real world" usability principles would pose a particular challenge to new and potential users that could have a serious impact on the device's take-up.

As mentioned above, in most situations where the pen is being used, errors would cause a probably irretrievable loss of (spoken) data, so this is also a particularly important usability principle.

Recognition rather than recall "Does it force the user to recall things in order to use it, instead of allowing them to recognise them?"

The device is almost entirely compliant with this usability principle. The only thing the user has to remember is how to operate it, which is a skill ("savoir") rather than a datum ("connaître").

In fact it embodies this design principle for its primary function. The ability to replay sound keyed by what you were writing or drawing at the time allows the user to recognise either the physical page of the notebook, or a thumbnail in the desktop application, in order to replay a recording, rather than having to recall the date or any other index by which recordings would otherwise be filed.

Match between system and real world "Does the pen speak the user's language and concepts or does it force them to learn new terms and meanings?"

When a device offers new and unfamiliar functionality there will have to be some new terms or meanings, but I feel that this has been kept to a minimum in this case. The printed control icons acquire new meaning as being executable, but this is an extension of their existing meaning (as images of controls) rather than a contradiction.

There is some breakdown of the pen / notebook metaphor when it comes to buying new notebooks - these are sold with volume numbers, for example "Black unlined journal Numbers 1 and 2", and all instances of "Black unlined journal Number 1" have the same micro-dot pattern, which would cause the pen to get confused if they were in use at the same time. This means that two physically distinct journals might be identical from a Pulse pen's perspective - definitely a mismatch between the system and the real world.

Error prevention "Does it prevent errors occurring?"

The Pulse pen offers very little scope for creating errors. Immediate operation is done by tapping printed icons with audible feedback. Transfer from the pen to the desktop is done by simply placing the pen in its USB cradle. There is no opportunity for syntax errors. 

There are opportunities for error in other aspects, however. A simple case is drawing or writing over the control icons, which can trigger them. But on the whole the Pulse pen implements this usability principle quite comprehensively.

[e] I feel that the device scores well against the most relevant usability and user experience goals as far as its core functionality goes, so my three suggestions for improvement are extensions of the design rather than corrections.

First, I would suggest that the pen give a haptic click when the user taps a printed control. This would give better feedback than the beep (more appropriate and less obtrusive) and would also add a more relevant element of "fun" than, for example, the tacked-on micro-movie.

Secondly, I would have the pen recognise that if the user draws a box and then double-taps the top-left and bottom-right corners, it should treat this rectangle and its contents as an "Item". For example:

[image: a hand-drawn box around some notes on Pulse Pen Usability]

would be an item on Pulse Pen Usability. These Items would then act as a third dimension of organisation (after Pages and Sessions) once uploaded to the PC - I would be able to annotate Items with tags or other metadata, view them as thumbnails, and add content from any page or pages to a single Item, thus providing greater flexibility by employing one of the device's primary design values, namely recognition over recall.
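
The Item selection logic could be sketched very simply: once the two corner taps define a rectangle, the Item is just the set of recorded strokes falling inside it. This is entirely my own illustration of the proposal, not a real Livescribe feature or API:

```python
# Hypothetical sketch of the proposed "Item" feature: strokes falling inside
# a double-tapped rectangle are grouped into one Item. All names are my own.

def strokes_in_item(strokes, top_left, bottom_right):
    """Return the strokes whose coordinates fall inside the Item rectangle."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return [s for s in strokes
            if x1 <= s["x"] <= x2 and y1 <= s["y"] <= y2]

page_strokes = [
    {"x": 30, "y": 40, "t": 5.0},    # written inside the boxed area
    {"x": 300, "y": 400, "t": 9.0},  # written elsewhere on the page
]
item = strokes_in_item(page_strokes, top_left=(0, 0), bottom_right=(100, 100))
print(len(item))  # -> 1
```

The tags, thumbnails and cross-page aggregation described above would then be metadata attached to such a group.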

Thirdly, I would offer a notebook-size pen holder, which would unfold to create a wipe-clean pre-printed Pulse pen writing surface, complete with controls.

In order to avoid the difficulties associated with writing on the same surface multiple times (mentioned above in discussion of the hazard of buying duplicate Pulse pen notebooks) I would add a "New Page" control to be tapped whenever the existing page content was wiped. 

The Pulse would then interpret the microdot pattern of the single wipe-clean page as belonging to a new page.

On unexpected beauty

Jade Goody was a public buffoon - someone who made us all feel smart by comparison with her, with a small, grudging admiration only for her lack of illusion and her clear-headed determination to make a career as a "celebrity", until early cancer changed her tale from comedy to tragedy.

Looking at the much-exposed wedding photo of her - in black, her hair lost to chemotherapy - kissing her partner Jack Tweedy, I am struck by the simplicity and dignity of her appearance. Astonishingly, she is beautiful.

Is this unexpected beauty any more valid than her earlier public image of buffoonery? After all the dress was doubtless chosen with the help of Max Clifford's finest image consultant, the bold baldness is subtly softened by some perhaps artificial eyelashes and, above all, we cannot look at the picture without knowing the story.

Perhaps I'm stating the obvious, but I'm starting to think that both images are equally superficial and equally valid. I fear we're all ridiculous at some time in our lives, I hope we're all beautiful at others. 

Goodbye, Jade.

Sunday, March 08, 2009

M364 Block 1, Unit 2, Computer Activity 3

Using the design principles (visibility, feedback, constraints, mapping, consistency and affordance) to informally evaluate the user interface of the rather old-fashioned Casio watch simulated on this DVD [...]

As it is easier to evaluate an interactive product with a particular task or tasks in mind, I suggest that without looking at the Help menu (the user may not carry the manual with them), you:
  • use the stop-watch function that the digital watch provides, switching on the stop watch and then switching it off after one minute, then resetting it to zero.
As you use the simulation you will be playing the role of a real user of the watch.

As you complete the exercise, consider how the design addresses each of the design principles.
The watch's four buttons offer physical affordance for clicking. Due to the limited space available on the watch face and around the watch's sides, and the limits to how text and buttons can be shrunk while still being usable by human eye and finger, I was unsurprised to find limited visibility of the watch's extended functions. 

In fact there are two sets of button labels - an outer set of four labels, in light blue, and an inner set of two labels (for the right-hand buttons) in white on black. Given that one of the outer labels was "MODE", this mapped to the idea that the "inner" labels would only apply when the watch was "in" an applicable mode.

As none of the outer labels (ADJUST, LIGHT, 24HR and MODE) would stop or start a stop-watch, I cycled through the modes until I saw "ST" appear in the top left section, and "0:00 00" in the main display. The mode-cycling operation was consistent with other digital watches, and the "ST" was feedback (of a kind) that we might be in STopwatch mode.

I then used the bottom right button, with an inner label of "SIG.ON-OFF / START-STOP" to start and stop the stopwatch.

To reset it, I first tried changing to the default mode and back again, but this left the stopwatch value unchanged. So, still in stopwatch mode, I used the top-right button, with an inner label of "ALM.ON-OFF / LAP-RESET / REPEAT", and this immediately set the display back to zeroes.

Saturday, March 07, 2009

M364 Block 1, Unit 2, Computer Activity 2

Using the definitions given in Section 1.5.1 of the Set Book, informally evaluate how well the National Health Service (NHS) Direct and the Royal Horticultural Society (RHS) websites satisfy the six usability goals described in Section 1.5.1 of the Set Book: effectiveness, efficiency, safety, utility, learnability and memorability.

For each of these I suggest you role play exploring the site as a member of the likely target audience - for the NHS Direct site, as someone with a minor illness, and for the RHS site as an enthusiastic gardener.

As it is easier to evaluate an interactive product with a particular task or tasks in mind, I suggest that for the NHS Direct site you:
  • try to find the diagnosis for some symptoms
  • find the telephone number to call for further help
  • carry out any other tasks that seem particularly worthwhile
Now explore the RHS site. I suggest that for this site you:
  • try to find a climbing rose, noted for its fragrance
  • find out the price to enter the RHS Gardens at Wisley
  • carry out any other tasks that seem particularly worthwhile
Whilst you are exploring the sites, I suggest you sketch the findings of your evaluation using a radar diagram as you did for Computer Activity 1.

How might you measure how well each of these goals has been achieved? This will be covered in detail in Block 4 of the course, so just use your judgement in this activity and see this as an opportunity to think through the issues.

A couple of years ago I had a throat condition so I decided to research this. I clicked on the "Self-Help Guide", chose "A-Z of Symptoms", clicked on "S", then "Sore throat in adults". I was presented with a Yes / No dialog. On reporting (truthfully, back then) that I was unable to swallow my own saliva, I was told to call 999, with a reminder of the sole question I had answered so far, and my answer. 

This was a very rapid drill-down, and was, if not exactly an informative answer, certainly good advice, so I felt the site scored well on efficiency and effectiveness. But I wanted a slightly lengthier investigation so I re-started with some of my three-year-old's recent symptoms and was guided through ten questions leading to a (correct) diagnosis of Chicken Pox, with helpful (and lengthier) advice. 

As in the previous case, my question-and-answer trail was visible on the screen and I could simply scroll up the screen to change an earlier answer and resume the enquiry from there.

Overall I was impressed by the effectiveness of the site. The main contact number was visible at all times, and the support for self-diagnosis was so much better than I had expected that I felt that it should have its own top-level domain rather than being one-amongst-many options from the main NHS Direct page so that more people would find and benefit from it.

The RHS site was pleasing on the eye but I found it a bit hard to get started - to find the admission price for Wisley, I had to go to "What's on" then choose between "RHS Gardens", which took me to the Wisley home site, and the "RHS Garden Finder", which eventually gave me a summary of the same information.

The search for a fragrant climbing rose took me to a page with a "To access RHS Plant Selector click here", which in turn led to a comprehensive query screen, where I entered "rose" as a key word, then selected "Climber/wall shrub" for plant type and "Noted for fragrance" as the fragrance option. The results page told me that I had 63 results matching my query "Noted for fragrance, Roses". Inspecting the results and finding, for instance, both Rosa 'Cécile Brunner' and Rosa 'Climbing Cécile Brunner' demonstrated that the search engine had lost my "Climber/wall shrub" criterion. I repeated the query without the "rose" key word and discovered that the results list included just 19 climbing fragrant roses. This procedure struck me as both inefficient and unsafe. These problems may have been complicated by the fact that I was not a registered member - the initial results said "63 plant(s) were found that matched your requirements. As an RHS Registered Member you would be shown 131 plants" - but I was left with a lack of confidence in this function.

On further exploration I located the RHS Harlow Carr garden which is convenient for our regular family trips to North Yorkshire.

I documented my results as a Radar Chart - the attached thumbnail image will take you to the full size version.

Friday, March 06, 2009

M364 Block 1, Unit 2, Computer Activity 1

Using the definitions given in Block 1, Unit 2, Review Question 2, informally evaluate how well the CBeebies and Chelsea Football Club websites satisfy each of the user experience goals: satisfying, enjoyable, fun, entertaining, motivating, aesthetically pleasing, supportive of creativity, rewarding and emotionally fulfilling.

[... R]ole play exploring the site as a member of the likely target audience - for the CBeebies site, as a teacher or parent of a five-year-old, and for the Chelsea site as a Chelsea supporter. In both cases, you may actually be a member of the target audience, which should make it much easier for you.

As it is easier to evaluate an interactive product with a particular task or tasks in mind, I suggest that for the CBeebies site you:
  • find out what, if anything, is on the television at the time you explore the site
  • find an interesting activity to play
  • explore the site, finding anything else that might interest you or activities that you may find enjoyable
Now explore the Chelsea Football Club site. I suggest that for this site you
  • find out who the various goalkeepers are and the countries from which they originate
  • find out who was manager from 1962 to 1967
  • explore the site, finding anything else that may interest you or activities you may find enjoyable
Whilst you are exploring the sites, I suggest you sketch the findings of your evaluation using a radar diagram. [... Rate both sites on the same diagram.] This representation enables you to compare the websites along a number of different axes.
The CBeebies site appears to be as much addressed to toddlers as to adults - much real-estate is given over to clickable images of series or characters, and only one out of 17 left-hand navigation options is labelled "Grown-ups". Nevertheless it was easy ([PgDn] > "What's On") to find out what was currently showing.

I also found games rapidly ("Fun and Games") and played the Fimbles "Arty Tunes", which I suspect my pre-literate toddler would have been able to handle quite easily - for reference, he can explore DVD menus using the DVD player's cursor arrows, but cannot play the Ker-wizz maze game without adult assistance (which is why he is - sadly - no longer allowed on the CBeebies website). The obviousness of the game, and the ease with which new characters, objects or dots could be placed on the canvas to be played as "music" with timing and pitch depending on their X and Y position, made the game quite rewarding, but the fact that you still end up with a slow-motion cacophony was not very satisfying.

Having done that I hopped to "The Fimbles on the bus" which popped a media player in a box. This was Real Player, and I was surprised to find that I had to click on the Start button to start playback. Apart from this minor annoyance, it just worked.

The Chelsea site tasks proved trickier. I was able to find the first team players immediately ("Players" > "First team") but ended up cycling through most of the 27 players in order to find the Czech Petr Cech's substitute, Welshman Rhys Taylor. 

Finding the club history was once again an easy start ("Club" > "History") followed by an enjoyable bit of burrowing through the narrative in order to identify Tommy Docherty as manager for the period (though neither the word "manager" nor his start or end dates are mentioned). This was an example of reduced usability - efficiency, to be more specific - leading to enhanced user experience - entertainment, or perhaps emotional fulfilment depending on how far the user identified with the club.

I entertained myself by finding William "Fatty" Foulke, mentioned in the team history as their 22 stone first goalkeeper, in the historical Player Database, and admiring his photograph.

Finally, here is a radar chart comparing user experience scores by category for the two sites - click on the thumbnail to see the full size version.
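
For anyone wanting to reproduce this kind of chart programmatically, the geometry is straightforward: each user experience goal gets an evenly spaced axis around a circle, and a site's score along each axis gives one vertex of its polygon. Here is a rough sketch of that layout; the scores are placeholders for illustration, not my actual ratings:

```python
# Sketch of radar-chart geometry; the scores below are illustrative
# placeholders, not the ratings from my actual chart.
import math

goals = ["satisfying", "enjoyable", "fun", "entertaining", "motivating",
         "aesthetically pleasing", "supportive of creativity",
         "rewarding", "emotionally fulfilling"]

def radar_points(scores):
    """Convert per-goal scores (0-10) into (x, y) polygon vertices."""
    n = len(scores)
    points = []
    for i, score in enumerate(scores):
        angle = 2 * math.pi * i / n  # axes evenly spaced around the circle
        points.append((score * math.cos(angle), score * math.sin(angle)))
    return points

cbeebies_scores = [7, 8, 8, 7, 6, 7, 6, 6, 5]   # placeholder values
chelsea_scores  = [6, 6, 5, 7, 6, 5, 3, 5, 6]   # placeholder values
print(len(radar_points(cbeebies_scores)))  # -> 9, one vertex per goal
```

Plotting both polygons on the same axes gives the side-by-side comparison the activity asks for.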

M364 Block 1, Unit 2, Activity 1

[Two screenshots] are taken from the BBC website and the BBC CBeebies website.

How do the two home pages compare with each other, with regard to the user experience goal aesthetically pleasing and the usability goal efficiency? Place two Xs on [a] matrix that indicate their relative positions with regard to these two parameters. A high score for aesthetics means that it is very attractive and a low score means that it is less attractive. A high score for efficiency means that it is possible to complete your tasks quickly and a low score means that it is relatively slow to complete these tasks. [...] This is a purely subjective exercise, and there is no one correct answer, but you should be able to justify your answer.
On a scale of 1 to 10, and using an axis order of (Efficiency, Aesthetics), I rated the sites
  • BBC: (7, 6)
  • CBeebies: (8, 7)

I took the approach of rating each site from the perspective of its target audience, but have not attempted to factor out the five-year age of the screenshots.

The BBC home page looks purposeful but it is clear that some functionality will require scrolling. It also suffers from the lower resolution of its time, which reduces the amount of information (text, more than graphics) that can be put on the screen without looking noisy and cluttered.

The CBeebies site is garish, and supports both structured navigation and just playing around, clicking on characters - both of which characteristics are highly appropriate for the children and toddlers who watch this channel. A good half of these users are likely to be pre-literate, and will find it far easier to recognise and click on characters than to identify menu options (which they would have to do by appearance and position rather than by reading).

I rate the CBeebies web site higher on efficiency than the course book does because I regard playful exploration as one of the supported activities of the site, and I think that the high proportion of the screenshot dedicated to clickable images of programmes and characters is a very efficient way for its audience to do that.

Monday, March 02, 2009


Just documenting this recipe while I can, since I can no longer find it online; luckily I found an old shopping list which helped me reconstruct it:

Serves 4-6 people

  • olive oil
  • 1 onion, sliced
  • 3 red peppers, sliced 
  • 2 cloves garlic, crushed
  • 2 sticks celery, sliced
  • 1 teaspoon turmeric
  • 2 hot peppers, shredded (or the Hungarian variation - 1 teaspoon Eros Pista)
  • 400 g chorizo, diced
  • 400 g prawns (I use uncooked)
  • 400 g chicken, diced (I like thigh fillets, but breast is fine)
  • 400 g rice
  • 1 litre chicken or vegetable stock
  1. Fry the onion, sweet peppers, hot peppers, celery, garlic, chorizo and chicken for ten minutes.
  2. Add the turmeric (and the Eros Pista, if using that variation) and the rice; stir to coat until translucent.
  3. Add the stock, bring back to a simmer and cook until the rice is swollen.
  4. Add the prawns, cook for another ten minutes, then serve.

M364 Block 1, Unit 1, Activity 6

Start by reading the interview with Gitta Salomon on page 31 of the Set Book.

List the main characteristics of Gitta's approach and the activities she carries out.

How do these relate to the activities in the ID process and the characteristics of the ID process?

Gitta Salomon's approach appears to involve iteration through the following activities:

  1. decide how to engage with the client's team ("the client")
  2. get the client to present their product (or demonstrate it, if demonstrable) and explain their target market, the competition and other relevant features of the product context 
  3. Gitta Salomon's team ("GS") take notes, videos, tracings etc 
  4. GS then construct a "coherent framework" from these resources, aiming to get to the "big design problems of the product" 
The characteristics of this approach are:

  1. it has a strong focus on the client relationship and client communication, in that it explicitly addresses effective communication from client to GS, from GS to the client, and within the client's team. 
  2. it reflects her view of interaction design as a "design discipline" featuring guidelines and good practice, but still requiring both creativity and analytical thinking, as opposed to a science or a set of rules
  3. it iterates through the activities in order to evolve the design(s) against client feedback
The GS approach shares with the set book ID process the emphasis on iterative design and communication, but differs in that the focus of the communication is the client rather than the end user, and in that there is less explicit structure to the overall process.

It could be said that the GS approach, with its focus on the client relationship, has a slightly art-of-the-possible or descriptive flavour, whereas the OU approach, with its focus on the end user, has a more ideal-case or prescriptive flavour.

Sunday, March 01, 2009

M364 Block 1, Unit 1, Computer Activity 2

The following are three of the currently important ID websites. I suggest that you:
  1. explore Don Norman's site to see what, if anything, he has to say about the latest piece of interactive product you have recently bought or read about
  2. explore the site to see what it says about paper prototyping (paper prototyping is an important part of this course)
  3. read Usability News to see what the latest news is.
You might like to follow some of the links from these sites in order to identify other useful sites. If you find one, I suggest you recommend it to other students via a message to your online course forums.
  1. Don Norman has an article on Waiting: a necessary part of life which interests me because I recently occupied time spent in an NHS waiting room by analysing the strengths and weaknesses of the interaction design of its patient queueing system (in short, great self-registration, poor notification). He argues that "[w]henever two systems must interact, unless every event of one is perfectly synchronized with the events of the other, one system is going to have to wait"; that once you become sensitised to this view you will see buffers everywhere, from the factory to the queue, buffet food, plate, page and even brain; and that these buffers embody major interface problems and solutions.
  2. I searched for "paper prototype" and "paper prototypes" and got the same 16 results. There seemed to be three themes running through these pages. These were firstly, research indicating that the results of usability testing against "low-fidelity" paper prototypes are just as effective as testing against "high-fidelity" software prototypes; secondly, guidance for creating and testing paper prototypes; and thirdly, the use of paper prototypes in specific contexts such as Form Design and (new to me) Parallel Design.
  3. Usability News had a review of an article on the live question "Does UX still matter in tough economic times?", in which the current decline of Starbucks, "a model for the experience economy", is contrasted with the success of McDonald's and offered as evidence that quality of user experience may matter less than price as the economic chill deepens. The article then rebuts this worrying suggestion by pointing out that providing a good user experience may be the cheapest way of increasing the value of your offering.

Wednesday, February 25, 2009

To love is to fear

Such sad news about the death of David and Samantha Cameron's little son, Ivan. It's hard to imagine the horror of hearing such a diagnosis for your son, the hard slog of caring for and loving him, the sorrow of finally losing him.

And time, of course, to consider all the other hidden heroes, quietly getting on with it like Claire Bates, whose very personal response to Ivan's death is almost unbearably moving.

Monday, February 23, 2009

M364 Block 1, Unit 1, Activity 5

It is common for members of a multidisciplinary team to have different priorities which can lead to conflicts. For [an interactive educational website to accompany a TV series], list the likely priorities of each of the following team members:

  • Interaction designer
  • Educational advisor
  • Graphic designer
  • Software engineer

Describe three different conflicts that may arise in the team as a consequence of these differing priorities.

How might different team members differ in their use of the word learning, possibly leading to miscommunication?

How might these conflicts and misunderstandings be overcome?
[1] The interaction designer would prioritise ease of use, ease of learning the site, and an appropriate balance of fun, challenge and satisfaction in the user experience

The educational advisor would be interested in promoting the educational aims of the TV series, possibly even in supporting specific learning outcomes. He would also be concerned with ensuring that the educational approach suited the target age-groups.

The graphic designer would want to ensure that the site as a whole expressed good visual design quality in terms of fonts, layout, colour, and general clarity.

The software engineer would prioritise ease of implementation, performance, robustness, security and maintainability. For the user interface, these considerations might well lead him to prefer the use of a library of standard UI components.

Three possible conflicts include:
  • The educationalist might prefer the site's visual character to match the active and dynamic feel of the TV series, whereas the graphic designer might prefer a less cluttered and busy look, and the software engineer would be inclined to just keep it simple.
  • The team members might have different ideas about user interactions. The educationalist might, for example, want to give the user a choice of inputs using a thought-bubble containing floating images, whereas the software engineer would prefer to use a drop-down list or radio buttons.
  • The educationalist might want to animate parts of the site, for instance to give graduated feedback to users in response to inputs, whereas the software engineer and graphic designer might prefer to express feedback using standard error boxes and form transitions.
[2] The educational advisor would probably use the word learning to refer to the site's educational topics and aims, whereas the other team members would be more likely to use it to refer to learning how to use the site.

[3] Conflicts and misunderstanding could be reduced by recognising the problem and agreeing to adopt a common project vocabulary. This could even be made a deliverable, as a glossary section in the documentation or help system.