I've been thinking that it would be pretty easy and very compelling
to build a usability testing module for OpenACS.
The Big Picture
Most software and web site developers aren't too good at interface
design, and yet they rarely do usability testing. This doesn't make a
whole lot of sense. Presumably, software is built to be used, and it
won't be used if it's not usable. How many web sites or applications
have you given up on, never to return, after a few minutes (or hours)
spent just trying to figure out how to find your way around the durned
thing?
Now, besides the fact that people just don't recognize the need for
it, one other major reason people often skip usability testing is
that, in its classic form, it's time consuming, expensive, and hard
to do. Fortunately,
Jakob Nielsen has solved this problem for us. He's come up with a
heuristic evaluation method
(http://www.useit.com/papers/heuristic/) that is easy to carry out
and, happily, fairly easy to imagine implementing in code.
I believe that having a solid usability testing capability based on
best practices could be a tremendous selling point for OpenACS.
Certainly, a module like this one would have done a lot of good when aD
was building the admin pages for ACS Classic.
The Basics of Heuristic Usability Evaluation
There are three steps in Nielsen's process. In the first step,
testers review the site keeping 10 basic rules of usability in mind.
Whenever they encounter a problem, the testers write it down, along
with the rule that they think it violates. In the second step, all of
the problems the usability testers found are compiled into one list.
Then all of the testers rate the problems on a scale
of 0-4, where 0 is not a problem and 4 is a major disaster. The scores
are then averaged for each alleged problem. In the third step, the team
brainstorms solutions to the most serious problems. It's a pretty
simple process, provided that (a) the testers have at least minimal
instructions/coaching in the process, and (b) it's easy for them to
make notes on the problems they encounter as they encounter them.
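To make the bookkeeping concrete, here's a rough sketch in Python (the
real module would be Tcl and SQL, naturally) of the fixed vocabulary
the process needs. The heuristic names are Nielsen's published ten;
the middle severity labels are my paraphrase, since only the endpoints
(0 and 4) are defined above.

    # The two fixed vocabularies the module needs. The heuristic
    # names are Nielsen's published ten; the middle severity labels
    # are a paraphrase, since only 0 and 4 are defined above.

    NIELSEN_HEURISTICS = [
        "Visibility of system status",
        "Match between system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Help users recognize, diagnose, and recover from errors",
        "Help and documentation",
    ]

    SEVERITY_SCALE = {
        0: "Not a usability problem",
        1: "Cosmetic problem only",
        2: "Minor usability problem",
        3: "Major usability problem",
        4: "Usability catastrophe (major disaster)",
    }

    def average_severity(ratings):
        """Average the 0-4 scores that testers assigned to one problem."""
        return sum(ratings) / len(ratings)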
How a Usability Testing Module Might Function
We'd need a bit of functionality for each step in the testing
process. For the first step, imagine that we could make a
"usability problem report" link available on every page of an
OpenACS site. It would be nice if it would appear only for certain user
groups, so you could control your tester population. The link would pop
up a dialog box that would allow the tester to describe the problem and
select the heuristic that the problem violates. They could also access
descriptions and examples of each heuristic in case they're not sure
which one applies to the situation. Users would not be allowed to enter
a problem without selecting a heuristic. This may seem like a trivial
detail, but it's actually crucial to the system, both because it forces
some standardization of the process and because it forces the testers
to learn and apply the science of usability testing. The system would
also automatically record the tester's identity, the date and time the
observation is made, and the page to which it corresponds.
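Here's a minimal sketch of that rule, building on the
NIELSEN_HEURISTICS list from the earlier snippet (Python again,
standing in for the Tcl and SQL a real package would use): a report
simply can't be created without a valid heuristic, and the tester,
page, and timestamp are captured automatically rather than typed in.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ProblemReport:
        tester_id: int          # recorded automatically from the login
        page_url: str           # the page the tester was looking at
        description: str        # the tester's own words
        heuristic: str          # required: one of NIELSEN_HEURISTICS
        reported_at: datetime = field(default_factory=datetime.now)

    def submit_report(tester_id, page_url, description, heuristic):
        """Refuse any report that doesn't cite a heuristic."""
        if heuristic not in NIELSEN_HEURISTICS:
            raise ValueError("Pick one of the ten heuristics first.")
        return ProblemReport(tester_id, page_url, description, heuristic)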
For the second step, testers could access a list of the identified
problems. (Moderators could weed out redundant entries.) Once again,
the testers would be forced to assign a rating to each problem and would
have access to definitions and examples of each rating level. It would
be nice, too, if we could provide a link to the page in question for
each problem so that the testers could easily look at each alleged
problem. We could have the module average the scores and maybe even
dump the results into the graphing module.
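The averaging step boils down to a few lines. Assuming each rating
arrives as a (problem_id, score) pair, something like this would rank
the problems worst-first; the function name is just for illustration.

    from collections import defaultdict

    def rank_problems(ratings):
        """ratings is a list of (problem_id, score) pairs, one per
        tester. Returns (problem_id, average) pairs, worst first."""
        by_problem = defaultdict(list)
        for problem_id, score in ratings:
            by_problem[problem_id].append(score)
        averages = {pid: sum(s) / len(s) for pid, s in by_problem.items()}
        return sorted(averages.items(), key=lambda pair: pair[1],
                      reverse=True)

    # rank_problems([(1, 3), (1, 4), (2, 1)]) -> [(1, 3.5), (2, 1.0)]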
The third step is the easiest one. You just feed each problem into a
bboard thread and let the users suggest solutions.
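Even that reduces to a loop. The create_thread argument below is
hypothetical, a stand-in for whatever call the bboard module actually
exposes.

    def open_discussion_threads(ranked_problems, create_thread):
        """Open one brainstorming thread per problem, worst first.
        create_thread is hypothetical; it stands in for the real
        bboard call."""
        for problem_id, avg in ranked_problems:
            create_thread(
                subject=f"Usability problem #{problem_id} "
                        f"(avg severity {avg:.1f})",
                body="How would you fix this? Suggestions welcome.")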
So whaddaya think?