Forum OpenACS Q&A: Usability testing module

Posted by Michael Feldstein

I've been thinking that it would be pretty easy and very compelling to build a usability testing module for OpenACS.

The Big Picture

Most software and web site developers aren't too good at interface design, and yet they rarely do usability testing. This doesn't make a whole lot of sense. Presumably, software is built to be used, and it won't be used if it's not usable. How many web sites or applications have you given up on, never to return, after a few minutes (or hours) spent just trying to figure out how to find your way around the durned thing?

Now, besides the fact that people just don't recognize the need for it, one other major reason people often skip usability testing is that, in its classic form, it's time-consuming, expensive, and hard to do. Fortunately, Jakob Nielsen has solved this problem for us. He's come up with a heuristic evaluation method (http://www.useit.com/papers/heuristic/) that is easy to implement and, happily, fairly easy to imagine implementing in code.

I believe that having a solid usability testing capability based on best practices could be a tremendous selling point for OpenACS. Certainly, a module like this one would have done a lot of good when aD was building the admin pages for ACS Classic.

The Basics of Heuristic Usability Evaluation

There are three steps in Nielsen's process. In the first step, testers review the site keeping 10 basic rules of usability in mind. Whenever they encounter a problem, the testers write it down, along with the rule that they think it violates. In the second step, all of the problems the usability testers found are compiled into one list. Then all of the testers rate the problems on a scale of 0-4, where 0 is not a problem and 4 is a major disaster. The scores are then averaged for each alleged problem. In the third step, the team brainstorms solutions to the most serious problems. It's a pretty simple process, provided that (a) the testers have at least minimal instructions/coaching in the process, and (b) it's easy for them to make notes on the problems they encounter as they encounter them.

How a Usability Testing Module Might Function

We'd need a bit of functionality for each step in the testing process. For the first step, imagine that we could make a "usability problem report" link available on every page of an OpenACS site. It would be nice if it appeared only for certain user groups, so you could control your tester population. The link would pop up a dialog box that would allow the tester to describe the problem and select the heuristic that the problem violates. They could also access descriptions and examples of each heuristic in case they're not sure which one applies to the situation. Users would not be allowed to enter a problem without selecting a heuristic. This may seem like a trivial detail, but it's actually crucial to the system, both because it forces some standardization of the process and because it forces the testers to learn and apply the science of usability testing. The system would also automatically record the tester's identity, the date and time the observation is made, and the page to which it corresponds.
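
To make this concrete, here's a rough sketch of what the data model for that first step might look like. All the table and column names below are placeholders I'm making up for illustration; a real version would follow the toolkit's naming and user/group conventions:

    -- One row per heuristic (Nielsen's ten), so the report form can
    -- present them as a required select list with help text.
    create table usability_heuristics (
        heuristic_id   integer primary key,
        name           varchar(200) not null,
        description    varchar(4000)  -- definition and examples shown to testers
    );

    -- One row per problem report.  heuristic_id is NOT NULL on purpose:
    -- a tester can't file a problem without classifying it.
    create table usability_problems (
        problem_id     integer primary key,
        heuristic_id   integer not null references usability_heuristics,
        reporter_id    integer not null,        -- the tester's user id
        page_url       varchar(500) not null,   -- recorded automatically
        description    varchar(4000) not null,
        reported_date  date default sysdate not null
    );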

For the second step, testers could access a list of the identified problems. (Moderators could weed out redundant entries.) Once again, the testers would be forced to assign a rating to the problem and would have access to definitions and examples of each rating level. It would be nice, too, if we could provide a link to the page in question for each problem so that the testers could easily look at each alleged problem. We could have the module average the scores and maybe even dump them into the graphing module.
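
Again as a sketch with made-up names: once the ratings live in their own table, the averaging is a single query (a real version might also keep testers from rating their own reports):

    -- One row per (problem, tester) severity rating, 0-4.
    create table usability_ratings (
        problem_id  integer not null references usability_problems,
        rater_id    integer not null,
        severity    integer not null check (severity between 0 and 4),
        primary key (problem_id, rater_id)
    );

    -- Average severity per problem, worst first: this is the list
    -- the team would brainstorm fixes for in the third step.
    select p.problem_id, p.page_url, avg(r.severity) as avg_severity
    from usability_problems p, usability_ratings r
    where r.problem_id = p.problem_id
    group by p.problem_id, p.page_url
    order by avg_severity desc;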

The third step is the easiest one. You just feed each problem into a bboard thread and let the users suggest solutions.

So whaddaya think?

Posted by Ben Adida
This is great stuff. I think it would be very useful. The only thing, of course, is defining how it can plug into the whole system. I think this adds another important point to the modularity needs of the ACS. Modules should be able to plug in at the header and footer level of every page.

This kind of innovative stuff is precisely the reason why we need to push ArsDigita (and if not them, then OpenACS) towards a much more modular level that allows for this kind of contribution without centralized approval.

Go ahead and design this, and hopefully within a few weeks we'll be able to have the modularity we need to include this without breaking our ACS/Oracle ties...

Posted by Michael Feldstein
I'm glad you like the idea. I'll bet we could interest Jakob Nielsen himself in usability testing our usability testing module. That would be fantastic PR for us.

In terms of moving ahead with this, I don't think that we (meaning the Knowledge Garden folks) are at the point where we have the skill to code it ourselves yet. We're still learning our way around ACS/OpenACS. What I can do now is write a more detailed functional spec proposal, possibly even including some interface mock-ups.

It sounds more and more like OpenACS will be able to do a lot more in terms of growing new modules once ACS 4.0 is out (http://www.arsdigita.com/bboard/q-and-a-fetch-msg.tcl?msg_id=0003ML&topic_id=175&topic=ACS%20Development). If aD really commits to making ACS modular, then we can add much more value to ACS Classic (in addition, of course, to the primary goal of making OpenACS better) by cranking out lots of interesting and useful new components. Perhaps, as ammunition for encouraging aD to make this as easy as possible, we could put together specs for a few modules we're interested in tackling as a group?

Posted by Don Baccus
I like this.  Put together your proposal and let us take a look.

The reason I'm posting, though, is to comment on how ACS Classic and OpenACS strike my ears when they appear in the same sentence. ACS Classic is a good name: it contrasts well with OpenACS, and I think it hits just the right tone regarding aD's path vs. our own.

I'm going to start using the term myself.

Posted by Michael Feldstein
Oops! Sorry about the broken code in my original post. I wish we could edit our posts, dammit!
Posted by Peter Vessenes
Ybos would be interested in providing some dev time and support for this module, especially if it can be plugged into ACS Classic. (I like the name, too.) We'd be especially pleased to do it if someone else did the first draft of the design document.

(Titi, are you listening here? Any thoughts?)

I do have a few comments about the design - it makes more sense to me to actually make a second toolbar to allow people to submit usability problems right on the page, so they'd get, say,

Problem Type: Severity: Detail:

I bet this would drastically increase the rate of response, and ease of use for testers.

Posted by Michael Feldstein
I'm working on the design document right now. I should have something fairly detailed drafted by early next week at the latest.

I like the idea of a floating toolbar; I've been thinking along the same lines myself. What I'm going to propose is putting a "Report A Usability Problem" link on every page but allowing the problem-entry window to float and check what page it's at when "Submit" is pressed. That way users who want the floating palette will have it while those with limited screen real estate can call it up when they need it.

Regarding the idea of having the users rate the severity of the problem at the same time they are entering it, Nielsen separates the two steps, and for good reason, I think. You want people to be focused on entering the particular usability problem *and the heuristic it violates*. Once you have a running list of proposed problems, you'll get a more statistically significant response if you have the entire group of testers rate each alleged problem and then average the ratings.

Keep in mind that we're talking about implementing a very particular, empirically tested method of usability testing. We've got to follow Nielsen's prescription if we want the system to have credibility.

As for implementing it in ACS Classic, I'd love to see that too. I suppose that's up to the folks who have the coding skills.

Posted by Michael Feldstein
I've fallen a bit behind schedule with the specs but I'm working on them. Give me a few more days.
Posted by Peter Vessenes
So, any news?
Posted by Michael Feldstein

Sorry, I got slammed with paying work (imagine that?!) and some behind-the-scenes work for knowledgegarden.org, so I got further behind schedule on this. I am working on it, however, and am about halfway done. I'm trying to be careful about specifying interface details; it would be pretty embarrassing to have a usability testing module that is difficult to use. I probably should learn not to make public promises of completion dates, but since I haven't quite learned yet, I'll say I'm going to try very, very hard to have it done by Friday.

BTW, I actually found somebody who has some formal training in usability testing; I'm going to try to entice her into giving us some feedback on the spec as soon as it is done.

Posted by Jade Rubick
I have my Master's in usability, and I'd be willing to look at your design doc and provide some feedback.
Posted by Michael Feldstein
Great! I'm inspired now. I should be able to put in at least a couple of hours on it today.
Posted by Michael Feldstein
I made some progress on this but still have a bit more to go. I lost the time I thought I would have today and tomorrow due to unexpected work demands and, since we're moving in a week, it will be tough to get much done over the weekend. It turns out to be a bit more complicated than I expected.
Posted by Dave Bauer
Is there any update on this project? Did you hold off until OpenACS 4.0 to make it easy to plug into?
Posted by Michael Feldstein
I put a spec together for interested parties some months ago and nothing has happened since. However, as you suggest, it makes sense to wait for 4.0 anyway.
Posted by Dave Bauer
I'd like to reopen discussion on the idea of a usability testing package. I think this is a great idea. See the article on heuristic usability testing methods: http://www.useit.com/papers/heuristic/ and reread the initial post on this thread https://openacs.org/forums/message-view?message_id=14556 for more information on the concepts involved.

My basic idea is to add a mode similar to developer support or translator mode that can collect usability feedback from the testers.

It is a way to collect even more test data. This might be something that could be installed for testing at large OpenACS or .LRN installations, and the test data could be collected and shared to improve the usability of OpenACS and .LRN packages.
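
As a sketch of what that sharing could look like (table and column names made up, building on the hypothetical data model earlier in the thread): if each installation exported its problem reports tagged with a site name, the merged data could be broken down per site and heuristic:

    -- Merged exports from several installations; site_name identifies
    -- which OpenACS/.LRN site each report came from.
    select site_name, heuristic_id, count(*) as problem_count
    from merged_usability_problems
    group by site_name, heuristic_id
    order by problem_count desc;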

I have asked Michael if he has any of the original design he did when the idea was first proposed, although most of the information to start work on this is in the thread. Also there are many related articles and papers on the web http://www.google.com/search?hl=en&q=heuristic+usability+testing&btnG=Google+Search (skip the obviously commercial offerings, of course).

Posted by Michael Feldstein
I'm not sure if I have the original design document, but I can reconstruct it if necessary. I don't know much about how developer support or translator mode work; is there someplace where I can see either one in action?
Posted by Dave Bauer
Michael,

Developer Support adds a toolbar at the top of every page with developer tools and information.

https://openacs.org/picture/photo?photo_id=282759

This also shows translation mode, where a link appears for each message_key (the small green o's).

A usability mode would probably have a form at the bottom of each page to enter information on the heuristics.

Posted by Michael Feldstein
Ah. Very cool.

If you're going to the trouble to stripe in a toolbar, though, why limit it to usability testing? Why not make a QA tester's toolbar that supports usability testing and bug reporting both? And why not use a little ImageMagick...er...magick to support doing screen grabs as part of that functionality?

Posted by Matthias Melcher
I don't think the problem is that we have too little feedback from usability testers. We have too few responses from developers to that feedback. And http://test.openacs.org/ needs to be revitalized, as discussed in https://openacs.org/forums/message-view?message%5fid=274413
Posted by Michael Feldstein
Matthias, there are several issues. First, this tool isn't just for the OACS toolkit developers (although that's a great use too); rather, it's for OACS toolkit *users* who are developing their own applications using it. We want to make it as easy as possible for them to build high-quality, usable applications.

Second, the point of the usability testing module was to do more than simply gather more usability feedback. Its intent is to gather more *accurate* usability feedback by teaching the testers how to use Jakob Nielsen's empirically tested heuristic usability evaluation method. This method has been shown to produce results that correlate well with far more expensive video-based observational usability tests.

Posted by Dave Bauer
Matthias,

One issue is that often the testers have quite different opinions on how to fix something, and with no consensus nothing is resolved. I don't want to discount the previous efforts of those who truly wish to improve the usability of OpenACS. It is of course valuable and appreciated. What is necessary in addition is time and resources to take action on the feedback.

Using the evaluation techniques Michael mentioned, maybe we can get more structured, actionable feedback. Ideally OpenACS would even have a set of guidelines for user interface design. I think some good results will come from the effort to clean up the design and make OpenACS themable, but there will always be more room for improvement.