Archive for June 11th, 2007
Visualization researchers enjoy coming up with innovative techniques to visualize their data. The field almost borders on art, where design choices, color choices, and many other decisions can make the resulting visualization very appealing.
But the part of the work that visualization researchers like least (or are only strongly encouraged to do) is a formal evaluation of their visualization techniques. It requires one to be fair and to ensure that one's techniques are evaluated accurately and without bias.
As one would imagine, practitioners have observed this reluctance on the part of their peers to conduct user studies and to do them well. A particularly well-written paper by Ellis and Dix discusses why conducting a user study is hard and how many user studies are conducted haphazardly, in a biased manner, to ensure success for the authors' own techniques. My favorite paragraph from this paper is:
“If your aim is to prove that your system is best, go get a job as an advertising executive. If your aim is simply to make your system as good as possible, then sell your product but don’t write about its development. If your aim is to make your product as good as possible in order to effectively deploy it and so learn, this is essential, but not a thing to report in detail. However, if your aim is to understand whether, when and under what circumstance a technique or design principle works or is useful – yes now you are doing research.”
They have also cited some excellent work by researchers in the field of HCI.
- Henry Lieberman from the MIT Media Lab – Rant: The Tyranny of Evaluation and
- Shumin Zhai's reply Evaluation is the worst form of HCI research except all those other forms that have been tried
These definitely make for very interesting reading.
Other researchers want to share their experience with conducting user studies and to help new researchers who are about to embark on their first one. Some excellent papers have been written covering topics such as why one should bother conducting a user study in the first place, and why one should take the trouble of finding experts and having one's techniques evaluated by them. I would highly recommend reading some of these papers first:
- Robert Kosara, Christopher G. Healey, Victoria Interrante, David H. Laidlaw, Colin Ware, Thoughts on User Studies: Why, How, and When, IEEE Computer Graphics & Applications (CG&A), Visualization Viewpoints, vol. 23, no. 4, pp. 20-25, July/August 2003. http://www.kosara.net/papers/Kosara_CGA_2003.pdf
- G. Ellis and A. Dix (2006). An explorative analysis of user evaluation studies in information visualization. In Proceedings of the 2006 Conference on Beyond Time and Errors: Novel Evaluation Methods for Information Visualization (Venice, Italy, May 23, 2006). BELIV '06. ACM Press, New York, NY, 1-7.
- Plaisant, C. The Challenge of Information Visualization Evaluation. Advanced Visual Interfaces (AVI), Italy, 2004, ACM Press. http://hcil.cs.umd.edu/trs/2004-19/2004-19.pdf
- Tory, M., Möller, T. Evaluating Visualizations: Do Expert Reviews Work? IEEE Computer Graphics and Applications, 25(5), 2005, 8-11.
Here are some researchers and research groups who have taken the effort to conduct a user study correctly and evaluate their results.
- Chris North and Ben Shneiderman (2000), Snap-Together Visualization: Can Users Construct and Operate Coordinated Views?, International Journal of Human-Computer Studies, Academic Press, vol. 53, no. 5, pp. 715-739. http://people.cs.vt.edu/~north/papers/snap-IJHCS.pdf
- David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr (2005). Comparing 2D vector field visualization methods: A user study. Transactions on Visualization and Computer Graphics, 11(1):59-70, January-February 2005. http://www.cs.brown.edu/research/vis/docs/pdf/Laidlaw-2005-CVF.pdf
- An Evaluation of Pan & Zoom and Rubber Sheet Navigation with and without an Overview. D. Nekrasovski, A. Bodnar, J. McGrenere, F. Guimbretière, T. Munzner. http://www.cs.ubc.ca/nest/imager/tr/2006/Nekrasovski2006CHI/
- Kobsa, A. (2004): User Experiments with Tree Visualization Systems. Proceedings of InfoVis 2004, IEEE Symposium on Information Visualization, Austin, TX, 9-16. http://www.ics.uci.edu/~kobsa/papers/2004-InfoVis-kobsa.pdf
- Layout of Multiple Views for Volume Visualization: A User Study, Daniel Lewis, Steve Haroz, and Kwan-Liu Ma, Proceedings of International Symposium on Visual Computing, November 6-8, 2006, pp. 215-226. http://www.cs.ucdavis.edu/~ma/papers/isvc06.pdf
This is a small sampling of the excellent user evaluation papers that are out there. I invite readers to comment on these and to add user-study-related papers that I may have missed. Please let me know your thoughts on conducting user studies, and share your experiences if you have any interesting ones. :)