Archive for the ‘user study’ Category
The IEEE Visualization 2007 conference concluded in Sacramento, CA on November 1, 2007. I had mentioned some of my favorite papers when the papers list came out, but I found several more papers, panels and talks to be even more interesting than I had expected.
I guess it's going to be a big post, since I'm going to go through everything I liked at the conference (and maybe some things that I didn't like).
- The InfoVis keynote by Matt Ericson of the New York Times was very well done. His group faces the daily challenge of communicating ideas using visual representations of facts, stories and data. Unlike visualization researchers, who assume that their audience will make the effort to understand a complex visualization, his team at the NYTimes can assume no such thing and has to develop ‘informational graphics’ for the general reader. Fernanda Viégas from IBM Research has summarized some of the key points of the talk and has also managed to get access to some of his images. Here’s the link on infosthetics – http://infosthetics.com/archives/2007/10/infovis_keynote_matthew_ericson.html
- The InfoVis session on Infovis for the Masses was one of my favorites. The session chair was Ben Shneiderman, who does such a fantastic job every time he talks. I attended his paper presentation last year, and this year, as session chair, he set up the talks very well. The session contained a paper on Many Eyes (which, as readers of this blog may have observed, I really like :)) – Fernanda Viégas, Martin Wattenberg and Frank van Ham gave an excellent presentation. The second impressive talk of the session was on Scented Widgets by Wes Willett of UC Berkeley. Jock Mackinlay then gave a heavily demo-focused presentation of the Show Me: Automatic Presentation for Visual Analysis paper, demoing the software from Tableau, the company he now works for.
- The Evaluation session was another fabulous one, with a superb collection of papers. I liked each and every one of them.
- Jeff Heer’s talk on Animated Transitions in Statistical Data Graphics discussed work he did with George Robertson at Microsoft Research. He covered some of the limitations of animation that are extremely well documented in Barbara Tversky‘s paper – Animation: Can it facilitate? Visualizing any kind of change is extremely hard, and he carefully worked around the known problems of animation while presenting some great work. The user study was well conducted, and I think it’s essential to be able to really stand up and defend your results at a quality conference such as InfoVis (pretty pictures obviously help) :) Some of their results were not in favor of their own techniques, but that’s a result too, and it was interesting to see it reported. In case you didn’t already know, Jeff Heer is also the author of the popular Prefuse infovis toolkit.
- This year was the second year of the Art show at Vis, and it showcased some really beautiful and insightful visualizations.
- This year the conference was also augmented by some excellent workshops on knowledge-assisted visualization and VizSec – visualization for computer security. Smaller, more focused workshops are an excellent venue for researchers to publish papers that might not otherwise get accepted, either because of the size of the idea or because they’re too specialized for InfoVis/Vis. The downside is that some ideas, though great, might not be fully polished and still get accepted because of the low submission numbers at the workshop. I still give such workshops a big thumbs up and hope we see more of them.
There were some excellent tutorials at the conference this year. Here are the ones that I attended, along with my thoughts on each:
- Visual Medicine tutorial – The morning session, Introduction to Visual Medicine, though well presented, is not really a prerequisite for understanding the afternoon session on Advanced Visual Medicine. The advanced session was amazing and presented some phenomenal work on using the power of visualization to help doctors treat patients better, covering DT-MRI, reconstructive surgery, perfusion data and multiresolution volume rendering.
- Illustrative Display and Interaction in Visualization – This tutorial has become a mainstay of the conference and has featured a great group of speakers over the years. The images produced present a compelling body of excellent work in the field of illustrative visualization. I have attended it in the past and saw parts of it this time. My only minor complaint is that the work is of limited use unless it is applied in an application domain such as preoperative planning systems and other real-world applications. In the European version of the tutorial at Eurographics 2006, a few additional speakers covered preoperative planning using such techniques.
- GeoVisualization with Google Earth and GIS – This tutorial clearly conveyed that Google Earth has many more wonderful applications than just flying around the world. The ability to add layers and interact with regions of interest was simply superb. Some of the datasets and files used during the tutorial can be downloaded here.
I realize now that I have many more thoughts about the conference, so I shall add them in part 2. Coming soon!! Well, you know how often I update this blog, so ….. soonish :)
Visualization researchers enjoy coming up with innovative techniques to visualize their data. The field almost borders on art where design choices, color choices and many other decisions can make the generated visualization very appealing.
But the least favorite part of the work that visualization researchers have to do (or are strongly encouraged to do) is a formal evaluation of their visualization techniques. It requires one to be fair and to ensure that one’s techniques are evaluated accurately and without bias.
As one would imagine, practitioners have observed this reluctance on the part of their peers to conduct user studies and to do them well. A particularly well-written paper by Ellis and Dix discusses why conducting a user study is hard and how many user studies are conducted haphazardly, in a manner biased to ensure success for the authors’ techniques. My favorite paragraph from this paper is:
“If your aim is to prove that your system is best, go get a job as an advertising executive. If your aim is simply to make your system as good as possible, then sell your product but don’t write about its development. If your aim is to make your product as good as possible in order to effectively deploy it and so learn, this is essential, but not a thing to report in detail. However, if your aim is to understand whether, when and under what circumstance a technique or design principle works or is useful – yes now you are doing research.”
They have also cited some excellent work by researchers in the field of HCI.
- Henry Lieberman from the MIT Media Lab – Rant: The Tyranny of Evaluation, and
- Shumin Zhai’s reply, Evaluation is the worst form of HCI research except all those other forms that have been tried
These definitely make for very interesting reading.
There are other researchers who want to share their experience on the topic of conducting a user study and help new researchers who are about to embark on their first one. Some excellent papers cover topics such as why one should bother conducting a user study in the first place, or why one should take the trouble of finding experts and having one’s techniques evaluated by them. I would highly recommend reading some of these papers first:
- Robert Kosara, Christopher G. Healey, Victoria Interrante, David H. Laidlaw, Colin Ware, Thoughts on User Studies: Why, How, and When, IEEE Computer Graphics & Applications (CG&A), Visualization Viewpoints, vol. 23, no. 4, pp. 20-25, July/August 2003. http://www.kosara.net/papers/Kosara_CGA_2003.pdf
- G. Ellis and A. Dix (2006). An Explorative Analysis of User Evaluation Studies in Information Visualization. In Proceedings of the 2006 Conference on Beyond Time and Errors: Novel Evaluation Methods for Information Visualization (Venice, Italy, May 23, 2006). BELIV ’06. ACM Press, New York, NY, 1-7.
- Plaisant, C. The Challenge of Information Visualization Evaluation. Advanced Visual Interfaces, Italy, 2004, ACM Press. http://hcil.cs.umd.edu/trs/2004-19/2004-19.pdf
- Tory, M., Möller, T. Evaluating Visualizations: Do Expert Reviews Work? IEEE Computer Graphics and Applications, 25(5), 2005, 8-11.
Here are some researchers and research groups who have taken the effort to conduct a user study correctly and evaluate their results.
- Chris North and Ben Shneiderman (2000), Snap-Together Visualization: Can Users Construct and Operate Coordinated Views?, International Journal of Human-Computer Studies, Academic Press, vol. 53, no. 5, pp. 715-739. http://people.cs.vt.edu/~north/papers/snap-IJHCS.pdf
- David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr (2005). Comparing 2D vector field visualization methods: A user study. Transactions on Visualization and Computer Graphics, 11(1):59-70, January-February 2005. http://www.cs.brown.edu/research/vis/docs/pdf/Laidlaw-2005-CVF.pdf
- An Evaluation of Pan & Zoom and Rubber Sheet Navigation with and without an Overview. D. Nekrasovski, A. Bodnar, J. McGrenere, F. Guimbretière, T. Munzner. http://www.cs.ubc.ca/nest/imager/tr/2006/Nekrasovski2006CHI/
- Kobsa, A. (2004): User Experiments with Tree Visualization Systems. Proceedings of InfoVis 2004, IEEE Symposium on Information Visualization, Austin, TX, 9-16. http://www.ics.uci.edu/~kobsa/papers/2004-InfoVis-kobsa.pdf
- Layout of Multiple Views for Volume Visualization: A User Study, Daniel Lewis, Steve Haroz, and Kwan-Liu Ma, Proceedings of International Symposium on Visual Computing, November 6-8, 2006, pp. 215-226. http://www.cs.ucdavis.edu/~ma/papers/isvc06.pdf
This is a small sampling of the excellent user evaluation papers that are out there. I would ask readers to comment on these and to add any user study related papers that I may have missed. Please let me know your thoughts on conducting a user study, and share your experience if you have any interesting stories. :)