Activity Theory Intro.

From CS260SP09

Readings

Chapters 1 and 2 from Context and Consciousness: Activity Theory and Human-Computer Interaction, Bonnie Nardi (Ed.), MIT Press, 1997

Background

Human-Centered Design Considered Harmful, Don Norman's essay on the jnd.org website.

Discussions

Please post your critiques/comments on the required readings below. To do that, first log in with your user name and password, then click the "edit" tab at the top of this page (between the "discussion" tab and the "history" tab). New to wikis? Read the Wiki editing guide. Hint: put a whole line == ~~~~ == (literally) at the beginning of your submitted critique, so the wiki system will index, sign, and date your submission automatically.

KetrinaYim 14:15, 24 February 2009 (PST)

Bannon's notion of users also being designers seems particularly relevant to the Internet, especially Web 2.0. Customizable networking sites, blogs, and user-created online content all demonstrate how the Internet is a system shaped by its many users. Most are content to simply use the Internet, but there are those who seek to make use of the system's full potential, and that is where users turn into designers. These designers build web pages, post stories, and upload media, shaping the Internet into tools for their individual needs, whether to share knowledge, establish connections, or express their identity in a space visible to millions of people.

Seth Horrigan 19:41, 24 February 2009 (PST)

Hmm...looks like my comment did not show up. Oh, well, I will write a new one below.

Simon Tan 00:06, 25 February 2009 (PST)

I was quite surprised to learn about HCI's shaky roots in the information processing branch of cognitive psychology. It feels as though the "traditional" HCI of that era has been abandoned, since I rarely hear designers today talk about humans as if they were merely cognitive processors. Also, theories such as GOMS (which seems to have come right out of that time and thinking) are never really put to use these days, to say nothing of how little they are taught.

This paper was from 1997, though, so I wonder how Activity Theory has fared since then. I'm not entirely sure that it has become the "new foundation" of HCI, and I'm not sure how widely it has been accepted, either. Is the field of HCI still as fragmented as it was then? Have the researchers of this field come to rally around this theory as the "common language" that the author had hoped for? Or does the complex (and, I feel, excessively dynamic) nature of Activity Theory make it hard to adopt?

This reading made me realize how young HCI as a field still is, and how it still has the potential to question its foundation yet again.

Nicholas Kong 03:43, 25 February 2009 (PST)

It seems to me that the subfield of HCI to which activity theory is most applicable is ubiquitous computing, and I do think there is some published work that structures interaction around the notion of an activity, although not exactly in the activity-theoretic sense. James Landay at the University of Washington has done a lot of work utilizing the Intel Mobile Sensing Platform (Pervasive Computing '08, Description), a little wearable gadget containing sensors whose data can be used to reveal the "activities" of the user. For example, one application of this platform (UbiGreen, Froehlich et al., CHI '09) uses the MSP to infer the transportation mode of the user (e.g., walking, biking, driving, taking the train) and provides the user with positive feedback via cellphone when they take greener modes of transportation. The transportation modes are termed "activities" by Froehlich et al.; I wonder, though, whether the activity-theoretic "activity" is actually "reducing carbon emissions", and whether the mode of transportation used constitutes an activity-theoretic "action". The decomposition of an activity into its three levels is still a little opaque to me.
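
To make the three levels a bit more concrete for myself, here is a rough sketch in Python of how the hierarchy might be laid out under that reading of the example (the names and groupings are my own illustration, not anything from the UbiGreen paper):

    # Illustrative layout of activity theory's three levels
    # (activity -> actions -> operations). All names here are hypothetical,
    # reflecting the "reducing carbon emissions" reading suggested above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Operation:          # routine step, performed without conscious attention
        name: str

    @dataclass
    class Action:             # conscious, goal-directed step
        goal: str
        operations: List[Operation] = field(default_factory=list)

    @dataclass
    class Activity:           # the motive that gives the actions their meaning
        motive: str
        actions: List[Action] = field(default_factory=list)

    commute = Activity(
        motive="reduce carbon emissions",
        actions=[
            Action(goal="bike to work",
                   operations=[Operation("pedal"), Operation("balance"),
                               Operation("signal turns")]),
            Action(goal="take the train",
                   operations=[Operation("tap fare card"), Operation("board")]),
        ],
    )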

David (Tavi) Nathanson 03:48, 25 February 2009 (PST)

I was quite intrigued by the point that practice has been ahead of research in HCI, with direct manipulation as the example. As stated in the article, direct manipulation has been around since the sixties, and yet apparently no one started grasping *why* it works so well until the eighties! I found this intriguing because research had to focus on *why* direct manipulation works, rather than on *creating* direct manipulation, since it already existed.

I also want to comment on Bannon's third issue: that "the actual use of systems is a long-term process that cannot be adequately understood by studying just the initial steps of usage," while "a large part of HCI research has studied only inexperienced users." I agree that studying users in a more long-term fashion could be valuable, but I presume that the reason a large part of HCI research is dedicated to studying inexperienced users is that a very common goal is to make systems easy to use for *new* (inexperienced) users. This is a very important goal for new product adoption, which is often a key concern in practice.

Priyanka Reddy 05:38, 25 February 2009 (PST)

I thought it was really interesting that "there is a well-known gap between research results and practical design". They mention that many HCI guidebooks give a list of "practical guidelines" which have no connection to the "theoretical background". It's also mentioned that the "practical guidelines" are "derived from practical experience, with no underlying theory", which the author seems to suggest is a negative thing. However, I wonder why it's a drawback that guidelines are based on practical experience; it's true that generalizing to theories is helpful because they can be applied across more contexts, but wouldn't it also be worthwhile to gather practical experience across more contexts? As long as the guidelines that result from this practical experience work well, does it matter where they came from?

They also mention that most successful UI design projects didn't use any psychology. Presumably, these designs were also based on practical experience, which further supports the argument for basing practical guidelines on practical experience.

Himanshu Sharma 08:27, 25 February 2009 (PST)

I agree with the author about the huge potential for using action-operation dynamics in computer interfaces. The type-ahead menu example shows how collapsing a chain of actions into a single action enlarges the scope considerably. Another example I can think of: tools like Selenium IDE, which automatically generate a script of a user's actions, are used by professionals doing software/website testing. Their task is reduced to recording once and then playing back any number of times.
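
To sketch what that record-and-replay collapse looks like (this is just an illustrative toy, not Selenium's actual API):

    # Minimal record-and-replay sketch (illustrative only, not Selenium's API).
    # A chain of recorded actions is collapsed into a single replayable macro,
    # mirroring the action-to-operation shift described above.
    class Macro:
        def __init__(self, name):
            self.name = name
            self.steps = []          # recorded (function, args) pairs

        def record(self, func, *args):
            """Capture one action, and still perform it while recording."""
            self.steps.append((func, args))
            return func(*args)

        def replay(self):
            """Re-run the whole recorded chain as one operation."""
            return [func(*args) for func, args in self.steps]

    # Three separate "actions" recorded once become one reusable operation.
    log = []
    test = Macro("login-test")
    test.record(log.append, "open login page")
    test.record(log.append, "type credentials")
    test.record(log.append, "click submit")
    test.replay()                    # replays all three steps in one call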

Brian Tran 09:33, 25 February 2009 (PST)

The article's discussion of how human users should be treated as people swayed by influences rather than as cognitive processors was particularly interesting to me. It simply makes sense. People who have had experience with a similar interface are bound to adapt to a new interface better. I feel that designing an interface for people is much more useful, although definitely more complex. I disagree with David and Bannon on long-term user testing, because I believe it is better to release something after a reasonable amount of user testing, so that we can get something out the door and learn from our mistakes earlier, rather than testing extensively only to find out far down the road that we missed something critical.

Kenrick Kin 10:04, 25 February 2009 (PST)

I thought it was interesting that it wasn't until 1991 that Bannon observed an ongoing shift toward thinking of human subjects as active actors integral to an iterative design process. Although maybe it's not that surprising - when did people really start using computers as a population? I agree with Bannon's recognition that the study of actual systems is a long-term process and that initial testing with inexperienced users cannot fully capture the usefulness/usability of a system. This initial testing is useful and still done when exploring new interaction techniques, but in the end it's hard to predict whether the interaction being tested will be adopted in the long run.

Seth Horrigan 17:49, 25 February 2009 (PST)

The Human Information Processing model is still extremely useful. The physical and cognitive limits determined empirically are invaluable in understanding what humans can and cannot do. Additionally, conceiving of the input/output split inherent in traditional computer interfaces is very useful. That said, I agree wholeheartedly with Nardi's position that treating humans as a "factor" in the system, where the human component can be optimized, is the wrong direction to take this. While analyzing a program using GOMS or the KLM to allow for the best possible output is necessary in certain fields, the majority of applications do not require flawless, time-critical input.

    While I would not use the KLM to minimize keystrokes solely for throughput efficiency, minimizing keystrokes can reduce the effort required of the user, and it definitely reduces the number of places where an error can occur. As an example, the menu hierarchy for the Xbox game Halo missed the mark on a lot of heuristics, but overall it was not bad. The same hierarchy in Halo 2 became so deeply nested that users must complete 5 or 6 steps to accomplish the same task that took 2 in the original game. Eliminating these excessive steps (as a KLM analysis would have suggested) while enforcing heuristics like "recognition over recall" would produce an interface that is clearly better both subjectively and objectively.
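
    To put rough numbers on that, here is a quick back-of-the-envelope KLM comparison of the two menu depths (the operator times are the standard Card, Moran, and Newell estimates; treating each menu step as "mentally prepare, then press a button" is my own simplification for a gamepad interface):

        # Rough KLM comparison of the two menu depths mentioned above.
        # Standard operator estimates; the per-step model is a simplification.
        M = 1.35   # mental preparation (seconds)
        K = 0.20   # one keystroke / button press (seconds)

        def menu_time(steps):
            """Estimated time to traverse a menu of the given depth."""
            return steps * (M + K)

        print(f"2-step menu (Halo):   {menu_time(2):.1f} s")   # ~3.1 s
        print(f"6-step menu (Halo 2): {menu_time(6):.1f} s")   # ~9.3 s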

    The point? We can incorporate the lessons learned from the HIP model and cognitive science into the essential design framework that all interfaces should be built upon. Do I think that Activity Theory is the right place to base everything? No.

    I mean to say that the obvious problem with the HIP model is that humans are not machines. Trying to overlay the model of a computer onto a human is doomed to give poor results when it becomes the basis for thinking about humans. It is artificial. I see the same problem with Activity Theory. AT provides an understanding of human actions and intentions that can be very useful in designing interfaces. That said, it comes off as an extremely artificial system overlaid on the realities of humanity. Simplifying humans according to AT allows us to pick out specific aspects to design for, but taking it too far will lead to the same poor result as relying on HIP.

    Where does that leave us? It leaves us with practical guides derived from experience, and a couple of "theories" to guide IT design quantitatively or qualitatively. AT proponents will likely argue that AT really does model humans because it is general enough to cover any human situation, but really it is only looking at a couple of human dimensions within the framework of reality. Perhaps we do need an underlying framework to provide the necessary common language or understanding for all HCI practitioners, but it will not be HIP or AT or any other model that tries to reduce humans to a simpler form for which we must design.
