Historical Perspective

From CS 160 Fall 2008


Lecture on Nov 19, 2008

Readings

Discussions

Please post your critiques/comments on the required readings below. To do that, first log in using your user name and password, then click the "edit" tab at the top of this page (between the "discussion" tab and the "history" tab). New to wikis? Read the Wiki editing guide. Hint: please put a whole line == ~~~~ == (literally) at the beginning of your submitted critique, so the wiki system will index, sign, and date your submission automatically.

Gary Wu 21:22, 18 November 2008 (UTC)

After reading the man-computer symbiosis article, I can see how the topics of this class play an enormous role in this relationship. The efficiency and benefits of man and computer would be severely crippled without a well-designed interface for interaction between the two. After reading the ubiquitous computing article, I feel that the issues of personal privacy and security would come into play if this future were realized today. Forgetting about and not physically seeing these computers around us might seem like an invasion of privacy, and I'm sure this would be an argument made by any opposing groups. Otherwise, I feel that the visions and ideas discussed in the article were innovative and unique.

nathanyan 01:56, 19 November 2008 (UTC)

The Man-Computer Symbiosis reading, I think, gives some insight into the sorts of issues that UI design in the future will hold. While today's computers are fairly complex and integral to most of our lives, the relationship is still very much that of the "mechanically extended man". Programmers develop the programs, users supply commands or data through the interface, and then the computer processes them. The user interface, then, while important, is not an ever-pervasive or defining portion of the program - even with a terrible user interface, if a user can trudge through it, the program will have the input it needs and be able to produce the result, which is ultimately what's important to the user.

As the reading mentioned, Man-Computer Symbiosis means an interaction process that happens in "real time" - essentially the interface would be seamless and invisible. Data and instructions would need to be inputted into the computer as soon as the user thought them, and that information outputted back into the user's consciousness instantly (not just data read out to the user, which must have time set aside to interpret it before it becomes useful information).

The second reading is a bit confusing - it starts off talking about progressing computer technology to the same level as literary information technology, making it an "invisible" part of society but then launches into a technical discussion of mini-computers that can be placed everywhere... That said, the idea of ubiquitous computing does seem inevitable - I don't think I'm convinced that the tabs, pads, and boards are the ultimate means (and in any case they're still computer-centric and still don't represent the "invisible" blending into society initially described), but the description on page 9 gives a good example of the sorts of things that can be possible with ubiquitous computing (indeed, some of them are already possible today).

Technology-wise, I think the tabs/pads/boards idea still struggles with a ubiquity of devices. Text is extremely simple and ubiquitous, because the information can be output at any time (everyone can easily read text), and information can also be input fairly easily (all one needs is a pen). The goal of ubiquitous computing shouldn't be "have a lot of devices around", but "have the read/write access to the information at all times". Most likely, a single all-in-one device is what would prevail here.

Perry Lee 02:39, 19 November 2008 (UTC)

Although both articles raise interesting possible futures -- ones in which man and machine are more closely intertwined -- I can't help but feel that some of the scenarios they depict are overkill and truly invasive. As Gary mentions, two concerns that quickly arise are privacy and security. Though I can certainly see how computers can be used to allocate our time more efficiently (e.g., offloading the data crunching to the computer as described in the man-computer symbiosis article), the articles also show the flip side: e.g., why would there be a desire to see technology used to implement features such as "time markers and electronic tracks on a neighborhood map"? It seems more Big Brother than a use of technology to make our lives easier. I don't see privacy and security as issues unique to today, but rather as issues that will become all the more important in the future.

Haosi Chen 05:25, 19 November 2008 (UTC)

The paper by Mark Weiser presents the efforts of Xerox PARC in the field of ubiquitous computing devices and applications, and follows this with speculation/projection of possible future scenarios. I found the discussion of 'current' technology and research into ubiquitous computing devices quite interesting; I was previously unaware of the efforts of Xerox PARC toward these ends. The details of these devices' operation were, at the time, major innovations and greatly enhanced the field of computing. Also, the author presents a very insightful discussion of how reading became a ubiquitous technology, and how this can be used to define computing (i.e., anywhere you currently read, you could potentially compute as well). This idea, coming from the father of ubicomp, is of enormous significance to the scientific community. The paper mentions that this idea of ubiquity, in the same way as reading, means a drastic change not only in application features, but also in the methods for measuring the terse human actions that will eventually define the features of ubiquitous applications.

Vedran Pogacnik 06:10, 19 November 2008 (UTC)

The papers seem to describe an environment where computers have a bigger role than they have today. The first paper in particular seems to focus only on the benefits of having computers do tasks faster, with less error. I disagree that this is beneficial. The machines will replace a lot of jobs, so that's no good. Secondly, the complexity that so many computers bring might be overwhelming. And thirdly, security might become too difficult to manage. I guess the problem is that over the last 20 years computers have embedded themselves into many aspects of our society, but some people are completely clueless about how to use them. I am not just referring to elderly people, but also to people who live in parts of the world where such technology might not be available. But, in the end, repeating myself, I'd just hate to see some machine bought to replace me at my job. Happily, we seem to be far from that day.

Buda Chiou 06:11, 19 November 2008 (UTC)

I see privacy and security issues as a trade-off for technological progress. However, it is humans who invade people's privacy, not computers. In fact, people can choose not to use any high-tech product if they want, but most people still choose to use high-tech products as much as they can, which shows that even though there are some side effects, people actually gain much more from technological progress.

Jordan Berk 06:15, 19 November 2008 (UTC)

The Weiser paper on 21st-century ubiquitous computing was very interesting, especially given that it's 17 years old; with the hindsight of nearly two decades of technological innovation we can look at its projections and easily critique them. The paper certainly got a lot of things right, or at least projected trends correctly. The 'tabs' frequently referred to in the paper share many similarities with RFIDs, in that they are passive computers that wirelessly share information, very similar to the given example of analyzing the properties of a dress. And with everyone carrying around a powerful miniature computer these days in the form of a cell phone (many of which have Bluetooth/Wi-Fi for wireless communication), this possibility becomes even more realistic. Of course, standards would have to be established for the wireless format, the data format, privacy concerns, etc., but despite the frequent and often valid concerns people have, it looks like we are headed toward a world in which computing is indeed as ubiquitous as reading a street sign.

KevinFriedheim 06:18, 19 November 2008 (UTC)

I don't think there can ever be a symbiotic relationship between humans and computers, although there have been many movies written about such a pairing. I believe this is an impossibility because the moment we rely too much on computers, the harder we will fall when they fail on us. I think the perfect image of this comes from the movie WALL-E (Pixar). Here, humans rely so heavily on robots that they almost run themselves into extinction when things start to fail. Although the readings make several valid points about how machines are able to do things more quickly and efficiently than humans, I don't, as I've already stated, believe that the case will ever arise where we will be entirely reliant upon them to live symbiotically.

The second reading discusses the idea of how computers are "weaving" themselves into the mainstream. In contrast to the first reading (which talked about symbiotic relationships between humans and computers), the second simply states that computers are increasingly useful to humans - but they remain a tool only -- nothing more.

Kai Lin Huang 06:56, 19 November 2008 (UTC)

Some of the designs mentioned in Weiser's paper have been realized in offices nowadays. However, due to a still-high cost-to-benefit ratio, these designs are not widely used outside of commercial organizations. For example, our school does not have electronic whiteboards in most classrooms. For the same reason, the scrap pad idea does not seem realistic yet even in commercial organizations. Unless a device like the iPhone costs ten bucks without a wireless plan contract, it will not be preferable to a paper scrap pad. Another device that may be the future of this is the tablet computer, but it is currently still a standard laptop associated with ownership, except that the screen can be rotated and laid flat to be written on.

Furthermore, in the description of a day in the future, Sal reads the newspaper and uses a pen to circle a quote, and the pen sends a message to the paper, which transmits the quote to her office. This has already been prototyped by Johnny Lee at Carnegie Mellon University; the only difference is that neither the paper nor the pen sends the message out, but rather a Wii Remote a few feet away that is monitoring the movement of the pen. There are a lot more implemented examples today that reflect the predictions made back in the 1990s, and more are still coming. As time goes by, these predictions and imaginings will be implemented, evaluated, and redesigned to fit into the lives of the masses. The inventions of the future are predictable because, through study and research, we know which innovations humans need today.

As a side note about Licklider's paper, I actually feel that computers are being programmed to "think" creatively in addition to performing with precision. Examples include the development of artificial intelligence and painterly-rendering image processing.

Juanpadilla 07:40, 19 November 2008 (UTC)

Aside from the security and privacy issues many people have mentioned above, I'm not too sure that the tabs and pads idea is a great one. Even if they were able to get the pads reduced to the size of paper, isn't the whole point of creating an electronic document that you don't have all that clutter on your desk and shelves, which minimizes storage? Aside from that, I think the overall idea of creating a ubiquitous environment involving a symbiotic relationship between humans and computers is exciting and something we will eventually see. In actuality, we are at the very infant stages of computing - heck, the transistor is barely 60 years old - and the progress that has been made is staggering. Considering the marvels of medicine and the increasing demand in biotech, I think this is just around the corner.

Karen Tran 08:18, 19 November 2008 (UTC)

Man-Computer Symbiosis: The main idea of the paper was that computers should be developed with the goal "to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs." Man-computer symbiosis would augment human intellect by freeing it from mundane tasks. This idea may not seem revolutionary in today's world of ubiquitous personal computers, but in the days of batch processing it was. Computers would effectively and quickly do routine work. Processing speed and easy user interfaces would allow humans to interact with computers in making decisions instead of simply responding to long-awaited output. Users thus interact directly with the computer instead of relying on technicians and punch cards. Results are obtained immediately.


The Computer for the 21st Century: Weiser envisioned a future with two characteristics of computer use: ubiquitous and invisible. Computers are ubiquitous nowadays; they exist in almost every corner of our lives, though perhaps not quite to the extent described in this article. However, computers are not invisible: most uses of computers are conscious operations of computers or of devices with embedded computers. Developments of such "invisible" computer products are underway, though, such as an embedded computer whose screen is projected onto the bathroom mirror so people can read their calendar or email as they brush their teeth. The description of Sal's typical day was very interesting, and some of the ideas, such as traffic checking, will likely be incorporated in the future (probably not on the rearview mirror). However, I personally do not see the adoption of the pads and tabs described by Weiser, since the benefits from such devices are not obvious enough to outweigh the low cost and convenience of pen and paper; there are also potential health issues to consider from replacing paper with computer screens everywhere.

I agree with Weiser's point that the most profound technologies are the ones that disappear. While "disappear" might not be technically correct when referring to the technology's state, it's a good way to think of it. When a new technology first appears, it usually causes waves of shock, as with the introduction of computers. This certainly made sense since the older computers were about the size of a room. However, as technologies improved and made computers more accessible, it's now almost expected that every household owns a computer of some sort.

Kumar Garapaty 08:24, 19 November 2008 (UTC)

The second article reminds me of the tangible user interfaces article about creating new interfaces that will be functional ubiquitously in everyday life. These technologies can also improve the relationship between man and computer. From the first article, I found that although the advancement of the personal computer has greatly improved man-computer symbiosis, I believe it is still a parasitic relationship in which people use the computer for very singular purposes. As computers become more ubiquitous, the combination of all these electronic devices will allow for greater symbiosis, and there will be no actual effort in trying to use a computer; the devices should automatically solve problems as they come to us.

Volodymyr Kalish 08:49, 19 November 2008 (UTC)

The Man-Computer Symbiosis article hints at how important the interface is for humans to interact with computers. Yes, computers are a big part of most people's lives. They do all the monotonous and boring work quickly and without any complaints, freeing people's minds to do the creative part. However, I don't think that the future is in the design of intuitive and efficient interfaces based on heuristics. Pretty soon people will be able to interface with the machines in a more direct way, cutting out all the slow and abstracted graphical interfaces in between and speeding up the whole process of interacting with the machine.


Wenda Zhao 09:02, 19 November 2008 (UTC)

Man-Computer Symbiosis talks about man and computer working together to perform complex tasks much more effectively. This paper reminds me of a quote; I don't remember the exact wording, but the general idea is that humans are extremely smart and slow, while computers are extremely fast and stupid - together they can be so much more powerful. Although his notion of human-like communication and collaboration has not happened in the forty-odd years since he first proposed the idea, I think it will definitely be happening in the near future. Man and computer working as one is going to be the way of the future.

Trinhvo 09:15, 19 November 2008 (UTC)

How interesting - man-computer symbiosis sounds like science fiction. Imagine if one day this comes true: everyone will have one or more computer chips implanted in their brains to maximize thinking processes. The human brain is super fast at taking in inputs in parallel, while computer chips are accurate and fast at calculating digits. The two working together could help increase human working abilities. The article also mentions "trie memory," which I had never heard of before, so it's good to learn this new concept and see how it can be applied in the future for man-computer symbiosis.
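For reference, a trie is a prefix tree: keys are stored character by character, so words that share a prefix share a path, which is what makes retrieval fast. Below is a minimal, purely illustrative sketch in Python; the class and method names are my own and are not taken from Licklider's paper, which discusses trie memory only as a scheme for organizing computer memory for retrieval.

 class TrieNode:
     def __init__(self):
         self.children = {}    # maps a character to the child TrieNode
         self.is_word = False  # marks the end of a stored key
 
 class Trie:
     def __init__(self):
         self.root = TrieNode()
 
     def insert(self, word):
         node = self.root
         for ch in word:
             node = node.children.setdefault(ch, TrieNode())
         node.is_word = True
 
     def contains(self, word):
         node = self.root
         for ch in word:
             if ch not in node.children:
                 return False
             node = node.children[ch]
         return node.is_word

The sketch covers only insertion and lookup; prefix search and deletion would follow the same walk down the children dictionaries.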

Frank Yang 09:52, 19 November 2008 (UTC)

Both of these readings, which I presume to be rather old, seemed to grasp exactly the future potential of computers. As soon as I read the first article, I realized that in order to have a seamless relation between human thought and computer computation, it would be extremely necessary to have the perfect UI for the human to input the thoughts in real time, as well as the artificial intelligence to be able to generate the question from the thought and the answer from the question. The whole idea of the symbiotic relationship between man and computer seems like an interesting goal, but to me it still sounds far-fetched and a little creepy. Notably, though, the goal results in a never-ending quest to create the perfect interface between human and computer. As mentioned before, tangible interfaces are a step in this direction. The less we notice that there is a computer behind the actions, the more seamless the relation between human and computer becomes.

Hao Luo 11:38, 19 November 2008 (UTC)

The man-computer symbiosis sounds like an idea that can be applied in a certain context or to a certain extent but never fully realized. And I think that's a good thing. I thought a good point was brought up when the article mentioned that in some ways it's really the human helping the computer and not the other way around. Instead of seeing a computer as a tool built by humans to do their work, it's as if the computer is the perfect tool that's only missing the proper guidance of a human. It works both ways. Another good point was that humans are intelligent but mechanically very slow, so having computers do the calculations is the way to go. Indeed, I remember in a previous computer class it was mentioned that programmers used to worry about memory and how to code in order to be the most efficient and conserve memory. Nowadays, the focus is on programming in such a way as to minimize the programmer's time, since memory has gotten so large that conserving it isn't as big of an issue. It seems like it used to be programmers catering to the limits of the computer, and now it's computers catering to the limits of the programmer('s time).

Witton Chou 12:15, 19 November 2008 (UTC)

It's amazing how far computer systems have come. Originally, human labor was more efficient and cheaper than computers. Nowadays, we are coming up with ways to make computers easier and faster to use for humans. These electronic devices are now merging into aspects of our everyday lives and blending into our surroundings.

Jimmy Nguyen 12:30, 19 November 2008 (UTC)

This is a little bit of how I feel about technology, especially nowadays when concepts are so far ahead of our time. I do like how the ubiquitous computing article ties in with this class. Even our project is a little bit ubiquitous in the sense that the interface is 10x simpler than all the ActionScript and code behind the scenes. There are all sorts of timers, synchronization methods, sound classes, and animations going on that are best abstracted away from the user.

Antony Setiawan 15:26, 19 November 2008 (UTC)

It seems to me that Weiser's future perspective on computers for the 21st century revolves around the word "ubiquitous". Advancing technology would produce small integrated computers, even small enough to be carried around. He mentions that ubiquitous computers will come in different sizes, from inch-scale machines like the "active badge" to yard-scale displays that replace bulletin boards. When I see myself in the position of a 21st-century man, most computers are being scaled down to palm size. The concept of ubiquitous computers, to me, is basically to make computers cheap, small, and connected (to the Internet), so that they can exist everywhere.

The other article, to me, pictures more the computers of the 20th century, when people had to queue to get their code compiled and executed on a computer.

Saliem Than 17:02, 19 November 2008 (UTC)

Both of the articles expressed an idealism which was very endearing and cute.

That being said, I think the nine-page article's analogy - comparing the pervasiveness of computing technology to the pervasiveness of writing in order to describe a given technology's advancement - is a little off. Reading the analogy, I felt a sudden jump: how do you get from writing being a fairly simple and widespread thing to do to computers doing everything for you?

The sentiment that computers will be doing everything for you is, I think, the cause of the jump. The mental muscles stretched in the development of writing were stretched toward one specific, medium-constrained goal: the things that needed to be done before writing something down. Those mental muscles are completely different from the ones needed to, say, remember what was said at the latest meeting. In the latter case there is an extra dimension: being able to do or think of things you would not normally be able to, whereas in writing something down you are thinking of things you are already capable of doing.


With that being said, I think the goals of the research at PARC have been somewhat achieved but are still far from the ideal. Cases in point: the iPhone, which is rapidly transforming a technology that is already powerfully pervasive, the cellphone; the attempts by Google to combine speech recognition technology with their search engine; and the widespread popularity of tablets, which take advantage of the UI design principle of affordance and will hopefully usher in the taken-for-granted aspect. Somehow.


Bing Wang 17:19, 19 November 2008 (UTC)

The two readings today are quite interesting. They talk about how humans and computers interact with each other in order to help each other achieve goals. Many might think that it is just the user interface aspect of the computing system that matters in terms of interacting with humans. However, other aspects of computer science are involved. One of the disciplines that provides the functionality the author mentioned in the article is artificial intelligence. With artificial intelligence, user input can be recognized by the computer. If we can make computers understand or interpret how humans function, it can greatly increase the use of computers and increase the interactions between humans and computers to solve problems. The problem we face now is building a rational computer that thinks like humans. Such a system is hard to design and is, in some sense, still not achievable in today's world. I believe that we are getting closer to the point where computer-aided tasks are a possibility. Even that, however, requires major testing before computers can aid us in things like battles, surgeries, or other situations where a great deal is at stake. On the other hand, it does seem that we are getting closer. We now have computers in cars that can aid us in parallel parking, and the idea of having a computer in the car was almost non-existent until the last 20-30 years.

The articles mainly talk about how computing should be everywhere and how emphasis should be placed on having computers aid humans in tasks. I think we are slowly approaching ubiquitous computing, in the sense that we have access to computers everywhere we go, and we have smartphones such as the iPhone and the BlackBerry that do much of the computing work for us. The One Laptop Per Child idea also strengthens the argument that computers are here to stay. They are not going anywhere. With today's world unable to function without access to the Internet or the computer, it will only be a matter of time before computers are everywhere.

I just want to conclude by stating that we might still be far from achieving the perfect society where computers and humans work flawlessly together, but we are getting there, and we are definitely making great strides forward.

MuQing Jing 17:24, 19 November 2008 (UTC)

Both articles appear to portray some sort of future environment where the use of computers is much more profound (and invasive) than it is today. By automating more and more tasks that we would normally have to perform ourselves, we increase our reliance on these machines (which in turn causes us to rely more on the companies that develop and program them). Whether that is a social detriment or benefit is very debatable; on one hand you're closing opportunities as human work is phased out, but on the other hand, new opportunities are opened up in the field of computer science. The main concern, though, is that since this type of interaction is so invasive, privacy and security become a huge point; you can't "hack" into a person very easily, but you can easily take advantage of a machine.

Interestingly, it would seem that some of the systems that Weiser talks about are already incorporated or implemented in today's world. The rest of them, however, seem to require some sort of quantum leap, in that we need to be able to build systems that are self-contained and self-maintaining. With that, I think we can reach what Licklider foresees.

Greg Nagel 17:40, 19 November 2008 (UTC)

These readings get into an area that is usually reserved for science fiction: how will machines grow to augment humanity? With "goggles and bodysuits" we may as well be talking about cyborgs. But I find it interesting to see that some of the problems that were mentioned are still being discussed as new ideas today: computerized whiteboards, desk surfaces for drawing, and speech interfaces. Computers may have grown exponentially more powerful, but artificial intelligence has failed to follow suit. Early successful attempts weren't scalable, and we're still implementing intelligence manually. So we focus on the symbiosis between man and machine, rather than create the automatons that the public expected.

Mike Kendall 17:43, 19 November 2008 (UTC)

For a paper written in 1960, this reads with a surprisingly realistic expectation of what artificial intelligence is really capable of. The author tries to define man-computer symbiosis as something more than the computer being an extension of the man, as more than a calculator... But it seems to me that when he started analyzing the best use of a computer in his line of work, it was mostly as a calculator. Albeit a complicated and specialized calculator, but it was still doing rote calculation. This is the kind of thing that Palantir specializes in: computations that would usually take three people a day now take a computer about thirty seconds. Is this intelligence? No. Is it man-computer symbiosis? Maybe. The author seems to think that the person and the computer will be "thinking together" but then seems to lose track of that definition. I would have to think that the author perceives thought in his calculations just because of the level of difficulty of doing them by hand.

Anyone else find it funny that Xerox PARC saw a future where badges are used to track where employees are so that you can get your calls forwarded to you wherever you are... yet they didn't see cellular phones coming? Heh.

Cynthia T. Hsu 18:03, 19 November 2008 (UTC)

I disagreed heavily with the Man-Computer Symbiosis article; it seemed more a description of Star Trek-style science fiction technology than an article of any real substance:

"Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun. To think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own will require much tighter coupling between man and machine than is suggested by the example and than is possible today."

Even in the world of Star Trek, some sort of staff is necessary on board: meeting with a programmer is instantaneous instead of being deferred to the next day, the computer can do its analysis instantly instead of taking until next week to produce its 5 minutes and 47 seconds' worth of 20 pages of data, and some subordinate will have to digest the information into a feasible outline (although, in wiki style, I'm sure this process can be automated somewhat) and then hand it to the captain. Even requiring all these procedures, the captain does fairly well in assessing the best battle situation. The author's autobiographical description of the procedure also struck me as having too high an expectation:

"About 85 per cent of my “thinking” time was spent getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it. Hours went into the plotting of graphs, and other hours into instructing an assistant how to plot. When the graphs were finished, the relations were obvious at once, but the plotting had to be done in order to make them so. At one point, it was necessary to compare six experimental determinations of a function relating speech-intelligibility to speech-to-noise ratio. No two experimenters had used the same definition or measure of speech-to-noise ratio. Several hours of calculating were required to get the data into comparable form. When they were in comparable form, it took only a few seconds to determine what I needed to know."

While the idea of spending more time digesting information than obtaining it sounds tempting, I feel like this kind of defeats the point of research and, in some ways, ruins the meditative process by which people digest information in different ways. If the same computer system laid out all the information in the world to everyone in the exact same way, then everyone would learn to think in the exact same way. It sounds rather 1984-ish.

The Weiser paper was a bit more interesting, providing more concrete examples about how technology could become ubiquitous with a limited amount of artificial intelligence. However, I agree with the points brought up by several users. Saliem Than mentioned that the analogy between the ubiquity of writing and the idea of technology doing everything for you is a huge jump; I highly agree. Writing still requires effort - someone still has to read, has to decide what to read. The majority of information conveyed in writing is still locked away in archives that must be found and digested.

As MuQing Jing pointed out, ubiquitous computing increases the invasiveness of computers and the potential for major privacy breaches. Of course, it is described as something in the background that will be barely noticed, but it still seems sad to be completely surrounded by technology so that all you do is more "thinking". It seems like with more spare time, human beings become more and more disillusioned about what to do with it. I wonder how the authors of these articles would rate the ubiquity of television - while still confined to individual boxes, it transcends those boxes and is fairly easy to use. Yet everyone who watches television substantially is accused of letting their brain rot.

The first conclusion the Weiser paper presents: "Sociologically, ubiquitous computing may mean the decline of the computer addict. In the 1910s and 1920s many people "hacked" on crystal sets to take advantage of the new high-tech world of radio. Now crystal-and-cat's-whisker receivers are rare because high-quality radios are ubiquitous. In addition, embodied virtuality will bring computers to the presidents of industries and countries for nearly the first time. Computer access will penetrate all groups in society." I find this a highly biased conclusion. The "decline of the computer addict" will occur simply because everyone will be a computer addict. And ubiquitous computers will only serve as another social stratification factor, the way most technologies do.

The second conclusion, overcoming information overload, is a bit more interesting. I would definitely be interested in that, but as MuQing said, the perpetual invasiveness of it is troubling. There is something very tangible about being able to close your laptop on the world, whereas a perpetually available computer that you depend on to do things like make coffee is much harder to block out in times when you just want to meditate by yourself.

JoshuaKwan 18:08, 19 November 2008 (UTC)

The Man-Computer Symbiosis reading strongly reminded me (simultaneously) of Star Trek, the Model Human Processor, the heuristic guidelines... The author clearly envisioned a future of "Computer, calculate a trajectory through the nebula that minimizes the number of asteroids that the Enterprise will strike. Oh, and tea, earl grey, hot." The MHP stuff factors in halfway through the article, talking about how we have poor response time and lossy memory compared to computers. (I chuckled at: "When we start to think of storing any appreciable fraction of a technical literature in computer memory, we run into billions of bits, and, unless things change, billions of dollars.")

The Weiser article takes a more critical tack: instead of talking about what could be, it talks about what current computers aren't. I agree with him about virtual reality; it's a bridge to nowhere. He speaks of the lack of ubiquitous computers and of wireless networking, both of which have since become a reality, though what he says still rings true for personal computers: they haven't just faded into the background. We still sit at our computers and use them actively. He also speaks of scratch computers that we can just pick up, use, and toss aside like scratch paper - OLED paper may soon make this a reality.

I think the people who wrote these articles are (or would have been; are they alive?) pleasantly surprised at the state of affairs of computing today. Their vision isn't 100% fulfilled, but we have come a long way in just half a century.

Paul Im 18:18, 19 November 2008 (UTC)

The ‘Man-Computer Symbiosis’ article describes the progressing relationship between humans and computers. As of yet, the computer is still seen as a part of the man - in other words, as an extension. In due time, as this relationship grows, there will be a need to provide a better interface that links the two together. This link, as proposed by Licklider, seems a bit far-fetched, however. There just seems to be too much invasion on the part of the computer.

In the article by Mark Weiser, the focus is also on the future of the computer, but not so much on how tightly we will be integrated with it. Instead, Weiser takes a more practical approach, arguing that computers will become a larger and larger part of our lives. This has obviously come true even within the past twenty years. With the emergence of the Internet, there is no doubt that computers will continue to evolve in our lives.

Stuart Bottom 18:22, 19 November 2008 (UTC)

I’m going to take the skeptical view again and posit that Weiser’s view of ubiquitous computing, while clearly visionary and radical for its time, is fundamentally flawed in one key area. It may be unfair to critique his writing 17 years later; however, given that we still have not seen a realistic implementation of his proposed “pads and tabs” system, I’ll go for it.

Weiser speaks about computers as being replacements for everyday items (paper, book spines, file folders, pure glass windows, etc.), and fitting them elegantly into the way we do things now. To me, this discretized distribution of small computers presents an information-technology and logistics management nightmare. The first question is “what if something breaks?” Do I lose half the functionality of my book because the computer battery suddenly runs out, or I drop it on the table and crack a circuit inside? Must I replace my entire desk if its expensive internal computer breaks? Finally (on this note), how is one to integrate all this? It seems in Weiser’s proposed world, I would still be running around my house querying different items for different information as I needed it. Shouldn’t I be able to do all that from one place? Why should I have to fish the garage door manual - in paper form, no less - out from behind the workbench, and why shouldn’t I be able to view it from any “computer” in the house?

The second question to ask is “What, really, is the point of ubiquitous computing?” To me, the fundamental requirement is that ubiquitous computing fundamentally change the way we do things. Why should computers enhance and complement the way we do things today, when they could instead radically transform what we do into something far better? Why stick computers inside of everyday objects, when we can have new objects that mean we don’t HAVE to do things the way we used to – instead, they fundamentally change the way we live – making the old objects completely unnecessary.

I have great respect for Weiser’s visionary ideas, especially given their age. However, I fear he falls into the same trap many of us do: it is too easy to think about technologies in the future solving our problems with processes of today. In short, I ask: why change the process technology, when you can change the process itself? Weiser’s vision strikes me as too heavily promoting a “steampunk” mentality: if we use horses and buggies today, why not enhance them with technology to make them easier to use? If you think that’s a ridiculous idea, consider: is the concept any different with pens, or a pad and paper?

Man used the horse and buggy (chariot, cart, sled, etc.), or some variation on it, for thousands of years. Yet today, most of us don’t ride around in a horse and buggy, even for fun. Why is that? Something better – the automobile – came along. Why should this be any different with the paper pad and pen combination (which in some form – papyrus, clay tablets, etc. – has also been around for millennia)? Maybe there IS something better than paper and pen too, we just haven’t seen it yet. I would argue our energies should be devoted to finding this fundamentally new technology – not necessarily to making computers fit our “old school” ways of doing things.

Geoffrey Lee 18:25, 19 November 2008 (UTC)

In my opinion, the reading on man-computer symbiosis, despite being historical, is still fairly valid in today's world. While computing speed and memory have progressed by leaps and bounds, the concept of humans setting goals and computers offering courses of action hasn't really gotten that far, although I think Google is making very good progress in this area. Oftentimes, Google and Wikipedia are the first things I go to when I need to prepare to make a decision. I enter a bunch of fuzzy terms based on my own intuition, and Google or Wikipedia returns multiple results for me to choose from. Today, Google is even capable of suggesting queries for me if my initial query was not good enough.

Kevin Lam 18:25, 19 November 2008 (UTC)

Reading these articles made me realize just how much progress we've made in the last twenty or so years. Still, it's hard to imagine where we will be in another twenty years. These articles provide a glimpse of a possible future. The man-computer symbiosis seems almost out of reach. I personally don't think we will ever get there, simply because computers have been and are currently being used as tools to help humans complete their (everyday) tasks. In order for a computer to really aid people beyond simply being a calculator, it needs to almost have a mind of its own. Until we can develop a free-thinking AI, I don't see that happening.

In terms of security, I would have to agree with what most of my peers have already said. In order for a computer to better aid humans in their work, the computer would have to be in sync with the human mind. That tears down any security barriers that exist. Also, computers need input (information). To get that information, we as humans would have to be more willing to sacrifice our privacy. Take Google Maps, for instance. To get better information about driving/walking routes, we need more information about streets and addresses. You may not want to disclose images of your home or neighborhood, but Google has taken the initiative to capture street views in order to help people navigate their way around the nation. Thus, there is a fine line between sharing information and giving up your privacy. That fine balance will continue to be tested as computers become more complex.

Anthony Kilman 18:27, 19 November 2008 (UTC)

The Symbiosis paper holds some interesting points, though I definitely don't think that some of the technologies mentioned are anywhere near around the corner. The UIs we deal with in 160 tend to be pretty straightforward and only require a bit of psychology and some hacking skills to flesh out, but a direct interface to the brain would require leaps and bounds in neuroscience. As a whole, the fraction of the human brain that is understood is minimal. Even if a significant development project were to successfully integrate a tiny computer into a human brain, the ramifications and effectiveness of such a project would be minimal at best.

In regards to the second paper, I'd have to agree with nathanyan with respect to the goals of ubiquitous computing. Immediate access to information is a more accurate goal, as opposed to tiny computers everywhere. The widespread use of computing devices will provide said access to information, but from a high-level view the objective is not the hardware but what the hardware provides. I also came across some people mentioning personal privacy and security issues with the second reading. We have the same issues today, and in a sense this is a consequence of one's level of exposure to the world being proportional to one's connectivity. In regards to privacy, case in point: *everything* Google.

Yuta Morimoto 18:36, 19 November 2008 (UTC)

Today's readings describe a possible or desired future that the authors want to realize. Both of them told me how the relationship between humans and computers might go. I think some of the technology they discussed or predicted is already realized, and in some cases is invisible to us. For example, the credit system is nowadays a very pervasive thing, but in fact it is a great example of an aid for decision making and cooperative tasks. I think more sophisticated ubiquitous ones will likely be emerging in a few years. However, I am still not convinced whether these technologies are useful in real situations as the authors described, because some of the new or challenging products like the ones they describe are appearing and disappearing every day.

Shyam Vijayakumar 18:29, 19 November 2008 (UTC)

One of the articles talks about the concept of ubiquitous computing and examples of it (live boards instead of chalkboards at Xerox PARC). I think multi-touch technology is an important development that can be used to implement further examples of ubiquitous computing. For example, in the case of live boards, one needs a pen-like device to use them, whereas a multi-touch board is more intuitive and easier to use. The popularity of multi-touch devices like the iPhone further supports the implementation of ubiquitous computing devices using multi-touch technology. I believe Microsoft has actually headed in this direction with the technology they came up with called surface computing. Here, instead of a wall, they used the surface of a table for people to interact with the computer.


Mikeboulos 18:30, 19 November 2008 (UTC)

Weiser has a future perspective on computers that revolves around the word "ubiquitous". I can see this happening already! But I see that ubiquitous computers are more about cheaper and smaller computers that can integrate with other technologies and can be almost invisible, yet will change our lives.

The other article talked more about computers in the 20th century, when everyone waited in a queue to get their code compiled and executed on a computer.

Jonathan Fong 18:39, 19 November 2008 (UTC)

The Man-Computer Symbiosis paper is incredible for something written in 1960! It explored the challenges that would be encountered in forming such a relationship, and the fact is that now 1) we understand humans better (e.g., the human processor model, learning, etc.), and 2) technology has advanced, so we have faster processing and more viable memory options. That paper could be the roadmap on "how to" implement something where computers much more effectively complement real-life human intelligence.

James Yeh 18:53, 19 November 2008 (UTC)

Seeing as how both articles were written a long time ago (especially the second one), it was kind of interesting to see the optimism that both authors had for the future of computer functionality in our lives. Compared to the way that we use computers now, their visions for computer roles were quite a bit different. Furthermore, the scenarios presented in the two readings were also different. In the first article, the author saw the computer as almost a pet-like extension of man; while the generality of the article and the naïve understanding of computers was understandable (considering the article was written when computers were still a novelty), the tone of the article was remarkably accurate in reflecting what computers can do for us today. For example, most calculations and automated procedures today are executed by computers, but we still need to program them to do what we want - as predicted by Licklider. The second article, however, seemed a little misguided concerning the direction of the computer's future; in particular, while the concept of "tabs" and "boards" might seem like a novel and amusing idea at the start, the larger and more realistic human desire of improving efficiency in daily life begins to reveal the infeasibility of Weiser's scenario. While Weiser does present the flaws of his case, the article seems to overestimate a lot of the cost, computability, and size predictions for future computers. Why would we need computer screens in the form of "paper" when we could just type the same information up and email it or print it out? I understand that it could be very hard to predict what people would want to do with computers in 10 years, but Weiser seems to push a specific example without considering the many other possibilities of ubiquitous computing. In fact, the approach described in the article appears to propose the comfort of living a life infiltrated with computers, while computers today have gone in the direction of usability, practicality, and efficiency. In the end, the contrast in accuracy between the two articles demonstrates how a prediction of the general trend of future computers is much more likely to be realized than a sketch of a specific, elaborate scenario.
