Defining ubiquitous computing vs. augmented reality

(and vs. All Other Such Paradigms)

What’s the difference between Ubiquitous Computing (“ubicomp”) and Augmented Reality (“AR”)? I hear this question often, and you could replace “augmented reality” in that question with any of the following buzzy paradigms for people-interacting-with-computers: Virtual Reality, Pervasive Computing, Mobile Computing, Wearable Computing, Multi-Device Interaction, Cloud Computing, Intelligent Systems, Ambient Intelligence, Context-Aware Computing, Adaptive Systems, Machine Perception, Social Computing, Smart Environments, Everyware, and so on.

The perils of definition

Is ubicomp a superset or subset of <buzzy paradigm>? A fair question, but I’ve hesitated to propose a formal definition because such definitions are:

  • Overly confining — I’ve heard people say “oh, but ubicomp doesn’t address x or y” … when it does.
  • Often misused — I’ve heard people call the use of a browser on a smart phone “an example of ubicomp” — no, that would be “mobile computing.” … But do we care??
  • Prone to never-ending semantic or ontological debates — I’ve heard long discussions about how “ubiquitous” literally means “everywhere at once,” so ubicomp could only be equivalent to some kind of all-encompassing artificial intelligence. … Please. We’re not trying to write science fiction here; we’re trying to create systems that help people throughout their lives.

For the most part, I don’t find formal definitions useful; you can call it whatever suits your fancy. All that matters is that I understand what you mean when you use a term and that you understand what I mean when I use it (whether we use that term in the exact same way or not is immaterial). So, here’s what my colleagues and I generally mean when we talk about ubicomp.

Ubiquitous Computing is…

The attributes of a definition that carry lasting meaning are not technological properties (performance, cost, size, distribution, latency), but the core capabilities that the paradigm enables for its users.

Ubiquitous is the property of being, or seeming to be, everywhere. Its synonyms: omnipresent, pervasive, everywhere, universal.

Computing is the act of calculation, or the generation of an output based on input. It can be carried out by a person alone (“she is computing the result in her notebook”) or with the support of technology (“the teacher is computing their scores on the mainframe”). It’s also possible to think of computing as carried out by technology alone (“the laptop is computing their scores”), but in fact those cases are directed by a human operator. In all cases, “computing” involves a human (or some other autonomous intelligence).

Usefulness is actually a pretty important attribute of Ubiquitous Computing (or “ubicomp”). Just as the sound of the proverbial tree falling in the woods only matters when someone is there to hear it, the act of computing only matters when it is of use to someone. Usefulness differentiates ubicomp from terms like “artificial intelligence,” “ambient intelligence,” or “smart environments,” where the intelligence or smartness could, theoretically, exist for its own sake, not necessarily for the benefit of the people in an environment.

  • Practically synonymous with ubicomp are “pervasive computing” and “Everyware,” though those terms didn’t come into use until later.
  • A common characteristic of ubiquitous computing systems is “multi-device interaction,” but it is possible to create ubiquitous computing systems in which the user primarily interacts with one device (e.g., a smart phone or an electronic kiosk).
  • Often (but not necessarily), ubicomp involves “mobile computing.” That term implies that either the person or the computation is capable of being in motion, but such mobility does not necessarily span all places (certainly my 3G network doesn’t go everywhere I do). So mobile computing is not necessarily wholly ubiquitous, nor does ubicomp wholly include mobility, since a ubicomp system might be stationary (e.g., a home entertainment system).

So what’s the difference between Augmented Reality and Ubiquitous Computing?

Getting back to the question, Augmented Reality (like “mobile computing” as described above) is neither a subset nor a superset of Ubiquitous Computing. Augmented Reality (AR) is the presentation of electronic information along with a real-world object, projected physically or as seen through an electronic display. Ubiquitous Computing (ubicomp) is the seamless integration of information services as we accomplish goals throughout our work and personal lives.

BOTH have to do with the use of information services in conjunction with real-world objects.
BUT one is about perceiving “reality”, and the other about the usefulness of the “computing” to our goals.

The key point of overlap, and the source of confusion for some, is that both AR and Ubicomp utilize machine perception to detect the state of the real world. AR systems typically use cameras, GPS, and an electronic compass to detect the location and orientation of physical objects relative to each other. A Ubicomp system may also employ those same sensors along with others such as switches, thermistors, microphones, chemical detectors, strain gauges, accelerometers, and more. Such sensing technologies enable machine perception that approaches the fidelity of human perception — of touch, temperature, sound, sight, smell and taste, proprioception, balance, and motion.
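To make the AR half of that overlap concrete, here is a minimal sketch (in Python) of the kind of geometry a mobile AR browser of this era performs: compute the compass bearing from the device’s GPS fix to a point of interest, compare it to the device’s heading, and decide where (or whether) to draw a label on screen. The function names, field-of-view value, and coordinates are illustrative assumptions, not any particular product’s API.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def label_offset(dev_lat, dev_lon, heading_deg, poi_lat, poi_lon, fov_deg=60.0):
    """Horizontal screen position of a point of interest.

    Returns a value in [-0.5, 0.5] (0 = screen center), or None if the POI
    falls outside the camera's assumed field of view.
    """
    rel = (bearing_deg(dev_lat, dev_lon, poi_lat, poi_lon) - heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2.0:
        return None  # behind or beside the user: draw no label
    return rel / fov_deg

# Hypothetical fix: device at PARC facing due east; POI slightly north-east.
print(label_offset(37.4024, -122.1478, 90.0, 37.4040, -122.1430))
```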

Cutting to the chase,

  • AR depends on machine perception technologies to detect the identity and physical configuration of objects relative to each other. It aims to project information alongside a physical object.
  • Ubicomp does not necessarily require that the information be displayed alongside one’s perception of the real-world items. Ubicomp uses machine perception to incorporate inputs that are not explicitly entered by human operators — such as physical states of motion (running, walking, driving, or riding?), the attentional demands of the situation (driving in traffic or sitting on a train?), other people’s attributes (roles, demographics, or psychographics), and more. It further encompasses electronic information about things outside of one’s physical environment, perhaps adapting the presentation based on the attentional (driving in heavy traffic) and physical (arms full) demands of the user (see the sketch after this list).
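As promised above, here is a minimal sketch (in Python) of what that kind of implicit, sensor-driven input can look like: a coarse motion state is inferred from a window of accelerometer samples, and the presentation modality is adapted to the user’s inferred situation. The thresholds, state labels, and sample data are illustrative assumptions, not values from any deployed system.

```python
import math

def motion_state(samples_g):
    """Classify coarse motion from a window of 3-axis accelerometer samples (in g).

    The variance thresholds are assumed for illustration only.
    """
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples_g]
    mean = sum(magnitudes) / len(magnitudes)
    variance = sum((m - mean) ** 2 for m in magnitudes) / len(magnitudes)
    if variance < 0.005:
        return "stationary"
    if variance < 0.1:
        return "walking"
    return "running"

def present(message, state):
    """Adapt the output modality to the inferred situation."""
    if state == "stationary":
        return ("visual", message)  # full attention available: show it on screen
    return ("audio", message)       # eyes and hands are busy: speak it instead

# Hypothetical window of samples: small jitter around 1 g (device at rest).
window = [(0.01, 0.02, 0.99), (0.0, 0.01, 1.01), (0.02, 0.0, 1.0)]
print(present("Meeting moved to 3pm", motion_state(window)))  # ('visual', ...)
```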

Ubiquitous Computing, like sands through the hourglass

In essence, Ubicomp research focuses on the use of information services throughout our everyday lives, which might include an AR-style of interaction. In that sense, Ubicomp might seem to cover everything; but then the term “Ubiquitous Computing” would be no more meaningful than “Computing” in general.

For PARC, ubicomp is neither a superset nor a subset of anything; rather, it deals with what exists in the interstices of other computing paradigms and research areas. For me, this includes:

  • Device Network Interoperability, Implicit Interaction, Human Micro-Behavior Analysis, Contextual Behavior Modeling, and Activity Awareness;
  • A growing number of applications such as: Secure Data Exchange, Persuasive Systems, Context-Aware Recommendation Systems, Behavior Modeling, Remote Monitoring and Troubleshooting; and
  • Future advances that realize capabilities not currently on the roadmaps of other paradigms, such as: Responsive Media (similar to Human-Robot Interaction but without the anthropomorphic robot), Ubiquitous Digital Assistance (systems that make decisions modeled on your personal priorities and tradeoffs), Life Coaches (systems that monitor and advise you to help attain your personal goals), Hyper-Presence (the ultimate extension of the digital presence we see in instant messaging and Social Computing), and of course all sorts of other things.

More about these some other time.

 

16 thoughts on “Defining ubiquitous computing vs. augmented reality”

  1. Joe McCarthy

    Well, let’s not forget about sentient computing (the term pioneered by Olivetti Research Lab) and situated computing (coined by HP, and later adopted by Accenture before it shifted to “ubiquitous commerce” as a primary focus); pervasive computing was coined by IBM when it started getting involved in the field. For a while, it seemed like many of the big companies had to imprint their own brand on the field as part of their entry… which is why I’ve always preferred ubiquitous computing or ubicomp as the most general descriptor.

    I’m glad to see your emphasis upon usefulness; however, I’m not sure this is widely shared throughout the Ubicomp community. As I look over my notes from UbiComp 2009 and UbiComp 2006, I see a mix of projects that are clearly useful or potentially useful, and others that are of questionable utility.

    I believe there are still a significant number of people in Ubicomp who exhibit a techno-centric perspective, engaging in just-in-case development and conducting research on technology in search of a problem.

    I am heartened by what I see as a human-centered focus that grows a little stronger each year, and I hope that the usefulness of the research — to humans besides the researchers — will likewise grow more prominent each year.

  2. Bo Begole Post author

    Hi Joe!!!

    Those are great points and I do hope the field is making progress toward usefulness. I’ve sometimes challenged researchers who seemed to be presenting technology-for-technology’s sake and generally found, though, that they genuinely believe that their inventions are useful. So, at least the intent is there.

    A problem that Ubicomp faces is that it’s really hard to validate or invalidate “usefulness”. Even if a system fails a user experience test (which somehow they never do :-), it could just be that the technology didn’t fit that particular use case well but would still provide value in another scenario.

    I wonder if there is a way to characterize “usefulness” along some theoretical metric. Haven’t found it yet.

  3. Joe McCarthy

    Ah, good points in your followup as well.

    I agree that many Ubicomp researchers are genuinely motivated by use cases they believe will yield real value to [other] humans. Some of these seem more compelling (to this human) than others.

    And I also agree that ubiquitous usefulness is a very tall order, but ubiquity in Ubicomp is always an approximation, anyway.

    Finally (for now), your point about declaring victories is well taken. As I highlighted in my notes from Pervasive 2007, your paper with Kurt Partridge was a delightful exception, i.e., along with the areas where your hypotheses were validated, you included a valuable lesson about one that was invalidated:

    Most interesting to me was their analysis of why Mechanical Turk didn’t prove to be a more effective approach. It turns out that the army of low-paid wizards was somewhat unreliable, often sending the same response (e.g., “drinking coke”, or [more] random responses such as “ZZ” or “a”) to any query, regardless of the activity labeled. Although the team came up with a two-tiered system for collecting 5 responses and offering those up to 10 other MT workers for votes (which, of course, turned out to be considerably more expensive, as it involved significantly more queries), it does seem that they generally got what they paid for, and demonstrated that any web 2.0 service is vulnerable to “gaming” behaviors. Still, I thought this was a really cool idea, and I believe the general idea of utilizing web 2.0 services for HII user studies is vastly understudied (and may well be the basis for killer apps in ubicomp).
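    The two-tiered scheme described above reduces to a simple filter-and-majority-vote aggregation. Here is a minimal sketch of that reading (in Python); the junk blacklist, plausibility test, and sample data are assumed for illustration, not the paper’s actual pipeline.

```python
from collections import Counter

JUNK = {"zz", "a", "n/a", ""}  # assumed blacklist of throwaway responses

def plausible(label):
    """Crude filter for junk answers like "ZZ" or "a" (assumed heuristic)."""
    cleaned = label.strip().lower()
    return cleaned not in JUNK and len(cleaned) > 2

def aggregate(tier1_labels, tier2_votes):
    """Filter tier-1 free-text labels, then take the majority of tier-2 votes
    cast over the surviving candidates."""
    candidates = {label for label in tier1_labels if plausible(label)}
    valid_votes = [v for v in tier2_votes if v in candidates]
    if not valid_votes:
        return None  # every response was junk
    return Counter(valid_votes).most_common(1)[0][0]

# Hypothetical run: 5 tier-1 labels, 10 tier-2 votes.
labels = ["drinking coke", "ZZ", "watching TV", "a", "drinking soda"]
votes = ["drinking coke"] * 6 + ["watching TV"] * 3 + ["ZZ"]
print(aggregate(labels, votes))  # -> "drinking coke"
```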

  4. Adam Nieman

    Thank you for a very useful semantic overview. As you point out, definition is perilous.

    I have considered the relationship between AR and “everyware” also, but in a rather different way — by looking at the “history of the future of computing”. See: http://bit.ly/creL0u

    I have also considered how to co-opt the real world to enhance computed realities. That is, to “borrow” the world itself to provide a better interface with data. Effectively, this is turning AR on its head, which I call “Reality Augmented Data” or “RAD”; but, as with ubicomp, locating RAD in the landscape of computing is non-trivial. See: http://bit.ly/48cqq3

  5. Pingback: links for 2010-03-04 | Don't mind Rick

  6. Pingback: Four short links: 4 March 2010 | Tech News From All Over The Net

  7. James Landay

    Nice, but putting usefulness in this definition seems redundant. Who is making things they don’t THINK are useful? Should every part of computing define whatever it is and tag on “useful”? Now, I did claim in my own critique of much ubicomp research that people make things that aren’t of high value (e.g., find lost objects, tell me I’m running out of milk, etc.)… but I’m not sure that having value defines this area.

  8. Joey1058

    @ Joe McCarthy: You mention any web 2.0 service, but I would think that ubicomp in general is subject to gaming behaviors, especially now that computing platforms are becoming handheld for the general public. Just my thoughts on it.

    A good article, overall!

  9. Bo Begole Post author

    @James — Right on. You are pointing out that ‘usefulness’ is subjective. I always recommend that we don’t let researchers determine whether something is ‘useful’… leave it to the market or other external assessment.

  10. Pingback: UBICOMP vs Augmented Reality « LUCI blog

  11. Pingback: It’s time to reap the context-aware harvest - PARC blog

  12. Pingback: The best way to invent the future is to predict it - PARC blog

  13. Pingback: Interview with Vernor Vinge: Smart phones and the empowering aspects of social networks & Augmented Reality are still massively underhyped | UgoTrade

  14. Anselm Hook, PARC researcher

    I’ve felt recently that one of the values of augmented reality interfaces is not to let us “look through walls” or reveal hidden information about the outer world in general, but to help us organize, manage, and review our day-to-day work objects.

    I refer to this latter definition of AR as “Augmented Cognition”, and it falls more in line with the Doug Engelbart approach, where technology acts as a prosthetic that amplifies our ability to deal with information normally too voluminous, discrete, or rapid to process otherwise. You see this with the convention of the GUI and mouse, which afford us greater parallelism over our work and allow us to hold and manage more data objects at once. I think the logical conclusion of that will be stand-up interfaces where work floats in front of us.

    Another factor that is also being brought to bear is how physically unhealthy the sit-down interface is for us as humans. Our bodies are not designed for it. So I like to imagine that future augmented cognition interfaces will encourage us to stand up more, to work in a way that is somewhat more performative, less hermetic, and more visible… I imagine some day we might all be standing around in PARC’s, er, parks, collaborating on writing software in a way that looks much like we are all engaged in tai-chi.

  15. Pingback: Weekly digest of week 9 2010 | Capping IT Off | Capgemini
