The Understanding Group (TUG)

"GPS" At The Scale Of Your Living Room - So What?

By Dan Klyn

How many iPhone 11s are there in your office?

Ten days have elapsed since launch day, and there are already two of these top-of-the-line telephones in TUG’s three-person Grand Rapids office.

The iPhone 11 launch event was a juggernaut of features and functions. What I remember being interested in as I watched a bit of the live webcast had mostly to do with cameras and lenses. Unlike my officemates, I believed that I was unaffected by the new iPhone. I didn’t feel the earth move under my feet. Not the way it used to (RIP Steve).

This morning, I felt something. Call it an aftershock. Stuff started rumbling for me when I read this headline from WIRED magazine’s Twitter account: “Amazon and Apple building networks that know the location of everything.”

Initially, I dismissed this rumbling as nothing more than the machinery of my personal interests being activated and then cranked up by clickbait. As you may know, the interplay of location and meaning—especially in digital products and services—is the linchpin of my research, teaching, and practice in information architecture. It’s my raison d'être.

On account of this keen and fairly niche personal interest in the spatiality of meaning, I consider my inner voice to be an unreliable narrator of what other people who work on websites and apps care about. So please feel free to dismiss what follows as the ravings of one whose wool is already dyed.

So here’s what I think; here’s my rumble.

I think this “networks that know the location of everything” stuff is going to blow up in our faces like a cartoon cigar; and in our googly-eyed denouement, we’ll begin to see something that video game, live event, and exhibition designers have always seen: the people who decide where we situate things and information in space (especially on the Z axis), and the people who make decisions about the choreography and the emplacement of human activities and human bodies relative to things and information, hold all the cards in the game of designing user experiences.

These networks that know the location of everything, to the extent that they solve real problems, will be solving old problems. What will be has always been. What stuff is, and where stuff goes, have always been intertwingled in a Gordian knot. Artists, architects, and engineers have been correcting for differences between the ways that people prefer to experience things in space and the ways things are actually situated in space for at least a few thousand years.

In 2D art, one of the great Tintoretto paintings at the National Gallery comes to mind: the size and position of some of the figures in a picture of Christ washing his disciples’ feet seem all wrong until you learn that the painting was meant to be viewed from below and from the right, from a particular vantage point in a church in Italy.

In architecture, we could use entasis as our example: the shafts of the columns in front of a grand building are made with a slight bulge so that, when viewed at a distance, they appear perfectly straight (a truly straight shaft reads as slightly concave from afar).

Entasis in the columns of a building, like linear perspective in a painting, helps designers embed certain values and meanings in the environment, and deliver particular experiences for particular audiences in spite of how and where things would more typically show up for us in the environment.

Knowledge of the inner workings of effects of this sort, where perception is re-written or “hacked” through the precise and thoughtful arrangement of material in space, used to be encoded as craft knowledge and protected through master/apprentice relationships and guilds. Sooner rather than later, I predict, the equivalent secrets for shaping augmented and blended space will be available to all as open source software. For now, however, we’re projecting into the realm of proprietary techniques and trade secrets. This month both Apple and Microsoft touted entasis-for-your-face features in their devices and software, promising the ability to make you appear to be paying attention during a video conference. Which reminds me of something André Gide said: one cannot be sincere and at the same time seem so.

With the coming mass-generation and aggregation of precise spatial maps of most every thing, and most every behavior, in most every place, precise and reliable computer-generated information overlays in our eyeglasses and contact lenses become a near-term, implementable reality.

“GPS at the scale of your living room” (as Apple’s marketers now say) means that the location of every item in the environment is established, checked, and continuously re-mapped by sensors in devices including smartphones and smart speakers, and made available for incorporation into products and services. It’s not literally using global positioning satellites: it’s using data collected by sensors embedded in networked devices.
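For the technically inclined: strip away the marketing and the underlying mechanics are old-fashioned geometry. A device measures its distance to a few fixed anchors (ultra-wideband radios in phones, speakers, tags) and solves for where it must be. Here’s a toy Python sketch of that trilateration step; the room layout and the numbers are invented, and a shipping system would layer filtering and error modeling on top of something like this:

```python
# Toy sketch: positioning a phone from range measurements to fixed anchors.
# Anchor positions and distances are hypothetical; real UWB stacks are far
# more sophisticated (noise models, Kalman filtering, multipath rejection).
import numpy as np

def locate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Estimate a position from ranges to known anchors (trilateration).

    Linearizes the circle equations |x - a_i|^2 = d_i^2 by subtracting the
    first equation, then solves the resulting linear system by least squares.
    """
    a0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - distances[1:] ** 2 + d0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Three smart speakers at known spots in a living room, ranging to a phone:
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # meters
phone = np.array([2.5, 1.2])
distances = np.linalg.norm(anchors - phone, axis=1)  # ideal, noise-free ranges
print(locate(anchors, distances))  # ~[2.5, 1.2]
```

Three clean ranges pin the phone down exactly; real ranges are noisy, which is why the least-squares form matters.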

The advent of geo-spatial (GPS-like) technologies at living-room scale shifts the conversation from an internet of things to a matrix of emplacements. And it calls for a corresponding shift in the focus of digital product designers from things to systems. From spaces to places. From (as Bill Buxton so beautifully said at the Interaction conference in Seattle in February) ubiquity to ubiety.

Every space suddenly becomes a place. Every position, on every compass, becomes a site for situating information, products, and services. And most startlingly (to me, at least): all of the things we dwell among, and on whose basis our dwelling depends, now become available for editing and re-situating in a hybrid world of blended spaces that’s seamlessly co-present with the given world and “objective reality.”

The machine learning technologies of today, that we use to anticipate and adjust the number and kind of products in, say, a Best Buy kiosk at the airport, are just as easily used to arbitrage and populate what shows up in the visual field of device-wearers in the near future.

My colleague Andreas Resmini says he’s more skeptical today than ever of the role that VR- or AR-like technologies will play in all of this. Instead, he’s curious about what people will do in the environment where these technologies are emerging, and suggests that the sharpest corner to navigate in this “spatial turn” is about behavior change, not technology change. In one of our endless email exchanges, he describes this shift as one where we’ll be “using our embodiment engines (spatial primitives, proprioception, etc.) in a different way - conceptually blending digital and physical into a new kind of place that we navigate differently.”

Enter cybernetics. And cyborgs. An Apple watch on a chain, with an Apple Monocle at the other end?

What’s promised here is on a different evolutionary branch from rule-based pattern matching. It’s not so much about computers being able to guess at what any particular thing is by cutting the thing out of its background, and then comparing the cut-out to other cut-outs and finding similarities. With “GPS” at livingroom scale, computers will be able to both precisely describe and accurately predict the situatedness of all the things in a space, on the basis of the whole scene, and what takes place there over time.

Richard Saul Wurman says that everything takes place some place. We’re about to see what it means when computers take in (vacuum up?) all the places, on the basis of all the things, and then offer us the ability to re-situate, remove, and redesign the parts we don’t like. At least when it comes to individual perception.

[Image: the Klyn family living room]

In this new kind of game, we’ll keep track of how near or far away from the desk the chair ever gets, and take note of changes in the position and “suchness” of the items on the coffee table over time, in order to build up high-dimensional models of places as a function of experience and use. All of it available for analysis, editing, and re-mixing.
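If it helps to see that as data, here’s a hypothetical sketch of the bookkeeping involved. None of these names come from a real API; it’s just the shape of the thing: a log of timestamped emplacements that can answer “how far from the desk did the chair ever get?”

```python
# Hypothetical emplacement log for one room. Items, fields, and numbers are
# invented for illustration; "suchness" would be far richer than a label.
from dataclasses import dataclass

@dataclass
class Emplacement:
    item: str       # what the thing is
    t: float        # seconds since we started observing the room
    x: float        # position in meters, in a room-fixed frame
    y: float
    heading: float  # orientation in degrees

log = [
    Emplacement("desk",  0.0,    1.0, 2.0, 90.0),
    Emplacement("chair", 0.0,    1.4, 2.3, 90.0),
    Emplacement("chair", 3600.0, 3.1, 0.8, 180.0),  # someone dragged it away
]

def distance_series(log, a: str, b: str):
    """Distance between the latest-known positions of two items over time."""
    last, out = {}, []
    for e in sorted(log, key=lambda e: e.t):
        last[e.item] = e
        if a in last and b in last:
            pa, pb = last[a], last[b]
            d = ((pa.x - pb.x) ** 2 + (pa.y - pb.y) ** 2) ** 0.5
            out.append((e.t, round(d, 2)))
    return out

print(distance_series(log, "desk", "chair"))
# [(0.0, 0.5), (3600.0, 2.42)]
```

The high-dimensional models I’m imagining would be built from millions of rows of exactly this kind of record, plus everything the label “chair” leaves out.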

“GPS” at the scale of your living room is about cutting into the worldness of the things in your world, to mine and understand their ontological cores as interconnected wholes; as contrasted with cutting out and extracting entities from an image or series of images, for analysis on the basis of their part-ness. It’s less about “desk” and “chair” than it is about a particular desk from West Elm, oriented a particular way, in concert with an Eames shell chair from Herman Miller, being used in a fashion that differentiates what’s going on in my living room from a “home office.”

[Image: the Klyn/Resmini OTC model: Ontology (suchness) at the core, Topology (emplacement) as most everything else, and Choreography (systemics) as the part that people do and notice the most.]
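To make the model concrete, here is one hypothetical way OTC could be encoded as data; the field names are mine, not a spec:

```python
# One hypothetical encoding of the Klyn/Resmini OTC model: ontology (what a
# thing is), topology (where and how it is emplaced), choreography (what
# people do with it over time). Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Ontology:      # suchness: not "chair" but *this* chair
    kind: str
    make: str
    model: str

@dataclass
class Topology:      # emplacement: position, orientation, relations
    x: float
    y: float
    z: float
    heading: float
    near: list[str] = field(default_factory=list)

@dataclass
class Choreography:  # systemics: the activity the arrangement hosts
    activities: list[str] = field(default_factory=list)

@dataclass
class Thing:
    ontology: Ontology
    topology: Topology
    choreography: Choreography

chair = Thing(
    Ontology("chair", "Herman Miller", "Eames shell"),
    Topology(1.4, 2.3, 0.0, heading=90.0, near=["desk:west-elm"]),
    Choreography(["video calls", "reading"]),
)
# What differentiates "my living room" from "a home office" lives mostly in
# the topology and the choreography, not in the ontology labels alone.
```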

In the new game, even more than in today’s practice, there’s no way to talk about UX without talking about UI.

In the new game, a “place” is both the interface for and site of experience.

In the new game, information architects and digital product designers are as concerned with shallow-deep as they are with up-down and previous-next.

In the new game, what gets bracketed when a UX designer says “it depends” has more to do with changes in situatedness and emplacement over time than changes in business requirements.

In the new game, we’ll need patterns to account for how things and information relate to other things and other kinds of information in the environment, as much as we’ll (still) need maps and models to describe what people do with and want from things and information.

And to win the new game, we’ll need to supplement our design systems and component libraries with typologies of topologies: something like a modern-day Vignola (whose Regola codified the classical orders into reusable patterns) crossed with what Bill Buxton calls placeonas.


I’m curious to hear what you hear when Apple says they’re giving us GPS at the scale of our living rooms. Am I all wet? Do design systems already cover off on the stuff I think is coming / missing?

Note: special thanks to @mariekennedy on Twitter, who helped me clarify that what Apple marketing means by GPS is not global positioning system satellite data.