
GestureBank

Written by Steve Gillmor, Contributor

A few nights ago a few of us were gathered together by Yahoo to preview some new RSS tools. As is customary at these events, the Yahoo executives chatted us up. Scott Gatz, the company's lead RSS guru, refused to tell me what the announcement was about until I pointed out the press release I'd been handed at the door. Mike Arrington and John Furrier hovered over their machines, counting down the minutes until the 9PM embargo expired. Dave Winer and Om Malik got to know each other in person for the first time.

The senior Yahoo was Geoff Ralston, chief product officer. After some preliminary fencing (are services products? yes) Geoff looked for feedback about Yahoo's aggressive RSS stance. I gently walked him from RSS to attention, then dropped the bomb: Now that everything is about attention, we're on the move to the Gesture Economy.

Geoff covered it well, but lurking behind the slightly tensed brow was the deer-in-the-headlights look--a little fear, but a lot more excitement. Excitement because, if he just played it right, we'd hand Yahoo the ideas on a silver platter. Fear because what is free is never fully appreciated.

Of course, nothing is for free, really. Gmail is free, but at the cost of your metadata. Search is free, but at the cost of a tactical answer, not a strategic one. Which result you choose is the payment, setting off an event chain that sometimes results in action and monetization. The metadata--which item you choose, the fork in the road you take--is captured but not shared. The result: an opportunity cost lost to the Google or Yahoo or MSN silo. The cost: time not saved.

Follow the breadcrumbs for a minute. If a gesture is not shared, what is lost? The network effect, for one. GestureRank for another. What? GestureRank--the price the market will bear for harvesting the authority of a particular gesture. Remember: this is a post-attention world we're living in. Just as the RSS wave triggered an embarrassment of riches and a triage cost, the Attention wave triggers an authority architecture and corollary characteristics. If PageRank crystallizes link authority, GestureRank crystallizes gestures of intent and, crucially, the lack of intent.

Attention to something is valuable, but in a world of too much information divided by the time to consume a portion of it, signaling a lack of attention is more valuable. By that construct, gestures of inattention will fetch a greater price, and purveyors of gestures of indirection or redirection will gain inordinate value compared to domain experts. Deriving GestureRank is therefore a function not only of who the gesturer is, but of the nature (or type) of the gesture and the person, group, or domain it is directed toward.
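That three-factor framing--who gestured, what kind of gesture, aimed at what--can be pictured as a toy scoring function. Everything below is an illustrative assumption on my part: the gesture types, the weights, and the names are hypothetical, since no vendor has published a GestureRank algorithm. Note how the sketch encodes the argument above by weighting inattention gestures higher than attention gestures.

```python
# Toy GestureRank sketch. All names and weights are hypothetical;
# the column specifies no algorithm.
from dataclasses import dataclass

# Gestures of inattention (ignored, unsubscribed) are weighted *higher*
# than gestures of attention, per the argument that signaling a lack of
# attention is the scarcer, more valuable datum.
GESTURE_WEIGHTS = {
    "read": 1.0,
    "saved": 2.0,
    "shared": 3.0,
    "ignored": 4.0,       # explicit inattention
    "unsubscribed": 5.0,  # strongest redirection signal
}

@dataclass
class Gesture:
    gesturer_authority: float  # authority of the person gesturing, 0..1
    kind: str                  # one of GESTURE_WEIGHTS
    target_relevance: float    # relevance of the target domain, 0..1

def gesture_rank(g: Gesture) -> float:
    """Score = who gestured x what the gesture was x what it pointed at."""
    return g.gesturer_authority * GESTURE_WEIGHTS[g.kind] * g.target_relevance

stream = [
    Gesture(0.9, "read", 0.5),
    Gesture(0.9, "unsubscribed", 0.5),  # same person, same target: scores higher
    Gesture(0.2, "shared", 0.8),
]
for g in stream:
    print(g.kind, round(gesture_rank(g), 3))
```

Under these assumed weights, an authoritative reader unsubscribing outranks the same reader merely reading--the "lack of intent" signal the paragraph above prizes.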

The Gesture Economy's power derives from its obedience to the time constraints of the user-in-charge. The key to understanding the inevitability of this transformation is the profound effect the gesture dynamic has on the content it refers to. Where current information is created in a broadcast, attention-seeking environment, gesture-triggered data is generated as a result of multiple proffered contracts with users. It's the opposite of invasion of privacy: the invitation of privacy--a request for proposal, complete with cues as to how to prioritize the inflow of information to deliver the most time-efficient transfer.

This is the podcasting fundamental--not the triumph of the amateur but the intersection of the receiver in the conversation that is the material being created. It is neither better nor more valuable than its predecessors, but simply unique in its low barrier to entry and targeted economic reason for being. Gestures become inextricably interwoven with so-called content, creating a fabric of intelligence, emotion, and humor that is difficult if not impossible for audiences to resist. Why do we like comedies at the movie house? To enjoy the laughter of the idiots next to us.

Shared laughter efficiently reveals the power of gestures. All around us we hear and generate the sounds of humor--the chuckle of recognition, the cackle of just deserts, the snort of derision mixed with self-knowledge, the humanity of it all. It's jazz, isn't it: the improvisation we all want to sit in on. Let's try an experiment: when you wake up in the morning, what's the first thing you click to get up to speed? Email? OK, then what? RSS? News? Memeorandum? Scripting News? Om Malik? CNET? PaidContent? CrunchNotes? Battelle? Email newsletters? Back in email and around we go again.

Now let's clear the slate and start over with gestures. Examine your inputs from an attention perspective and you'll see most if not all sources are based on attention fundamentals. Google Desktop's Web Clips are my current favorite example--click the Options link and you are offered the opportunity to "Automatically add commonly viewed clips." From the moment you check the box, Web Clips tracks not just what you drill into in the Web Clips menu but everything you read regardless of reader, interface, attention metrics, whatever. It is a blunt instrument, but it has built-in amplification of what you pay attention to across your environment.

Interestingly, it amplifies gestures more than raw attention. Take my vanity feeds, scoped to "Gillmor" across Technorati, PubSub, and Google. When Alex Barnett cites me in his OPML/Attention posts, I pick them up in a number of places--Rojo, Web Clips, even the occasional Google Reader sojourn when I am in search of a quick fix without disturbing my bookmark in Rojo's river of news view. But Web Clips records all of these inputs and slowly but surely subscribes in effect to Alex's feed, regardless of whether he cites my name.
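What Web Clips' "automatically add commonly viewed clips" behavior might look like under the hood can be sketched as a simple frequency threshold over the sources of items you read. This is strictly my own illustration--Google never published the actual logic--but it captures the effect described above: read Alex Barnett often enough, anywhere, and his feed is promoted into the rotation.

```python
# Hypothetical sketch of implicit subscription: count read gestures per
# source feed and "subscribe" once a feed crosses a threshold. The real
# Web Clips implementation is not public; this is illustration only.
from collections import Counter

class ImplicitSubscriber:
    def __init__(self, threshold: int = 3):
        self.reads = Counter()   # source feed -> items read
        self.subscribed = set()  # feeds promoted into the clip rotation
        self.threshold = threshold

    def record_read(self, feed: str) -> None:
        """Record a read gesture; promote the feed once it is read often enough."""
        self.reads[feed] += 1
        if self.reads[feed] >= self.threshold:
            self.subscribed.add(feed)

sub = ImplicitSubscriber(threshold=3)
# Reads arrive from any reader or interface; only the source feed matters.
for feed in ["alexbarnett", "scripting", "alexbarnett", "alexbarnett"]:
    sub.record_read(feed)
print(sub.subscribed)
```

The bluntness the paragraph above concedes is visible here: the counter sees only repetition, not why you kept coming back.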

No matter how coarse the gesture modeling, Web Clips gains traction and mind share because it is more responsive to gestures than other engines. Rojo captures gestures such as "read," "saved," and "shared," but the river of news view's relative opaqueness to read characteristics maintains an echo chamber more than it flushes out emergent trends. Google Reader maintains a black-box relevancy setting that is interesting to monitor, but ultimately inefficient because there is no way to determine what gestures it responds to. It is likely a product of applying PageRank to subscriptions and item consumption, but the net result is a one-way attention filter with no incentive for gesture production.

But even the opaque nature of Google Reader's relevancy output gains value when farmed for the effects of gestures, no matter how ambient. Web Clips' auto setting harvests Google Reader discovery when it is invested in. If your gestures could be recorded while using Google Reader and then played back as inputs to a subscribed gesture feed, the benefit, if any, of whatever algorithm package is applied could be measured. In other words, sharing gesture streams can provide a salutary prioritization and discovery dynamic regardless of the identity of the original gesturer. In effect, a gesture signature can be distributed without violating the privacy of the author, harnessing the network effects without revealing proprietary information.
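The claim that a gesture signature can circulate without exposing its author can be sketched as reducing identity to an opaque token before publishing the stream. The scheme below--a salted hash of the user id, events stripped down to item, gesture, and timestamp--is my own assumption, not a protocol anyone has specified.

```python
# Sketch: publish a gesture stream under an opaque signature so the
# gestures carry network value without exposing the gesturer. The
# salted-hash scheme is an assumption, not a described protocol.
import hashlib

def gesture_signature(user_id: str, salt: str) -> str:
    """Stable opaque token: the same user always maps to the same signature."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def anonymize_stream(user_id: str, salt: str, events):
    sig = gesture_signature(user_id, salt)
    # Keep only what downstream consumers need: item, gesture, timestamp.
    return [{"sig": sig, "item": e["item"],
             "gesture": e["gesture"], "ts": e["ts"]} for e in events]

events = [
    {"item": "feed://alex/opml-attention", "gesture": "read", "ts": 1},
    {"item": "feed://alex/opml-attention", "gesture": "shared", "ts": 2},
]
shared = anonymize_stream("steve@example.com", "secret-salt", events)
print(shared)
```

Because the signature is stable, consumers can correlate one gesturer's stream over time--the network effect survives--while the identity behind it stays private.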

This also raises the possibility of gesture modeling based not on what the user generates but what the user would like to have generated based on the resultant value of the information. A gesture stream could be played back and annotated, edited, or compressed to improve its value for sharing and remixing with other streams. As these performances approach the boundary between the original content and the shared context of the metadata, the unique mashup that results has correspondingly higher value and therefore GestureRank.

I'll be expanding on the Gesture Economy at Syndicate on the 14th.
