The new video age is going to require radical attention conservation tools to manage
by Vinay Gupta • January 18, 2009 • Everything Else, Personal
Twitter and video blogging represent opposite ends of a spectrum.
I follow something like 100 people on twitter. It takes about ten minutes a day, maybe a little more if I’m using the web interface. Already several very interesting things have come out of that investment of attention – a larger sense of a social circle in London, for example.
On the other extreme there is video. Video is a real-time medium – one minute per minute. Audio is the same way, but podcasting is halfway to heaven: you can listen while doing something else. Video is the dominant real-time medium.
There is no way I could follow 100 people who were video blogging. At one three-minute post every other day from each person, I would spend two and a half hours a day just watching friends’ video blogs.
Twitter is poor media. It’s textual, short, low-bandwidth, microblogging of microcontent. The richness of the twitter experience comes from context cues. Here are the context cues I can recognize from the twitter stream:
* Time that posts are made
* Frequency of posting from a given poster (are my friends busy?)
* Location data giving event cues (“I’m watching Fred on stage at this cool conference”)
* Linkage
* Retweets and retweet frequency
So although the total textual content of a single person’s tweet stream is perhaps 1k per day, once we add the necessary context to the stream we begin to have quite a bit of real information about what is going on.
Now video is quite another question entirely. One of the most influential films I made was an hour of two talking heads recorded from Skype. By ordinary standards this film was utterly unwatchable, but for a specific group of people, inside the context of a given community, it was a huge deal: Marcin’s work went from being poorly understood and extremely opaque to being a fairly well-understood perspective on reality leading to a rational plan to engineer artifacts. Perhaps a few dozen people watched that film in its entirety but those people explained and supported and communicated and connected in ways that really contributed to putting Open Source Ecology on the map.
That was a deep learning experience for me – the impact of a film is not about the size of the audience, it’s about what gets done. But it takes a solid hour or two of effort to watch that film and think about what is in it, then more time to follow up on the web sites to get some sense of the reality of what is being done over there and how it all fits together. And the newer media from OSE largely supersedes the early film. It was an artifact of a time and place.
What I’m getting at here is the difference between peripheral attention and completely concentrated attention. Twitter is how I map a broad space in the same way that I might stand by the bar at a party and survey the room. It gives me a little data on a lot of people, and if something interesting is going on I can choose to react and investigate. On the other hand, a long film produces a totally different understanding of what is going on. A lot of this is personality cues and context from people’s facial expressions, body motions, emphasis and vocal tone, flow of thought over an extended period, casual storytelling behavior and so on. That’s the equivalent of being drawn into a conversation and listening to other people discuss something in depth.
Douglas Hofstadter talks about “parallel terraced scan” as an algorithm for search: look at everything, but look at the more promising stuff first. My twitter feed is an on-ramp – you can follow Gupta in less than five minutes a day, and I’m easy to tune out on Twitter.
On the next level in, you can read this, the blog. Large slabs of thought with long gaps, but you can scan for what you’re interested in. It’s probably an hour or two a week to read everything if I’m in a prolific mode.
Then there’s my AV stream. In the past week or so I think I’ve posted about an hour of video and maybe five hours of audio. Now we’re getting away from reportage that I wrote up after an event, or an article I wrote – you’re no longer dealing with artifacts I spent time to create – but now you’re following me around. At this point, you’re my real-time companion – there’s not much way for you to accelerate your uptake of my audio stream, say, and that goes double for the video.
I could edit hard – and that would take a ton of time – and try to give you some more scope to skip forwards and back with indexes – but now you’re in terrain where it’s going to take you so much time to absorb what I’m emitting that you’ve got to be very, very motivated: unless you’re seriously looking at using my patterns and models in serious conditions where your comfort and safety might depend on my thinking, you’re not going to follow that stuff.
I think that this approach needs to be modeled and ratified: we all need to be thinking about Attention Conservation on behalf of our readers, and structuring our feed patterns to allow people to consciously choose what level of monitoring they want to do of our output, including sectoral analysis.
I think that this kind of modeling – a shared “pattern language” for feed structure – might be a really significant development in the “always-on economy.” I want the bits of your twitter feed that tell me when things are going on in London, and I want to know major life developments, and the OMG THIS CHANGES EVERYTHING news links you post. And I want that from about 50 people. Past that, I’m working to consume feeds, stretching my brain’s limited ability to store other people’s life narrative and so on.
Now this context begins to make sense of some things for me in terms of how to structure project data – it’s a tiered, terraced, tagged-and-sectoralized scan matrix – you can burrow in through the layers to actually see your way to the bottom of the stack if you need to but otherwise you just sift over the surface waiting to see an event you need to respond to…
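As a toy sketch of that layered scan (this is a filter cascade in the spirit of the idea, not Hofstadter’s actual algorithm, and all the names and scores below are invented for illustration): read the cheap surface tier for everyone, and only pay the cost of deeper, more expensive tiers for the people who still look promising.

```python
# Toy terraced scan over feed tiers. Cheap tiers (twitter) are scanned for
# everyone; expensive tiers (video) only for items that stayed interesting.
# All people, tiers, and scores here are invented example data.
FEED = [
    {"id": "fred",  "tiers": {"twitter": 0.9, "blog": 0.8, "video": 0.7}},
    {"id": "alice", "tiers": {"twitter": 0.2, "blog": 0.6, "video": 0.9}},
    {"id": "bob",   "tiers": {"twitter": 0.1, "blog": 0.3, "video": 0.2}},
]

def terraced_scan(items, tiers, threshold=0.5):
    """Burrow down through tiers, keeping only items that clear the
    interest threshold at each level; deeper tiers see fewer items."""
    survivors = items
    for tier in tiers:
        survivors = [it for it in survivors if it["tiers"][tier] >= threshold]
    return [it["id"] for it in survivors]

print(terraced_scan(FEED, ["twitter", "blog", "video"]))
# only "fred" clears every tier
```

The point of the shape is that the expensive inner tiers are only ever evaluated for the small set of items that survived the cheap outer ones – sifting the surface, burrowing only when warranted.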
What I’d like to see is some kind of XML markup one could attach to web pages and other data resources.
It would have a series of feeds – RSS / ATOM / what-have-you feed URLs – and markup making estimates of four things:
1. Minutes per month required to consume this feed
2. Number of events per month on this feed
3. Anticipated bandwidth for a month of this feed
4. Richness of embedded metadata on this feed (geography, time, tags) – that is, what metadata normally lives on this feed that I might use to skip the bits I don’t care about.
This kind of feed profile could allow me to actually set my level of follow with software rather than manually managing it across multiple tools – twitter for the cloud, facebook, RSS readers, email lists and so on.
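As a rough illustration of what such a profile might look like – every element and attribute name below is hypothetical, a sketch rather than any real standard – here is a made-up fragment of the markup plus a minimal Python reader that picks feeds to fit a monthly attention budget, cheapest first:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed-profile markup. The element names, attributes, and
# numbers are invented for illustration; nothing here is a real spec.
PROFILE_XML = """
<feedprofile>
  <feed url="http://example.org/twitter.rss"
        minutes-per-month="300" events-per-month="900"
        bandwidth-kb="150" metadata="time,tags"/>
  <feed url="http://example.org/blog.atom"
        minutes-per-month="480" events-per-month="8"
        bandwidth-kb="400" metadata="time,tags,geography"/>
  <feed url="http://example.org/video.rss"
        minutes-per-month="1800" events-per-month="12"
        bandwidth-kb="900000" metadata="time"/>
</feedprofile>
"""

def feeds_within_budget(profile_xml, minutes_budget):
    """Return feed URLs, cheapest first, that fit a monthly time budget."""
    root = ET.fromstring(profile_xml)
    feeds = sorted(root.iter("feed"),
                   key=lambda f: int(f.get("minutes-per-month")))
    chosen, spent = [], 0
    for f in feeds:
        cost = int(f.get("minutes-per-month"))
        if spent + cost <= minutes_budget:
            chosen.append(f.get("url"))
            spent += cost
    return chosen

print(feeds_within_budget(PROFILE_XML, 900))
# the twitter and blog feeds fit a 900-minute month; the video feed does not
```

This is the “set my level of follow with software” move: the profile declares the cost up front, and the reader decides mechanically what fits, instead of me juggling it by hand across twitter, facebook, RSS readers and mailing lists.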
I think that this kind of approach is going to be very necessary in the next year or two as the entire damn world starts putting up videos. We’re really seriously approaching some kind of crossing point for people posting video of their activities online, and the time implications of moving from a two-paragraph blog post that takes seventeen seconds to read to a three-minute video of essentially the same content are enormous.
I need tools to collate and collage friends’ video feeds. I need local copies scraped off RSS feeds to watch on trains. I need filters and properly timecoded transcripts. Really what I’m saying here is this:
* Twitter
* Blog
* Blog with frequent rich media links / looonnnng articles
* Video Blog
* Real-time recordings of performances, talks and events you attended
It’s a tiered stack, consuming more and more time as I move further in.
Closing point: gimme a one hour video biography of you that I can watch and then never watch another video – hell, hire a professional to make one. Save me two months of talking to you to understand what you’re about. Save yourself all that time dealing with people who have no clue about who you are. Personal Video Biographies: It’s the next big, big thing.