Latest Entries

Video: Information R/evolution

Information R/evolution is a five-minute video telling the story of the transformation from a world of categorized information to a world of living information that we all enrich continually. It’s from the same guy (Michael Wesch) and in the same style as "Web 2.0 … The Machine is Us/ing Us."

When his "Web 2.0" video came out, I wrote that

Perhaps the so-called ‘social web’ isn’t about connecting people, but about information conservation: If a person chooses to do something — no matter how small — it’s inherently interesting, precious, and valuable.

I still think that’s true, and I find more support in this new video:

Here is "Information R/evolution" by Prof. Michael Wesch:

Hat tip to the information aesthetics blog, which is a great source for "data visualization & visual design."

Web Visions 2008 Conference

I just received word that one of the better conferences around is back for another year. Web Visions, the annual event in Portland, Oregon, will be May 22-23 (Thurs-Fri).

Join the rockstars of design, user experience and business strategy for two days of mind-melding on what’s new in the digital world. Get a glimpse into the future, along with practical information that you can apply to your Web site, company and career.

Session proposals are being accepted until the end of 2007.

It’s really a lovely conference, and I recommend that you check it out if you’re in the area (note that it’s light on dev topics and heavy on design topics). I love that it’s smaller and more personable, and the friendly, thoughtful vibe that is Portland carries into the conference itself. It attracts more passionate folks instead of 9-to-5ers, and that’s a good thing. It’s also especially affordable. Registration isn’t online yet, but sign up on their site to be notified.

Perhaps I’m partial because the first conference talk of my career (First Things First: IA and CSS) was at WebVisions 2004 (thanks to Christina Wodtke).

More Info

Slipping: TechCrunch Reporting

Over lunch today I was catching up on my reading and was drawn in by one of TechCrunch’s headlines (which I saw on TechMeme.com). My interest quickly turned to disappointment because the article was poorly researched, exhibited nearly zero analysis, and sat under a sensationalist, traffic-grabbing headline that it failed to back up. I expect more from TechCrunch, and I think they owe their 598k subscribers, me included, better reporting. The #1 blog should lead us to quality and respect by example, not through sensationalism and hollow reporting.

This was going to be a comment on TechCrunch’s site, but I agree with many recent commentators that posting on one’s own blog and letting Trackbacks make the connection is the more respectful, responsible, and effective way. I’m not exactly sure why I needed to get this off my chest today, but here goes:

Mr. Schonfeld, in my opinion your article and its headline are bad journalism. I believe the data reported by AddThis is insignificant and an insufficient basis for your broad headline. You provided no context or substantiation. I feel that you’ve done your readers a disservice by publishing this article.

You report that AddThis is used “nearly 2 million times per month.” Does that seem like a lot to you? Significant? Does their data correlate with, or challenge, other available data or trends? What, exactly, gives you the confidence to warrant such a far-reaching headline?

I believe you would have done well to report on the overall market within which they are a niche. Technorati’s About Us page reports, for example, that there are 1.6mm new blog posts PER DAY (sounds like “nearly 2mm” to me); over 5mm new blogs each month; and over 100mm blogs total.
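For a rough sense of scale (a back-of-the-envelope calculation, assuming a 30-day month): 1.6mm posts/day × 30 days ≈ 48mm posts/month, so “nearly 2mm” AddThis uses per month is about 4% of that, roughly what the blogosphere alone produces in a day and a quarter.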

In addition to questions of reach, I have to question the use-case and user profile that AddThis.com enjoys. I know you have the button on your site, but can you report what percentage of your visitors interact with it? Have you cross-checked your total del.icio.us saves with the numbers AddThis reports? You have both those pieces of data, so that should be reportable.

I’m given additional pause when I notice that approximately 1 in 6 AddThis users use it to save to their native Favorites folder! Really? Why would anybody do that? You don’t need a special tool to bookmark a site in your browser; in fact, it’s much slower than any of the other available mechanisms (native menus, keyboard shortcuts, dragging-and-dropping). There’s nothing wrong with people doing that, but it doesn’t make them seem like trendsetters.

In total, I don’t see any reason to think that this article is insightful or relevant. I’m worried about TechCrunch’s integrity when such poor data and analysis lead to such a presumptuous headline.

I’ve taken the time to write this comment because I expect more from TechCrunch. You’ve earned my attention in the past, and I won’t let my silence help you shortchange yourself. I’m a big TechCrunch fan, like most of your (alleged) 598k readers, but I expect you to do much better reporting than this sensationalist rubbish. I’ll be back for your next post, and hope it’s much better.

I have two hopes. First, I hope I’ve misread or misunderstood something, and that I’ll have an opportunity to retract this entire objection. If not, my second hope is that this call to action encourages greater journalistic integrity, whether in new or old media.

Respectfully,
Nate Koechley

@Dom Vonarburg (comment #25 on TechCrunch, and a representative of AddThis): please feel free to provide the answers my comment is hunting for.

Less Than Perfect

I spend a lot of time thinking about what it means (and takes) to build a high-quality, professional web site or application. I consider the whole spectrum, from macro concepts like Graded Browser Support, Separation of Concerns, and Progressive Enhancement to micro rules like never employ href="#" and always use the label element to bind text to form controls.

It’s difficult to compose a prescriptive list of all the issues a "perfect site" must satisfy. So, lately I’ve been thinking from a different perspective. Instead of "what it takes to be great," I’ve been asking myself "what does a great site NOT do?" More specifically, I’ve been assuming a perfect site starts with 100 points and then loses points for shortcomings.

Here are a few examples that I could use to measure a site:

  • -1 point for each instance of href="#" (max of -5).
  • -5 points for redirecting old browsers to a "you must upgrade" page instead of letting them see the plain linear content at least.
  • -2 points for design degradation at +/-1 font-zoom level; -1 for degradation at +/-2 zooms.
  • -1 for each form element missing an associated label element.
  • -2 for a missing (or malformed) doctype.

It’s too early to debate the mechanics (should it be -1 or -2 points?), but I like the approach in general and am going to keep playing with it this week. One good way I’ve found to discover items for the list is to find a nice modern page, hit view-source, and give it a code review. Each thing that catches my eye probably belongs on the list, somewhere at least.
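To make the idea concrete, here’s a rough sketch of how a few of these deductions could be checked automatically against a live page (TypeScript over standard DOM APIs; the auditPage name and the point values are mine, and judgment calls like browser redirects and font-zoom degradation still need a human reviewer):

```typescript
// A rough sketch of the "start at 100, lose points for shortcomings" idea.
// Only the mechanically checkable rules are included; point values are
// placeholders, not settled numbers.
function auditPage(doc: Document): number {
  let score = 100;

  // -1 per href="#" link, capped at -5.
  const hashLinks = doc.querySelectorAll('a[href="#"]').length;
  score -= Math.min(hashLinks, 5);

  // -1 for each form control with no associated label element,
  // whether wrapping it or referencing it via for="...".
  const controls = doc.querySelectorAll(
    'input:not([type="hidden"]):not([type="submit"]):not([type="button"]), select, textarea'
  );
  controls.forEach((control) => {
    const id = control.getAttribute('id');
    const wrapped = control.closest('label') !== null;
    const referenced = id !== null && doc.querySelector(`label[for="${id}"]`) !== null;
    if (!wrapped && !referenced) score -= 1;
  });

  // -2 for a missing doctype. (Detecting a *malformed* one would take more
  // than the DOM exposes, so that part stays with the human reviewer.)
  if (!doc.doctype) score -= 2;

  return score;
}

// e.g. from a browser console: auditPage(document);
```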

I’m quickly building a longish list — and will publish it before too long — but right now I want to ask: What would you put on the list?

An Unfortunate 404

I just registered on a shopping web site (to get some staples delivered). I clicked through to skim their privacy policy because some types of shopping sites share info in ways I’m not comfortable with. One section to pay attention to is "Using Personal Information." This one was pretty standard: not great, but nothing unexpected. I was happy when I saw the following sentence/offer, as I opt out of most mailings when given the chance:

If you prefer not to receive this type of information from us, you can contact us at 1-877-723-3929 or online, click here.

So I clicked through. Dead link. "Sorry, there is no Safeway.com web page matching your request."

That’s encouraging. Thanks a lot.

(I hacked around for a bit and was able to find the correct link to update your Safeway mailing and privacy settings.)

No, Mr. O’Reilly, it’s not all back-end

Tim O’Reilly, in a nice rebuttal to the flame-up of silly "Web 3.0" noise over the last few days, gets much right. I agree with everything up until he writes:

Google is the pre-eminent Web 2.0 success story, and it’s all back-end! Every major web 2.0 play is a back-end story. It’s all about building applications that harness network effects to get better the more people use them–and you can only do that with a richer back end.

Um, no. I agree with his reminder that Web 2.0 does not equal some specific technology (ahem, DHTML/Ajax), but to say that front-end magic has nothing to do with the Web’s 2.0 resurgence, or, more specifically, that front-end technology has nothing to do with Google’s darling status, just doesn’t cut it for me.

[Oddpost and] Gmail reminded most of us that you shouldn’t need a page refresh to read your web mail, and that the improved efficiency is good, welcome, and here to stay. Sure, it’s cool that their back-end provides unlimited storage and great spam filtering, but it’s the great interface that gets people’s hearts beating and keeps them coming back for more. [G]Maps noted that in the real world you can slide a map left and right in front of your eyes, and that offering the same direct-manipulation interface online is better than the one-tile-at-a-time approach. Sure, Satellite and Hybrid views are cool, but without drag-and-drop the game hasn’t changed. (TerraServer and others offered satellite views since the late ’90s at least, but it wasn’t a game-changer.) Tags are cool, and he’s correct that Flickr is largely a network-effect play — but Flickr also showed that reducing the cost of adding tags (by not requiring a refresh) made for more tagging, and therefore more network effect. Google Docs (previously Writely) is all about complex front-end engineering.
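To make the "no refresh" point concrete, here’s roughly what that Flickr-style interaction boils down to (a sketch only: the /tags endpoint and its field names are hypothetical, but the XMLHttpRequest mechanics are the standard ones):

```typescript
// Minimal sketch: submit a new tag without unloading the page.
// The /tags endpoint and its field names are made up for illustration.
function addTag(photoId: string, tag: string, onDone: (ok: boolean) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/tags');
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.onreadystatechange = () => {
    // readyState 4 = request complete.
    if (xhr.readyState === 4) onDone(xhr.status === 200);
  };
  // Only this request leaves the page; the user keeps reading uninterrupted.
  xhr.send('photo=' + encodeURIComponent(photoId) + '&tag=' + encodeURIComponent(tag));
}
```

Compared to a full form post and page rebuild, the marginal cost of one more tag drops to almost nothing, which is exactly the network-effect lever.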

I grant that these services are made possible by increasingly sophisticated back-end systems, and that other Web 2.0 systems such as Last.fm or AdSense are fundamentally back-end plays. But to say that the world’s Web 2.0 fascination is related exclusively to clever back-end shenanigans misses the mark. My point is that you wouldn’t recognize Web 2.0 without the glamour and power of today’s front-end interfaces and techniques.

There has never been a better time to be doing front-end engineering. DHTML/Ajax is not Web 2.0, but it’s hard for me to imagine the recent resurgence without it.


