Archived entries for Design

Accessible DHTML presentation at CSUN this week

It’s been so busy lately, both professionally and socially, that I haven’t been putting any time into this blog. I’m sorry about that; I have lots of ideas swirling around in my head that I hope to write up here soon.

In the near term though, I wanted to let you know that I’ll be in LA this Thursday presenting a paper at the CSUN accessibility conference. The paper/presentation, co-authored by my colleague Victor Tsaran, has the long title, “Yahoo! Experiences with Accessibility, DHTML, and Ajax in Rich Internet Applications”. The 45 minute talk will review the current state of web development and then offer three families of techniques for making the DHTML development that’s at the heart of Web 2.0 accessible to all users.

It’s an interesting and important topic. From 1999 through 2004 the web became increasingly accessible thanks to the broad adoption of Web Standards and related modern methodologies. Since 2005, those gains have been under pressure as we all race to push the limits of what’s achievable with DHTML in capable, modern browsers. While it is a myth that DHTML is inherently inaccessible, in practice the rush jobs and rapid innovations of the day often leave accessibility as an afterthought. Additionally, as mouse-based desktop interactions — drag and drop, for example — become more commonplace online, it’s tempting to rely exclusively on mouse-based input and manipulation, which is a cause for concern to the accessibility community (and keyboard-loving geeks everywhere). The straw that often breaks the camel’s back is Ajax, whose partial-page updates frequently go unnoticed by screen readers and other assistive technology.
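One mitigation worth mentioning on that last point: the emerging WAI-ARIA work defines “live regions” that assistive technology monitors for changes, so a partial-page update can be spoken aloud. Here’s a minimal sketch of the idea (the wiring and the message text are mine, not from the talk):

```javascript
// Sketch: announce Ajax-driven updates to assistive technology via a
// WAI-ARIA live region. The helper is pure so it works on any node-like
// object; the DOM wiring below only runs in a browser.
function announce(region, message) {
  // Screen readers watching an aria-live region speak text added to it.
  region.textContent = message;
}

if (typeof document !== 'undefined') {
  var statusRegion = document.createElement('div');
  statusRegion.setAttribute('role', 'status');       // status semantics
  statusRegion.setAttribute('aria-live', 'polite');  // speak when idle
  document.body.appendChild(statusRegion);

  // After a hypothetical partial-page update completes:
  announce(statusRegion, '3 new messages loaded');
}
```

The key design point is that the announcement is a side channel: the visual update happens as before, and the live region gives non-visual users the same notification.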

I’ll post slides after the talk, and will be writing about this with Victor in an upcoming article for our Yahoo! User Interface Blog.

Yahoo! User Interface Library

It lives! I’ve been pushing and planning for this since last summer, and I couldn’t be more excited. Nor could I be happier with the response we’ve received so far from all of you. Thanks for the encouragement and all the kind words.

What am I talking about? About nine hours ago we publicly released and open-sourced two cool previously-internal libraries, a companion blog, and an article on Graded Browser Support that I authored:

Yahoo! User Interface Library – Industrial-grade JavaScript for DHTML and Ajax. The same libraries that power Yahoo! today.

Yahoo! Design Patterns Library – Our thinking and solutions on common interface design issues.

Yahoo! User Interface Blog – News and Articles about Designing and Developing with Yahoo! Libraries (rss)

Graded Browser Support (article) – An inclusive definition of support and a framework for taming the ever-expanding world of browsers and frontend technologies.

If you have any questions, let me know. I’ll be posting more details on the blog throughout the week (and ongoing), but wanted to get the links up now before bed.

For a more thorough introduction and more links, check out the first three posts on the Yahoo! User Interface Blog.

7 Characteristics of Web 2.0 Development Practices, from O’Reilly Radar

Marc Hedlund writes on the O’Reilly Radar blog about “Web Development 2.0”. In his experience, “many startups and companies seem to be developing a new set of software development practices”:

Software isn’t written for Web 2.0 companies the way it was during the bubble, nor is it written the way traditional, shipped software was. New ideas about Web applications seem to necessitate new ways of making those applications.

He reports on 7 characteristics:

  1. The shadow app
  2. Sampling and testing
  3. Build on your own API
  4. Ship timestamps, not versions
  5. Developers – and users – do the quality assurance
  6. Developers – and executives – do the support
  7. The eternal beta

He’s got a paragraph or two under each of those bullets, so I encourage you to head over and take a read. I’m a big fan of #3, and have been doing #2, #4, and #5 for years. What about you? Any to add to the list?

Same Language, New Dialect

Vivabit’s Dan Webb wrote an interesting post a few days ago that touches on two important topics, both of which I’ve been thinking about a bunch lately. His entry is called DOM Abuse Part 1: Drag and Drop, and towards the beginning he says:

As more and more JavaScript libraries add solid drag and drop support I begin to shiver. Everyone is going to be doing it soon and I’m scared.

I’m not scared – I think we’re entering a great era of web design – but I understand exactly what he means. I would categorise his two points as “the accessibility issue” and “the discoverability issue.”

First, accessibility: advanced interactions and behavior provided via JavaScript must be enhancements, not the sole way to accomplish a task. Desktop cut-and-paste offers at least three ways: keyboard shortcuts, “Edit” menu options, and drag and drop. Accessibility isn’t an optional characteristic of the Web. With what could be considered a gold rush of JavaScript development powering a big chunk of “Web 2.0”, the accessibility gains won over the last four years of Web Standards are at risk. For JavaScript, the way forward is clear – progressive enhancement, unobtrusive JavaScript, and Hijax – and championed by the DOM Scripting Task Force.
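To make the Hijax idea concrete, here’s a minimal sketch, written with today’s DOM APIs for brevity. The link works as a normal full-page request when JavaScript is unavailable; the script layer intercepts the click and fetches just a fragment instead. The `?fragment=1` server convention, the `hijax` class, and the `content` id are all hypothetical:

```javascript
// Hijax sketch: the page works without JavaScript; scripting is layered on.
// Pure helper: derive the fragment-only URL from the link's normal href.
function toFragmentUrl(href) {
  var sep = href.indexOf('?') === -1 ? '?' : '&';
  return href + sep + 'fragment=1'; // hypothetical server convention
}

if (typeof document !== 'undefined') {
  document.addEventListener('click', function (event) {
    var link = event.target.closest && event.target.closest('a.hijax');
    if (!link) return;        // an ordinary link: let the browser navigate
    event.preventDefault();   // enhanced path: swap in a fragment instead
    fetch(toFragmentUrl(link.href))
      .then(function (res) { return res.text(); })
      .then(function (html) {
        document.getElementById('content').innerHTML = html;
      });
  });
}
```

The point of the pattern is that the href is the single source of truth: the enhanced path is derived from the accessible one, so the two can’t drift apart.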

His second point I’d summarize as the “discoverability issue”. It’s definitely an issue, but it’s also a symptom of a larger overarching issue, what I call “the low expectations issue.” Here’s what he says:

Drag and drop is not a method of interaction you see on the web (at least at the moment) and as such you do really need to be told when to do it. That’s not good. I’m not used to reading what’s on the screen. How are we supposed to know when to and when not to try it?

It’s not that the feature isn’t discoverable (though it could certainly be aided by some visual affordances), it’s that he’s not expecting it to be there! On the desktop there are minimal cues because we expect it to just be there, and often don’t need to be told.

In my opinion (with a hat-tip to colleagues Eric Miraglia, Bill Scott, and others), this is a primary design challenge of the day. It’s not just about adding visual affordances, it’s about something bigger. It’s about raising overall expectations in a careful, purposeful, we’ve-got-one-chance-to-get-this-right type of way.

Since the beginning, we’ve been lowering expectations of what’s possible in the browser compared to other desktop software. No double-click in the browser. No drag-and-drop in the browser. No right-click, context menus, auto-save, auto-complete, full screen, minimize, layers, spell-check, not even many tooltips.

More broadly: no direct manipulation, no immediate feedback, and no persistence in the browser. On the desktop, we learn by experimenting. In the browser, users have stopped exploring because there hasn’t been a reason to explore, nothing to find. (To make matters worse, every click has traditionally meant many seconds of page teardown and replacement.)

It’s not that we didn’t want to provide a familiar experience, it’s that the technology wasn’t really available in the browser. That’s not true anymore.

But, the availability of new technology isn’t a cure, or a reason to believe we’ll make a successful transition.

Being able to do something does not mean that we should do something. Why is more important than how. Using animation to create 2006’s version of the 1999 Flash splash screen isn’t a why, it’s a “because we can” (and a bad idea). On the other hand, using animation to ease transitions, provide user feedback, maintain user orientation, and promote learning of new idioms are four good reasons why.
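To illustrate animation-as-feedback rather than decoration, here’s a sketch of a fade-in for a just-updated element, built on a pure ease-out curve. The function names and the element id are mine, not from any particular library:

```javascript
// Sketch: animation as user feedback. An ease-out curve decelerates the
// motion, which reads as "settling into place" rather than grabbing attention.
function easeOut(t) {
  // t is elapsed progress in [0, 1]; returns eased progress in [0, 1]
  return 1 - (1 - t) * (1 - t);
}

// Pure helper: the opacity of a fade-in at a given moment.
function fadeOpacity(elapsedMs, durationMs) {
  var t = Math.min(elapsedMs / durationMs, 1);
  return easeOut(t);
}

if (typeof document !== 'undefined') {
  // Browser wiring: fade a freshly updated element in over 300ms.
  var el = document.getElementById('updated-row'); // hypothetical id
  var start = Date.now();
  (function frame() {
    el.style.opacity = fadeOpacity(Date.now() - start, 300);
    if (Date.now() - start < 300) requestAnimationFrame(frame);
  })();
}
```

The same curve serves orientation too: a short, decelerating motion tells the user where the new content came from without making them wait for it.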

But knowing why isn’t the same as doing it well. When I say that we’ve got one chance to get it right, I mean this: if we bring the rich interaction patterns of the desktop to the browser in a recognizable, comfortable, complete, and appropriate way, users will break through their doubts and quickly transfer their desktop experience into the browser. We won’t have to put big neon signs on our sites saying “drag here”. If we get it right, users will just assume.

On the other hand, if we don’t get it right, if we’re spotty, if we don’t keep the façade intact, then the illusion will not stick. If we make too many missteps, if we leave too many gaps, then the nearly free “user education” and the potential parity of expectations will be gone again. I’m not scared by drag and drop; I’m scared that if we miss this chance to bring richness to the browser, users’ expectations won’t just be low, they’ll be shattered.

To be clear, it’s not about replicating the desktop in the browser. They’re different environments. Instead, we want to take the idiomatic language users already understand and express it within this new environment. Same language, new dialect.

Most Underrated API? The Yahoo! Term Extractor

There’s a million APIs out there, and I couldn’t be happier. It’s easy now to translate street addresses to lat/long coordinates. It’s easy to grab local results, and overlay them on a map. It’s easy to use Yahoo or Google to get all types of search results (local, images, etc), and sites like Amazon to get prices and products.

But I think one of the coolest and most underrated APIs is the Term Extractor API from Yahoo!. In short, you point it at a piece of content — a news article, blog post, movie review or whatever — and it returns a list of terms, or keywords (or “tags” for those of you keeping score at home).

What do you do next with a list of keywords from a piece of content? Well, lots of things. Jeremy Keith wrote yesterday about a few ideas (that seem up for grabs, if you’re in a hacking mood!).

What if you treated each returned term as a tag? You could then pass those tags to any number of tag-based services, like Flickr or Technorati.

So, instead of the simple “here’s my Technorati profile” or “here are my Flickr pics” on a blog, you could have links that were specific to each individual blog post. If I sent the text of this post to the term extractor, it would return a list of terms like “api”, “yahoo”, etc. By passing those terms as tags to a service like Technorati, readers could be pointed to other blog posts and articles that are (probably) related.
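A sketch of that idea: map the extractor’s term list onto per-post tag links. The Technorati tag URL pattern below reflects how its tag pages worked at the time; treat it as illustrative rather than current:

```javascript
// Sketch: turn a list of extracted terms into tag links for an external
// tag-based service. The default base URL is Technorati's tag-page scheme
// of the era, given here purely as an illustration.
function tagLinks(terms, base) {
  base = base || 'http://technorati.com/tag/';
  return terms.map(function (term) {
    return base + encodeURIComponent(term.toLowerCase());
  });
}
```

Because the base URL is a parameter, the same term list can be pointed at any tag-based service without changing the extraction step.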

Like he suggests, it gets interesting when you let the output from this web service be the input for another service. I was lucky enough a few months ago to lend a small bit of help to the team that brought you the Yahoo! Events Browser mashup. One challenge of that product was to get images associated with each event. If you’ve ever worked with unstructured data — event listings are super unstructured — then you know that they don’t provide many high-quality hooks for understanding their content. The team tried doing image searches on venue or artist name, but the results weren’t very relevant or interesting, even when the parsed venue or artist was accurate. So, being the put-lots-of-pieces-together types they are, they decided to use the Term Extractor to discover more accurate, meaningful, and specific query terms to search images for. Here’s how they summed it up:

To display appropriate images for events, local event output was sent into the Term Extraction API, then the term vector was given to the Image Search API. The results are often incredibly accurate.
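That two-step pipeline is easy to sketch as a composition of the two services. Here the service wrappers are injected as functions, so the shape of the flow is clear without committing to any particular API client (`extractTerms` and `imageSearch` are stand-ins, not real library calls):

```javascript
// Sketch: compose Term Extraction and Image Search. The two service
// wrappers are passed in as promise-returning functions, so any
// implementation (or a test fake) slots in.
function imagesForEvent(eventText, extractTerms, imageSearch) {
  // 1. unstructured event listing text -> term vector
  // 2. term vector -> image search query
  return extractTerms(eventText).then(function (terms) {
    return imageSearch(terms.join(' '));
  });
}
```

The value of the composition is exactly what the team described: the extractor distills noisy listing text into a focused query, and the image search only ever sees the distilled version.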

I’ve only seen a handful of implementations of the Term Extractor API so far. If you’ve got a cool one to point me to, or a cool idea for a future implementation, please leave ‘em in the comments below.

wg:List – Best Web Development Articles of 2005

Alessandro Fulciniti reported his Top 20 Bookmarks of 2005 on the Web-Graphics blog. Some great stuff, in particular On having layout (a must-read for anybody trying to get CSS to work in browsers). If you’re doing web development or design, I recommend being familiar with all 20 on his list.

San Francisco, California | Creative Commons By-2.5 License | Contact

RSS Feed. This blog is proudly powered by Wordpress and uses Modern Clix, a theme by Rodrigo Galindez.