Ok, so I know I haven't really explained a lot about this past year, but I feel like posting about something current. Beware: this post is about four pages' worth of text. Avoid it if you're just skimming through your RSS feed. :)
This summer IdeaTree Design, the start-up non-profit I'm working for, is designing the user interface for an automated TB-testing device being developed by another group of students on Olin's campus. As a result, I've been talking and thinking a bit about interface design, so here goes my ramble.
In order to make sense of everything that was going on in my head, I decided to pretend that I was responding to someone who has asked me "how I would teach interface design to someone else." So here I go.
Now, this is probably going to be a bit difficult to follow because I’m going to be talking about two different things at the same time.
- First, I'm figuring out what exactly I know about interface design by working out how I would teach it to someone else
- While I do this, I can draw connections between that explanation and the fundamental concepts of interface design I feel I understand
Here goes. If I had to teach someone the things that come to mind about interface design, I would start with some background research on who my "pupils" are. For example, teaching kindergarten is radically different from teaching college students, and I don't mean content differences (those are obvious); I mean a different approach. With college students, it's much more efficient to assume they understand the material and do the work. In kindergarten, you can't just assume everyone learned the last lesson in the alphabet and move on, hoping that anyone who doesn't understand will speak up. A confused kindergartener stays confused until you "extract" their confusion, and gets sad or distracted; a college student can recognize being confused and work harder or ask questions. So, say I was trying to teach interface design to some subset of the population: 30-to-40-year-old office employees who interact with computers on a regular basis. Now I need to do some background research. What do these people think about on a regular basis? What's the best way to teach them anything? Well, here are some facts:
- They are fundamentally tied to their 9-to-5 schedule. If you were to go in and "teach them" something, it had better be relevant to what they're doing.
- They aren't students, so assuming they'll be excited about learning interface design is a fatal mistake.
- They interact with computers, but many learned to do so only recently; at the least, they're not as proficient as your average tween MySpace addict.
I could go on, but let’s see what we’ve got. Well, it seems to me that a logical approach would be to “teach” them something relevant to their job and day-to-day happenings instead of teaching them about something they won’t care about. So, why not teach them something about computers that indirectly teaches lessons about interface design? Now you’re probably asking, why the heck do they need to know anything about interface design?! Well, to be honest, they don’t. This is just a helpful way to make me think about what I know personally.
Here's what I would "teach" them. I would talk about things like shortcuts, and how you can use a computer really well without ever touching the mouse. Many 35-year-old office employees still find using a computer to be a memorized process rather than second nature, so teaching them the joys of shortcuts and speeding up their overall interaction with the computer is worthwhile. They can learn to open My Documents, then open, save, and print a quick document, all from the keyboard. They can shut down their computer, reset their volume, or dim their screen the same way. After that, I can start to show them how to customize the computer to suit their needs. Once they've seen an example, I think they'll realize how much they can do to optimize their interaction with the machine. Applications like Launchy, Firefox (yes, some have no idea what this is... *tear*), multiple virtual desktops (I use VirtuaWin v3.1), Google Desktop widgets, Yahoo widgets, browser toolbars, and plenty more can be downloaded and tailored to their specific needs. The important thing is that THEY have to own their learning experience. They have to download the programs they want (I can explain what some of them are, if necessary), decide whether those programs are useful, and customize as needed. I can show them how neat my shortcuts and Firefox plug-ins are, but at the end of the day they have to do it themselves on their own computers. Ideally, they come to understand widgets, plug-ins, shortcuts, and the rest well enough to realize and accept that they own their interaction with the computer.
Many people in their situation interact with computers as if the computers are forcing them to do so. The trouble most people in this group have is frustration that the computer won't cooperate, when in reality they just need to figure out how to make the computer serve their own wishes. I know this seems odd, but approaching a computer problem logically, using Help and Support and the internet, is much better than banging your head against the machine for hours and then asking your kids to fix it. The reason I chose to "teach" my imaginary office employees about computers is that I figured thinking about how I would teach someone to interact better with a computer would help me think about interface design in general. So maybe these imaginary pupils of mine didn't REALLY learn about interface design. Maybe all they realized (ideally, of course) is that they can own their relationship with a computer, which is, in a sense, grasping the core value of interface design without thinking about it: making the product/device interact with the user instead of having the user interact with it.
This brings me to the second part of what I mentioned a while ago: the "connections to interface design" I would realize while thinking about my pupils. I believe the ideal user interface is one where the device interacts with the user, not the other way around. If the user has to interact with the device, it isn't 100% intuitive. This is obvious, and nothing is completely intuitive, right?
Assume the user knows they can:
- Expect the computer to learn their mannerisms and protocol
- Expect everything to be modifiable, and take no cumbersome status quo as permanent
If the user understands this, they can feel free to experiment with the computer and really make it their own. Picture a computer that scans the space in front of it to determine who the user is, then learns to recognize that user's hand signals and speech patterns. Imagine writing an email out loud, as if dictating to another person. Instead of typing and backspacing over errors, the computer would (eventually, perhaps) be able to handle the following speech:
"Hmm, okay, let's start with: Hi Dave, how's your day been? I know we haven't talked in a while. No... that's dumb. Get rid of that last bit... Okay... let's try again..."
That sounds like something a person would actually say, and the computer would recognize when the speaker shifts from "thinking aloud" to dictation, as in "let's start with: Hi Dave...". The computer could treat "let's start with" as a trigger phrase for when that specific user wants to dictate, but it wouldn't REQUIRE the user to say it; ideally it would track intonation and eventually work out what the user wants. It would also be smart enough to ask the user when it isn't sure, instead of doing nothing, and it should offer some standard operations, such as "Computer, do X" or "Computer, open X." That way, if the computer doesn't recognize what a user means by "Google microbiology" or "Rocket Pitch PowerPoint," and the user doesn't want to bother "teaching" it, the user can fall back on the standard "Open Firefox, Google search: X" or something similar.
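To make the trigger-phrase idea a little more concrete, here's a toy sketch in Python. Everything here is hypothetical (the trigger list, the command table, the function name); a real system would work from speech and intonation, not string matching, but the decision logic is the same: dictation trigger first, standard command second, and ask when unsure.

```python
# Per-user trigger phrases that switch from "thinking aloud" to dictation.
TRIGGERS = ["let's start with", "okay, write this down"]

# Standard fallback commands every user can rely on.
COMMANDS = ["open", "google search"]

def interpret(utterance):
    """Classify a spoken utterance as dictation, a command, or a request to clarify."""
    text = utterance.lower().strip()
    for trigger in TRIGGERS:
        if trigger in text:
            # Everything after the trigger is treated as dictated content.
            start = text.index(trigger) + len(trigger)
            return ("dictate", utterance[start:].lstrip(" :"))
    for verb in COMMANDS:
        if text.startswith(verb):
            return ("command", (verb, utterance[len(verb):].strip()))
    # When unsure, the computer should ask rather than do nothing.
    return ("ask", "Sorry, did you want me to write that down?")

print(interpret("Let's start with: Hi Dave, how's your day been?"))
print(interpret("Open Firefox"))
print(interpret("Hmm, no, get rid of that last bit."))
```

The point isn't the string matching; it's the fallback order. The user's personal habits are tried first, the universal commands second, and a clarifying question is always the last resort instead of silence.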
In any case, I realize this sort of thing is best explained in a conversation instead of on paper, so I will wrap this up. I know this is a very long, confusing, and bizarre “blog” post, but if you do have questions or just would like to flip out and go crazy, comment! :)

2 comments:
Cute, Marco. But computers are much more efficient when they use new (i.e., non-intuitive) methods. Let's add some real numbers.
Conversations are carried on at around 200 WPM. Someone who can type decently gets around 60 WPM. Well, damn, let's go with speech? Maybe not. Conversations are hugely inefficient. I'd be shocked if there were 100 WPM of real information there. Shocked. Slideshow presentations go at about 100 WPM, and those are memorized ahead of time. I'm going to be generous and guess 90 WPM for speech.
Still way faster than typing, until I start using shortcuts. Four keystrokes get me my e-mail address, two get me a dividing line of dashes, etc. Way faster. Also, computers need to know more than just what you want written. I doubt you could build a computer that consistently knows when you mean italics, bold, underline, combinations of those, or size differences. These settings can be handled quickly from the keyboard if you're an experienced user.
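The back-of-envelope comparison above can be written out. The WPM figures are the commenter's own estimates, and the 20-character address and 5-keystrokes-per-word figures are my assumptions, just to make the shortcut point concrete:

```python
# The commenter's estimates: useful information in speech vs. decent typing.
speech_wpm = 90
typing_wpm = 60

# Hypothetical shortcut: 4 keystrokes expand to a full e-mail address.
# Assume the address is 20 characters when typed out in full.
address_chars = 20
shortcut_keystrokes = 4

plain_keystrokes = address_chars  # typing the whole address by hand
speedup = plain_keystrokes / shortcut_keystrokes

print(f"Speech vs typing: {speech_wpm / typing_wpm:.1f}x faster")      # 1.5x
print(f"Shortcut vs typing it out: {speedup:.1f}x fewer keystrokes")   # 5.0x
```

So raw speech beats raw typing by only about 1.5x under these estimates, while a text-expansion shortcut beats typing the same content by 5x, which is the crux of the comment's argument.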
Rather than fearing computers' limitations, we should embrace their strengths. As old limitations become a thing of the past and new strengths arise, we must adapt quickly. The QWERTY keyboard is a great example of humanity failing to adapt.
Intuitive mappings only make sense if we want to transfer old abilities to a new context; if we want to surpass them, we must learn something new.
Well, the thing is, I don't envision this pretend ideal computer necessarily dropping keyboard functionality. I can see typing being faster with a keyboard, but the thing about those shortcuts of yours is that they don't work for the average computer user.
Most people don't bother with shortcut mappings. Eventually, once most computer users have grown up with computers, this will be less of an issue, but right now most people don't even use Firefox tabs, and for many, Ctrl+C for copy is still news.
I agree about the QWERTY keyboard, but again, I don't think dropping the keyboard is necessarily the way to go. Granted, I used the example of dictating a letter, but that isn't required; typing should always be an option. When I first talked to Erik Kennedy about this, he brought up Minority Report, in which the main character navigates using some handheld device/gloves. Intuitive mappings of gestures and speech could beat rote, memorized shortcuts for most people, who don't bother with them in the first place.
Here's a picture from that Minority Report movie. Yes, I know Boris...*LaMe* but whatever.
http://tinyurl.com/y646ql