Re: Voice Interfaces

Dustin Curtis wrote a fantastic article on the shortcomings of voice interfaces. I'd like to add a few points of my own to the conversation.

Go read his thoughts first, then come back. I'll wait.

That's nice of them to help out, but why do I have to tell my phone how to be accessible every time I use it? Personalized accessibility should come standard, and it shouldn't be that difficult to pull off. In fact, how cool would it be if I could tell my phone how far I can comfortably reach and have it remember that? It could adapt the interface to fit my hand better, or track where my thumb is and pull the icons toward it, magnet-style. That seems relatively doable if Amazon's phone can track my eyes.
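Just to make the "magnet-style" idea concrete, here's a rough sketch in Python. The Icon type, the normalized screen coordinates, and the stored thumb anchor are all made up for illustration; a real home screen would expose something different, but the math is this simple:

```python
from dataclasses import dataclass

@dataclass
class Icon:
    name: str
    x: float  # position on a normalized 0..1 screen grid
    y: float

def pull_toward_thumb(icons, thumb_x, thumb_y, strength=0.4):
    """Nudge each icon toward the user's remembered thumb anchor.

    `strength` is the fraction of the distance each icon moves:
    0 leaves the layout alone, 1 stacks everything on the thumb.
    """
    adjusted = []
    for icon in icons:
        new_x = icon.x + (thumb_x - icon.x) * strength
        new_y = icon.y + (thumb_y - icon.y) * strength
        adjusted.append(Icon(icon.name, new_x, new_y))
    return adjusted

# Example: a right-handed user whose thumb rests near the bottom-right corner.
home_screen = [Icon("Mail", 0.1, 0.1), Icon("Camera", 0.9, 0.1), Icon("Phone", 0.1, 0.9)]
for icon in pull_toward_thumb(home_screen, thumb_x=0.8, thumb_y=0.85):
    print(f"{icon.name}: ({icon.x:.2f}, {icon.y:.2f})")
```

The point isn't the formula; it's that the phone already has everything it needs to remember my reach and quietly act on it.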

This kind of context is the low-hanging fruit: the stuff that continuously pays off in user experience. It's the evolution of "user account preferences".

This kind of contextual setup should exist on my computer, too, but it doesn't.

Maybe you have kids you want to let use your computer, but you don't want to set up full user accounts for them. Why can't you just set their access level and flip a switch to change contexts?

It's not hard to make this happen. On a few occasions I've written scripts that switch between these kinds of contexts with a single command. But my operating system doesn't do this on its own, and my contexts shift over time, so maintaining those scripts myself is unrealistic.
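To give a flavor of it, here's a stripped-down sketch of that kind of context switcher, not my actual scripts. The context names, their settings, and the state file are invented for illustration; a real version would hook into whatever settings the OS actually exposes:

```python
import json
import sys
from pathlib import Path

# Hypothetical context definitions: which apps are allowed and which
# distractions are silenced in each mode. In practice these would map
# onto real OS settings rather than a plain dictionary.
CONTEXTS = {
    "work": {"allowed_apps": "all", "notifications": "on", "parental_controls": False},
    "kids": {"allowed_apps": ["Browser", "Drawing"], "notifications": "off", "parental_controls": True},
}

STATE_FILE = Path.home() / ".current_context.json"

def switch_context(name: str) -> None:
    """Record the active context so other tools (or the OS, someday) can react to it."""
    if name not in CONTEXTS:
        raise ValueError(f"unknown context: {name}")
    STATE_FILE.write_text(json.dumps({"context": name, **CONTEXTS[name]}, indent=2))
    print(f"Switched to '{name}' context.")

if __name__ == "__main__":
    switch_context(sys.argv[1] if len(sys.argv) > 1 else "work")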

My laptop should be as smart as my mobile device, or smarter. Until the resolution of interaction on a phone matches or surpasses that of a desktop computer, desktop OS innovation needs to keep pace.