Continuing on the cyborg theme, I thought to myself:
I've been here on the sofa all afternoon, fumbling with my awkward iBook, and yet it is here, with this sub-par setup, that my thoughts flow most freely into the computer, and not at my desktop. It must be the relaxing recline of the sofa, for the desktop's larger screen and mouse, compared to this restrictive 12-inch screen, clumsy trackpad and awkward keyboard navigation, are far more conducive to my interaction with the machine... but not with my thoughts.
So, mobility-wise, what do you want?
Well, let's see. For starters, a heads-up/goggle-type display with an enormous "Desktop" (no need to go architecting 3D UI OSes), voice navigation and voice dictation. Add to that a pair of gestural data-input gloves. With this, an application that presents everything I say (commands excluded, or shown in a meta window) as text which, via the gloves, I can "copy-paste" wherever I need it: a chat with Joi, a comment reply to Aaron, an email to ... whoever, a blog entry, etc...
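To make that idea a little more concrete, here is a toy sketch of the "dictate, keep commands in a meta channel, then paste where I point" behaviour. Everything in it is hypothetical: the command vocabulary, the names, and the routing targets are made up, and a real version would sit on top of an actual speech-recognition engine with the paste driven by a glove gesture rather than a function call.

```python
# Toy sketch only: a dictation buffer that keeps command words out of the
# dictated text, plus a stand-in "paste to target" routing function.

COMMAND_WORDS = {"copy", "paste", "send", "new paragraph"}  # assumed command vocabulary


class DictationBuffer:
    """Collects spoken words as text; commands go to a separate meta channel."""

    def __init__(self):
        self.text_fragments = []   # the prose actually being dictated
        self.meta_log = []         # commands, excluded from the text itself

    def hear(self, utterance: str) -> None:
        if utterance.lower() in COMMAND_WORDS:
            self.meta_log.append(utterance)      # shown in the meta window
        else:
            self.text_fragments.append(utterance)

    def take_text(self) -> str:
        """Return the dictated text and clear the buffer (the 'copy')."""
        text = " ".join(self.text_fragments)
        self.text_fragments = []
        return text


def paste_to(target: str, text: str) -> None:
    """Stand-in for routing the copied text to a destination application."""
    # In the imagined setup this would be chosen by a glove gesture pointing
    # at a chat window, a comment form, an email draft, a blog editor, ...
    print(f"[{target}] {text}")


if __name__ == "__main__":
    buffer = DictationBuffer()
    for word in ["Continuing", "on", "the", "cyborg", "theme", "copy"]:
        buffer.hear(word)
    paste_to("blog entry", buffer.take_text())  # -> [blog entry] Continuing on the cyborg theme
    print("meta:", buffer.meta_log)             # -> meta: ['copy']
```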
To continue with the concept that the brain can conceivably "learn" to use new virtual, artificial extensions of my "self": my two hands could merely be triggers, or "training wheels", for new, multi-tentacled, again virtual, interfaces (read: fingers). Think of the special robotic multi-fingered hand prosthetics in "Ghost in the Shell". Why lop off one's hands and replace them with robotics when their existence could be fully virtual, yet controlled by the brain? It IS feasible, I have SEEN it.
I posit that, with less effort than it takes to learn sign language, our brains could learn to use virtual extensions like these.
It's like riding a bicycle! Right, Marshall? ;)
This is also a basic principle of "good design". You know "how" to use a tea kettle just by looking at it. By making something intuitive, as opposed to a language-learning process, you make it more usable.
It all comes down to wasted CPU cycles in the brain. The less I must process "how", the more I can process "what"... and where, who, when...