Computer: How far is it to the next good interface?
Paw Prints: Writings of the maddog
I am a big fan of Star Trek. I do not consider myself a “Trekkie”, but I do enjoy the show and have thought a lot about what the writers put into it.
For example, on Star Trek the interaction with the computer was not only through the one interface of keyboard and mouse.
If you were walking down the corridor and had a simple question to ask, you would tap your communicator badge and simply say:
“Computer, how far to the next galaxy?”
In which case the computer might answer:
“The next galaxy is four hundred light years away and the ship can get there in four hours at warp factor nine”.
Simple question, simple answer. [Trekkies, please do not bombard me with trivia about the actual calculations of time, space and warp factors.]
On the other hand, you might be walking with Captain Janeway through the ship's corridor, tap your communicator and ask:
“What were the results of my latest physical?”
The response might come back:
“You have 125 venereal diseases and 42% of them are alien.”
Unless you were Captain Kirk you would then slink away to your “personal log” computer in your cabin and type in:
“Computer, are any of those treatable?”
The “personal log” interface in your cabin was for the longer, more intricate inputs than just talking to the computer through your communicator.
Likewise the Enterprise had “workstations”. Not like we think of “workstations”, which are (for the most part) just powerful "personal computers" with a large screen, keyboard and mouse, but real WORKSTATIONS, where you did REAL WORK.
For example, in the middle of a battle there might be someone hurt on the bridge. Dr. McCoy would show up, pass his tricorder one time over the person to see what was wrong, then IMMEDIATELY say
“Beam us to Sickbay.”
Off they went. Why? Because Sickbay was the doctor's WORKSTATION.
Sickbay had everything the doctor needed at his beck and call. If he could not cure the person in Sickbay, well....the person was wearing a red shirt, were they not?
Or navigation. Sure Chekov could do “simple” navigation from the bridge, but when Seven-of-Nine wanted to do “real” navigation, she went to the navigation room with the big star maps and lots of controls and computer inputs and outputs to do “just navigation”. The doctor never tried to cure anyone in navigation, just as Seven-of-Nine did not try to navigate from Sickbay.
Even the bridge was a workstation. You could steer the ship from the emergency bridge when the saucer section was disengaged, but otherwise you controlled it from the WORKSTATION of the bridge.
Finally there is Scotty, the ultimate engineer. Sure, he could do engineering anywhere, but when he wanted to run full diagnostics, he did it from “engineering”. Of course my favorite “Scottyism” is when he was back on Earth in the twentieth century trying to save some whales, and when handed a mouse from a PC he tried to talk into it. Told that he had to type on the keyboard, he said, “How quaint.”
Now we are told that the interface of the future is going to be the “pad” or the phone. This is usually told to us by “industry analysts”, people who have three or more 30-inch monitors attached to the computer systems on their desks.
Sorry people, I do not plan my three-week long trips using my cell phone, nor do I input my income taxes on it.
I would like to use my cellular telephone as I am running through the airport trying to find the NEXT airplane to my destination after missing the LAST one because customs clearance took too long. I would like to say (not type):
“Computer, what is the next airplane from here to Boston?”
“It is American flight 383, leaving in an hour from Gate number E45. There is one seat left on it.”
I might stop long enough to try to book that seat, but wouldn't it be great if I could just tell the computer to “book it?”
Simple question, simple answer, and with just a bit more "smarts", real assistance.
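That question-answer-then-“book it” loop can be sketched as a tiny intent handler. Everything here is illustrative: the flight table, the flight numbers, and the single regular expression are all made up, and a real assistant would sit behind a speech recognizer and a live airline API rather than a hard-coded list.

```python
import re

# Hypothetical flight data; a real assistant would query an airline API.
FLIGHTS = [
    {"flight": "AA 383", "dest": "Boston", "gate": "E45", "seats": 1},
    {"flight": "AA 512", "dest": "Boston", "gate": "C12", "seats": 0},
]

def next_flight(dest):
    """Return the first flight to `dest` with a seat left, or None."""
    for f in FLIGHTS:
        if f["dest"].lower() == dest.lower() and f["seats"] > 0:
            return f
    return None

def handle(utterance):
    """Map a spoken-style question to a short answer; 'book it' acts on
    the last answer given. A real assistant would use proper intent
    parsing instead of one regex."""
    m = re.search(r"next (?:airplane|flight)\b.*\bto (\w+)", utterance, re.I)
    if m:
        f = next_flight(m.group(1))
        if f is None:
            return "No flights with open seats found."
        handle.last = f  # remember context for a follow-up command
        return (f"It is {f['flight']}, leaving from gate {f['gate']}. "
                f"There is {f['seats']} seat left on it.")
    if re.search(r"\bbook it\b", utterance, re.I):
        f = getattr(handle, "last", None)
        if f is None:
            return "Nothing to book yet."
        f["seats"] -= 1
        return f"Booked one seat on {f['flight']}."
    return "Sorry, I did not understand."

print(handle("Computer, what is the next airplane from here to Boston?"))
print(handle("Book it."))
```

The point of the sketch is the context carried between the two utterances: “book it” only makes sense because the assistant remembers what it just told you, which is exactly the “bit more smarts” that turns a lookup into real assistance.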
I do not believe that “one interface rules all”, and I do not believe that we are currently writing “apps” to meet the users' needs even with all the interfaces available to us.
I will object, however, to a phone interface that is GUI, GUI, GUI. I am typically not in a position where I can put the phone down to type, or even “thumb”. This is why I love my Android: I TALK to it most of the time (it is, after all, a PHONE). For example, in researching this blog, I asked (via voice) for my Google search input to be “red shirts in star trek”, and the phone came back with two possible inputs:
“Red Shirts In Star Trek"
"Red Shirts in Star Trek Always Die”
the second of which was (of course) the right search string. One tap (to accept the answer) and I had my search going.
I also talk to my Android phone (as opposed to talking to a human being on the phone) while in the car. I “talk in” directions, telephone numbers, people. It is a phone. I TALK to it.
In certain places I do not want to talk to the phone (and no, I am not worried about venereal diseases), so I use the soft keyboard. But if I have a lot of typing, my notebook (with its full-sized keyboard) is my preferred instrument of typing, not some gimpy phone keyboard that I have to thumb to death.
Now we come to the issue of graphical interfaces.
I have noticed a migration of one of my friends from KDE to Gnome (when KDE 4 came out), and Gnome to XFCE (when Gnome 3 came out). I will point out that this friend first lambasted Gnome when they were using KDE, but switched anyway. We will not even discuss Unity on Ubuntu.
It seems that all three of these interfaces (KDE, Gnome and Unity) are aiming towards the “mobile market”...the “beginner user” market, leaving people like my friend to try to make sense of their interfaces for use on his “workstation”, and for all the response he gets, he might as well try shouting at his mouse. He is the Scotty, the Dr. McCoy, the Captain Picard, the.... (no, I will not compare him to Seven-of-Nine).
I think we need to re-design solutions around the “Star Trek” methods of input....to fit the input method to the task and situation at hand. Do not expect to navigate the galaxy when all you have is a display that fits in your hand, but recognize that there are times when you just cannot open up your notebook and you are too far away from your desk with the three 42" displays.
We may need to develop guidelines for developers and (dare I say it) a style guide for how to really interact with humans....to force them to re-think their applications to account for using the right interface at the right time.
Of course getting developers to understand this will be a big task, but we in the Free Software space have chosen to boldly go where no man has gone before, and we do have Star Trek as a model.
Live long and prosper!
Proper Workstation
I also can't believe that we will move away from real desktop workstations to only touchscreen-based mobile computers; that's just not how it works. If you need to do REAL work, even just typing a letter, you don't want to tap the letters into a touchscreen. What you WANT is to either type them on a proper keyboard OR speak them. Tablets, phones and such are good if you want to check email while you wait for the train, or to see whether the Internet has a better price for the article you want to buy. But that's it.
The same goes for the HTML5/JS movement that tries to force HTML5 apps onto the desktop. This works for small apps that show the weather forecast for the next week, but not for a REAL application that does REAL work. It is clearly targeted at the beginners' group, not at experienced users.
Keyboards in Star Trek
The interface that triggered my curiosity in Star Trek was their keyboards: the idea of replacing the slow typing of the written word with some kind of graphical language that would probably take time to learn, but I think when mankind cracks this new way of interfacing with the computer it will be the next quantum leap.
I know Star Trek is fiction, but imagine entering information into the computer at the speed they do there.
Thank you (sent from my workstation - with proper keyboard & screen).