16 2000 |
As I sit here, typing this, I'm on my bed, in nothing but my boxers, with my laptop & power supply across my lap (I forgot to recharge it). Typing is a bit ungainly; ideally, the keyboard should be a little bit higher than lap-level, and I can't hold my knees up high enough to put it at a proper angle to accommodate the way I hold my hands. When it comes to interfaces, though, it's accommodating me quite well. I'm using a DOS word processor, typing in normal language, using the keyboard.
Computers may be able to think fast, but they sure aren't designed to interact with the real world. In a way, if computers worked seamlessly with the real world, their usefulness would be diminished. I've said it before, and I'll say it again: the computer's greatest asset is that the things created within it operate outside of our normal laws of space, time, and relativity (within mathematical guidelines, however). That's where the benefit comes from -- a scientist can come up with velocities, accelerations, and directions for an object without having to physically measure the object. All they need are the numbers and a computer, and the high-powered calculator can spit out all the results in a fraction of a second. That's also how "theoretical" particles like quarks, leptons, mesons, etc., are 'discovered'. You can't detect them directly, but someone, somewhere, used computers to calculate their existence, without even knowing what they are or how they work. The insides of a computer operate outside the bounds of reality as we know it. The programmer and the user work with the computer to create something entirely new.
*How* they work with the computer is another story altogether. This 'wetware' real world that we all rely on for our existence doesn't have a seamless boundary with the internals of the computer. When I say internals, I don't mean the physical layer of computing; I'm talking about the spaceless, timeless void of computing power. For a computer to work with the real world, it has to slow down. It has to wait for input, pause for keystrokes, or count ticks as the mouse ball rolls the tiny internal motion sensors. Anyone with a slow computer running a multitasker learns quickly that if the computer slows due to heavy processing, you do not touch it under any circumstances. Forcing the computer to keep track of what you're doing and display the corresponding actions on a screen for you to see will bring it to a standstill, or at worst, crash the system.
The computer, however, needs us to know what to do in the first place. Even viruses need the hand of a human God to set their actions into motion, a program written, an icon clicked. The virus that re-mails itself still needs a human's contact list to get itself around. A computer may be able to do unimaginable things, but it can only do the things a human tells it to do. The interface is how it's done.
A crude analogy is to compare a computer interface with teaching gorillas sign language. The trainer and the ape can communicate, but both sides require a foreign communications medium to get the information across, and neither side is completely working on the same wavelength. The sign language slows down the human's ability to transfer information to the ape, but the gorilla has no other way to communicate directly in a way that's readily legible to the human. The sign language interface is a common pathway through which the information can travel both ways. By converting thoughts into hand motions, both sides can transfer those ideas to each other.
A computer doesn't exactly have ideas, though. What the human and the computer need to transfer back and forth is logic. This ranges from 3D graphics being required to obey general laws of physics, to clicking on an icon and the corresponding application being started. A computer can think in multiple directions (or be programmed to do so), but humans like linear, cause-and-effect activities.
The basic computer interface began as a switch. On and off. Entire computer programs were written in on-and-off patterns, which computer users translated from their program into binary code that the computer could understand. This binary sign language got things started, even if it was rather one-sided. The computer got the benefit; it wasn't required to translate in the other direction. As computers became more powerful, the keyboard was the next big step. As strange as it sounds to teach a computer English, there wasn't really any true understanding of language. Programs were patterns of commands, which another program would translate from the human-understandable language into the computer-understandable language. As time went on, this went from being a separate task run after the programming was done, to being done on-the-fly, with self-compiling programs and compilers built into shells.
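To make that one-way translation concrete, here's a toy sketch (not any real instruction set -- the mnemonics, opcodes, and word layout are all made up) of the kind of work early assemblers did: turning human-readable commands into the on/off patterns the machine actually runs.

```python
# Toy "assembler": maps invented mnemonics to invented 4-bit opcodes.
OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

def assemble(program):
    """Translate each 'MNEMONIC operand' line into an 8-bit binary word."""
    words = []
    for line in program:
        mnemonic, operand = line.split()
        # 4-bit opcode followed by a 4-bit operand address
        words.append(OPCODES[mnemonic] + format(int(operand), "04b"))
    return words

print(assemble(["LOAD 2", "ADD 3", "STORE 7"]))
# → ['00010010', '00100011', '00110111']
```

The translation only runs in one direction -- human language in, binary out -- which is exactly the one-sidedness described above.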
The GUI brought the first real-world interface to computers. It takes an immense amount of processing to create a visual representation of the inner workings of a computer; the computer itself couldn't care less about a visual description of itself. So much of computing today is wrapped around making a comfortable interface for humans that we forget how much we have to adapt to using that interface.
The GUI brought with it the mouse. For as simple as this pointing device is, its functionality has changed little in the past 30 years. It is still a palm-sized object, with a button or two (or three), which has a sensor in the bottom to measure movement in two dimensions. That movement is then transferred to the computer screen, by moving a cursor or some other object. The learning curve for a mouse is probably lower than for a keyboard, but most people have some experience with typing when they sit down at a computer.
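Stripped down, the mouse-to-screen translation amounts to something like the following sketch: the device reports relative movement in two dimensions, and the computer accumulates those deltas into a cursor position clamped to the screen. (The screen size, starting position, and motion values here are all made up for illustration.)

```python
# Assumed screen dimensions for the example.
SCREEN_W, SCREEN_H = 640, 480

def move_cursor(x, y, dx, dy):
    """Apply one relative motion report, keeping the cursor on screen."""
    x = min(max(x + dx, 0), SCREEN_W - 1)
    y = min(max(y + dy, 0), SCREEN_H - 1)
    return x, y

pos = (320, 240)                              # start at screen center
for delta in [(10, -5), (500, 0), (-3, 2)]:   # raw motion reports
    pos = move_cursor(*pos, *delta)
print(pos)
# → (636, 237): the big rightward swipe was clamped at the screen edge
```

The clamping is the tell: the hardware just reports rolling, and it's the computer's job to force that motion to make sense inside its two-dimensional window on the world.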
For all the advances in interface hardware, the keyboard and mouse seem to be the best meeting of human and machine possible. With both, you have language, you have movement, and when the mouse & keyboard are used together, a nearly infinite number of possible actions are available. The response of the computer, however, is a different story. The GUI screen is a graphical representation of a desktop, with files, folders, and items spread around it. Projects are set on top of others, and the topmost one gets the most attention. Sound also plays a part; tasks in the background may talk to you when they are ready, or if something has occurred which you cannot detect from outside the computer ("you've got mail!"). However, there is limited space and range for the things the computer is doing. A screen is only x" across, and it exists in two dimensions. Moving from window to window doesn't translate cleanly from performing multiple tasks at your desk, although the computer probably keeps better track of them than you. The subject of scrollbars brings up the largest real-world incongruity -- to move the page up and see the bottom, you drag the scroll bar down. Neither side gets the best of all worlds when it comes to working with the other, but both ends make the best of it.
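That scrollbar incongruity falls out of the arithmetic: the thumb tracks your viewport's position within the document, not the document itself, so dragging the thumb down moves the content up. A small sketch, with the document and viewport sizes invented for the example:

```python
DOC_LINES = 100     # total lines in the document (made up)
VIEW_LINES = 25     # lines visible at once (made up)

def first_visible_line(thumb_fraction):
    """thumb_fraction: 0.0 = thumb at the top, 1.0 = thumb at the bottom."""
    return round(thumb_fraction * (DOC_LINES - VIEW_LINES))

print(first_visible_line(0.0))  # thumb at top → line 0, start of document
print(first_visible_line(1.0))  # thumb dragged all the way down → line 75,
                                # i.e. the content has scrolled up to its end
```

The mapping is perfectly logical from the computer's side, and perfectly backwards from the side of anyone used to pushing a physical page around.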
The future may hold a time when the interface parallels the holodeck from Star Trek. Humans speak to the computer in natural language, and the computer presents its actions in three dimensions, easily interacted with using a human's hands, feet, eyes, and ears, and the human user can interact with the computer within those same three dimensions. Even still, the bridge of the Starship Enterprise is laden with keyboards and monitors. It depends on who is to benefit more from the interface -- the human or the computer. Would you trust natural language to drive your car? "go forward...speed up--brake--slow down....turn left.....NOW." The interface used when controlling a car is a steering wheel and two pedals. These aren't designed to make the car human-friendly; they are designed to give the human the amount of control that is required to get the car to do exactly what is expected of it. There is a point where the control system of a car has progressed to make it more comfortable for a human to use, but it still remains tailored to the car's requirements for control. A skateboard, however, is more dynamic. You may have to use unnatural motion to propel yourself, but navigation comes from manipulating your body within three dimensions to make use of inertia, resistance, and the weight on the board to cause the skateboard to do the things it is capable of. Each is a means of locomotion, but one interface is designed for the machine, the other is designed for the human. Each has its place, and each has its benefits and drawbacks. Computer interfaces have tried this; the Mattel Power Glove was a popular toy for tinkerers. It was a basic glove, but it had simple motion-reactive sensors which translated movement into computer-controlled actions. The glove still allowed for typing, but a three-dimensional interface was available.
Interfaces will continue to be designed for the benefit of not just the user, and not just the computer, but for the system that the two create when the interface is being used. For me, typing text, the system of myself, keyboard, and computer is the most efficient for the type of work being done. Kinetics may be better studied in a synthetic three-dimensional environment, and starships may be controlled better with a system of customized keyboards. Every task requires a different system of interface, and that requires accommodations on both sides to work.