What is a network, and what is not a network? The term LAN stands for local-area network. A LAN is a high-speed network that interconnects terminals and computers over very short distances; the equipment usually is contained within a single building. Because of the short distances over which it operates, the LAN is able to transmit information very quickly. And precisely because of the LAN's speed, it really is not a network.
LANs originally connected large populations of terminals to computers that were located close by. By using special heavy-duty wiring similar to the coaxial wiring used in cable television systems, computers were able to communicate with the terminals so fast that information could always be displayed on the screen almost instantly. For applications that involved the display of large amounts of information, this was an obvious advantage. Used in this way, a LAN is just what its name states: a network that operates in a local environment.
To understand how a LAN can be used in a way that makes it no longer a LAN, it is necessary to look at some of the underlying numbers that define the operating characteristics of this important technology. It may sound like this is getting pretty technical, but as the dentist says, "It won't take very long, and it won't hurt that bad."
What is a network? A network (sometimes called a net) is a mechanism for connecting computing devices. Where are networks found? Certainly connecting big computers and terminals, but they are also found in surprising places. Inside every computer is a network. Every computer consists of a collection of computer components: disks, printers, memory, the computing unit, status displays, and so on. A specialized, high-speed network connects all these components. Sometimes this net is called a backplane, sometimes a channel, and sometimes a bus. In this discussion, I use the term bus most often. In a mainframe, there are two levels of network inside the computer. Inside the box containing the main computer is a network of particularly high speed. Outside the box is a slower but still very high-speed network running out to all the disks. In between these two networks is a bus structure called the channel. The channel is significant because information can flow from the computer to its disks and back only at channel speeds. The channel in most mainframes built in the 1980s operated at about two million bytes per second.
To put these speeds in perspective, consider a modern high-speed modem: a transmission rate of 14.4 kilobits per second (Kbps) is considered pretty nice. A speed of 14.4 Kbps translates into about 1,800 bytes per second. In comparison, at two million bytes per second, the mainframe channel operates over 1,000 times faster than the high-speed modem. Little wonder that channels and buses can't be thought of as even remotely comparable to modem-based networks.
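If you want to check the arithmetic yourself, the comparison is easy to reproduce. The following sketch (in Python, purely illustrative; the figures are simply the ones quoted above) works out the conversion and the ratio:

```python
# Back-of-the-envelope check of the speed comparison above.
# All figures are the ones quoted in the text; this is just arithmetic.

modem_bits_per_second = 14_400                       # a 14.4 Kbps modem
modem_bytes_per_second = modem_bits_per_second / 8   # 8 bits per byte, about 1,800 bytes/s

channel_bytes_per_second = 2_000_000                 # a typical 1980s mainframe channel

ratio = channel_bytes_per_second / modem_bytes_per_second
print(f"Modem: {modem_bytes_per_second:,.0f} bytes per second")
print(f"Channel is roughly {ratio:,.0f} times faster")   # prints roughly 1,111: "over 1,000 times"
```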
Like a mainframe, a personal computer has inside it a bus network that connects the memory, the central computer, the disks, the option slots, and so on. The option slots, which sit on this bus, are particularly significant because in a personal computer, most of the internal components are connected by plugging them into option slots. So in many ways, the speed of a personal computer is defined by the speed at which information can flow through those option slots. In most personal computers, that speed is about two million bytes per second, about the same speed as the 1980s mainframes.
Mainframes, minicomputers, and personal computers are built around interconnection structures that literally are networks, but nobody thinks about those computers in this way. For example, you would probably never go through the following thought sequence:
- I need to add a hard disk to my personal computer.
- I want the entire computing system to be fast, so I'll make sure I get a fast hard disk.
- To install it, I'll plug the new disk (or the new controller card) into an option slot.
- The option slot is really part of a network, and networks are slow. I'll have to figure out a way to connect the disk directly to my computer's central computer.
One reason you might not take that last logical step is that you are probably not aware of the network inside your computer. Another reason is that there is no performance disadvantage to the option slot approach. Both reasons together produce the point: for all practical purposes, your computer is a computer, not a network. In a real sense, the bus/slot structure is a network, but in another very real sense, it is not a network. The purpose of a network is to connect multiple computers and terminals, generally over long distances. The purpose of the bus structure inside a computer is to tie together discrete components to create a functioning computer system. Thinking of an internal bus as a network is missing the whole point.
How fast is a LAN? An Ethernet runs at close to two million bytes per second, the same speed as the bus inside a mainframe and the option slot bus in most PCs! Aside from being an interesting numeric comparison, what does this mean? Suppose that you set up a server with a particularly fast disk. Often, a personal computer can consistently retrieve data from that server disk faster than it can retrieve the same data from its own local hard disk! The network is functioning as an extension of the personal computer's internal bus structure. Users can access components attached to the LAN as fast as or faster than they can access components attached to the bus structure inside the personal computer itself. In a sense, the LAN is an extension of the personal computer. Or perhaps the personal computer has become an extension of the server attached to the LAN? Or both?
To understand the initial attraction of LAN-based computer architectures, it is necessary to hark back to the days of big computers and the budgetary perils associated with them. And, lest you forget, these times of fear are still very real for any Vice President of Computer Systems with a large mainframe or minicomputer in his or her budget.
Mainframes and minicomputers are very expensive. To put their prices into perspective, they cost as much as some good-sized buildings. Any capital acquisition costing millions of dollars requires careful justification, and gaining approval for big computers has always been an important part of the job of any senior computer person. Compared to most capital equipment, however, computers have an unusual characteristic: it is very hard to accurately predict how long they will be useful. The usefulness of a computer is based on three factors:
- How long it takes to wear out
- The rate at which it becomes obsolete
- When users will eat through its capacity
Obviously, the first factor is a joke when it comes to computers; they don't wear out. Obsolescence, however, can be a legitimate risk with computers because technology changes so quickly. Companies have occasionally been trapped by buying systems just before major new functionality or features became available in the next generation. Generally, though, vendors go out of their way to protect their customers. So obsolescence, scary as it sounds, is not a huge factor in limiting the useful life of a computer. The third factor, capacity, is the issue. Because computers typically run out of capacity so quickly, the other two factors never come into play.
When justifying the acquisition of a major new mainframe, the main task is forecasting how long the computer will last before a major upgrade or a complete new system is needed. Unfortunately, no matter how carefully the forecast is done, users' appetites for computer power are insatiable. Adding more capacity -- whether in the form of disk space, faster computers, or more RAM -- is just the tip of the iceberg. With the added capacity, response time is better than expected, more applications can be run, and away you (and your department) go.
In an age of personal computers costing under $5,000, saturating the server is hardly an issue; when the server runs out of storage or can't produce answers fast enough, buying another is fast and cheap. If that server were a $5 million mainframe, however, saturation could bring sweat to the brow. Picture the situation: management in your company has barely gotten over the cost of the last computer system, the schedule slips in getting the software running, and the complaints start coming in from users about apparently missing features or hard-to-use functions. Finally, the Management Information Systems department (MIS) comes back and reports that without a $2 million upgrade, the next round of functional improvements will have to be postponed. Angry to begin with, management asks how long the upgrade will last -- ten years would be fine. Instead, after considerable explanation, MIS admits that the upgrade is only an interim step; in 18 months the entire system will have to be replaced with its big sister at an incremental cost of over $3 million. To make matters worse, a long-delayed operating system conversion will tie up the entire development staff for six months at the time of that upgrade. Without the conversion, the upgrade won't work, and response time will grind to a halt.
As this process is unfolding, the company's chief information officer (CIO) is without a doubt polishing his or her resume just in case management runs out of patience or understanding -- commodities that are never in great supply when it comes to expensive computers.
Perhaps this story sounds exaggerated, but for many who lived or are living through the travails of large computer liability, it's close to home. Mainframes are large and expensive, and capacity comes in chunks that also are large and expensive. Buying a mainframe in the first place is a huge commitment for a business. Often, that big computer has too much capacity at first, and companies often consider selling their excess computer horsepower. Later, as more and more applications come on-line, capacity becomes more and more scarce. Because an upgrade is so expensive, businesses try to avoid it as long as possible, which results in poor service to users. With a mainframe, it's either feast or famine, and it's almost impossible to predict the cycle.
Mainframes are not unique in this respect; many other large capital acquisitions work the same way. Office buildings are often either largely vacant or so full that hiring must completely stop. Airports may have excess capacity for years, but new construction almost never begins until congestion has become unbearable. The problem is not that mainframes are somehow bad; they're just so expensive.
Is there an alternative? (Never ask the reader a question unless you know the answer.) Of course. The answer is not cheaper computer power, although that is always desirable, so much as it's computer power that can be acquired in bite-size pieces. To elaborate on the last paragraph, the precise problem with mainframes is not that they're so expensive, but that they come in such big chunks.
Think of mainframe costs as a staircase. The first step -- around $5 million -- is a killer; small companies can't climb even that one. The next step is also pretty big, again in the millions. The remaining steps -- every time more memory, more disks, or a faster computer is needed -- are also huge, until finally (and worst of all) you run completely out of steps.
Take that staircase and flatten it out. Make each of the steps smaller, but provide many more stairs so that you can still climb just as high. That's what minicomputers did: provided smaller steps, but more of them. The steps involved both smaller and less expensive computers. The lower expense made the entry cost more palatable. And by promising that many small computers could do the same job as a single big one, the minicomputer companies found a way to keep the stairs small.
In reality, the minicomputer staircase runs into a ceiling sooner than the mainframe. Eventually, even a large number of minicomputers runs out of capacity. But the combination of the expensive mainframe and a host of smaller minicomputers surrounding it offered a more attractive (that is, less scary) set of steps into the future for many corporations. This concept accounts for much of the success of Digital Equipment Corporation (DEC) and its competitors, but even this path contains some pretty big steps.
The attraction of LAN-based architecture is twofold: the steps go away, and the ceiling disappears. The steps go away because PC-based servers are so inexpensive. Adding servers one at a time as demand grows leads to a cost curve that is virtually flat -- a ramp instead of stairs. The ceiling goes away because the LAN-based architecture offers the potential of almost unlimited computer power, just by having enough PCs and servers. For shell-shocked MIS personnel, the attraction is that they might keep their jobs. After all, a $5,000 computer addition is much easier to think about than one that costs a thousand times that amount.
In the mid-1970s, Bob Metcalfe of Xerox's Palo Alto Research Center (PARC) invented local-area networks as mechanisms for connecting personal workstations, servers, and printers. At the time, PARC was creating a variety of components and systems that foreshadowed much of today's office-oriented client/server revolution. However, the focus at PARC was personal. Systems facilitated individuals working alone, working in teams, and interacting with other individuals and teams. Ethernet, the first working LAN, was not seen as a tool for potentially replacing all the bigger computers and terminals used to run businesses. Instead, LANs were seen as a complement to the mainframes and minicomputers of the time.
As papers and articles began to focus on LANs, Datapoint Corporation, a Texas computer vendor, decided that the LAN concept, which was so compelling at the personal level, could be even more compelling in the world of big computers. In the late '70s, Datapoint introduced the Attached Resource Computer, built around ARCNET, a proprietary LAN technology. For the first time, MIS had a commercial alternative to the big-step staircase of mainframe capital.
ARCNET systems were built around two types of computers, which would be called clients and servers today. In addition, ARCNET systems introduced two new ideas about this style of computing:
- Both clients and servers were relatively inexpensive.
- Because ARCNET was so fast, the entire system -- the combination of clients, servers, and the network -- could be thought of as one large computer.
The second idea thoroughly changes the very conception of what a computer is.
In a Datapoint system, the local net is so fast that it really should not be thought of as a network. Instead, compare the entire Datapoint system to a classical computer system. A large Datapoint installation with hundreds of workstations and dozens of servers would service a large building with high-speed LAN cable running throughout the entire building.
If a mainframe provided the same services, the mainframe's computer would be in a special room, and the system would use a network (in the traditional sense) to bring the terminals to the computer. As I've already discussed, however, that computer would consist of a special network -- the computer itself -- but that internal network is so fast that it should not be considered a network. So in the classical environment, the computer is located inside a series of boxes: the largest box is the computer room, and inside it is a series of smaller boxes containing disks, memory, and the main system cabinet, in which the central computer lives. Finally, snaking through this set of nesting boxes is a set of wires and buses that, again, is really too fast to be called a network.
Here's the new thought: all of a sudden, the Datapoint system allows the entire office building that everybody works in to be the box that contains the computer system.
Fancy rooms are no longer needed, nor is a low-speed network needed to bring the terminals to the computer. The computer and the office building are now intertwined, just like the nervous system of any living being. This might sound neat, but is it revolutionary? The answer is yes, but the full impact of this method of building computers is being seen only now, over a decade since Datapoint introduced the concept and over 20 years since the invention of Ethernet.
The short-term impact of attached resource computing was important but less than revolutionary. It offered MIS that first glimpse of a nonthreatening and adaptive architecture for buying and growing computer systems. The beauty of ARCNET was that each time a server or workstation was installed, the central computer itself was being expanded. With terminal-based computer systems, adding a terminal adds workload to the central computer without adding any computing capacity. How could a new terminal add capacity? After all, the terminal is a passive device, a kind of conduit through which the user approaches the computer itself. And as I discussed earlier, the central computer could be grown only in large chunks that cost a great deal of money.
In the LAN environment, there is no central computer. Instead, there's a distributed computer system. In fact, you can't point to a single box -- or even a single room -- and say, "There's the computer. I wonder how it's doing." Even if all the servers are kept in a computer room, you still can't equate that room with the computer or understand how much capacity is available by looking at that room because the clients -- the users' individual workstations -- are an integral part of the overall computer system. In a real client/server system, the workstations are more than just intelligent terminals. A major part of the overall work of the system is done in the workstation. This style of computing is sometimes called cooperative processing. For now, just remember that the network is a computer. The computer is the combination of all the servers, all the workstations, and all the high-speed cables connecting them. The network is the computer in exactly the same sense that a mainframe -- which consists of computing components connected by a bus -- is a computer.
The first conceptual impact of the client/server LAN-based revolution was the introduction of a smoothly growing, adaptive computer system. Starting with a single workstation, an organization could grow an integrated computer system, adding servers and workstations at any rate that made sense. That computer system would smoothly adapt its capacity to the number of users connected to it. If you add a user, you add a workstation and the associated computing capacity required to service that user. As a result, the entire configuration gets bigger and faster instead of more overloaded and attenuated. Add servers periodically, but put them close to the users. The servers aren't very expensive, so don't worry too much about the decision. Add a server to meet the needs of a workgroup or department, and take the cost out of that workgroup's or department's own budget, without having to revise the capital budget for the entire organization. Build a system, grow that system, and respond to users' needs. You no longer have to worry about making a single capital decision so big that a miscalculation could leave thousands of users with inadequate service and put your career at risk along the way. Perhaps this new idea is not revolutionary from an overall organizational or societal perspective, but it's not hard to see why it would be compelling for MIS workers.
Although ARCNET itself is not widely used today, the Datapoint architecture established an important precedent. It demonstrated not only that the network could be the computer, but also that the building could be the box. When I described the classical computer system in terms of a set of nesting boxes, I said that the biggest box (which enclosed the overall system) was the computer room. In the LAN environment, the biggest box is the entire building. Aside from the important cost implications I just talked about, the LAN approach also made people think about computers in a different way -- a way that in the long-term had even more impact than the reduction of cost-based fears. Putting the computer in a central location also implies putting all the associated peripheral facilities (printers, disks, and so on) in the same central location. In the classical system, computers printed reports, but that happened centrally. The reports were distributed through interoffice mail. Databases were maintained on computers and accessible through terminals, but the databases were centrally controlled and were maintained for the convenience of the organization, not to be truly responsive to the needs of individuals.
The second major impact of the ARC system was to make people understand that business computers, like personal computers, could be responsive to the needs of single users. On an ARC system, for instance, a printer could be located anywhere. Furthermore, if a user wanted a custom report, it might take a long time for his or her own workstation or departmental server to produce that report, but producing the report did not drag down the single computer servicing the entire company. And because the printer could be located conveniently down the hall, producing that custom report on demand might save the user time because he or she could request it whenever necessary.
Today, of course, this sounds routine; personal computers provide this functionality constantly. Yet, even in 1995, although personal computers are now on most desks, they are still not used to run the business. Most large organizations still depend on the central mainframe or minicomputer for that need. The case is worse for many small businesses. Big central computers are too expensive, and the personal computers can't handle shared data well enough; therefore, data is still processed by hand. While the hardware reality that Datapoint introduced in the early '80s is definitely here, the business reality -- the opportunity to really capitalize on that hardware -- is still ahead of us. The key point is to finally build on the fact that not only can you make the network be the computer and have the building be the box, but you also can apply that style of computing to running the business. So if the potential has been there to realize this new vision for so long, why hasn't it happened?
PARC, followed by Datapoint, created this conceptual legacy for businesses to capitalize on. But Datapoint's development of the vision still missed some elements that were required to enable a true revolution in the way organizations run. Although the actual computing resources of an ARC system were distributed around entire buildings, the style of applications being built on those computing resources was virtually identical to the style of mainframe applications. In theory, the workstations were capable of changing the way people interacted with computers, but in practice, they still looked like terminals. The servers were capable of providing business applications customized to the needs of individual workgroups, but in practice, the servers provided standard, inflexible applications exactly like those running on mainframes. The cost and capacity adaptability of the system represented a breakthrough, but the ways in which the system was used fell short of fueling a revolution. What was wrong with the equation?
For a revolution to occur, the equation had to include the lessons learned from the fact that millions of personal computers were being sold at the very same time that Datapoint was installing ARCNETs.
Georg Hegel, one of the more opaque philosophers of the modern era, proposed that great breakthroughs in thinking happen in three stages:
1. Thesis: First, an idea is proposed.
2. Antithesis: Second, the opposite idea, the antithesis, is considered. This is often frustrating because both the thesis and the antithesis have merit, but they are apparently irreconcilable.
3. Synthesis: Finally (typically in a dramatic breakthrough), a completely new idea arises, based on a combination of the two old ideas. The new idea brings the two old ideas together -- synthesizing the thesis and its antithesis.
The first step to finding such a synthetic breakthrough is to clearly articulate what the key idea and its opposite are. This enables you to see what the apparently impossible reconciliation needs to accomplish. The opposites in the case of client/server, once stated, come into sharp focus quickly.
Historically, the mainframe and its sibling minicomputer have stood for things shared. As boxes too expensive to devote to individual goals, the big computers had the job of providing shared access to information, coordinating use of scarce resources, and being a constantly available enforcer of corporate policies. By definition, the big computer is an organizational asset whose very purpose in life is to meet the needs of the many and follow the lead of the hierarchy that runs the company.
By comparison, the personal computer is (as its name states) personal. It sits on a person's desk, holds his or her information only, probably has no way of allowing that information to be shared, and coordinates nothing. Instead of enforcing policies, the PC provides the user with the very means for adapting those policies and frameworks to his or her needs. The personal computer is an individual asset whose reason for being is to meet the needs of one user and do exactly what that user tells it to do.
This dramatic contrast shows the problem: there's nothing in the middle. Having a personal computer is great, but what about when you need to share information with others or have an agent to coordinate resources? Using that big mainframe to keep everything coordinated is fine, too, but what about the needs of the individual? The differences appear irreconcilable (exactly the hint needed to seek a breakthrough). More important, these opposing needs deal with more than computers. I'm talking about finding a way to reconcile the needs of the individual with those of the organization, a theme that echoed through organizational halls long before computers were around. Can the two be related? Is it possible to suggest, with a straight face, that the client/server revolution might have something to do with terms such as democracy, empowerment, and finding a balance between the needs of the big organization and those of the tiny individual? That is precisely the point I am inching toward, but first I'll try for a slightly more modest synthesis. The journey toward changing society, or at least organizations, begins by showing what happens when the personal computer finally learns how to cohabitate with the mainframe.
In other chapters of this book, I explore the client, the server, and the network in some detail. Compared to terminal-based interaction, the client provides the user with a completely new way of working with computers based on the idea of the graphical user interface, or GUI (which in turn is based on precepts of virtual reality). The promise (or consequence) of the GUI is that terminal-based interaction will never be acceptable again. By tapping into the enormous amount of computer horsepower and memory that can now be put onto people's desks (and even under their arms to carry around), you can take the world of data and convert it into a virtual world of live information -- a world that the user literally can walk around in. Pages spring into existence, houses appear on the screen, and by moving the mouse or pointing at the screen, you can drive around on streets. All this happens with almost no physical movement. Completing the picture, direct manipulation enables the user to reach into (or out of) the virtual world and control its parts naturally, instead of forcing that user to remember a set of arcane commands.
There are two prices to pay for this awesome creation of virtual new worlds, only one of which I fully explored in the preceding chapter. The first price is that all users must have their own computer, under their own control, sitting on their desks, under their arms, or on the grass in front of them as they explore and control those virtual worlds. That world might be as mundane as a printed page or as exciting as an aircraft simulation that includes photorealistic scenery from around the world. Regardless, all the power, realism, and intimately interactive control over those virtual worlds are made possible only by the dedicated devotion of huge amounts of processing power and memory to the single user. The first price is committing to personal computers: each person needs at least one computer. A second price, however, didn't appear until the client/server concept appeared.
Virtual worlds of all kinds are built on huge amounts of data. The term scenery from around the world raises three questions:
- Where does the scenery come from?
- Where are the pictures stored?
- How did those pictures get into the computer?
The second question -- Where are the pictures stored? -- is the easiest to answer by itself, and the answer helps elaborate on the other two questions. There is only one place those pictures can be stored: in the very personal computer that projects the virtual world. When I discussed the huge amount of processing a personal computer does just to display a spreadsheet on the screen, I pointed out that a great deal of that processing revolved around the actual process of display. One of the key tasks of a personal computer is providing visualizations of virtual worlds. Visualizations require huge amounts of data, and all that data must be processed at incredible speeds so that the user can move around in his or her own world, without apparent delay, hesitation, or degradation in image quality.
What can virtual worlds possibly have to do with computers, organizations, and business? Aircraft designers, architects, and maybe even page-layout artists might want to live in business-oriented virtual worlds, but what about normal, everyday people who don't know what a bodysuit is (it's the apparatus that researchers wear while exploring virtual worlds)? A marketing manager exploring pricing alternatives is living in a virtual world. The manager's ideal discussion certainly revolves around what ifs, alternative scenarios, and best case/worst case models -- all of which describe alternative universes. Planning production schedules for a factory requires the creation of a small virtual world. Explaining driving directions to the closest service center works better if you can drive around the streets on your screen. Even placing an order or recording a complaint is easier if you can see a virtual image of the product while talking to the customer on the phone.
The scale may be smaller for some business applications, but the very point of the GUI -- its capability to simplify, streamline, and facilitate interaction with the computer -- revolves around the creation of some form of virtual world. Virtual worlds are created by displaying high-quality pictures (and perhaps sounds). Even if the pictures portray only numbers and words, the presentation is better when the pictorial representation is richer. Graphical spreadsheets work better precisely because they are graphical; in other words, they display numbers and graphs richly in many fonts and colors, and they can combine many different kinds of information on the screen at one time. Better pictures mean better virtual worlds, which mean better applications.
Creating virtual worlds requires instant access to huge amounts of data. The data has to be stored in the personal computer. About a page ago, I asked three questions: Where does the scenery come from, where are the pictures stored, and how did they get there? I've answered the second question, but what about the other two? Where do the pictures and the information to create the pictures come from? There's the rub.
Most of the information you work with in your personal-computer-generated virtual worlds comes from somewhere else. True, computer games can be self-contained. With the advent of CD-ROMs, which can store and transport huge amounts of information on very small, inexpensive disks, an aircraft simulator could have scenery for dozens of airports and cities all on a single CD. In organizations, however, the situation is more challenging. First, organizations revolve around huge amounts of data, more than could be put on any reasonable number of CD-ROMs. Worse, that information is constantly changing. Insisting that all the information you work with must be stored inside only your own personal computer is like claiming that the same aircraft simulator now enables you to fly real airplanes, as well. Imagine controlling an airplane, taking off, flying somewhere, and landing, all the while being blindfolded. To make it better, instead of being blindfolded, you get to look at a screen that shows the scenery for the route you're taking, but the images you see are the ones photographed several years ago. You're watching five-year-old images, but you're landing a real airplane today! What about all the other airplanes, the dog running across the runway, or the sudden patch of wind? Of course, the whole thing is preposterous. But what if the alternative is flying the same plane, having access to the real information, but the information is fed to you on a computer screen in the form of words and numbers? Not much better, right?
Client/server combines the virtual worlds made possible by personal computers with the real and realtime information contained only in the organizational computers. Now there is an answer to the question of where the scenery comes from. It comes from the organizational computer -- the server or the mainframe. By itself, this is the correct answer, the only answer, but also an answer that doesn't work. The reason it doesn't work revolves around the third question: How did those pictures get into the computer?
For virtual worlds to be useful, they must be part of a shared, realtime universe. Orders must be placed around actual inventory. Appointments at service centers must reflect commitments for time slots that will be honored. Price scenarios must be built on real, recent sales data. Factory production runs must be planned only around actual customer demand, real parts availability, and the true availability schedules for equipment and people. So all these virtual worlds require access to data that can come from only one place: the organizational computer. Not only does the data have to come from that shared computer, but as the data changes, the new data has to be put back into the organizational computer immediately. After all, after a product unit is sold, it can't be sold again. After an appointment is given out, that time is gone. And so on.
The need to work with realtime, up-to-date information implies that computer users must have direct access to the data in the central organizational computer. After all, as I said in the discussion of the server database, the whole point of the shared computer is that it coordinates access to information for the many so that the individual can access that information directly without human help. That's how Bambi meets Godzilla.
In the infamous short filmstrip Bambi Meets Godzilla, lovable little Bambi, the fawn, meets Godzilla, the huge monster, who immediately steps on Bambi, crushing him. Who's Bambi and who's Godzilla? Bambi, of course, is the cute little personal computer. The very name implies a connection, often emotional, to its owner. People -- no, individuals -- buy personal computers precisely because they are personal. Personal means friendliness, power, and most of all, control. Any teenager's parent appreciates just how much having control over one's life means. A personal computer brings direct, no-questions-asked control over a personal information appliance. A PC is a tireless servant who does whatever you tell it to.
Godzilla? The mainframe. Mainframes are the opposite of personal: central, unapproachable, and full of forbidding interfaces. In addition, mainframes are controlled by a central bureaucracy. Worst of all, individuals do not control the mainframe; the mainframe controls them. In the classical central application I discussed earlier, every task processed through the mainframe is controlled by the mainframe. The user may initiate the task, but the central computer lays out the steps, one by one. In many big companies, the mainframe lays out not only the small steps, but the big ones, too: each day it lays out the production schedule, the delivery schedule, and the list of orders to be processed for approval. Orwellian, perhaps, but Godzilla? The mainframe may be the ultimate control freak, but isn't it a bit unfair to portray it as somehow crushing the personal, adorable, desktop computer?
The opportunity for Godzilla to break loose comes from that first question: Where does the scenery come from? Suppose that a product manager has developed a sophisticated model for analyzing prospective price changes -- a small virtual world. To run the model quickly, producing complex three-dimensional graphs that portray various tradeoffs (production costs versus market demand versus competitor response), the marketer needs access to a large amount of data in his or her own computer. As long as that data doesn't have to be up-to-date, no problem. Unfortunately, this product manager's pricing model becomes very successful. Unfortunately?
As the company comes to depend more and more on the pricing model, it decides to use the model in several highly competitive, high-volume, and volatile markets. The product manager is being considered for a promotion, and a challenge arises. How can the company alter the model to deal with rapidly changing data in an environment where many people are involved in each pricing decision? Obviously, the data can't be stored in each user's personal computer anymore. For one thing, the prices are changing too rapidly to justify continuous updating of the individual PCs. Worse, because several people are involved in each pricing decision, their models all must be working from the same data, looking at the same what-if scenarios, and considering the same changes. The only way to facilitate that kind of data sharing is to store the data somewhere else.
How about the mainframe?
Here's the catch. As soon as the data and the GUI are no longer in the same computer, the virtual world stops working. The entire basis for the construction of the graphical interface -- the portrayal of virtual realities and the ability to directly manipulate parts of that real world -- was that the user's own computer could access and manipulate huge amounts of information constantly and instantly. Virtual worlds are possible only when the information driving the application is in the same computer as the application.
From the user's perspective, as the application becomes too successful, the data driving the application is moved to a minicomputer or mainframe, the graphical interface becomes unworkable, and the ability to work in a powerful virtual world suddenly disappears. Somehow the mainframe reached out, co-opted the personal computer, and turned it into a slightly better terminal. Godzilla has crushed Bambi.
Overstatement? Do personal computers really become terminals when connected to mainframes? Can you take advantage of that personal computer's power even when connected to the mainframe, to bring some GUI to the world of sharing? Isn't there some way to have both -- graphical worlds and sharing? No. Yes. Yes and No. And finally, Yes! Consider these questions one at a time.
Millions of people have personal computers that talk to mainframe- and minicomputer-based applications. Most of the time, the PC becomes a terminal while talking to the bigger computer. These PCs display exactly the same information a terminal would display, but they display it on their computer screen. In a multitasking environment, you can add a touch of sophistication by confining the terminal screen to a window while running personal tools (such as word processors, spreadsheets, and mail) in other windows. This configuration allows these personal tools to assist you in dealing with the mainframe. Seeing these windows side-by-side on the screen, however, makes it painfully obvious how primitive and limited the terminal interface is compared to everything else on the screen.
Many technical designers reached an obvious conclusion: there must be a way to take the terminal interface and make it simpler and more powerful by making it more graphical. Designers created a variety of tools they hoped would transform a terminal screen into a graphical window-based form. In the end, that hope turned out to be just that: a hope. Although the tools described in the next few paragraphs work, making mainframe applications easy to use requires more than a simple transformation of the screens the user sees.
Normally, a mainframe sends sets of data, or forms, to terminals that display the data on the screen. Replacing the terminal with a personal computer introduces a new layer of intelligence into the equation. You can program the personal computer to be somewhat self-aware. When the mainframe sends a form to the personal computer, the PC is capable of doing something other than just painting it on the screen. A class of tools called screen scrapers or graphical veneers traps the form and, instead of displaying the original form, dresses it up to look prettier. Products such as Easel, Viewpoint, and Rhumba all fall into the category of screen scrapers.
The veneer concept is possible because the screen scraper can intercept the form (which otherwise would go straight to the screen) and keep the form in its memory. Then the screen scraper analyzes that form. The instructions that normally tell the terminal what to do with the data become the basis for the screen scraper's analysis. Instead of displaying the form on its screen, the personal computer stores the form in its memory. The PC then takes the parts of the form one at a time and converts them into more understandable elements of an equivalent graphical form.
Where the old form insisted on entry of a part number, the new form displays a list of part names enabling the user to pick the right one. Where the old form used cryptic codes to indicate mode of shipment, the new form provides a list of radio buttons (Express, Overnight, or Normal delivery) to pick from. Cryptic error messages become help boxes with varying levels of detail selected by the user. By the time the scraper is done, the final form looks nothing like the original. Each element of the form is more attractive and easier to understand. In many cases, entire forms disappear as multiple repetitive sequences are folded into smaller, more powerful, more intuitive graphical forms. The screen scraper appears to offer the user a new, more graphical world -- a world where the mainframe can't even tell anything has happened. Magical? Maybe, maybe not.
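To make the transformation concrete, here is a minimal sketch of the kind of field-by-field mapping a screen scraper performs. It is purely illustrative: the field names, codes, and widget choices are invented for this example and are not taken from Easel, Viewpoint, Rhumba, or any real product.

```python
# Hypothetical sketch of a screen scraper's field mapping.
# A real product intercepts the terminal datastream; here the "form" is just a list of dicts.

SHIPMENT_CODES = {"E": "Express", "O": "Overnight", "N": "Normal"}      # invented codes
PART_CATALOG = {"10-442": "Hex bolt, 1/4 in.", "10-587": "Lock washer"}  # invented parts

def dress_up(field):
    """Convert one cryptic terminal field into a friendlier GUI widget description."""
    name, value = field["name"], field["value"]
    if name == "PARTNO":
        # Old form: type a part number. New form: pick the part name from a list.
        return {"widget": "dropdown", "label": "Part",
                "choices": list(PART_CATALOG.values())}
    if name == "SHIPMODE":
        # Old form: one-letter code. New form: radio buttons.
        return {"widget": "radio", "label": "Delivery",
                "choices": list(SHIPMENT_CODES.values())}
    # Anything unrecognized passes through as a plain text box.
    return {"widget": "textbox", "label": name, "value": value}

terminal_form = [{"name": "PARTNO", "value": ""}, {"name": "SHIPMODE", "value": "N"}]
graphical_form = [dress_up(f) for f in terminal_form]
print(graphical_form)
```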
The magic is accomplished by a combination of the new powerful screen-scraping graphical veneer tools and programmers in whose hands the screen transformation takes place. Graphical transformation of applications is a very attractive concept. The transformation process is relatively rapid and painless, the users do end up with improved applications, and no changes are required to the mainframe resident code. So what's the catch? How do you get from magical transformations to Godzilla crushing Bambi? First, you have to know what's really happening when the tools scrape the screen.
Unfortunately, not much at all is happening. Just as gluing a veneer of fine wood onto a particle board base does not convert a table into solid oak, scraping off the terminal-oriented screen and replacing it with a graphical veneer does not convert the mainframe resident code into a graphical application. In addition, most users see through a graphical veneer in a matter of days. Graphical veneers really don't transform mainframe-based applications after all. The problem, though, is worse than that.
In an ironic sense, mainframe applications, being centrally directed, are supposed to be hard to use. They're not designed to make individual users more powerful. So the fact that a screen scraper doesn't change the mainframe's ease of use, while perhaps disappointing, is hardly surprising. Personal computers, on the other hand, are easy to use and do make individual users more powerful, but they work only with personal data. Whenever the user needs to work with up-to-date or shared data, the data has to come from the mainframe. As I said earlier, even if the mainframe has a screen scraper, lead does not become gold. In addition, the powerful, graphical application the user already had reverts to the terminal world. In a real sense, not only has lead not turned into gold, but by connecting the PC to the mainframe, gold turns into lead.
But let's be honest. Nobody expects mainframe-based applications to get better just because users access those applications through a PC. And everybody is excited about the newfound power the personal computer gives users to work with data in new ways. The catch comes when the applications built on the personal computer grow up. Soon, data requirements become sophisticated, and applications grow. To enable those same applications -- that were not originally written for the mainframe -- to work with realtime, shared data, the applications must move to the mainframe. That's when Godzilla crushes Bambi -- when gold turns into lead.
Yes, personal computers really become terminals when connected to mainframes. And yes, users can take advantage of the power of that personal computer, even when the PC is connected to the mainframe, to bring some GUI to the world of sharing. There is some way to have both graphical worlds and sharing, but neither personal computers nor mainframes alone are enough. A new kind of computer must accomplish this trick -- a computer that combines the characteristics of both personal computers and mainframes: the LAN.
The design of this new type of computer revolves around the three questions at the center of every virtual world: where does the scenery come from, where are the pictures stored, and how did they get there? You know almost the whole answer.
The pictures come from a single computer acting as a shared computer, an organizational computer, and a database computer. The point of the last section is that although a virtual world may start out revolving around personal information, sitting on a single person's desktop in an organizational environment, the virtual world quickly has to be shared. The information from which the virtual world is constructed has to be totally up-to-the-minute, and changes made by you have to be visible to everybody else as soon as you make those changes. The information at the center of virtual worlds comes from another computer, not from the personal computer.
At the same time, that information must be stored in the same computer you work with; otherwise, you can't generate a realistic virtual world. Whether the virtual world is a simple three-dimensional pricing model or a complex geographical map, in all cases the personal computer has to work with basic information stored in its own memory. The only way to create virtual, graphical worlds is to have the application, the information, and the graphical display in the same computer.
That is the great contradiction: how did the data get there? If the data is in the personal computer, the result is great applications, virtual worlds, and irrelevant data. You can fly your plane, but the runway you're seeing, the planes you're avoiding, and the dogs on the runway are based on three-year-old data. You might as well be flying with your eyes closed. If the data, pictures, and numbers are in the mainframe -- the shared computer -- you can't access them fast enough from your personal computer to have graphical displays. You fly the plane, but instead of looking out the window, your only source of information (although realtime) is based on words slowly being printed on a single screen. Now you get to fly the plane in realtime, but by the time the teletype tells you about the dog on the runway, you've already hit it.
The whole problem revolves around the data location contradiction. Put the data in the personal computer, and virtual worlds come into existence. However, those worlds are static, not shared, and based on history only. Put the data in the shared computer, and the world becomes dynamic, shared, and up-to-date. However, the virtual worlds disappear, and the real-time world is flat, one-dimensional, hard to work with, and limiting. Not an appealing choice.
From this contradiction arises not just an opportunity, but also a driving need to view the LAN as a new kind of computer, not a network. In this new perception of the system, the personal computers, the network, and the servers are all viewed as one large computer system. The network is the computer; the computer is the network.
Where are the data, the pictures, and the numbers? In the computer. But now, the computer has both personally dedicated and organizationally shared elements. Accessing data from the disk drive inside your personal computer is fast, but accessing data from a server on the LAN is faster. The data needs to be in your computer, and the LAN is your computer. One part of the LAN -- the personal computer on your desk -- is dedicated to you; another part is dedicated to sharing. Through the LAN, they're all one big computer -- a single integrated system.
Where do the pictures and the data come from? They come from the shared computer so that you have an up-to-date virtual world and your changes are immediately shared. How did those pictures, numbers, and data get there? On a LAN, the question loses its meaning. Your computer and the organizational computer are the same. It's one big computer system.
Precisely because a LAN is not a network -- precisely because it is so fast that information can be accessed as quickly across the LAN as across the internal bus structure of my personal computer -- there is a new synthesis. The result combines the virtues of Bambi (personal computing, graphics, and virtual worlds) with its antithesis, Godzilla (shared computing, databases, and coordinated control), to deliver a new synthesis: shared virtual worlds. If the personal computer is the electronic desk, then the LAN is the electronic office.
At this point, you should have a basic understanding of the truly new foundation element in the computing systems of the '90s. Personal computers play an important role, providing the engine for generating graphical worlds. Servers are the sharing engines, coordinating access to a variety of shared resources ranging from printers to databases. The LAN links personal computers and servers, forming a completely new type of computer system in which personal data and shared data are the same. This system provides a base on which shared virtual worlds can be built where we all share the same information and where changes I make are seen by you instantly. The beauty of the LAN is that it facilitates this shared world while leaving each user the power and autonomy offered by his or her own personal computer. At the beginning of this chapter, I introduced the LAN as the network that's not a network. What do you do if you really need a network after all, for example, to connect offices all around the world? That's what the next chapter is about.