Local Area Networking
This article describes the characteristics and lists some of the advantages and disadvantages of centralized, client/server, and peer-to-peer networking models.
Part 2: Local Area Network Models
Centralized Computing Model
The centralized computing model uses one powerful, centralized computer to which are attached many "dumb terminals."
Dumb terminals are simply a keyboard and a monitor; they have no local processor and no local storage, both of which are provided by the central computer.
This model was developed when processing and storage were extremely expensive and required a lot of physical space.
Initially the central computer was a mainframe (one or more rooms full of equipment). Mainframes gradually gave way to minicomputers (one or several "washing machines").
Eventually, low-cost stand-alone PCs appeared and became more cost-effective than mainframes and minis for many, but not all, jobs. Weather forecasting and scientific number-crunching still required the enormous processing power and speed of mainframes.
Later, PCs were networked as thick clients in an attempt to re-create some of the advantages of centralized computing, such as shared peripherals and file sharing.
As the cost of owning and managing PCs rose during the 1990s, a new centralized computing model made an appearance in the form of thin client computing.
Thin clients are low-cost PCs attached to a powerful central server. While thin clients are equipped with their own processors (in contrast to dumb terminals), they rely more heavily on server resources than thick clients do.
Citrix is an example of thin client network software. Its products have evolved towards desktop virtualization and cloud computing.
Client/Server Model
In the client/server model, "intelligent" PCs are attached to a central computer. The central computer is the server, and the attached PCs are called clients, nodes, or workstations.
Processing is done locally on the workstations while security and resources are managed centrally on a server.
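This division of labor can be sketched with a minimal TCP file server and client. This is a hypothetical illustration of the model, not the protocol of any particular network operating system; the file name and contents are invented for the example.

```python
import socket
import threading

# The "server" side: holds a shared resource (a small in-memory file store)
# and answers requests from clients over TCP.
FILES = {"readme.txt": b"Hello from the server"}

def serve(sock):
    conn, _ = sock.accept()
    with conn:
        name = conn.recv(1024).decode()           # client asks for a file by name
        conn.sendall(FILES.get(name, b"NOT FOUND"))

server = socket.socket()
server.bind(("127.0.0.1", 0))                     # bind to any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# The "client" side: a workstation does its own processing locally
# but fetches the shared resource from the server.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"readme.txt")
data = client.recv(1024)
client.close()
print(data.decode())  # Hello from the server
```

The key point the sketch shows is that the resource lives in exactly one place, so access to it can be managed centrally.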
Novell has always had a strong orientation toward a workstation/server architecture, while Microsoft's networking grew out of peer-to-peer, an approach that has carried over to today. (Minasi, p. 32)
(Microsoft terminology calls the attached machines clients, yet contradicts itself by naming the client software NT Workstation.)
Clients use their own processing power but rely on the server for resources and services such as file storage, shared printers, and centralized security.
The server is usually dedicated, meaning that it performs only its network functions and is not used locally to run standard day-to-day applications.
The client/server model uses two types of software: client software, which runs on each workstation, and server software, which runs on the server.
One example of client software is the redirector program. When an application asks the native operating system for a file that resides on a network drive, the redirector intercepts the request and sends it to the server instead.
Client/server networks are often distributed: more than one dedicated server is used in order to maximize network performance. Server tasks can be split up in a variety of ways; for example, separate machines may act as file servers, print servers, and application servers.
Peer-to-Peer Model
In a peer-to-peer network model there is not necessarily a single designated server: each node has the potential to be a server and share its resources.
Peer-to-peer machines function simultaneously as servers and workstations.
Network security is handled not by one central server but by all of the server/workstations: each one controls access to its own files by other nodes on the network.
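The dual server/workstation role, with each machine enforcing its own access list, can be sketched as follows. All peer names, file names, and method names here are hypothetical, invented for the illustration.

```python
class Peer:
    """Each peer is both a server (it shares files and enforces its own
    access list) and a workstation (it requests files from other peers)."""

    def __init__(self, name):
        self.name = name
        self.files = {}   # filename -> contents
        self.acl = {}     # filename -> set of peer names allowed to read it

    def share(self, filename, contents, allow):
        self.files[filename] = contents
        self.acl[filename] = set(allow)

    # Server role: answer another peer's request, checking the local ACL.
    def serve(self, filename, requester):
        if requester in self.acl.get(filename, set()):
            return self.files[filename]
        return None  # access denied by this peer's own security settings

    # Workstation role: fetch a file from another peer.
    def fetch(self, other, filename):
        return other.serve(filename, self.name)

alice = Peer("alice")
bob = Peer("bob")
carol = Peer("carol")
alice.share("budget.xls", "Q1 figures", allow=["bob"])

print(bob.fetch(alice, "budget.xls"))    # allowed: bob is on alice's list
print(carol.fetch(alice, "budget.xls"))  # denied: carol is not
```

Note that there is no central authority: whether carol may read the file is decided entirely by alice's machine, which is exactly why security becomes hard to manage as the number of peers grows.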
Peer-to-peer networks have many advantages in small network environments: low cost, simple setup, and no need for a dedicated server or a full-time administrator.
Running an operating system and client and server software tends to use up a lot of memory on peer-to-peer machines.
In addition, the disadvantages grow with the size of the network: security must be managed separately on every machine, and performance suffers as each machine serves both its local user and requests from the network.
Bruce Miller, 2002, 2014