Local Area Networking

This article discusses the purpose of segmenting LANs, then describes, compares, and contrasts segmenting with bridges and segmenting with switches.

Part 11: Segmenting Local Area Networks

(Andrews, pp.915-16; Derfler, pp.146-51)

Why segment a LAN?

Typically, network traffic is concentrated between certain specific groups of users: workgroups.

While traffic is heavier within workgroups, there is less traffic between workgroups.

If all the workgroups are put on the same network, the combined traffic reduces the bandwidth available to everyone.

Therefore, it is a common practice to segment LANs so that traffic within a workgroup is contained within its segment and only traffic bound for other segments/workgroups is broadcast beyond the workgroup's segment.

For example: use a bridge or switch to isolate a group of computers on the network which share the same printers or files. The heavy traffic directed to those network resources remains within that segment - making traffic on the rest of the network lighter.

Segmenting means that available bandwidth is used more efficiently because there is less traffic both within and between segments.

  • Performance is also increased because computers are not wasting time processing unnecessary net traffic
  • And network stability is increased because when faults occur they are limited to segments and don't affect the entire network

Bridges and switches both make intelligent decisions, based upon the MAC address of the source and destination, about whether or not to pass along a signal to the next segment.

Switches usually have more ports than bridges.

Segmenting LANs with Bridges

There are two types of Bridges used to segment a LAN:

Transparent Bridges

Used mostly to interconnect Ethernet segments. They make decisions for themselves concerning frame routing.

For more - much more - see Sportack, pp.132-39.

Source Routing Bridges

Used mostly to interconnect Token Ring segments. They rely on the sending host for routing decisions.

For more, see Sportack, pp.139-41.

As mentioned previously, when bridges are used to segment a LAN, they reduce the total amount of traffic on the network by isolating traffic within segments.

The problem with segmenting a LAN with bridges (Sportack, p.145) is that saturation will recur as soon as large numbers of users begin requesting connections outside of their own network segment. This is because when a bridge needs to forward a packet (see below), it forwards it to all other segments, not just the one for which it is intended.

Therefore, bridges are a good solution (as good or better than switches) only if network segments require little, if any, cross-segment contact.

A bridge reads the station address (or MAC address) but does not dig deeper into the packet or frame to read the IPX or TCP/IP addresses, as a router does.

Remember that bridges (and switches) know physical segments (MAC addresses), while routers know logical subnets (network addresses).

How does it work?

A bridge builds and maintains an internal table that matches the MAC addresses of those machines which are connected to each of its ports. Then, when it receives a packet on one of its ports it checks to see if the MAC address of the destination is associated with the same port - if so it doesn't bother forwarding the packet.

How does it build its table of MAC addresses?

Each time it receives a packet on one of its ports it looks at the MAC address of the source machine and adds that to the list of addresses associated with that port if it is not already on the list.
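The learn-then-filter behavior described above can be sketched in a few lines of Python. This is a minimal, illustrative model (the class and method names are invented for this sketch, not any real bridge's interface): the bridge learns each source MAC on arrival, filters frames whose destination is on the arrival port, and floods everything else to all other ports.

```python
class Bridge:
    """Toy model of a transparent bridge's learn-and-filter logic."""

    def __init__(self, num_ports):
        self.mac_table = {}          # MAC address -> port it was learned on
        self.ports = range(num_ports)

    def receive(self, port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        # Learning: associate the source MAC with the arrival port.
        self.mac_table[src_mac] = port

        if self.mac_table.get(dst_mac) == port:
            # Destination is on the same segment: don't bother forwarding.
            return []

        # Otherwise a bridge floods the frame to every other port,
        # whether or not the destination is known.
        return [p for p in self.ports if p != port]
```

Note that even when the destination's port is known, a plain bridge still floods to all other segments - which is exactly the saturation problem noted above.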

Since bridges work at the lower two levels of the OSI model - the Physical (#1) layer, and the Data-link (#2) layer, they can only connect segments which use the same:

  • Data-link protocol, such as NetBEUI to NetBEUI
  • Architecture/NOS: Ethernet to Ethernet, etc.

Sportack says that using bridges and routers to segregate intersegment traffic requires a high degree of skill and patience, and lots of network traffic analysis (Sportack, p.146).

Segmenting LANs with Switches

(Sportack, pp.145-53; Brierley, pp.2.22-2.23)

Like bridges, switches can be used to segment a LAN. Unlike bridges, a major strength of switches is that they can handle multiple cross-segment communications well.

Therefore, ethernet switching is a common solution to the problem of saturated networks that require substantial cross-segment traffic.

Dynamic switches maintain a table that associates individual nodes with the specific ports to which they are connected. This makes them forward frames more efficiently because they will go across only the necessary segments and not all the segments.
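This targeted forwarding can be sketched as a small variation on bridge logic (again, the names here are invented for illustration): when the destination's port is known, the switch forwards to that one port only; it floods only when the destination is still unknown.

```python
class Switch:
    """Toy model of a dynamic switch's forwarding decision."""

    def __init__(self, num_ports):
        self.mac_table = {}          # MAC address -> port (learned dynamically)
        self.ports = range(num_ports)

    def receive(self, port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        self.mac_table[src_mac] = port       # dynamic learning, as in a bridge

        dst_port = self.mac_table.get(dst_mac)
        if dst_port is None:
            # Unknown destination: flood to all other ports.
            return [p for p in self.ports if p != port]
        if dst_port == port:
            return []                        # same segment: filter the frame
        return [dst_port]                    # known destination: one port only
```

Once both hosts have been learned, cross-segment traffic crosses only the segments involved - the key difference from a bridge, which floods it everywhere.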

How do switches that are not dynamic get their information? Their address tables must be configured by hand - and in any case, static switches should probably be avoided, as they are really just hubs.

(Andrews, p.915; Derfler, p.149)

There are two kinds of switches: segment and port. Port switches are more efficient but cost more, due to the increased cabling and the greater number of switches required. Segment switches reduce the amount of cabling and the number of switches required.

Port switching

Port switches place each workstation, server, or device on their own individual port. This means lots of cabling and lots of switches.

Segment switching

Segment switches can handle an entire network segment on each of their ports. Therefore, for the same network, fewer ports (and switches) are required than for the port switching model.

Segment switches are also capable of handling a single workstation on each port - creating, in essence, a one-node segment.

This capability allows the network to be arranged with low-traffic nodes sharing the same segments, while network and database servers and other high-bandwidth devices (optical drives, for example) can be connected using a one device-one port scheme.

Switching Variations

Cut-through switching doesn't wait for an entire packet to arrive before sending it ahead - it begins forwarding as soon as it has read the destination address. This reduces latency, but it also means corrupted frames get propagated, and the resulting retransmissions eat up bandwidth. The latter effect can be reduced by configuring the switch to delay slightly between the receipt and forwarding of packets - a compromise in the delay setting to reduce errors. Sounds like quite a bit of trouble for a marginal improvement.

Store and Forward uses the opposite approach from cut-through. It buffers the entire packet before sending it on. This lets the switch verify the packet's CRC (Cyclic Redundancy Check) and makes for highly reliable data transmission. This effectively makes the network "faster" (or gives it more bandwidth) because it eliminates the extra transmissions that error-correction would otherwise necessitate.
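The store-and-forward check can be illustrated with a short Python sketch. This is a simplification, not real switch firmware: the frame format and function names are invented, and `zlib.crc32` stands in for the frame check sequence (Ethernet's FCS is also a CRC-32). The sender appends a CRC over the payload; the switch buffers the whole frame, recomputes the CRC, and forwards only if they match.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32, standing in for an Ethernet FCS."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward_ok(frame: bytes) -> bool:
    """Buffer the whole frame, then verify its CRC before forwarding."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs
```

An intact frame passes the check and is forwarded; a frame corrupted in transit fails it and is dropped, saving the downstream segments the cost of carrying (and retransmitting) bad data.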

Network management - the switch should provide or support diagnostic and statistics software for network management.

Virtual networks - create them on-the-fly by configuring the switch.

Comparison between Switches and Bridges

Summarizing the preceding two sections: bridges cut down on total traffic by restricting traffic between nodes on the same segment to that segment. However, traffic bound for other segments is forwarded to ALL other segments. A bridge says, in effect, "This traffic is going somewhere outside of its source segment, so I'll forward it to all the other segments".

Switches take a more intelligent approach. In addition to restricting intra-segment traffic to the source segment, they also determine which segment inter-segment traffic is bound for and forward it only there. A switch says, "Oh, this traffic is going to a node on segment "C", therefore I will forward it only to that segment".

Now we can see why a switch will help reduce network traffic on all segments when there is a lot of inter-segment traffic, but a bridge will not.

Would one use two switches or two bridges to create a "backbone" connection between a Token Ring network and an Ethernet network? Neither. Because bridges and switches can only connect segments of the same architecture, we'd have to use a router.

Other Performance Issues

Although switches are touted as a kind of panacea when it comes to improving performance, it should also be kept in mind that the rest of the network must be up to par before a switch is going to help.

It's the "weakest link in the chain" effect:

  • The fastest port can only forward as quickly as it receives from a slow port, and
  • Slow ports can't use the bandwidth provided by fast ports

Either way, a bottleneck anywhere is going to put bandwidth to waste.

So, other factors to consider include:

  • Server NIC (host adapter) - should be high performance
  • Workstations (and other network devices)
  • Interswitch connections, if used (which is unlikely), should use high-speed media

Also, when high performance is needed, consider the relative low cost and ease of implementing switches and 100Mbps Fast Ethernet (probably a "re-wire", though, for CAT5) when compared with the hassle and large resources required for sophisticated technologies like ATM (Asynchronous Transfer Mode), FDDI (Fiber Distributed Data Interface), and ADSL (Asymmetrical Digital Subscriber Line).

ATM (Asynchronous Transfer Mode)

See ATM.

FDDI (Fiber Distributed Data Interface)

See FDDI.

See section 2.3.4, FDDI

ADSL (Asymmetrical Digital Subscriber Line)

The asymmetric version of DSL (Digital Subscriber Line). DSL is a software- and electronics-based technology that creates a higher-bandwidth connection over existing copper telephone lines. ADSL can achieve speeds of up to 8.448 Mbps downstream to the customer and up to 640 Kbps upstream to the "network" (both a function of distance).


Bruce Miller, 2002, 2014