Network Switch Definition

In a network, a switch is a device that channels incoming data from any of multiple input ports to the specific output port that will take it toward its intended destination.

In a local area network (LAN) using Ethernet, a network switch determines where to send each incoming message frame by looking at the physical device address, also known as the Media Access Control address or MAC address. Switches maintain tables that match each MAC address to the port on which that address was received. If a frame is addressed to a MAC address the switch has not yet learned, it is flooded to all ports in the switching domain. Broadcast and multicast frames are also flooded. This is known as BUM flooding: broadcast, unknown unicast, and multicast flooding. This capability makes a switch a Layer 2, or data-link layer, device in the Open Systems Interconnection (OSI) communications model.
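The learn-then-forward behavior described above can be sketched in a few lines. This is an illustrative model, not a real switch implementation (the `Switch` class and its method names are hypothetical; real switches do this in hardware):

```python
# Minimal sketch of Layer 2 MAC learning and BUM flooding.
# The Switch class here is hypothetical, for illustration only.

class Switch:
    def __init__(self, ports):
        self.ports = ports       # e.g. [1, 2, 3]
        self.mac_table = {}      # MAC address -> port it was learned on

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame is sent out of."""
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        # Broadcast or unknown unicast: flood to every other port (BUM).
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            return [p for p in self.ports if p != in_port]
        # Known unicast: forward out the single learned port.
        return [self.mac_table[dst_mac]]

sw = Switch(ports=[1, 2, 3])
print(sw.handle_frame("aa:aa", "bb:bb", in_port=1))  # unknown dst -> flood [2, 3]
print(sw.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned dst -> [1]
```

Note that the reply frame in the second call both teaches the switch where `bb:bb` lives and is delivered out only the port where `aa:aa` was learned, so flooding stops once both endpoints have been seen.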

Types of networking switches

There are several types of switches in networking in addition to physical devices:

  • Virtual switches are software-only switches instantiated inside virtual machine (VM) hosting environments.
  • Routing switches connect LANs; in addition to performing MAC-based Layer 2 switching, they can also perform routing functions at OSI Layer 3 (the network layer), directing traffic based on the Internet Protocol (IP) address in each packet.

How a network switch works

Switches, physical and virtual, comprise the vast majority of network devices in modern data networks. They provide the wired connections to desktop computers, wireless access points, industrial machinery and some internet of things (IoT) devices such as card entry systems. They interconnect the computers that host virtual machines in data centers, as well as the dedicated physical servers, and much of the storage infrastructure. They carry vast amounts of traffic in telecommunications provider networks.

A network switch can be deployed in the following ways:

  • Edge, or access, switches: These switches manage traffic either coming into or exiting the network. Devices like computers and access points connect to edge switches.
  • Aggregation, or distribution, switches: These switches are placed within an optional middle layer. Edge switches connect into these and they can send traffic from switch to switch or send it up to core switches.
  • Core switches: These network switches comprise the backbone of the network, connecting either aggregation or edge switches to each other, connecting user or device edge networks to data center networks and, typically, connecting enterprise LANs to the routers that connect them to the internet.

Many data centers adopt a leaf/spine architecture, which eliminates the aggregation layer. In this design, servers and storage connect to leaf switches (edge switches), and every leaf switch connects into two or more spine (core) switches. This minimizes the number of hops data has to take getting from source to destination and, thereby, minimizes the time spent in transit, or latency.
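The fixed path length that makes leaf/spine attractive can be illustrated with a small sketch. The server-to-leaf mapping and function below are hypothetical examples, assuming each server attaches to exactly one leaf and every leaf reaches every spine:

```python
# Illustrative sketch of leaf/spine path lengths.
# server_leaf maps each server to its leaf switch (hypothetical names).

def switch_hops(server_leaf, src, dst):
    """Number of switches a frame traverses between two servers."""
    if server_leaf[src] == server_leaf[dst]:
        return 1        # same leaf: through one switch
    return 3            # leaf -> spine -> leaf: always three switches

server_leaf = {"web1": "leaf1", "web2": "leaf1", "db1": "leaf2"}
print(switch_hops(server_leaf, "web1", "web2"))  # 1 (same leaf)
print(switch_hops(server_leaf, "web1", "db1"))   # 3 (via a spine)
```

Because any two servers are at most three switches apart regardless of how large the fabric grows, latency stays predictable; a three-tier design with an aggregation layer would add two more hops to cross-network paths.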

Some data centers establish a fabric or mesh network design that makes every device appear to be on a single, large switch. This approach reduces latency to its minimum and is used for highly demanding applications such as high-performance computing (HPC) in financial services or engineering.

Not all networks use switches. For example, a network may be (and often was, in the 1980s and 1990s) organized in a token ring or connected via a bus or a hub or repeater. In these networks, every connected device sees all traffic and reads the traffic addressed to it. A network can also be established by directly connecting computers to one another, without a separate layer of network devices; this approach is mostly of interest in HPC contexts where sub-5-microsecond latencies are desired and can become quite complex to design, wire and manage.