top-of-rack switching

Top-of-rack switching is a network architecture design in which computing equipment, such as servers, appliances and other switches located in the same or an adjacent rack, is connected to an in-rack network switch. The in-rack switch, in turn, connects to aggregation switches via fiber optic cables.

Despite the name, top-of-rack (ToR) switches can be placed anywhere in the rack. They are usually positioned near the top, however, to give them easy access to the horizontal fiber optic cabling plant that connects the in-rack switches to the aggregation switches.

ToR deployment options

In high-density data center deployments, an in-rack switch is placed in every rack and, in most cases, connected to computing devices such as bare-metal servers or blade server chassis. Each in-rack switch, in turn, is connected to the aggregation switch block using fiber, as sketched below. Connections within the rack can be any combination of copper, fiber or direct attach cabling.
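The sketch below models this layout as plain Python data structures: a per-rack switch with server-facing ports and a handful of fiber uplinks into an aggregation block. The port counts, rack names and server counts are illustrative assumptions, not figures from any particular product or deployment.

```python
# A minimal sketch of a ToR fabric as plain data structures. All sizes and
# names below are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ToRSwitch:
    rack_id: str
    server_ports: int = 48      # copper/DAC ports facing servers in the rack
    uplink_ports: int = 4       # fiber ports facing the aggregation block
    connected_servers: list = field(default_factory=list)

    def connect_server(self, server_name: str) -> None:
        if len(self.connected_servers) >= self.server_ports:
            raise RuntimeError(f"{self.rack_id}: no free server-facing ports")
        self.connected_servers.append(server_name)


@dataclass
class AggregationBlock:
    name: str
    tor_uplinks: dict = field(default_factory=dict)  # rack_id -> fiber uplink count

    def attach_tor(self, tor: ToRSwitch) -> None:
        # Each in-rack switch hands off to the aggregation block over fiber.
        self.tor_uplinks[tor.rack_id] = tor.uplink_ports


if __name__ == "__main__":
    agg = AggregationBlock("agg-block-1")
    for rack in ("rack-01", "rack-02", "rack-03"):
        tor = ToRSwitch(rack_id=rack)
        for i in range(40):                      # 40 servers per rack (assumed)
            tor.connect_server(f"{rack}-srv{i:02d}")
        agg.attach_tor(tor)
    print(agg.tor_uplinks)   # only a handful of fiber runs leave each rack
```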

ToR switches handle Layer 2 and Layer 3 frame and packet forwarding, data center bridging and the transport of Fibre Channel over Ethernet (FCoE), among other operations, for the racks of servers connected to them.
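As a rough illustration of what the two forwarding modes mean, the toy Python below performs a Layer 2 lookup against a learned MAC-address table and a Layer 3 longest-prefix-match lookup against a small route table. The tables, port names and addresses are invented for the example; real switches implement these lookups in hardware.

```python
# Toy illustration of Layer 2 (MAC table) vs. Layer 3 (longest-prefix match)
# forwarding decisions. Entries are invented for illustration only.

import ipaddress

# Layer 2: learned MAC address -> egress port
mac_table = {
    "aa:bb:cc:00:00:01": "Eth1/1",
    "aa:bb:cc:00:00:02": "Eth1/2",
}

# Layer 3: prefix -> egress port (checked longest prefix first)
route_table = [
    (ipaddress.ip_network("10.1.1.0/24"), "Eth1/49"),   # uplink to aggregation
    (ipaddress.ip_network("0.0.0.0/0"), "Eth1/50"),     # default route
]


def l2_forward(dst_mac: str) -> str:
    # A real switch floods unknown unicast; keep it simple here.
    return mac_table.get(dst_mac, "flood")


def l3_forward(dst_ip: str):
    addr = ipaddress.ip_address(dst_ip)
    for prefix, port in sorted(route_table, key=lambda r: r[0].prefixlen, reverse=True):
        if addr in prefix:
            return port
    return None


print(l2_forward("aa:bb:cc:00:00:02"))  # Eth1/2
print(l3_forward("10.1.1.25"))          # Eth1/49
```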

Weighing the advantages and disadvantages of ToR switching

ToR offers many benefits but also some drawbacks. Cabling complexity is reduced because all of the servers connect to the switch in the same rack, and only a few cables need to run outside the rack to reach the aggregation switch. Cable length and the total amount of cabling are reduced as well. A ToR design can often be upgraded to 10 Gigabit Ethernet (GbE), 40 GbE or 100 GbE without significant cost or a change in cabling.
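A quick back-of-the-envelope calculation makes the cabling difference concrete. The figures below, 10 racks of 40 servers with two fiber uplinks per ToR switch, are assumptions chosen only to illustrate the comparison.

```python
# Back-of-the-envelope cable counts under assumed rack and uplink figures.

racks = 10
servers_per_rack = 40
uplinks_per_tor = 2

# ToR: servers patch to the in-rack switch; only the uplinks leave the rack.
tor_in_rack_runs = racks * servers_per_rack        # short copper/DAC patches
tor_horizontal_runs = racks * uplinks_per_tor      # fiber to the aggregation block

# EoR-style alternative: every server needs its own horizontal run out of the rack.
eor_horizontal_runs = racks * servers_per_rack

print(f"ToR horizontal runs: {tor_horizontal_runs}")   # 20
print(f"EoR horizontal runs: {eor_horizontal_runs}")   # 400
```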

Other benefits of ToR include the ability to share one network switch across two or three racks when those racks are lightly populated. ToR architecture also supports modular deployment: a preassembled rack, complete with all the necessary cabling and switches, can be quickly connected and deployed on site.

Conversely, capital and maintenance costs might be higher. The distributed architecture of a ToR design requires more physical switches, and several of those switches may end up underutilized, increasing power and cooling consumption without a direct benefit to performance. Finally, if the design calls for a single in-rack switch per rack, a failure of that switch takes the entire rack offline.
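A simple utilization check illustrates the underutilization risk. The port count and rack populations below are assumptions for illustration; the lightly loaded racks it flags are the ones that might instead share a single switch, as noted above.

```python
# Rough port-utilization check for per-rack ToR switches, using assumed
# rack populations and a 48-port switch.

SERVER_PORTS = 48
servers_per_rack = {"rack-01": 46, "rack-02": 12, "rack-03": 8, "rack-04": 44}

for rack, servers in servers_per_rack.items():
    utilization = servers / SERVER_PORTS
    flag = "  <- candidate for sharing one switch across racks" if utilization < 0.5 else ""
    print(f"{rack}: {servers}/{SERVER_PORTS} ports in use ({utilization:.0%}){flag}")
```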

ToR switching versus end-of-row switching

ToR and end-of-row (EoR) switching are both popular options for data centers and other network environments that must connect a large number of servers. EoR switching differs from ToR switching in that an EoR design connects each server in a rack directly to a common aggregation switch, rather than to an individual switch within the rack.

EoR designs almost always require a much larger horizontal cable plant. Often, there are multiple EoR switches in a data center: one per row or, in some cases, one for a set number of racks. If that cable plant is already in place in an existing data center, however, it is often easier and more cost-effective to reuse it when switch hardware is upgraded than to rip out and replace the horizontal cabling to convert to a ToR design. The recabling cost, on top of the higher capital expense of the additional hardware a ToR architecture requires, can persuade data center operators to stick with EoR. In a greenfield deployment, where no EoR cable plant exists, the flexibility benefits of a ToR architecture generally outweigh the modest increase in hardware capital expenditure.
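The retrofit-versus-greenfield reasoning above can be sketched as a simple cost comparison. Every price and count in the snippet is a placeholder assumption, so only the shape of the comparison, not the specific numbers, should be read into it.

```python
# Simplified cost comparison for retrofit vs. greenfield builds.
# All prices and counts are placeholder assumptions.

racks = 20
servers_per_rack = 40
tor_switch_cost = 8_000          # per in-rack switch (assumed)
eor_switch_cost = 60_000         # per larger end-of-row chassis (assumed)
horizontal_run_cost = 150        # per new horizontal cable run (assumed)

# Retrofit of an existing EoR cable plant: reuse the runs, buy EoR hardware.
eor_retrofit = eor_switch_cost * 2                          # e.g. a redundant pair

# Converting that facility to ToR: new per-rack switches plus a new fiber plant.
tor_retrofit = racks * tor_switch_cost + racks * 2 * horizontal_run_cost

# Greenfield: no cable plant exists yet, so compare full builds.
eor_greenfield = eor_switch_cost * 2 + racks * servers_per_rack * horizontal_run_cost
tor_greenfield = racks * tor_switch_cost + racks * 2 * horizontal_run_cost

print(f"Retrofit:   EoR {eor_retrofit:>9,}  vs  ToR {tor_retrofit:>9,}")
print(f"Greenfield: EoR {eor_greenfield:>9,}  vs  ToR {tor_greenfield:>9,}")
```

Under these assumed numbers the existing cable plant tips the retrofit case toward EoR, while the greenfield case favors ToR, which mirrors the trade-off described above.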