Throughput Definition
Historically, throughput has been a measure of the comparative effectiveness of large commercial computers that run many programs concurrently. An early throughput measure was the number of batch jobs completed in a day. More recent measures either assume a more complicated mix of work or focus on a particular aspect of computer operation. Units such as trillions of floating-point operations per second (teraFLOPS or TFLOPS) provide a metric for comparing the cost of raw computing over time or across manufacturers. A benchmark can be used to measure throughput.
In data transmission, network throughput is the amount of data moved successfully from one place to another in a given time period, and it is typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).
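The underlying arithmetic is simply data moved divided by elapsed time, converted into the reported unit. As a minimal sketch with hypothetical transfer figures (the function name and numbers are illustrative, not from any particular tool):

```python
def network_throughput_mbps(bytes_transferred: int, seconds: float) -> float:
    """Return average throughput in megabits per second for a completed transfer."""
    bits = bytes_transferred * 8        # convert bytes to bits
    return bits / seconds / 1_000_000   # bits per second -> Mbps

# Hypothetical transfer: 750 MB moved in 60 seconds
print(f"{network_throughput_mbps(750 * 1_000_000, 60):.1f} Mbps")  # 100.0 Mbps
```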
Likewise, in storage systems, throughput refers either to the amount of data that can be received and written to the storage medium, or read from the medium and returned to the requesting system, typically measured in bytes per second (Bps). It can also refer to the number of discrete input/output (I/O) operations completed per second (IOPS).
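The same calculation applies to storage, whether the result is expressed in bytes per second or in IOPS. The sketch below uses made-up measurement figures purely for illustration:

```python
def storage_throughput(bytes_moved: int, io_operations: int, seconds: float):
    """Return (megabytes per second, IOPS) for a measured interval."""
    mb_per_second = bytes_moved / seconds / 1_000_000  # bytes per second -> MBps
    iops = io_operations / seconds                     # discrete I/O operations per second
    return mb_per_second, iops

# Hypothetical interval: 2 GB written via 500,000 I/O operations in 10 seconds
mbps, iops = storage_throughput(2_000_000_000, 500_000, 10)
print(f"{mbps:.0f} MBps, {iops:.0f} IOPS")  # 200 MBps, 50000 IOPS
```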
Throughput applies at higher levels of the IT infrastructure as well: databases and other middleware are often described in terms of transactions per second (TPS), and web servers in terms of page views per minute.
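These higher-level rates are computed the same way, as completed units of work divided by the measurement window. For example, with hypothetical counts over a one-minute window:

```python
# Hypothetical one-minute measurement window
transactions = 12_000   # database transactions completed in the window
page_views = 3_000      # web pages served in the window
window_seconds = 60

print(f"{transactions / window_seconds:.0f} TPS")   # 200 TPS
print(f"{page_views} page views per minute")        # 3000 page views per minute
```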
Throughput also applies to the people and organizations using these systems. A help desk, for example, has its own throughput rate, independent of the TPS rating of its help desk software, which includes the time staff spend developing responses to requests.