Amplification and Reflection
The bandwidth problem in denial of service is simple to state: to saturate a target's network link, the attacker must direct more traffic at it than it can receive. For a target with a 100 Gbps uplink, this requires generating 100 Gbps of traffic. For a long time this was the binding constraint on attack scale: the attacker needed outbound bandwidth comparable to the target's inbound bandwidth, which meant that only adversaries with significant infrastructure could take down well-provisioned targets.
Amplification attacks break this constraint. The insight is that the attacker does not need to generate the traffic themselves; they need to cause a third party to generate it and direct it at the target. If the third party generates significantly more traffic in response to each stimulus than the attacker sent to trigger it, the attacker achieves a force multiplication effect. Their effective outbound bandwidth becomes their actual outbound bandwidth multiplied by the amplification factor of the reflector.
The Mechanics of Reflection
Reflection requires a protocol in which the server sends its response to the request's source address. The attacker spoofs the source address of the request to be the target's address. The server sends the response to the target. The target receives traffic it did not request and did not want, from what appears to be a legitimate server operating a standard service.
This is not a vulnerability in the reflector. The reflector is doing exactly what the protocol specifies: receiving a query and sending a response to the address in the query's source field. The vulnerability is in the protocol design: the protocol allows a small query to elicit a large response, and it does so without verifying that the querying address is authentic. UDP enables this because it is connectionless and stateless at the protocol level: there is no handshake that would reveal the address mismatch.
TCP cannot generally be used for reflection because the three-way handshake reveals the spoofed source: the server sends a SYN-ACK to the target, the target has no record of initiating a connection and sends a RST, and the handshake fails before any application data is exchanged. The response payload, the amplified traffic, never arrives because the connection never establishes.
TCP's statefulness, which makes it vulnerable to SYN flooding, simultaneously makes it resistant to use as a reflection vector. UDP's statelessness, which makes it efficient for request-response services, makes it trivially usable for reflection.
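The reflection mechanics described above can be captured in a toy model. This is a minimal, purely illustrative sketch (the `Datagram` type and field names are invented for the example); it models only the addressing logic, not real packets:

```python
# Toy model of UDP reflection. The Datagram type and its fields are
# invented for illustration; real UDP headers carry the same information.
from dataclasses import dataclass

@dataclass
class Datagram:
    src: str       # source address: unauthenticated in UDP
    dst: str       # destination address
    payload: bytes

def reflector(query: Datagram, response_payload: bytes) -> Datagram:
    # The reflector does exactly what the protocol specifies: it answers
    # to whatever address appears in the query's source field.
    return Datagram(src=query.dst, dst=query.src, payload=response_payload)

# The attacker forges the source address to be the target's address.
spoofed = Datagram(src="203.0.113.10",    # target, not the attacker
                   dst="198.51.100.53",   # open reflector
                   payload=b"\x00" * 40)  # small query

response = reflector(spoofed, b"\x00" * 4000)  # large response
assert response.dst == "203.0.113.10"  # amplified traffic lands on the target
```

The spoofed 40-byte query produces a 4,000-byte response delivered to the target, never touching the attacker's own address.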
Amplification Factor
The amplification factor for a given reflector is the ratio of response size to request size. It determines the force multiplication the attacker achieves. An amplification factor of 100 means the attacker can generate 100 Gbps of attack traffic at the target using 1 Gbps of their own outbound bandwidth.
Different protocols offer radically different amplification factors depending on how they are configured and what they are asked to do.
DNS: a minimal DNS query is approximately 40 bytes. A DNS response with a large TXT record can reach the UDP payload limit of roughly 4,000 bytes when EDNS0 is enabled, or require TCP fallback for larger responses. Amplification factors between 28 and 70 are typical for standard queries. Queries for ANY records or for domains with large TXT records push the upper end. Queries directed at open resolvers (resolvers that answer queries from any source) are the attack vector; authoritative servers with rate limiting are significantly less useful as reflectors.
NTP: the Network Time Protocol's monlist command returns a list of the last 600 hosts that synchronized with the server. A 234-byte request elicits up to 100 UDP packets totaling approximately 48 kilobytes: an amplification factor of roughly 100x by packet count and over 200x by byte count. The monlist command was disabled in NTP version 4.2.7p26, but servers running older software remained in operation in large numbers for years afterward. The Spamhaus attack in 2013, which reached approximately 300 Gbps, relied on DNS amplification through open resolvers; large-scale NTP amplification attacks followed in late 2013 and early 2014.
SSDP: the Simple Service Discovery Protocol, used for UPnP device discovery, responds to a small multicast search request with a unicast response containing a device description URL. The amplification factor is approximately 30x. Consumer-grade routers and network devices expose SSDP on public interfaces due to misconfiguration, providing a large pool of reflectors that are genuinely difficult for network operators to track.
CLDAP: Connectionless LDAP, used for Windows domain controller queries, responds to a small attribute query with a response containing domain controller metadata. Amplification factors of approximately 56 to 70x are typical. This was observed in the wild in 2016 as a relatively new amplification vector, demonstrating that even decades-old protocols can yield new attack surfaces when examined with this lens.
Memcached: the most extreme documented case to date. Memcached servers exposed on UDP port 11211 without authentication respond to a small stats command with a large response containing cache statistics or, more significantly, respond to a get command by returning whatever is stored at the requested key. An attacker who previously stored a large payload, a string of up to 1 MB, in an exposed Memcached server can retrieve it with a small request. Amplification factors of approximately 50,000x have been documented. The GitHub attack in February 2018, peaking at approximately 1.35 Tbps, used Memcached amplification almost exclusively. Memcached has no authentication in its default configuration and was not designed to be exposed to the public internet (its documentation explicitly warns against this), but misconfigured deployments exist in significant numbers.
CharGen: the character generation service (port 19, RFC 864) responds to any received UDP datagram with a datagram of up to 512 arbitrary characters; the TCP variant sends a continuous character stream until the connection closes. Amplification factors exceeding 358x have been measured. CharGen has no legitimate modern use and its continued presence on the internet is purely legacy. The number of reachable CharGen servers has decreased substantially, but they still appear in amplification attack reports.
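The byte amplification factors above follow directly from the quoted request and response sizes. A quick recomputation (the ~20-byte Memcached get request is an assumption; the other sizes are the figures given above) shows the theoretical ceilings:

```python
# Byte amplification factor = response size / request size.
# Sizes are the approximate figures quoted in the text; the 20-byte
# Memcached request is an assumption chosen to match the documented ~50,000x.
protocols = {
    #                 request bytes, response bytes
    "DNS with EDNS0":    (40,        4_000),
    "NTP monlist":       (234,      48_000),
    "Memcached get":     (20,    1_000_000),
}

for name, (req, resp) in protocols.items():
    print(f"{name}: ~{resp / req:,.0f}x")
```

The DNS figure (~100x) is the theoretical maximum for a maximally padded response; the 28x to 70x range quoted above reflects typical real-world queries.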
Bandwidth Arithmetic at Scale
The Memcached amplification factor makes the arithmetic worth examining explicitly. An attacker with a 10 Gbps outbound connection, sending requests to Memcached servers at 10 Gbps, generates approximately 500 Tbps of attack traffic at the target. This is not achievable in practice: there are not enough exposed Memcached servers with sufficient bandwidth to the target, network paths saturate before the traffic reaches a single destination, and filtering typically intervenes. But it illustrates why the theoretical maximum is not a useful ceiling.
More practically: the GitHub attack used approximately 100 Gbps of attack traffic from Memcached servers to generate the 1.35 Tbps that reached GitHub's network. The actual outbound bandwidth required from the attacker's infrastructure was a small fraction of that. The attack was sourced from over 1,000 autonomous systems, making source-based filtering intractable and requiring that Akamai, which provides DDoS mitigation to GitHub, absorb the traffic at their scrubbing infrastructure.
The arithmetic changes the nature of the problem for defenders. It is no longer sufficient to have more bandwidth than the attacker. It is necessary to have more bandwidth than the attacker multiplied by the best available amplification factor. For an organization not using a purpose-built DDoS mitigation provider with massive bandwidth capacity, this is simply not achievable at current amplification factors.
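The defender's arithmetic can be stated as a one-line computation. This is a sketch of the relationship described above (the function name is illustrative), ignoring the practical limits on reflector capacity:

```python
def required_absorption_gbps(attacker_gbps: float, amplification_factor: float) -> float:
    # Capacity a defender must be able to absorb, in the worst case where
    # the attacker fully converts outbound bandwidth through reflectors.
    return attacker_gbps * amplification_factor

# A 1 Gbps attacker behind 100x DNS amplification outguns a 10 Gbps uplink.
print(required_absorption_gbps(1, 100))        # 100.0 Gbps
# The theoretical Memcached case from the text: 10 Gbps * 50,000x.
print(required_absorption_gbps(10, 50_000))    # 500000.0 Gbps, i.e. 500 Tbps
```

The asymmetry is the point: the defender's required capacity scales with the best amplification factor available to the attacker, not with the attacker's own bandwidth.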
The Reflector Population
Amplification attacks are bounded not by the attacker's bandwidth but by the number of available reflectors and their aggregate bandwidth. This makes the reflector population itself an important parameter in understanding attack capability.
Open DNS resolvers are the most numerous. Measurements from various periods have found millions of open resolvers on the public internet. Their distribution across autonomous systems means they are connected through diverse paths to most targets, producing geographically distributed amplification traffic that is difficult to filter in bulk.
NTP servers running old software with monlist enabled have decreased substantially following coordinated disclosure and the widely reported attacks of late 2013 and early 2014. However, "decreased substantially" in a population that was previously measured in millions still leaves large numbers.
Memcached servers on the public internet declined sharply following the February 2018 attacks, as cloud providers and hosting companies blocked UDP port 11211 at their perimeters. But server-side fixes require operators to apply them, and operators do not always receive or respond to notifications. Periodic scanning still finds tens of thousands of exposed servers.
SSDP reflectors are primarily consumer devices (home routers, smart TVs, network printers) that are difficult to patch in bulk and whose operators have no meaningful way to detect that their device is being used as a reflector. These reflectors are replenished naturally as new devices are deployed without security configuration.
The reflector population is not static. New protocols are periodically discovered to have amplification properties. CoAP (Constrained Application Protocol), designed for IoT devices, was documented as an amplification vector in 2018–2019 with amplification factors between 6x and 34x depending on query type. The IoT device population continues to grow with minimal security oversight, and many devices run services with amplification properties that their operators are not aware of.
Spoofing as the Enabling Condition
Amplification attacks require source address spoofing. Without the ability to send packets with a forged source address, the attacker cannot redirect the amplified response to the target. The attacker's real address would receive the response instead.
BCP 38, published in 2000, specifies ingress filtering: the practice of network operators dropping packets at their network boundaries whose source addresses are not within the address space routable through the interface on which they arrive. If an ISP's customer block is 192.0.2.0/24, the ISP should not forward packets from that customer with source addresses outside that range. If universally deployed, BCP 38 would make source address spoofing impractical, because a spoofed packet would be dropped before it reaches the reflector.
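The ingress-filtering decision is simple to express. This is a minimal sketch, assuming the operator maintains a mapping from each customer-facing interface to its allocated prefixes (the interface name and table are invented for the example):

```python
# Sketch of BCP 38 ingress filtering. Assumes the operator knows which
# prefixes are legitimately routable through each customer interface.
import ipaddress

CUSTOMER_PREFIXES = {
    "eth1": [ipaddress.ip_network("192.0.2.0/24")],  # the example block from the text
}

def permit_ingress(interface: str, src_addr: str) -> bool:
    """Permit only packets whose source address falls inside the
    address space allocated to the interface they arrived on."""
    src = ipaddress.ip_address(src_addr)
    return any(src in net for net in CUSTOMER_PREFIXES.get(interface, []))

assert permit_ingress("eth1", "192.0.2.17")        # legitimate customer source
assert not permit_ingress("eth1", "203.0.113.10")  # spoofed source: dropped
```

A spoofed packet dropped here never reaches a reflector, which is why universal deployment would neutralize the attack class; the difficulty, as discussed below, is not technical.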
BCP 38 has not been universally deployed. More than twenty years after its publication, a substantial fraction of ASes still permit source address spoofing. CAIDA's Spoofer project maintains ongoing measurements of this and consistently finds that a significant minority of network endpoints can send spoofed packets that reach external destinations.
The reasons for incomplete deployment are economic and organizational. Implementing ingress filtering requires router CPU for packet inspection. It requires operators to maintain accurate knowledge of their address space allocation. It requires a deliberate action with local cost and diffuse benefit: the network implementing filtering does not directly benefit from doing so, since it is not the one receiving the amplified traffic.
MANRS (Mutually Agreed Norms for Routing Security) and similar initiatives attempt to address this through peer pressure and operator commitment. Progress has been made, but not enough.
Botnet-Sourced Amplification
The combination of a botnet with amplification multiplies the already extreme leverage. Each bot sends spoofed requests to reflectors, and the aggregate amplified traffic converges on the target from a large and diverse set of source addresses. The attacker's infrastructure, the command and control system, may generate very little traffic directly.
This approach is operationally significant because it separates attribution from effect. The traffic the target receives comes from reflectors, not from the attack infrastructure. The attack infrastructure communicates with bots, not with reflectors. The bots are compromised third-party systems. Tracing the attack back to its origin requires correlating logs across reflectors, autonomous systems, and compromised hosts, each operated by a different entity with different logging practices and different levels of cooperation.
From the perspective of a defender conducting incident response, the traffic arriving at the target tells you the reflector addresses and the attack traffic volume. It does not tell you the attack source. The forensic chain required to establish attribution crosses multiple administrative boundaries and typically takes far longer than the attack itself.
Detection and Characterization
Amplification attacks have distinctive traffic patterns that enable detection and characterization. The response packets from reflectors typically have:
- Source ports belonging to well-known services (port 53 for DNS, port 123 for NTP, port 11211 for Memcached)
- Packet sizes consistent with the reflector protocol's responses
- High packet-to-flow ratios: many packets from the same source IP, appearing as a single unidirectional flow with no corresponding outbound traffic from the target
The absence of corresponding outbound traffic is the clearest signature. Legitimate DNS responses arrive because the target sent queries. Amplification traffic arrives without corresponding outbound queries. Traffic analysis that tracks flow symmetry can identify this pattern.
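The flow-symmetry check can be sketched over simplified flow records. This is illustrative only (the record format and field names are invented; real deployments would consume NetFlow or sFlow records):

```python
# Sketch of flow-symmetry analysis. A large inbound flow from a reflector
# service port with no matching outbound flow is the amplification
# signature described above. The flow-record format is invented.
from collections import defaultdict

REFLECTOR_PORTS = {53, 123, 11211, 19}  # DNS, NTP, Memcached, CharGen

def suspicious_flows(flows):
    """flows: iterable of (direction, peer_ip, peer_port, byte_count)."""
    outbound_peers = {(ip, port) for d, ip, port, _ in flows if d == "out"}
    inbound = defaultdict(int)
    for d, ip, port, nbytes in flows:
        if d == "in" and port in REFLECTOR_PORTS:
            inbound[(ip, port)] += nbytes
    # Keep inbound service-port traffic that has no corresponding outbound query.
    return {k: v for k, v in inbound.items() if k not in outbound_peers}

flows = [
    ("out", "198.51.100.53", 53, 60),       # we queried this resolver
    ("in",  "198.51.100.53", 53, 4000),     # its answer: symmetric, legitimate
    ("in",  "203.0.113.9", 11211, 900000),  # Memcached bytes we never asked for
]
print(suspicious_flows(flows))  # {('203.0.113.9', 11211): 900000}
```

The legitimate DNS answer is excluded because a matching outbound query exists; the unsolicited Memcached traffic is flagged.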
NetFlow or sFlow data, if available at the upstream provider or internet exchange, allows characterization of the source distribution and protocol mix. A single reflector protocol dominating the traffic (Memcached on UDP/11211, for example) indicates which mitigation approach is appropriate.
Mitigation Architecture
The standard mitigation approach for amplification attacks involves upstream traffic filtering at a provider with sufficient capacity to absorb the traffic volume, combined with protocol-specific blocking of the reflector traffic.
Blocking by source port is effective when a single reflector protocol is involved: drop all traffic from UDP/123 (NTP), or UDP/11211 (Memcached), or UDP/53 (DNS, though DNS is operationally necessary and source-port blocking requires finer-grained rules). This is a blunt instrument: it blocks legitimate traffic from those protocols. But under active attack, traffic from the attack protocol vastly exceeds legitimate traffic, making the trade-off acceptable.
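The filtering rule itself reduces to a trivial predicate. This is a minimal sketch of the decision logic (in practice it would be an ACL or firewall rule on the mitigation device, not application code):

```python
# Sketch of the blunt source-port filter described above: during an active
# Memcached amplification attack, drop everything arriving from UDP/11211.
BLOCKED_UDP_SRC_PORTS = {11211}  # extend with 123, 19, etc. as attacks dictate

def drop_packet(proto: str, src_port: int) -> bool:
    return proto == "udp" and src_port in BLOCKED_UDP_SRC_PORTS

assert drop_packet("udp", 11211)      # attack traffic dropped
assert not drop_packet("udp", 53)     # DNS responses still pass
assert not drop_packet("tcp", 11211)  # TCP unaffected
```

The collateral damage is visible in the predicate: any legitimate response from the blocked port is dropped along with the attack traffic, which is the trade-off described above.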
Anycast-based scrubbing distributes the traffic across a geographically distributed scrubbing network. The attack traffic is absorbed across many points of presence, each handling a fraction of the total. Filtered traffic is tunneled back to the target's origin. This architecture is the basis of commercial DDoS mitigation services and requires infrastructure investment that only large providers can sustain.
Remotely Triggered Black Hole (RTBH) routing is a more drastic option: the target's address is announced with a blackhole community, causing upstream providers to drop all traffic to that address. This stops the attack and stops all legitimate traffic simultaneously. It is operationally useful when the alternative is complete service unavailability due to attack traffic saturating the uplink, but it achieves availability by abandoning the target address entirely.
The Open Problem
The amplification problem does not have a complete technical solution at the protocol design level. As long as UDP services exist that send larger responses than they receive, and as long as source address spoofing is possible in some fraction of the network, amplification attacks are feasible.
Incremental approaches (closing specific reflectors, deploying rate limiting on individual services, improving BCP 38 compliance) reduce the available amplification capacity without eliminating it. The discovery of new amplification vectors in new protocols (CoAP, WS-Discovery, QUIC under some conditions) demonstrates that the class of vulnerable protocols grows as new protocols are deployed, not only shrinks as old ones are secured.
What changes is the economics. As scrubbing infrastructure scales, as reflector populations shrink, and as amplification factors for common protocols decrease, the cost of executing an effective amplification attack increases. The goal is to make attacks expensive enough that the attacker's cost exceeds the damage they can cause.