Varol Cagdas Tok

Personal notes and articles.

Amplification and Reflection

To saturate a target's network link, the attacker must direct more traffic at it than it can receive. For a long time this was the binding constraint on attack scale: the attacker needed outbound bandwidth comparable to the target's inbound bandwidth, meaning only adversaries with significant infrastructure could take down well-provisioned targets.

Amplification attacks break this constraint. The attacker does not generate the traffic themselves; they cause a third party to generate it and direct it at the target. If the third party generates significantly more traffic per stimulus than the attacker sent to trigger it, the attacker achieves force multiplication. Their effective outbound bandwidth becomes their actual bandwidth multiplied by the amplification factor of the reflector.


The Mechanics of Reflection

Reflection requires a protocol in which the server sends its response to the source address of the request. The attacker spoofs the source address of the request to be the target's address. The server sends the response to the target. The target receives traffic it did not request from what appears to be a legitimate server.

This is not a vulnerability in the reflector. The reflector does exactly what the protocol specifies: it receives a query and sends the response to the address in the source field. The vulnerability is in protocol design: a small query elicits a large response, and the querying address is not authenticated. UDP enables this because it is connectionless and stateless; there is no handshake to reveal the address mismatch.
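The mechanics come down to the fact that the IPv4 source-address field is just bytes the sender writes; nothing in the packet format authenticates it. A minimal sketch (documentation addresses per RFC 5737, checksum left zero, not sendable as-is without a raw socket):

```python
import socket
import struct

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Build a 20-byte IPv4 header. The src field is whatever the
    sender chooses to write; there is no authentication of it."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit words = 20-byte header
    total_len = 20 + payload_len
    return struct.pack(
        ">BBHHHBBH4s4s",
        version_ihl, 0, total_len,        # version/IHL, TOS, total length
        0, 0,                             # identification, flags/fragment offset
        64, socket.IPPROTO_UDP, 0,        # TTL, protocol, checksum (filled later)
        socket.inet_aton(src),            # attacker writes the TARGET's address here
        socket.inet_aton(dst),            # reflector's address
    )

hdr = ipv4_header(src="192.0.2.10", dst="198.51.100.53", payload_len=40)
print(len(hdr))  # 20: the reflector will answer 192.0.2.10, not the sender
```

The reflector parses this header exactly as the protocol specifies and addresses its response to 192.0.2.10, which never sent anything.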

TCP cannot generally be used for reflection because the three-way handshake reveals the spoofed source: the server sends a SYN-ACK to the target, the target has no record of initiating a connection and sends a RST, and the handshake fails before any application data is exchanged. TCP's statefulness, which makes it vulnerable to SYN flooding, simultaneously makes it resistant to use as a reflection vector. UDP's statelessness, which makes it efficient for request-response services, makes it trivially usable for reflection.


Amplification Factor

The amplification factor is the ratio of response size to request size. It determines the force multiplication the attacker achieves. An amplification factor of 100 means the attacker can generate 100 Gbps of attack traffic using 1 Gbps of their own outbound bandwidth.
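The arithmetic is a one-line multiplication; a small helper (names are illustrative) reproduces the example above, and the second call previews the theoretical upper bound with the largest documented factors:

```python
def attack_bandwidth_gbps(attacker_gbps: float, amplification: float) -> float:
    """Effective traffic arriving at the target: the attacker's own
    outbound bandwidth multiplied by the reflector's amplification factor."""
    return attacker_gbps * amplification

print(attack_bandwidth_gbps(1, 100))      # 100.0 Gbps from 1 Gbps at factor 100
print(attack_bandwidth_gbps(10, 50_000))  # 500000.0 Gbps = 500 Tbps, theoretical
```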

DNS: a minimal DNS query is approximately 40 bytes on the wire. A response carrying a large TXT record can approach the EDNS0 payload limit, commonly advertised as 4,096 bytes. Amplification factors between 28 and 70 are typical for standard queries; queries for ANY records or large TXT records push the upper end. Open resolvers (resolvers that answer queries from any source) are the attack vector.
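The request side can be built with the standard library to check the sizes involved. `dns_query` is an illustrative helper; the payload-level ratio it prints is an upper bound, since IP/UDP headers and realistic response sizes pull measured factors down toward the 28 to 70x range above:

```python
import struct

def dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal DNS query payload (no EDNS0). qtype 255 = ANY."""
    # Header: id, flags (RD set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, qclass IN
    return header + question

q = dns_query("example.com")
print(len(q))  # 29-byte DNS payload; roughly 40+ bytes on the wire with headers
response_limit = 4096  # a commonly advertised EDNS0 payload size
print(f"payload-level amplification <= {response_limit / len(q):.0f}x")
```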

NTP: the Network Time Protocol's monlist command returns a list of the last 600 hosts that synchronized with the server. A 234-byte request elicits up to 100 UDP packets totaling approximately 48 kilobytes, an amplification factor of approximately 206x by volume. The monlist command was disabled in NTP version 4.2.7p26, but servers running older software remained in operation in large numbers for years. The Spamhaus attack in 2013, which reached approximately 300 Gbps, used DNS amplification through open resolvers; NTP amplification produced the roughly 400 Gbps attack reported by Cloudflare in February 2014.

SSDP: the Simple Service Discovery Protocol, used for UPnP device discovery, responds to a small multicast search request with a unicast response containing a device description URL. Amplification factor approximately 30x. Consumer-grade routers and network devices expose SSDP on public interfaces due to misconfiguration, providing a large pool of reflectors genuinely difficult for network operators to track.

CLDAP: Connectionless LDAP, used for Windows domain controller queries, produces amplification factors of approximately 56 to 70x. This was observed in the wild in 2016 as a relatively new vector, demonstrating that decades-old protocols can yield new attack surfaces when examined with this lens.

Memcached: the most extreme documented case. Memcached servers exposed on UDP port 11211 without authentication respond to a small stats command with a large response, or return whatever is stored at a requested key. An attacker who previously stored a large payload, up to 1 MB, in an exposed Memcached server can retrieve it with a small request. Amplification factors of approximately 50,000x have been documented. The GitHub attack in February 2018, peaking at approximately 1.35 Tbps, used Memcached amplification almost exclusively. Memcached has no authentication in its default configuration and was not designed to be exposed to the public internet; its documentation explicitly warns against this.
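The payload-level arithmetic behind that factor, assuming the 8-byte memcached UDP frame header and an illustrative one-character key (real attacks include IP/UDP overhead, which is why documented figures sit around 50,000x rather than this upper bound):

```python
# Memcached's UDP protocol prefixes each datagram with an 8-byte frame
# header (request id, sequence number, datagram count, reserved).
frame_header = 8
request_bytes = frame_header + len(b"get a\r\n")  # tiny ASCII retrieval command
stored_value_bytes = 1_000_000                    # attacker-stored payload, up to ~1 MB

print(request_bytes)                              # 15 bytes of UDP payload
print(f"{stored_value_bytes // request_bytes:,}x payload-level amplification")
```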

CharGen: the character generation service (port 19, RFC 864) replies to any received UDP datagram with a packet of arbitrary characters (the TCP variant streams continuously). Amplification factors exceeding 358x have been measured. CharGen has no legitimate modern use; its continued presence on the internet is purely legacy.


Bandwidth Arithmetic at Scale

The Memcached amplification factor makes the arithmetic worth examining. An attacker with a 10 Gbps connection sending requests to Memcached servers at 10 Gbps theoretically generates approximately 500 Tbps at the target. This is not achievable in practice: there are not enough exposed servers with sufficient bandwidth, network paths saturate before traffic reaches a single destination, and filtering typically intervenes.

More practically: the GitHub attack used approximately 100 Gbps of traffic from Memcached servers to generate the 1.35 Tbps that reached GitHub's network. The actual outbound bandwidth required from the attacker's infrastructure was a small fraction of that. The attack was sourced from over 1,000 autonomous systems, making source-based filtering intractable and requiring Akamai to absorb the traffic at their scrubbing infrastructure.

This changes the nature of the problem for defenders. It is no longer sufficient to have more bandwidth than the attacker. It is necessary to have more bandwidth than the attacker multiplied by the best available amplification factor. For an organization not using a purpose-built DDoS mitigation provider with massive bandwidth capacity, this is simply not achievable at current amplification factors.


The Reflector Population

Amplification attacks are bounded not by the attacker's bandwidth but by the number of available reflectors and their aggregate bandwidth.

Open DNS resolvers are the most numerous. Measurements from various periods have found millions of open resolvers on the public internet, distributed across autonomous systems and connected through diverse paths to most targets.

NTP servers running old software with monlist enabled have decreased substantially following coordinated disclosure and the widely reported 2013 attacks. However, "decreased substantially" in a population previously measured in millions still leaves large numbers.

Memcached servers on the public internet declined sharply following the February 2018 attacks, as cloud providers and hosting companies blocked UDP port 11211 at their perimeters. Periodic scanning still finds tens of thousands of exposed servers.

SSDP reflectors are primarily consumer devices (home routers, smart TVs, network printers) that are difficult to patch in bulk and whose operators have no meaningful way to detect that their device is being used as a reflector. These reflectors are replenished naturally as new devices are deployed without security configuration.

The reflector population is not static. CoAP (Constrained Application Protocol), designed for IoT devices, was documented as an amplification vector in 2018-2019 with amplification factors between 6x and 34x. The IoT device population continues to grow with minimal security oversight, and many devices run services with amplification properties their operators are unaware of.


Spoofing as the Enabling Condition

Amplification attacks require source address spoofing. Without the ability to send packets with a forged source address, the attacker's real address would receive the response instead.

BCP 38, published in 2000, specifies ingress filtering: the practice of network operators dropping packets at their boundaries whose source addresses are not within the address space routable through the arriving interface. If universally deployed, BCP 38 would make source address spoofing impractical.
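In effect, ingress filtering is a membership test at the customer-facing interface: is the packet's source address inside the address space delegated to that interface? A sketch with an assumed delegated prefix (the prefix and addresses are documentation ranges, purely illustrative):

```python
import ipaddress

# Prefixes delegated to a hypothetical customer-facing interface.
ALLOWED = [ipaddress.ip_network("203.0.113.0/24")]

def ingress_permit(src: str) -> bool:
    """BCP 38 ingress filter: permit only packets whose source address
    falls within the address space routable through this interface."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ALLOWED)

print(ingress_permit("203.0.113.7"))   # True: legitimate customer source
print(ingress_permit("198.51.100.9"))  # False: spoofed source, dropped at the edge
```

A packet spoofing some other network's address never leaves the originating AS, which is why universal deployment would make reflection attacks impractical.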

BCP 38 has not been universally deployed. More than twenty years after its publication, a substantial fraction of ASes still permit source address spoofing. CAIDA's Spoofer project maintains ongoing measurements and consistently finds that a significant minority of network endpoints can send spoofed packets that reach external destinations.

The reasons for incomplete deployment are economic and organizational. Implementing ingress filtering requires router CPU for packet inspection and accurate knowledge of address space allocation. It requires deliberate action with local cost and diffuse benefit: the network implementing filtering does not directly receive the amplified traffic. MANRS (Mutually Agreed Norms for Routing Security) and similar initiatives attempt to address this through peer pressure and operator commitment, but progress has not been sufficient.


Botnet-Sourced Amplification

Combining a botnet with amplification multiplies the leverage further. Each bot sends spoofed requests to reflectors, and the aggregate amplified traffic converges on the target from a large and diverse set of source addresses. The attacker's infrastructure may generate very little traffic directly.

This approach separates attribution from effect. The traffic the target receives comes from reflectors, not from the attack infrastructure. The attack infrastructure communicates with bots, not with reflectors. The bots are compromised third-party systems. Tracing the attack back to its origin requires correlating logs across reflectors, autonomous systems, and compromised hosts, each operated by a different entity with different logging practices and different levels of cooperation. The forensic chain required to establish attribution typically takes far longer than the attack itself.


Detection and Characterization

Amplification attacks have distinctive traffic patterns. Response packets from reflectors typically have a source port belonging to a known reflector service (UDP/53, UDP/123, UDP/11211), large or maximum-size payloads that are often fragmented, and no corresponding outbound request in the target's flow records.

The absence of corresponding outbound traffic is the clearest signature. Legitimate DNS responses arrive because the target sent queries. Amplification traffic arrives without corresponding outbound queries. Traffic analysis tracking flow symmetry can identify this pattern.
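Flow-symmetry tracking can be sketched with a counter keyed on (peer, port); the function names and the one-query-one-response matching rule are simplifying assumptions:

```python
from collections import defaultdict

# Count outbound requests per (peer address, peer port), and flag inbound
# UDP "responses" that have no corresponding outbound query on record.
outbound = defaultdict(int)

def saw_outbound(peer: str, sport: int) -> None:
    """Record an outbound request to peer:sport."""
    outbound[(peer, sport)] += 1

def inbound_is_suspicious(peer: str, sport: int) -> bool:
    """A response from peer:sport with no recorded query is asymmetric
    traffic: the signature of reflected amplification."""
    if outbound[(peer, sport)] > 0:
        outbound[(peer, sport)] -= 1
        return False
    return True

saw_outbound("192.0.2.53", 53)
print(inbound_is_suspicious("192.0.2.53", 53))      # False: matches our DNS query
print(inbound_is_suspicious("203.0.113.5", 11211))  # True: unsolicited Memcached
```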

NetFlow or sFlow data at the upstream provider or internet exchange allows characterization of source distribution and protocol mix. A single reflector protocol dominating the traffic, such as Memcached on UDP/11211, indicates which mitigation approach is appropriate.
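Characterizing the protocol mix reduces to grouping byte counts by source port; the record format here is an illustrative (port, bytes) tuple with made-up values, not a NetFlow schema:

```python
from collections import Counter

# Simplified flow records observed at the upstream: (source_port, bytes).
flows = [(11211, 140_000), (11211, 95_000), (53, 1_200), (123, 4_800), (11211, 130_000)]

by_port = Counter()
for sport, nbytes in flows:
    by_port[sport] += nbytes

total = sum(by_port.values())
for port, nbytes in by_port.most_common():
    print(f"UDP/{port}: {100 * nbytes / total:.0f}% of bytes")
# A single dominant source port (here 11211, Memcached) points directly
# at the protocol-specific blocking rule to deploy.
```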


Mitigation Architecture

The standard mitigation approach involves upstream traffic filtering at a provider with sufficient capacity to absorb the traffic volume, combined with protocol-specific blocking of the reflector traffic.

Blocking by source port is effective when a single reflector protocol is involved: drop all traffic from UDP/123 (NTP), or UDP/11211 (Memcached), or UDP/53 (DNS, though DNS is operationally necessary and source-port blocking requires finer-grained rules). This is a blunt instrument that blocks legitimate traffic on those ports, but under active attack, traffic from the attack protocol vastly exceeds legitimate traffic, making the trade-off acceptable.

Anycast-based scrubbing distributes traffic across a geographically distributed scrubbing network. The attack traffic is absorbed across many points of presence, each handling a fraction of the total. Filtered traffic is tunneled back to the target's origin. This architecture is the basis of commercial DDoS mitigation services and requires infrastructure investment that only large providers can sustain.

Remotely Triggered Black Hole (RTBH) routing is a more drastic option: the target's address is announced with a blackhole community, causing upstream providers to drop all traffic to that address. This stops the attack and stops all legitimate traffic simultaneously. It is operationally useful when the alternative is complete service unavailability due to attack traffic saturating the uplink, but it achieves availability by abandoning the target address entirely.


The Open Problem

The amplification problem does not have a complete technical solution at the protocol design level. As long as UDP services exist that send larger responses than they receive, and as long as source address spoofing is possible in some fraction of the network, amplification attacks are feasible.

Incremental approaches (closing specific reflectors, deploying rate limiting on individual services, improving BCP 38 compliance) reduce available amplification capacity without eliminating it. The discovery of new amplification vectors in new protocols (CoAP, WS-Discovery, QUIC under some conditions) demonstrates that the class of vulnerable protocols grows as new protocols are deployed, not only shrinks as old ones are secured.

What changes is the economics. As scrubbing infrastructure scales, as reflector populations shrink, and as amplification factors for common protocols decrease, the cost of executing an effective amplification attack increases. The goal is to make attacks expensive enough that the attacker's cost exceeds the damage they can cause.