Infrastructure Weaponization
The distinction between attack infrastructure and production infrastructure has eroded. Modern DDoS campaigns increasingly use legitimate, well-provisioned cloud providers, content delivery networks, and trusted internet services as attack sources, relays, or amplification vectors. The traffic arrives from IP addresses belonging to major cloud providers or well-known services. It conforms to expected protocols and passes syntactic validation. It cannot be blocked without also blocking legitimate traffic from the same sources.
The shift follows from two trends: defenders have gotten better at blocking traffic from attack-specific infrastructure (botnets, open reflectors, historically abusive hosting providers), and cloud infrastructure has become cheap and globally distributed.
Cloud Providers as Attack Sources
Major cloud providers (AWS, Google Cloud, Microsoft Azure) and their smaller competitors provide globally distributed computing capacity with outbound connectivity measured in gigabits per second per instance. Provisioning an instance takes minutes and requires only a valid payment method. The IP addresses belong to the cloud provider's netblocks, which appear in routing tables as legitimate, well-maintained infrastructure.
The attack workflow using cloud infrastructure:
- Provision instances in multiple regions and providers
- Deploy attack tooling (HTTP flood tools, UDP flood generators, connection exhaustion tools)
- Direct traffic at the target
- Tear down instances after the attack to avoid attribution and billing
The cost is low. A small fleet of cloud instances, each generating a few gigabits per second, can produce tens to hundreds of gigabits of traffic. The duration can be controlled precisely. The source addresses span multiple ASes and geographic regions, defeating geographic filtering. The attribution trail leads to a payment method that may be a prepaid card or cryptocurrency, behind the cloud provider's API, in a jurisdiction with varying levels of law enforcement cooperation.
Cloud-Specific Characteristics
Cloud attack traffic has distinctive characteristics that differ from botnet or amplification traffic, though not always in ways that are easy to filter:
Source ASes: traffic from major cloud providers originates in netblocks operated by those providers (AWS ranges are in WHOIS, Google Cloud's ranges are published, etc.). An organization that blocks all traffic from cloud provider ASes would eliminate some attack traffic but would also eliminate legitimate traffic from applications and services hosted in those clouds.
Egress patterns: cloud instances typically have high, constant egress capacity. A single instance generating 5 Gbps of HTTP requests produces a very high request rate from a single source IP. Per-source rate limiting can identify this pattern, but the cloud provider's network address translation may aggregate many instances behind fewer external IPs, obscuring the per-instance count.
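The per-source rate limiting mentioned above can be sketched as a token bucket keyed by source IP. This is a minimal illustration, not a production filter; the rate and burst values are placeholders, and a real deployment must also handle NAT aggregation and state eviction:

```python
import time

class SourceRateLimiter:
    """Token bucket per source IP: refills at `rate` tokens/sec up to `burst`.
    A cloud instance pushing thousands of requests per second exhausts its
    bucket immediately, while ordinary clients rarely notice the limit."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate
        self.burst = burst
        self.clock = clock          # injectable for testing
        self.state = {}             # ip -> (tokens, last_refill_timestamp)

    def allow(self, ip):
        now = self.clock()
        tokens, last = self.state.get(ip, (self.burst, now))
        # refill proportionally to elapsed time, capped at the burst size
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[ip] = (tokens - 1.0, now)
            return True
        self.state[ip] = (tokens, now)
        return False
```

Note that when the provider NATs many instances behind few external IPs, this limiter sees the aggregate, so thresholds must be set per expected aggregation level.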
No ingress matching: unlike legitimate web clients that both send requests and receive responses, HTTP flood instances send many requests and typically discard responses. Traffic analysis that tracks request/response ratio for connections can identify this pattern, but it requires stateful inspection at the application layer.
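The request/response asymmetry described above can be tracked with simple per-connection counters fed by the server's I/O layer. A sketch; the hook names and the 64 KB threshold are assumptions for illustration, not a standard API:

```python
class ResponseConsumptionTracker:
    """Flags connections whose clients request much more than they read back.
    The I/O-layer hooks (on_response_written / on_bytes_consumed) and the
    default threshold are illustrative assumptions."""

    def __init__(self, max_unread=64 * 1024):
        self.max_unread = max_unread
        self.unread = {}   # conn_id -> response bytes written but never read

    def on_response_written(self, conn_id, nbytes):
        self.unread[conn_id] = self.unread.get(conn_id, 0) + nbytes

    def on_bytes_consumed(self, conn_id, nbytes):
        self.unread[conn_id] = max(0, self.unread.get(conn_id, 0) - nbytes)

    def is_suspicious(self, conn_id):
        # a flood client that discards responses accumulates unread bytes
        return self.unread.get(conn_id, 0) > self.max_unread
```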
CDN Abuse for Amplification
Content delivery networks are explicitly designed to amplify traffic: they cache content at edge nodes close to users and serve it from those edges, reducing origin load while serving high volumes of traffic. This amplification function (a CDN can serve far more outbound traffic than its origin generates) becomes a liability when the CDN's capacity is exploited for attack purposes.
The basic mechanism requires the attacker to control content served through a CDN:
- The attacker creates accounts at CDN providers and configures CDN delivery for content they control.
- The CDN caches the content at edge nodes.
- The attacker issues requests (or causes others to request) the CDN-hosted content, generating edge-to-client traffic.
- If the attacker can configure request headers or URLs that cause the CDN to forward requests to the victim's infrastructure as the "origin," the CDN becomes a relay.
The subtler version of CDN abuse exploits cache purging and invalidation mechanisms. Some CDN APIs allow programmatic cache purging. An attacker who repeatedly purges cached content forces the CDN to refetch from origin. If the attacker can control when purges happen, they can drive sustained origin traffic at the CDN's full fetch capacity, which is substantial.
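One defensive control against purge-driven origin load is a budget on purge API calls. A sketch of a fixed-window budget per zone; the limit, window, and enforcement point (the CDN's API itself, or a gateway in front of it) are all illustrative:

```python
import time

class PurgeBudget:
    """Fixed-window budget for cache-purge calls: at most `limit` purges per
    `window` seconds per zone. A hypothetical control a CDN or API gateway
    might enforce to cap forced origin refetches."""

    def __init__(self, limit, window, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock          # injectable for testing
        self.counts = {}            # zone -> (window_start, purge_count)

    def allow_purge(self, zone):
        now = self.clock()
        start, count = self.counts.get(zone, (now, 0))
        if now - start >= self.window:
            start, count = now, 0   # new window
        if count >= self.limit:
            self.counts[zone] = (start, count)
            return False
        self.counts[zone] = (start, count + 1)
        return True
```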
Layer 7 Amplification Through CDN
Some CDN configurations involve request transformation: a simple incoming request is expanded into multiple backend requests (for different content fragments, for API calls, for authentication checks). An attacker who understands a target's CDN configuration might craft requests that trigger expensive multi-part origin fetches.
This is distinct from traditional amplification: the CDN is not amplifying a UDP packet into a large UDP response, but it is causing the origin to handle more requests than the attacker explicitly sent. The amplification is through application logic, not network protocol asymmetry.
DNS as a Weapon: Beyond Amplification
DNS has long been exploited as an amplification reflector. A more targeted use of DNS as an attack vector involves exploiting DNS infrastructure to redirect or misdirect traffic rather than flood the target.
NXDOMAIN Attacks on Authoritative DNS
An authoritative DNS server receives queries for domains it is authoritative for. An attacker who sends a high volume of queries for non-existent subdomains (NXDOMAIN queries) causes the authoritative server to generate negative responses. If the query volume is high enough, it saturates the DNS server's processing capacity, degrading resolution for the domain's legitimate records.
This attack is notable because it is difficult to block at the recursive resolver level: the resolver receives the same queries from legitimate clients and from attack sources, and must forward them to the authoritative server to determine whether they exist. The attack goes through the normal recursive resolution path and lands directly on the authoritative infrastructure.
Mitigations include rate limiting at the recursive resolver (per-domain query rate limiting to prevent a single resolver from generating disproportionate traffic to one domain), response rate limiting (RRL) at the authoritative server (limiting the rate of NXDOMAIN responses to any source), and scaling authoritative infrastructure.
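As an illustration, BIND-style response rate limiting is configured with a rate-limit clause. The values below are illustrative, not recommendations, and exact option names should be checked against the BIND 9 documentation:

```
options {
    rate-limit {
        responses-per-second 10;   // cap identical responses to one source
        nxdomains-per-second 5;    // tighter cap on negative responses
        window 5;                  // accounting window in seconds
        slip 2;                    // answer every 2nd dropped query truncated,
                                   // so legitimate clients can retry over TCP
    };
};
```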
DNS Water Torture
A specific variant of NXDOMAIN attacks generates queries with random labels prepended to the target domain: a1b2c3.example.com, x9y8z7.example.com, etc. The random labels defeat caching: each query is unique and must reach the authoritative server, so the traffic cannot be absorbed by recursive resolver caches.
This technique, sometimes called DNS water torture or random subdomain attack, was documented in operation during the 2014 attacks on the French internet infrastructure and in numerous subsequent campaigns. It achieves a significant amplification effect through the recursive resolution infrastructure: a botnet issuing water torture queries causes every recursive resolver the queries traverse to forward them to the authoritative DNS, multiplying the traffic reaching the authoritative infrastructure.
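A common detection heuristic for random-subdomain floods is a character-entropy check on the leftmost label of incoming queries. A sketch; the threshold and minimum-length values are illustrative, and real detectors combine this signal with NXDOMAIN rates and per-resolver query counts:

```python
import math
from collections import Counter

def label_entropy(label):
    """Shannon entropy (bits per character) of a DNS label; random
    machine-generated labels score higher than human-chosen names."""
    counts = Counter(label.lower())
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_random(qname, zone, min_len=6, threshold=3.0):
    """True if the leftmost label under `zone` looks machine-generated.
    min_len and threshold are illustrative tuning knobs."""
    if not qname.endswith("." + zone):
        return False
    leftmost = qname[: -len(zone) - 1].split(".")[0]
    return len(leftmost) >= min_len and label_entropy(leftmost) >= threshold
```

Short, dictionary-like subdomains (www, mail, api) fall below both the length and entropy cutoffs, while water-torture labels like a1b2c3x9 exceed them.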
NTP and Legitimate Time Infrastructure
NTP servers are widely deployed for time synchronization, including on critical infrastructure. The monlist command, a diagnostic feature of older ntpd versions, was the basis of high-amplification NTP reflection attacks. But the exploitation of NTP extends beyond simple reflection.
NTP amplification attacks using monlist generated amplification factors of approximately 206x. After the vulnerability was widely documented in 2013 and addressed by subsequent patching campaigns, monlist was disabled on most NTP servers. However, NTP servers remain reachable on UDP port 123, continue to respond to standard NTP queries with some amplification potential (though much lower than monlist), and represent trusted infrastructure that may be exempted from aggressive rate limiting.
The pattern of exploiting trusted infrastructure (systems whose traffic is expected and whose IPs are considered clean) recurs across multiple attack types. SMTP servers, IMAP servers, LDAP servers, and similar services that must be reachable for legitimate communication purposes can all be used as traffic sources when misconfigured or when their protocol design allows it.
Abusing Content and Communication Platforms
Web services with open APIs or open content retrieval endpoints can be used as relays or amplifiers. The pattern: the attacker configures a URL at a content platform that, when fetched, causes the platform to make a request to the victim. The attacker triggers many fetches of this URL, causing the platform to flood the victim with requests.
Webhook endpoints are a common surface. A webhook sends an HTTP POST to a configured URL when an event occurs. If an attacker can configure webhooks pointing to a victim's server and then repeatedly trigger the events that fire the webhook, the platform's servers send POST requests to the victim. The attacker's direct traffic to the victim is zero; all the attacking traffic originates from the platform's servers.
Service meshes, CI/CD callback URLs, payment gateway callbacks, and similar endpoint notification mechanisms all follow this pattern. The attacker's challenge is triggering the events at high rate, which may require either access to a high-volume account on the platform or a vulnerability that allows triggering events without valid authorization.
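A mitigation on the platform side is destination verification: before delivering events to a registered URL, require the destination to echo a random challenge, proving whoever registered the URL controls (or at least welcomes traffic at) the endpoint. A sketch of that handshake; the function name and transport details are illustrative, though the pattern resembles verification steps used by several real webhook providers:

```python
import secrets
import urllib.request

def verify_webhook_destination(url, post=None):
    """Challenge-echo verification: POST a random token to the candidate
    webhook URL and require the response body to echo it back. `post` is
    injectable for testing; the default performs a real HTTP POST."""
    token = secrets.token_hex(16)
    if post is None:
        def post(u, body):
            req = urllib.request.Request(u, data=body.encode(), method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
    return post(url, token) == token
```

A victim server that never registered the webhook will not echo the token, so the platform refuses to deliver events to it.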
The Search Engine and Crawler Ecosystem
Web crawlers operated by search engines and other indexing services represent a substantial legitimate source of HTTP traffic. These crawlers are well-behaved under normal circumstances: they respect robots.txt, observe crawl delay directives, and target specific pages rather than flooding.
However, crawler infrastructure can be abused or impersonated:
Impersonation: an attacker sends HTTP requests with user agent strings belonging to legitimate crawlers (Googlebot, Bingbot). Many applications whitelist known crawler user agents for rate limiting purposes, assuming they represent legitimate traffic from trusted sources. Traffic with a Googlebot user agent that is exempt from rate limiting can flood the application at higher rates than traffic from unknown user agents.
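The standard defense against crawler impersonation is forward-confirmed reverse DNS: resolve the claimed crawler's IP to a hostname, check the hostname against the operator's published crawl domains, then resolve the hostname forward and confirm it returns the original IP. A sketch using Python's socket module, with resolver functions injectable for testing; the accepted suffixes follow Google's published guidance for Googlebot:

```python
import socket

# Suffixes per Google's published crawler-verification guidance
GOOGLE_CRAWL_SUFFIXES = (".googlebot.com", ".google.com")

def is_verified_googlebot(ip, reverse=socket.gethostbyaddr,
                          forward=socket.gethostbyname):
    """Forward-confirmed reverse DNS: the PTR hostname must fall under a
    known crawler domain AND resolve back to the original IP."""
    try:
        hostname = reverse(ip)[0]
    except OSError:
        return False
    if not hostname.endswith(GOOGLE_CRAWL_SUFFIXES):
        return False
    try:
        return forward(hostname) == ip
    except OSError:
        return False
```

A user-agent string alone proves nothing; this check ties the claim to DNS records only the crawler operator controls.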
Triggering crawls: some applications have mechanisms that submit URLs for crawling (sitemap submission, ping endpoints). An attacker who can repeatedly submit URLs for a victim domain to multiple crawling services might generate elevated crawler traffic to the victim, though the crawler rate controls built into major search engines limit the effectiveness of this approach.
Legitimate Traffic as the Ultimate Evasion
The logical conclusion of infrastructure weaponization is the use of genuine legitimate traffic for DDoS. If an attacker can cause a large population of real users to make requests to a target, the resulting traffic is indistinguishable from organic load, because it is organic load.
This is achieved through:
Coordinated action (voluntary): organized communities directing members to visit or interact with a target. The DDoS resulting from coordinated click campaigns by activist groups or hostile communities falls into this category. The "Twitter effect" (a celebrity link causing a sudden traffic spike) is the benign version; the hostile version is organized link-dropping that points followers at a target, using technical tricks or social pressure to drive repeated reloads.
Invisible iframes and redirect chains: a high-traffic page embeds an invisible iframe pointing to the target. Visitors to the embedding page make an additional request to the target. A page with millions of daily visitors generates millions of requests to the target from real browsers at real residential IP addresses. This technique was observed in historical web attack campaigns.
JavaScript-based flooding: a high-traffic site with compromised or maliciously injected JavaScript executes fetch requests or XMLHttpRequest calls to the target from every visitor's browser. The requests originate from real browsers at real IP addresses. Challenge-response mechanisms that verify JavaScript execution cannot distinguish this from genuine browsing.
The defense against this category of attack is essentially impossible at the network or protocol level. The traffic is genuinely legitimate in every observable characteristic. The only mitigation is load-based capacity management (autoscaling, request queuing, degraded-mode responses) rather than attack-traffic filtering. The service degrades under load rather than becoming completely unavailable, but the attacker achieves some degree of denial without the need for dedicated attack infrastructure.
The Defense Problem in Converged Infrastructure
The convergence of attack and production infrastructure creates a fundamental problem for defenders: the IP reputation systems, network ACL policies, and block lists that work against dedicated attack infrastructure do not work against traffic originating from AWS, Google Cloud, or Fastly's CDN network.
The necessary response is to move from infrastructure-based filtering (block traffic from this IP range because this IP range is an attack source) to behavior-based filtering (block traffic exhibiting this behavioral pattern regardless of source). Behavioral filtering operates at the application layer, requires stateful analysis, and is computationally expensive. It is the only approach that works when the attack traffic comes from the same IP space as legitimate traffic.
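Behavior-based filtering ultimately reduces to scoring each source on signals like those discussed in this section. A toy illustration; the signals, thresholds, and weights are placeholders, not tuned values from any real product:

```python
def behavior_score(req_rate, read_ratio, verified_crawler):
    """Toy behavioral score in [0, 1]; higher means more attack-like.
    Signals: per-source request rate, fraction of response bytes the
    client actually reads, and whether a claimed crawler identity passed
    forward-confirmed reverse DNS. All thresholds are illustrative."""
    score = 0.0
    if req_rate > 100:        # requests/sec from one source
        score += 0.5
    if read_ratio < 0.2:      # flood clients discard most response bytes
        score += 0.4
    if not verified_crawler:  # unverified identity is a weak signal
        score += 0.1
    return min(score, 1.0)
```

A defense pipeline would compute such a score per source and drop or challenge traffic above a cutoff; the expensive part is that every signal requires application-layer state.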
This shift has operational implications. Behavioral analysis requires processing every request at the application layer to classify it, which means the defense infrastructure must be able to handle the full attack volume at application-layer processing speed. This is significantly more expensive than hardware-accelerated L3/L4 filtering. The economics of defense become more favorable to the attacker as the attack moves toward legitimate infrastructure.
Every improvement in network infrastructure, content distribution, and API integration that benefits legitimate applications also improves the available attack surface. The infrastructure that makes the modern internet fast and reliable is the same infrastructure that makes large-scale denial of service cheap and difficult to prevent.