PHP Server-Side Request Forgery (SSRF)

High Risk: Server-Side Request Forgery
Tags: php, ssrf, http-requests, url-validation, internal-services

What it is

The PHP application makes HTTP requests to URLs controlled by user input without proper validation, enabling Server-Side Request Forgery attacks. Attackers can exploit this to access internal services, cloud metadata endpoints, or perform port scanning on internal networks.

// Vulnerable: Direct user input in HTTP requests
function fetchUrl($url) {
    // Dangerous: User-controlled URL without validation
    $context = stream_context_create([
        'http' => [
            'timeout' => 10
        ]
    ]);
    $content = file_get_contents($url, false, $context);
    return $content;
}

// Alternative vulnerable pattern with cURL
function fetchUrlCurl($url) {
    $ch = curl_init();
    // Dangerous: Direct use of user input
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}

// Usage allowing SSRF
$userUrl = $_GET['url']; // Could be http://localhost/admin or http://169.254.169.254/
$content = fetchUrl($userUrl);
// Secure: URL validation and allowlisting
class SecureHttpClient {
    private $allowedDomains = [
        'api.example.com',
        'trusted-service.com',
        'safe-api.org'
    ];

    private $blockedIpRanges = [
        '127.0.0.0/8',    // Localhost
        '10.0.0.0/8',     // Private network
        '172.16.0.0/12',  // Private network
        '192.168.0.0/16', // Private network
        '169.254.0.0/16', // Link-local (AWS metadata)
        '::1/128',        // IPv6 localhost
        'fc00::/7'        // IPv6 private
    ];

    public function fetchUrl($url) {
        // Validate URL format
        if (!filter_var($url, FILTER_VALIDATE_URL)) {
            throw new InvalidArgumentException('Invalid URL format');
        }

        $parsedUrl = parse_url($url);

        // Validate scheme
        if (!in_array($parsedUrl['scheme'], ['http', 'https'], true)) {
            throw new InvalidArgumentException('Only HTTP/HTTPS schemes allowed');
        }

        // Validate domain against allowlist
        if (!in_array($parsedUrl['host'], $this->allowedDomains, true)) {
            throw new InvalidArgumentException('Domain not in allowlist');
        }

        // Resolve and validate IP address. gethostbyname() returns the
        // unmodified hostname on failure, so confirm we actually got an IP.
        $ip = gethostbyname($parsedUrl['host']);
        if (!filter_var($ip, FILTER_VALIDATE_IP)) {
            throw new InvalidArgumentException('Hostname could not be resolved');
        }
        if ($this->isBlockedIp($ip)) {
            throw new InvalidArgumentException('IP address is blocked');
        }

        // Make secure request. To fully rule out DNS rebinding between this
        // check and the request, pin the resolved IP (e.g. via CURLOPT_RESOLVE).
        return $this->makeRequest($url);
    }

    private function isBlockedIp($ip) {
        foreach ($this->blockedIpRanges as $range) {
            if ($this->ipInRange($ip, $range)) {
                return true;
            }
        }
        return false;
    }

    private function ipInRange($ip, $range) {
        list($subnet, $mask) = explode('/', $range);
        $mask = (int) $mask;

        // IPv6 ranges: compare the network prefix byte by byte,
        // since ip2long() only handles IPv4
        if (strpos($subnet, ':') !== false) {
            if (strpos($ip, ':') === false) {
                return false;
            }
            $ipBin = inet_pton($ip);
            $subnetBin = inet_pton($subnet);
            $bytes = intdiv($mask, 8);
            $bits = $mask % 8;
            if (substr($ipBin, 0, $bytes) !== substr($subnetBin, 0, $bytes)) {
                return false;
            }
            if ($bits === 0) {
                return true;
            }
            $byteMask = ~(0xFF >> $bits) & 0xFF;
            return (ord($ipBin[$bytes]) & $byteMask) === (ord($subnetBin[$bytes]) & $byteMask);
        }

        // IPv4 ranges
        if (strpos($ip, ':') !== false) {
            return false;
        }
        return (ip2long($ip) & ~((1 << (32 - $mask)) - 1)) === ip2long($subnet);
    }

    private function makeRequest($url) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
        // Redirects are not followed (CURLOPT_FOLLOWLOCATION defaults to
        // false); if you enable them, re-validate every redirect target.
        curl_setopt($ch, CURLOPT_MAXREDIRS, 3);
        curl_setopt($ch, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
        curl_setopt($ch, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
        curl_setopt($ch, CURLOPT_USERAGENT, 'SecureApp/1.0');

        $response = curl_exec($ch);
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);

        if (curl_error($ch)) {
            $error = curl_error($ch);
            curl_close($ch);
            throw new Exception('HTTP request failed: ' . $error);
        }
        curl_close($ch);

        if ($httpCode >= 400) {
            throw new Exception('HTTP request failed with status: ' . $httpCode);
        }

        return $response;
    }
}

// Secure usage
try {
    $client = new SecureHttpClient();
    $userUrl = filter_input(INPUT_GET, 'url', FILTER_SANITIZE_URL);
    if ($userUrl) {
        $content = $client->fetchUrl($userUrl);
    }
} catch (Exception $e) {
    error_log('HTTP request error: ' . $e->getMessage());
    // Handle error appropriately
}

💡 Why This Fix Works

The secure version validates format, scheme, and domain before any request is made, resolves the hostname and rejects private, loopback, and link-local ranges, and restricts cURL to HTTP/HTTPS with bounded redirects and timeouts. Together these close off the internal-service, metadata-endpoint, and file:// vectors that the vulnerable version leaves open.

Why it happens

The PHP application passes user-supplied URLs straight into HTTP client calls such as file_get_contents($_GET['url']) or curl_setopt($ch, CURLOPT_URL, $_POST['url']), or into client libraries like Guzzle, without validating the destination. Because the resulting request originates from the server, it can reach targets the user never could directly: cloud metadata endpoints that hand out credentials, admin interfaces bound to localhost, services on private IP ranges, and local files via file:// URLs. Even when the response is never shown to the attacker, timing and error differences enable blind SSRF. The root causes below break down where this goes wrong in practice.

Root causes

Using User Input Directly in file_get_contents() or cURL Requests

PHP applications accept user-supplied URLs and pass them directly to HTTP client functions like file_get_contents($_GET['url']) or curl_setopt($ch, CURLOPT_URL, $_POST['url']), or to HTTP client libraries (Guzzle, HTTPlug), without validation. Common vulnerable patterns include:

- Image proxy services: file_get_contents($_GET['image_url']) intended to fetch external images but exploitable to reach internal services.
- Webhook implementations accepting callback URLs: curl_exec() with CURLOPT_URL => $webhook_url, letting attackers specify internal endpoints.
- RSS feed readers fetching user-provided feed URLs: file_get_contents($_POST['feed_url']), enabling access to internal APIs.
- URL shortener expansion that follows user-submitted short URLs to their final destinations.
- PDF generators rendering user-supplied URLs into documents.

The vulnerability enables multiple attack vectors:

- Accessing cloud metadata endpoints (http://169.254.169.254/latest/meta-data/ on AWS, http://metadata.google.internal/computeMetadata/v1/ on GCP) to extract API credentials and configuration.
- Port scanning internal networks by observing response times or error messages that reveal open ports.
- Reaching internal admin interfaces (http://localhost/admin, http://127.0.0.1:8080/management) in spite of firewall restrictions.
- Reading local files via the file:// protocol (file:///etc/passwd).
- Blind SSRF, where the response body is never returned but timing and error differences still leak information.

Attackers use these vectors to pivot into internal networks, steal cloud credentials, enumerate internal services, or access sensitive data stored in internal systems.
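
As a concrete illustration of the image-proxy pattern above, here is a minimal hypothetical endpoint (the file name proxy.php and the image_url parameter are invented for this sketch), followed by request URLs an attacker could feed it:

// proxy.php: hypothetical vulnerable image proxy (illustrative only)
$imageUrl = $_GET['image_url'];    // attacker-controlled
header('Content-Type: image/jpeg');
echo file_get_contents($imageUrl); // no destination validation at all

// The same endpoint fetches far more than images:
//   proxy.php?image_url=http://169.254.169.254/latest/meta-data/  (cloud credentials)
//   proxy.php?image_url=http://localhost/admin                    (internal admin UI)
//   proxy.php?image_url=file:///etc/passwd                        (local file read)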

Insufficient URL Validation Before Making HTTP Requests

Developers implement URL validation that looks secure but contains logic flaws attackers can bypass. Common inadequate patterns include:

- Checking only the scheme, not the host: if (strpos($url, 'http://') === 0) { file_get_contents($url); } still allows http://localhost or http://169.254.169.254.
- Simple hostname blacklisting: if (strpos($url, 'localhost') === false) { /* fetch */ } is bypassable through alternative representations: 127.0.0.1, 0.0.0.0, [::1], decimal notation (2130706433), octal (0177.0.0.1), hexadecimal (0x7f.0x0.0x0.0x1).
- Regex-based validation with errors: preg_match('/^https?:\/\/(?!localhost)/', $url) blocks the literal string localhost but still accepts http://127.0.0.1/, http://[::1]/, or any attacker-controlled domain that resolves to an internal address.
- Validating the URL before redirects are followed, so an initially safe URL can redirect to an internal address.
- Blacklists of internal IP ranges that miss edge cases: IPv6 addresses, CIDR notation variations, and DNS rebinding, where the hostname resolves differently on subsequent lookups.

URL parsing problems compound the issue: parse_url() can be confused by malformed URLs; different parsers (browser vs. PHP vs. cURL) interpret the same URL inconsistently, enabling parser differentials where validation uses one parser but the request uses another; and encoding tricks (%00 null-byte injection, Unicode normalization, URL-encoding variants) slip past naive checks. Attackers study the validation logic for bypasses: the @ credentials syntax (http://expected.com@internal.server), parser bugs that accept unusual URL formats, and DNS rebinding where a domain resolves to an external IP during validation and an internal IP during the request. The sketch below shows how easily a substring blacklist falls.
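
A minimal sketch of why substring blacklists fail; isSafeUrl() is a hypothetical check, and every payload below passes it while still targeting loopback or metadata addresses:

// A blacklist check that looks safe but is trivially bypassed
function isSafeUrl($url) {
    return strpos($url, 'localhost') === false
        && strpos($url, '127.0.0.1') === false;
}

var_dump(isSafeUrl('http://127.1/'));           // true: short-form loopback
var_dump(isSafeUrl('http://2130706433/'));      // true: decimal 127.0.0.1
var_dump(isSafeUrl('http://0x7f000001/'));      // true: hex 127.0.0.1
var_dump(isSafeUrl('http://[::1]/'));           // true: IPv6 loopback
var_dump(isSafeUrl('http://169.254.169.254/')); // true: cloud metadata endpoint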

Missing Allowlist for Permitted Domains or IP Ranges

Applications fail to implement positive validation: instead of an allowlist restricting HTTP requests to explicitly permitted domains, they use blacklists of dangerous destinations, which are inherently incomplete and bypassable. Patterns lacking allowlists include:

- Accepting any external URL and validating only that it is not obviously internal.
- Generic HTTP proxy features with no destination restrictions.
- URL validation libraries that check format correctness, not whether the destination is permitted.
- Relying solely on network firewalls to restrict outbound traffic, with no application-layer validation.

The absence of an allowlist creates several problems:

- The application can reach any internet-accessible URL, including attacker-controlled servers (http://attacker.com/ssrf-logger receiving internal data in requests).
- No protection against future attack techniques as new internal-service discovery methods emerge.
- No way to block internal services listening on non-standard ports (http://localhost:9200 for Elasticsearch, :6379 for Redis, :27017 for MongoDB).
- Vulnerability to DNS rebinding, where an attacker-controlled domain resolves to an external IP during validation and an internal IP during the actual request.

Microservice architectures suffer particularly: services call other services based on discovery or configuration, inter-service communication runs over unrestricted HTTP, API gateways forward requests to backends, and service meshes often lack egress controls. The fundamental issue is treating SSRF protection as blocking bad destinations (impossible to enumerate comprehensively) rather than permitting good ones (feasible for a specific use case). Proper allowlisting requires identifying the external services the application legitimately needs, encoding them in explicit allowlists, validating every outbound request before execution, and reviewing the lists as requirements change; a minimal sketch follows.
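
A minimal sketch of positive validation, assuming the application only ever talks to two upstream services (the domains are placeholders):

// Exact-match allowlist of permitted upstream hosts
const ALLOWED_HOSTS = ['api.example.com', 'cdn.example.com'];

function assertAllowedDestination(string $url): void {
    $host = parse_url($url, PHP_URL_HOST);
    // Strict, exact comparison: no substring or suffix matching,
    // so api.example.com.attacker.net cannot slip through.
    if (!is_string($host) || !in_array(strtolower($host), ALLOWED_HOSTS, true)) {
        throw new RuntimeException('Outbound destination not allowlisted: ' . $url);
    }
}

assertAllowedDestination('https://api.example.com/v1/users');      // passes
assertAllowedDestination('https://api.example.com.attacker.net/'); // throws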

Inadequate Parsing and Validation of User-Provided URLs

URL parsing and validation code often contains subtle bugs that let attackers craft URLs which pass security checks while still reaching restricted targets. PHP's parse_url() has limitations and quirks:

- It returns false for malformed URLs rather than throwing an exception, and developers frequently skip the return-value check.
- It handles the userinfo section (http://user:pass@host) in ways naive string checks do not anticipate.
- It accepts unusual formats such as protocol-relative URLs (//example.com) that can confuse validation logic.
- It parses URLs differently than cURL or browsers, creating parser differentials.

Common parsing vulnerabilities include:

- Validating the host but ignoring other components after parsing, allowing constructions like http://safe.com:@evil.com.
- Failing to normalize before validation: http://LOCALHOST vs. http://localhost, trailing dots (http://localhost.), or percent-encoded hosts (http://%31%32%37%2E%30%2E%30%2E%31 for http://127.0.0.1).
- Accepting credential sections that shift the real host: in http://expected.com@internal.server/, browsers and most HTTP clients treat internal.server as the actual host.
- Overlooking fragment identifiers and query parameters that affect request routing.
- Mishandling internationalized domain names (IDN) and Punycode, enabling Unicode homograph attacks.

Validation logic errors compound the parsing issues: comparing $parsed['host'] against an allowlist but reconstructing the request from raw user input, validating the initial URL but not redirect targets, and using parse_url() output without verifying parsing succeeded. Real-world bypasses exploit exactly these gaps: exotic IPv4 representations (2130706433, 0x7f.0.0.1, 017700000001), IPv6 alternates ([0:0:0:0:0:ffff:7f00:1], [::ffff:127.0.0.1]), lookalike domains (1ocalhost.com, localhost.attacker.com), and open redirects on allowlisted domains. The userinfo differential is demonstrated below.
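
The userinfo differential is easy to demonstrate; a naive substring check and parse_url() disagree about which host this URL actually targets:

$url = 'http://expected.com@internal.server/admin';

var_dump(strpos($url, 'expected.com') !== false); // bool(true): the naive check passes
var_dump(parse_url($url, PHP_URL_HOST));          // "internal.server": the real target
var_dump(parse_url($url, PHP_URL_USER));          // "expected.com": just the userinfo part

// Always validate the parsed host component, never the raw URL string.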

Direct Use of User Input in HTTP Client Libraries

Applications integrate HTTP client libraries (Guzzle, HTTPlug, Symfony HttpClient) with user input without understanding each library's security configuration:

- Guzzle misconfiguration: (new Client())->get($_GET['url']) with no base_uri restriction and no validation middleware allows arbitrary destinations; older Guzzle versions even honored file:// URLs.
- cURL misconfiguration: missing CURLOPT_PROTOCOLS and CURLOPT_REDIR_PROTOCOLS leaves file://, dict://, gopher://, and ldap:// available for local file reads and internal service exploitation; CURLOPT_FOLLOWLOCATION without CURLOPT_MAXREDIRS permits unbounded redirects and redirect-based SSRF; CURLOPT_SSL_VERIFYPEER set to false invites man-in-the-middle attacks during exploitation.
- Symfony HttpClient misuse: HttpClient::create()->request('GET', $_POST['url']) without client configuration restricting destinations or response handlers validating redirect targets.
- file_get_contents() stream contexts: stream_context_create(['http' => ['follow_location' => 1]]) follows redirects without destination validation, and PHP stream wrappers (php://, data://, phar://) extend exploitation beyond HTTP.

Library conveniences can raise the stakes further: automatic response decompression (denial of service), automatic handling of authentication challenges (credential theft), persistent connection pooling, and unbounded streaming responses (memory exhaustion). Outdated client libraries and dependencies may carry known SSRF bypasses, and insecure defaults often require explicit hardening. Proper usage means understanding each library's security implications, configuring explicit protocol allowlists, adding custom middleware or handlers for URL validation, setting timeouts and size limits, and keeping libraries patched. A hardened cURL configuration is sketched below.
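
A sketch of the cURL hardening described above, assuming $url has already passed destination validation; the protocol restriction closes file://, gopher://, and dict:// even if a hostile URL reaches this layer:

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER  => true,
    // Only plain HTTP(S); blocks file://, gopher://, dict://, ldap://
    CURLOPT_PROTOCOLS       => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_REDIR_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    // Do not follow redirects blindly; re-validate targets if you enable this
    CURLOPT_FOLLOWLOCATION  => false,
    CURLOPT_TIMEOUT         => 10,
    CURLOPT_CONNECTTIMEOUT  => 5,
]);
$response = curl_exec($ch);
curl_close($ch);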

Fixes

1. Implement Strict URL Validation and Allowlisting

Create comprehensive URL validation that enforces a domain allowlist, blocks internal IP ranges, and rejects dangerous protocols:

- Domain allowlist with strict comparison, preventing subdomain and suffix-matching bypasses: $allowedDomains = ['api.example.com', 'cdn.example.com', 'trusted-partner.com']; $parsed = parse_url($url); if (!isset($parsed['host']) || !in_array($parsed['host'], $allowedDomains, true)) { throw new SecurityException('Domain not allowed'); }
- Wildcard subdomain support, if required, needs carefully anchored patterns: if (!preg_match('/^([a-z0-9-]+\.)?example\.com$/i', $parsed['host'])) { throw new SecurityException(); }
- Blocked internal ranges in CIDR notation: $blockedRanges = ['127.0.0.0/8', '10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16', '169.254.0.0/16', '::1/128', 'fc00::/7']; resolve the host with gethostbyname() and reject any IP falling inside a blocked range, with CIDR matching that correctly handles both IPv4 and IPv6.
- Strict scheme validation: if (!in_array($parsed['scheme'], ['http', 'https'], true)) { throw new SecurityException('Invalid scheme'); } excluding file://, gopher://, dict://, php://, data://, ftp://, and other protocols.
- Port validation: $allowedPorts = [80, 443]; if (isset($parsed['port']) && !in_array($parsed['port'], $allowedPorts, true)) { throw new SecurityException('Invalid port'); } preventing access to internal services on non-standard ports.
- DNS resolution validation: after the host checks, resolve the hostname, validate the resolved IP against the blocked ranges, and use that same IP for the connection so DNS rebinding cannot swap in a different address between check and request (see the sketch below).
- Centralized validation: encapsulate the logic in a single class, e.g. use App\Security\UrlValidator; $validator = new UrlValidator($allowedDomains, $blockedIpRanges); $validator->validate($url); so checks stay consistent across the codebase. Libraries such as symfony/validator can supply building blocks.
- Monitoring and logging: record every validation failure with the attempted URL, source IP, user ID, and timestamp to enable attack detection.
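
One way to close the DNS-rebinding window mentioned above is to pin the already-validated IP with cURL's CURLOPT_RESOLVE option. A sketch, assuming prior scheme, host, and IP checks have passed:

function fetchWithPinnedIp(string $url, string $validatedIp): string {
    $parts = parse_url($url);
    $port  = $parts['port'] ?? ($parts['scheme'] === 'https' ? 443 : 80);

    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        // Force cURL to connect to the IP we already validated,
        // regardless of what DNS says at request time.
        CURLOPT_RESOLVE => [sprintf('%s:%d:%s', $parts['host'], $port, $validatedIp)],
        CURLOPT_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
        CURLOPT_FOLLOWLOCATION => false,
        CURLOPT_TIMEOUT => 10,
    ]);
    $response = curl_exec($ch);
    if ($response === false) {
        $err = curl_error($ch);
        curl_close($ch);
        throw new RuntimeException('Request failed: ' . $err);
    }
    curl_close($ch);
    return $response;
}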

2. Use URL Parsing to Validate Scheme, Host, and Port

Parse URLs into components and validate each component individually before making any HTTP request, avoiding bypasses from malformed URLs and parser quirks:

- Check that parsing succeeded and required components exist: $parsed = parse_url($url); if ($parsed === false || !isset($parsed['scheme'], $parsed['host'])) { throw new InvalidArgumentException('Invalid URL format'); }
- Validate the scheme case-insensitively but compare strictly: $allowedSchemes = ['http', 'https']; if (!in_array(strtolower($parsed['scheme']), $allowedSchemes, true)) { throw new SecurityException('Scheme not allowed'); }
- Normalize and validate the hostname: $host = strtolower(trim($parsed['host'])); if (filter_var($host, FILTER_VALIDATE_DOMAIN, FILTER_FLAG_HOSTNAME) === false) { throw new InvalidArgumentException('Invalid hostname'); } keeping in mind that filter_var() has limitations with IDN domains and unusual formats.
- Validate the port explicitly, applying scheme defaults: $port = $parsed['port'] ?? ($parsed['scheme'] === 'https' ? 443 : 80); $allowedPorts = [80, 443, 8080, 8443]; if (!in_array($port, $allowedPorts, true)) { throw new SecurityException('Port not allowed'); }
- Reconstruct the URL from validated components only, so the request never contains raw user input (see the sketch below).
- Prefer robust parsers: league/uri and guzzlehttp/psr7 provide RFC-compliant parsing and normalization, e.g. use League\Uri\Http; $uri = Http::createFromString($url); $host = $uri->getHost(); $scheme = $uri->getScheme();
- Guard against homographs and IDN tricks: if the host contains non-ASCII characters (preg_match('/[^\x20-\x7E]/', $parsed['host'])), convert with idn_to_ascii($parsed['host'], IDNA_DEFAULT, INTL_IDNA_VARIANT_UTS46) and validate the Punycode form.
- Test parser behavior: write unit tests with malicious URLs (http://localhost@attacker.com, http://127.0.0.1:8080@evil.com, http://[::1]/, http://0x7f.0.0.1) to confirm the parser behaves as expected and validation catches the attacks.
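
A sketch of the parse-validate-reconstruct approach; the allowed ports here are illustrative, and the reconstructed URL deliberately drops userinfo and fragment components:

function buildValidatedUrl(string $url): string {
    $p = parse_url($url);
    if ($p === false || !isset($p['scheme'], $p['host'])) {
        throw new InvalidArgumentException('Invalid URL format');
    }
    $scheme = strtolower($p['scheme']);
    if (!in_array($scheme, ['http', 'https'], true)) {
        throw new InvalidArgumentException('Scheme not allowed');
    }
    $host = strtolower(trim($p['host']));
    if (filter_var($host, FILTER_VALIDATE_DOMAIN, FILTER_FLAG_HOSTNAME) === false) {
        throw new InvalidArgumentException('Invalid hostname');
    }
    $port = $p['port'] ?? ($scheme === 'https' ? 443 : 80);
    if (!in_array($port, [80, 443], true)) {
        throw new InvalidArgumentException('Port not allowed');
    }
    // Reconstruct from validated components; userinfo and fragment are dropped
    return $scheme . '://' . $host . ':' . $port
         . ($p['path'] ?? '/')
         . (isset($p['query']) ? '?' . $p['query'] : '');
}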

3. Restrict Outbound Requests to Known Safe Domains

Configure application and infrastructure to restrict all outbound HTTP requests to explicitly permitted domains, using both application-layer and network-layer controls:

- Application-layer allowlisting: keep permitted domains in configuration or a database table, $config['allowed_domains'] = ['api.partner.com', 'cdn.example.com', 'payment-gateway.com'];, and validate every outbound request: if (!$this->isDomainAllowed($url, $config['allowed_domains'])) { throw new SecurityException('Outbound request to ' . $url . ' blocked'); } Use environment-specific lists: development may need broader access for testing while production is restricted to production dependencies only.
- Wrapper clients: force every HTTP request through a class that validates the destination before delegating to the underlying client (see the sketch below).
- Network-layer egress filtering: security groups (AWS), network security groups (Azure), or firewall rules (GCP) restricting outbound traffic from application servers to the addresses of permitted services.
- Egress proxies: route all outbound HTTP traffic through a proxy that enforces the allowlist, logs every request, and blocks unauthorized destinations.
- Service mesh or API gateway controls: Istio or Linkerd policies permitting outbound traffic only to registered external services, mutual TLS for external calls, and monitoring with alerts on blocked egress attempts.
- Service-to-service authentication for microservices: require OAuth tokens or mutual TLS for inter-service communication, validate service identity before allowing requests, and define per-service egress policies stating which external services each microservice may reach.
- Allowlist governance: document the business justification for every entry (service description, alternative if it becomes unavailable), review the lists quarterly, remove unused entries, and require security review and approval for additions.
- Monitoring: log allowed and blocked requests with metrics, alert on unusual patterns (a sudden spike toward one domain, repeated blocked attempts indicating an attack), and rate-limit per destination to prevent abuse of allowed targets.
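
A sketch of such a wrapper built on Guzzle (the domain list is illustrative); every outbound GET must pass the allowlist check before the underlying client is invoked:

use GuzzleHttp\Client;
use Psr\Http\Message\ResponseInterface;

class RestrictedHttpClient {
    /** @var string[] lowercase hostnames this service may call */
    private $allowedDomains;
    private $httpClient;

    public function __construct(array $allowedDomains) {
        $this->allowedDomains = $allowedDomains;
        $this->httpClient = new Client([
            'timeout' => 10,
            'allow_redirects' => false, // re-validate before following redirects
        ]);
    }

    public function get(string $url): ResponseInterface {
        $host = parse_url($url, PHP_URL_HOST);
        if (!is_string($host) || !in_array(strtolower($host), $this->allowedDomains, true)) {
            error_log('Blocked outbound request to: ' . $url); // audit trail
            throw new RuntimeException('Outbound request blocked');
        }
        return $this->httpClient->get($url);
    }
}

// Usage: every outbound call goes through the restricted client
$client = new RestrictedHttpClient(['api.partner.com', 'cdn.example.com']);
$response = $client->get('https://api.partner.com/v1/orders'); // allowed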

4. Implement Network-Level Controls to Block Internal IP Ranges

Deploy network infrastructure controls that prevent application servers from reaching internal IP ranges, cloud metadata endpoints, and localhost, providing defense in depth beyond application-layer validation:

- Security groups blocking outbound traffic from application servers to the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), so internal networks stay unreachable even if application validation is bypassed.
- Metadata endpoint protection: block 169.254.169.254, used by the AWS, GCP, and Azure metadata services (GCP also exposes metadata.google.internal), and enforce AWS IMDSv2, which requires a PUT request to obtain a session token before any metadata GET, substantially reducing SSRF impact.
- DNS filtering: configure recursive resolvers to block queries for internal TLDs (.internal, .local), return NXDOMAIN for RFC 1918 reverse lookups to hinder internal IP discovery, and use DNS security services (Cisco Umbrella, AWS Route 53 Resolver DNS Firewall) to block malicious domains.
- WAF egress filtering: rules inspecting outbound requests from the application, blocking destinations in internal IP ranges, allowlist-based egress for approved external services, and managed rulesets detecting common SSRF patterns.
- Network segmentation: run application servers on subnets with no routes to internal networks (DMZ architecture), keep application and internal tiers in separate VPCs/VNets joined only through explicit peering or PrivateLink, and grant applications only the network routes they need.
- Host-based firewalls (iptables, Windows Firewall): outbound rules blocking connections to internal ranges as a final layer, plus application-aware controls (AppArmor, SELinux) restricting which processes may open network connections.
- Monitoring and alerting: log all blocked outbound attempts at the firewall or security-group level, alert the security team on repeated blocks indicating attacks or misconfiguration, and analyze patterns to spot compromised servers scanning internally or contacting command-and-control infrastructure.

5. Validate and Sanitize All User Input Before HTTP Requests

Apply comprehensive, layered validation and sanitization to any user data used in URL construction or HTTP requests:

- Format validation with filter_var(), combined in the sketch after this list: $url = filter_input(INPUT_GET, 'url', FILTER_VALIDATE_URL); if ($url === false || $url === null) { throw new InvalidArgumentException('Invalid URL format'); } Flags such as FILTER_FLAG_PATH_REQUIRED and FILTER_FLAG_QUERY_REQUIRED can enforce additional structure.
- Length limits against resource exhaustion: if (strlen($url) > 2048) { throw new InvalidArgumentException('URL too long'); } using a maximum that fits the application.
- Character allowlisting per RFC 3986 (note the escaped quote inside the single-quoted pattern): if (!preg_match('/^[a-zA-Z0-9\-._~:\/?#\[\]@!$&\'()*+,;=%]+$/', $url)) { throw new InvalidArgumentException('Invalid characters'); } though fully correct URL character validation is complex.
- Prefer rejection over sanitization: filter_var($url, FILTER_SANITIZE_URL) silently removes characters, which can itself create bypasses.
- Component validation after parsing: scheme must be http or https, host a valid domain or permitted IP, port within the allowed range, path free of ../ traversal sequences, and query parameters free of suspicious values.
- Context-specific validation: for image fetching, confirm the target is an image (Content-Type header, file extension); for webhook URLs, require domains the user has verified; for feeds, validate the feed format after fetching.
- Rate limiting per user or IP: track request counts in Redis or Memcached, enforce limits such as 10 requests per minute per user, return 429 Too Many Requests when exceeded, and require CAPTCHA for suspicious patterns.
- Indirect object references where possible: accept resource IDs that map to pre-configured URLs instead of raw URLs, store validated webhook configurations and reference them by ID, and offer catalogs of permitted options (expanded in fix 6).
- Logging: record every rejected URL with source IP, user identifier, and timestamp; detect probing (repeated failures from one source) and alert on coordinated scanning.
- Careful exceptions: if certain administrators need SSRF-like functionality for legitimate purposes (system monitoring, debugging), isolate it behind separate privileged endpoints with additional authentication, strong authorization checks, audit logging, and explicit acknowledgment of the risk.
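
A sketch of the first three layers combined into one entry-point check; the length limit and character policy are illustrative:

function validateFetchInput(string $url): string {
    // 1. Length limit against resource exhaustion
    if (strlen($url) > 2048) {
        throw new InvalidArgumentException('URL too long');
    }
    // 2. Structural validation
    if (filter_var($url, FILTER_VALIDATE_URL) === false) {
        throw new InvalidArgumentException('Invalid URL format');
    }
    // 3. Reject control characters and whitespace outright
    if (preg_match('/[\x00-\x20\x7F]/', $url)) {
        throw new InvalidArgumentException('Invalid characters in URL');
    }
    return $url;
}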

6. Use Indirect Object References Instead of Direct URLs

Refactor the architecture so the application does not accept user-provided URLs at request time at all, using indirect object references that map to pre-validated destinations stored server-side:

- Resource identifiers: when users need to register URLs (webhooks, callbacks, RSS feeds), accept the URL once during setup, validate it, store it with a unique ID, and reference it by ID afterwards: $webhook = $db->query('SELECT url FROM webhooks WHERE id = ? AND user_id = ?', [$webhookId, $userId])->fetch(); $client->post($webhook['url'], $data); Validation happens once at configuration time, not on every use (see the sketch after this list).
- Image catalogs: maintain a table of approved image URLs or domains; users select images by ID, and the application fetches only from pre-validated sources.
- Configuration-based URL templates: fix the base URL structure in configuration and substitute only safe values: $urlTemplate = $config['api_urls']['user_data']; $url = sprintf($urlTemplate, $userId);
- Service registries for microservices: register endpoints in service discovery (Consul, Eureka, Kubernetes DNS) so services reference each other by name and the mesh resolves names to validated endpoints.
- API specifications: generate HTTP clients from OpenAPI/Swagger definitions, enforce that operations match the specification, and validate requests against the schema to prevent arbitrary URL construction.
- Webhook verification: require ownership proof (responding to a challenge request, clicking a confirmation email), restrict webhook URLs to verified domains, and authenticate deliveries with signatures or mutual TLS.
- Abstraction layers: route all external resource access through an interface such as interface ResourceFetcher { public function fetch(ResourceIdentifier $id): string; } whose implementations look up validated URLs by identifier.
- Permission models and governance: allow only certain roles to configure external URLs, require admin approval for webhook configurations, audit every configuration change, keep production allowlists stricter than test environments, and record the reasoning in architecture decision records.
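
A sketch of webhook dispatch by indirect reference, assuming a webhooks table whose url column was validated when the row was created:

function dispatchWebhook(PDO $db, int $webhookId, int $userId, array $payload): void {
    // The client supplies only an ID; the URL never travels with the request
    $stmt = $db->prepare('SELECT url FROM webhooks WHERE id = ? AND user_id = ?');
    $stmt->execute([$webhookId, $userId]);
    $webhook = $stmt->fetch(PDO::FETCH_ASSOC);
    if ($webhook === false) {
        throw new RuntimeException('Unknown webhook');
    }
    // $webhook['url'] came from our own database, stored only after
    // passing allowlist validation when the user configured it.
    $ch = curl_init($webhook['url']);
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_POSTFIELDS => json_encode($payload),
        CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT => 10,
        CURLOPT_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    ]);
    curl_exec($ch);
    curl_close($ch);
}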

Detect This Vulnerability in Your Code

Sourcery automatically identifies PHP Server-Side Request Forgery (SSRF) and many other security issues in your codebase.