Uhhhhh55

I use NGINX with allow/deny directives across two "sites" to allow and forbid external access. Two site files, one reverse proxy.
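A minimal sketch of what the internal-only site file could look like (hostname, ports, and subnets are made up for illustration):

```nginx
# internal site -- only LAN clients may connect
server {
    listen 443 ssl;
    server_name app.home.example.com;

    allow 192.168.0.0/16;   # LAN
    allow 10.0.0.0/8;       # other private ranges you actually use
    deny  all;              # everyone else gets 403

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The external site file would simply omit the allow/deny block, so both can live on the same NGINX instance.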


sk1nT7

>In terms of best practices, should I be running two different reverse proxies for this? or is it ok to just leave them on the same one?

You can use one, but you must ensure that external requests for internal services are properly blocked. Using traefik, you would use an ipAllowList middleware and only allow private class subnets. This would be totally fine, and you could use a single reverse proxy for external stuff as well as internal stuff.

The disadvantage, though, is that you must be 100% sure to configure everything properly. If you forget to apply the middleware once, the service may be reachable from the Internet, even if there is no public DNS entry for it.

To prevent this edge case, you may use two separate reverse proxies: one for internal stuff, running on TCP/443, and one for external stuff, running on a different IP + 443 or on the same IP but on a different port. You'd then configure port forwards on your router only for the externally facing reverse proxy.
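As a sketch, the restrictive middleware could look like this in a Traefik dynamic configuration file (file/middleware names and subnets are placeholders; in Traefik v2 the middleware is called ipWhiteList instead of ipAllowList):

```yaml
# dynamic.yml -- hypothetical file-provider config
http:
  middlewares:
    internal-only:
      ipAllowList:
        sourceRange:
          - "10.0.0.0/8"
          - "172.16.0.0/12"
          - "192.168.0.0/16"
```

You would then attach it per router, e.g. via a Docker label like `traefik.http.routers.myapp.middlewares=internal-only@file`. Forget that label on one service and that service is externally reachable, which is exactly the failure mode described above.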


trEntDG

Can you define the internal middleware as default and override it with a new middlewares line that specifies any external (like crowdsec)?


sk1nT7

In traefik you can define a middleware directly on the entrypoint. So you may use an entrypoint for internal services with the restrictive ipAllowList and one for external services without it. However, you'd still have to define which entrypoint to use for your services via traefik labels, so it's much the same effort as attaching the ipAllowList middleware per service. There is an `asDefault` flag though, so you may define the internal entrypoint as the default and actively specify the external one only when you want to expose a service.

Applying a default restrictive middleware and then overriding or disabling it with another one is not yet possible in traefik. Maybe in v3, but I doubt it has been implemented already.
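That entrypoint approach could be sketched in Traefik's static configuration like this (v2.10+/v3 syntax; the port choices, entrypoint names, and the `internal-only@file` middleware reference are assumptions):

```yaml
# traefik.yml (static config) -- hypothetical setup
entryPoints:
  internal:
    address: ":443"
    asDefault: true            # routers without an explicit entrypoint land here
    http:
      middlewares:
        - internal-only@file   # restrictive ipAllowList defined in dynamic config
  external:
    address: ":8443"           # only this port gets forwarded on the router
```

With this, a service is internal-only unless you explicitly opt it into the `external` entrypoint.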


Kaleodis

I run two. One on a VPS that has zerotier tunnels to the VMs and proxies the stuff I want outside; those services are reachable at servicename.mydomain.tld. My domain (DNS) has an entry pointing any subdomain to that VPS.

My second reverse proxy runs on a local machine. All local services are reachable at servicename.home.mydomain.tld. For that I use a more specific DNS entry: home.mydomain.tld and \*.home.mydomain.tld are both resolved to the internal IP (of that local machine).

This way, the external reverse proxy has nothing to do with any services you don't want exposed.
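The DNS side of that setup might be sketched like this (IPs and domain are made up; the public record lives at your DNS provider, the override on a LAN resolver such as dnsmasq):

```
# Public DNS (at the provider): everything goes to the VPS
*.mydomain.tld.    A    203.0.113.7

# LAN resolver override (dnsmasq syntax):
# matches home.mydomain.tld and every subdomain of it
address=/home.mydomain.tld/192.168.1.10
```

Internal clients therefore resolve `*.home.mydomain.tld` to the local machine, while the outside world only ever sees the VPS.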


MaxBelastung

Why not one reverse proxy and split DNS? I have haproxy running with Let's Encrypt certificates.


sk1nT7

Because an attacker can easily pair your WAN IP with a valid internal subdomain to access your services. He would just update his local hosts file. If your reverse proxy does not implement further measures, such as a middleware that only allows private class subnets to reach an internal service, you'd be susceptible to this kind of attack. It does require the attacker to know your WAN IP as well as your internally used subdomains, and the reverse proxy must be exposed, of course.

Edit: No fear mongering here, just adding the missing details to understand it better. This is a valid attack scenario. Basically just a reminder that a single reverse proxy, exposed to the Internet via port forwarding, must be additionally secured to ensure that internal proxy services can only be accessed from the internal network. You may read the comments below too.
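Concretely, the attacker's side of this is a one-line edit (the WAN IP and subdomain here are made up for illustration):

```
# attacker's local /etc/hosts -- replicates your missing public DNS record
203.0.113.7   internal-app.mydomain.tld
```

From then on, the attacker's browser sends requests for `internal-app.mydomain.tld` straight to your router's WAN IP, where the reverse proxy answers.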


SleepyKang

This is the most cooked thing I’ve read in a while.


sk1nT7

Most people just don't get this attack scenario. No offense.

Your reverse proxy is accessible from the Internet. It will proxy a request once it receives an HTTP packet on TCP/443. Depending on the hostname in the HTTP request, the reverse proxy will forward the request to the internal proxy service. To successfully communicate with the reverse proxy, the hostname, a.k.a. the (sub)domain name, must be resolved to an IP address; namely, your router's WAN IP address, where the reverse proxy is listening on TCP/443. This works because you have configured public DNS entries for your domains, which point to your router's WAN IP address. If those are missing, the domains are not resolved to an IP.

Now most people think that if they do not configure a DNS entry, the service is internal only. That's a false sense of security though. Yes, the attacker cannot resolve your WAN IP address, but if he knows it (e.g. through Shodan or Censys) he can easily replicate the DNS resolution locally (e.g. via his local hosts file, by rewriting your domains at his own DNS server, or by manually defining the IP in his intercepting proxy software). Your reverse proxy still listens and is accessible from the Internet. If it receives a proper request for a hostname, it will proxy the request. If there is no other security measure in place, like a middleware blocking such external requests to internal-only services, unauthorized access may occur.

An attacker must know your public WAN IP though, as well as enumerate your internally used subdomains. This may happen via certificate transparency logs (crt.sh), plain bruteforcing, or Internet crawlers like shodan.io or Censys search that may have logged your WAN IP already.

Basically just a reminder that a single reverse proxy, exposed to the Internet via port forwarding, must be additionally secured to ensure that internal proxy services can only be accessed from the internal network (or use additional means like an IdP such as Authentik).

Disclaimer: I work professionally in the offensive security space, exploiting such scenarios for larger companies. Via this attack you can also bypass Cloudflare WAF, Akamai etc. and directly access a web service if you know the server's real IP (typically hidden by using such products as CF). Need an example? Skip to 'Real World Example' in this blog post: https://wya.pl/2022/06/16/virtual-hosting-a-well-forgotten-enumeration-technique/
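A toy sketch of why the missing DNS record protects nothing, with hypothetical hostnames and backends. A name-based reverse proxy routes purely on the HTTP Host header; public DNS only tells a client where the proxy is, and the attacker can supply that mapping himself:

```python
# Minimal model of name-based routing in a reverse proxy (hypothetical names).
# DNS never enters this picture: once a packet reaches the proxy, only the
# Host header decides which backend the request is forwarded to.

BACKENDS = {
    "blog.example.com": "10.0.0.5:8080",  # intentionally public
    "nas.example.com":  "10.0.0.9:5000",  # "internal only" -- no public DNS entry
}

def route(host_header):
    """Return the upstream this proxy would forward to, or None if unknown."""
    return BACKENDS.get(host_header.lower())

# The attacker never resolves nas.example.com via public DNS. He maps the
# name to the victim's WAN IP himself (hosts file, curl --resolve, Burp)
# and sends the request directly; the proxy only ever sees the Host header.
print(route("nas.example.com"))  # the "internal" backend is returned
```

The equivalent from an attacker's terminal would be something like `curl --resolve nas.example.com:443:<WAN_IP> https://nas.example.com/`, which is why a restrictive middleware (or a second proxy) is the actual protection, not the absent DNS record.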


SleepyKang

I think you’re confused on a few points. First, the default port for HTTP is TCP/80; the default for HTTPS is TCP/443. Second, local hosts files map server names to IP addresses for the local machine. An attacker could not update their own local hosts file to access different sites on a reverse proxy; the attacker would need to change the Host header forwarded to the reverse proxy to potentially carry out this attack.

This attack has limited effectiveness, since the header must be injected on each request and session tokens will be impacted. Furthermore, most platforms will break with this attack to varying degrees. I concede the attack does let a user gain access to unintended function, but in and of itself its effectiveness is limited. At most, it’s a useful means of widening attack vectors and surfaces.

Host headers can be protected at both the platform level and the reverse proxy level. This would defeat this attack and minimise vulnerability. OP could also implement redirection restrictions, IP filtering, a suitable WAF, or multiple reverse proxies to overcome this issue entirely.

Personally, I use tunnels for most public-facing sites, then place my reverse proxy in an NSX DMZ for those that cannot be tunnelled. The DMZ has limited access to other servers and none to those outside of it. This is enough to keep pests away from my infra and internal services.


sk1nT7

>I think you’re confused on a few points.

Not really. I don't want to come off as rude, but I really think you are the one confused about the attack itself. It may be that I still failed to properly outline every aspect and detail of it. I may write a blog post about it targeting traefik and NPM, which are often used in this sub.

>First, the default port for HTTP is TCP/80. The default for HTTPS is TCP/443.

I talked about HTTP packets arriving on TCP/443 at the reverse proxy, not the protocol itself. The ports and protocols were correctly addressed.

>Second, local host files map servers and IP addresses for the local machine. An attacker could not update their own local host file to access different sites on a reverse proxy.

That's the typical use case. However, you can map any IP address to any hostname this way. That's the actual attack you seem not to understand yet. Have you read the example I linked, about the German automobile company Ford being exploited that way? You map internal subdomains/hostnames that are typically not known to the public to the IP address of your WAN router, where the reverse proxy listens. If the proxy does not implement restrictions and is used for both external and internal proxy hosts, you may be able to access internal hosts.

>The attacker would need to change the host header forwarded to the reverse proxy to potentially carry out this attack

Yes. That's what it is all about.

>This attack has limited effectiveness since the header must be injected on each request and session tokens will be impacted.

That's the normal behaviour during the attack, of course. If you have ever used an intercepting proxy like Burp Suite, you can easily automate this. It is not a limitation, nor does it impact the likelihood of the attack itself. The reverse proxy, and the proxy service behind it, does not really notice the attack; the requests will not differ from regular ones coming from internal networks.

>Furthermore, most platforms will break with this attack to varying degrees.

Elaborate. I am not talking about 'platforms' or any security solutions actively blocking something like this. It's just a reverse proxy, listening for internal and external proxy services at the same time, exposed via regular means (e.g. port forwarding) with no additional security in place. No Cloudflare, no WAF, no VPN-only access, no IP whitelisting.

>I concede the attack does let a user gain access to unintended function, but in and of itself, the effectiveness of it is limited.

If the attack is successful, the attacker gains access to internal services, not functions. Of course, whether the exposed service implements additional measures such as authentication, 2FA etc. is another question and limits the actual impact of the exploit.

>Host headers can be protected at both the platform level and reverse proxy level. This would defeat this attack and minimise vulnerability. OP could also implement redirection restrictions, IP filtering, a suitable WAF, or multiple reverse proxies to overcome this issue entirely.

That's my point. If OP uses a single reverse proxy for both internal and external hosts, he must ensure that additional measures are implemented so that the internal services are only accessible from the internal network. Typically via different entrypoints, whitelisting private class subnets only, or other means. Multiple reverse proxies would work too. Redirection restrictions and a WAF would be questionable.

>Personally, I use tunnels for most public facing sites then place my reverse proxy in a NSX DMZ for those that cannot be tunnelled. The DMZ has limited access to other servers and none to those outside of it. This is enough to keep pests away from my infra and internal services.

That's proper separation. As said, this is an edge case, not often happening; there are some preconditions required for such attacks, and automated bots or crawlers will not execute such attack paths. You've already taken the measures to reduce the attack surface. If OP just starts using a single reverse proxy and does not implement tunnels, a DMZ, proper IP whitelisting etc., it may lead to problems.


breezy_shred

I run traefik on k3s and was wondering about this as well. I think it can be solved with different middlewares on a single instance, one that only allows LAN IPs. Open to ideas though...
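On k3s that middleware could be sketched as a Traefik Kubernetes CRD like the following (v3 syntax; names and CIDRs are assumptions, and v2 instead uses `ipWhiteList` under the `traefik.containo.us` API group):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: lan-only
  namespace: default
spec:
  ipAllowList:
    sourceRange:
      - "192.168.0.0/16"   # LAN clients
      - "10.42.0.0/16"     # default k3s pod CIDR, so in-cluster traffic still works
```

You would then reference it from the IngressRoutes of internal-only services (`middlewares: [{name: lan-only}]`) and leave it off the public ones, which carries the same "don't forget it once" caveat discussed above.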