VPN Blocking Traffic? Let’s Break Down Routing, Metrics, and Conflicts Step by Step
Contents
- How VPN Routing Works and Where Logic Usually Breaks Down
- Basic Diagnostics: Where to Start
- Metrics and Priorities: How They Work on Windows, Linux, and macOS
- Gateways, NAT, and Asymmetric Routing
- Split Tunneling vs Full Tunnel: How to Choose and Configure Without Headaches
- Practical Cases: Home Routers, Clouds, and Remote Work
- 2026 Tools: Observability, Telemetry, and New Tricks
- Security, Performance, and Fine Tuning
- Checklists, Playbooks, and Automation
- FAQ: Quick Answers to Common Questions
Ever connected to a VPN only to feel like your internet just went on vacation without telling you? Or maybe some internal resources work fine, while others act like you’re a ghost? We’ve been there. We know the pain. VPN routing is a delicate art. Set a route metric wrong or misconfigure the default gateway, and your traffic goes astray. Sometimes a return route disappears somewhere. Other times, DNS points in all the wrong directions. And occasionally, MTU plays a cruel joke, chopping packets like a chef at a cooking class. But no need to panic. Calm, step-by-step, checklist in hand — we’ll unravel it all.
By 2026, VPNs aren’t just "set up a tunnel and forget it" anymore. We’re living in a world of Zero Trust, SASE, ZTNA, and a ton of hybrid scenarios: cloud, branch offices, remote work, mobile networks, all intertwined. On the table at once: WireGuard, IKEv2/IPsec, SSL-VPN over QUIC, even tunnels layered over proxies and HTTP/3. And yes, you often have DoH, corporate DNS via split-horizon, IPv6-only segments with NAT64/DNS64, and a bunch of policies fighting for priority over your traffic. Frustrating? Definitely. Fascinating? Absolutely.
This article is your practical guide. No dry theory for theory’s sake. We’ll break down specific route conflicts, understand how metrics and priorities work on Windows, Linux, and macOS, examine where gateways fail and why traffic ends up "one-way," configure split tunneling without headaches, and master solid diagnostics. Expect real cases, handy commands, checklists, and automation tips. Plus a must-keep FAQ at the end. Ready? Let’s go!
How VPN Routing Works and Where Logic Usually Breaks Down
Routing Table: Your Traffic’s Chief Storyteller
When you connect to a VPN, new routes get added to the system. Each route includes a destination network, mask (or prefix), next hop (gateway), interface, and metric. The priority rule is simple: the longest prefix match wins first, and the metric only breaks ties between equally specific routes. The longer the prefix and the lower the metric, the more eager the system is to pick that route. And yes, VPN clients sometimes add "broad" routes (like 0.0.0.0/0), pulling all traffic their way. Without exclusions set, your internet disappears like a magic trick. It sounds obvious, but always start your troubleshooting here.
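As a mental model (not a reimplementation of any OS), the selection rule can be sketched with Python's standard `ipaddress` module; the route list and addresses below are made up for illustration:

```python
from ipaddress import ip_address, ip_network

def pick_route(dst, routes):
    # Longest prefix wins first; the metric only breaks ties (lower is better).
    addr = ip_address(dst)
    candidates = [r for r in routes if addr in ip_network(r["prefix"])]
    if not candidates:
        return None
    return max(candidates,
               key=lambda r: (ip_network(r["prefix"]).prefixlen, -r["metric"]))

routes = [
    {"prefix": "0.0.0.0/0",    "via": "192.168.1.1", "metric": 25},   # local default
    {"prefix": "0.0.0.0/0",    "via": "10.8.0.1",    "metric": 50},   # VPN default
    {"prefix": "10.20.5.0/24", "via": "10.8.0.1",    "metric": 100},  # pushed VPN route
]

print(pick_route("10.20.5.7", routes)["via"])  # 10.8.0.1 -- /24 beats both defaults
print(pick_route("8.8.8.8", routes)["via"])    # 192.168.1.1 -- lower metric wins
```

Note how the /24 wins for 10.20.5.7 despite its high metric: specificity beats metric every time, which is exactly why a missing specific route hurts so much.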
Another nuance is interface order and automatic metrics. On Windows and macOS, the system sometimes "thinks for you," assigning auto-metrics based on interface speed. Connected VPN over Wi-Fi but have Ethernet nearby? Surprises may pop up. Linux has its own story: with policy-based routing (PBR) and multiple routing tables, you might see one route in the main table and a totally different one in a policy capturing specific traffic. Result: packets take different paths even though everything looks right at first glance.
Common Conflicts: Overlapping Subnets, Duplicates, and Black Holes
A classic issue: overlapping RFC1918 networks. Your company uses 10.0.0.0/8 internally, while an employee’s home router assigns 10.0.0.0/24. Or more fun — the VPN covers several overlapping subnets, some of them absorbed by BGP aggregation. The specific route you need can vanish into a broader announcement, leaving only the aggregate behind, and you end up on the less desirable path. Expected a 10.20.5.0/24 through the tunnel but only see a 10.20.0.0/16 via the local gateway? Traffic will go the wrong way, and the hunt for "black holes" begins.
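A quick way to spot prefixes that shadow each other before they bite is a containment check; a minimal sketch using only the standard library, with illustrative prefixes:

```python
from ipaddress import ip_network

def find_overlaps(prefixes):
    # Report every pair of prefixes where one contains or intersects the other.
    nets = [ip_network(p) for p in prefixes]
    pairs = []
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                pairs.append((str(a), str(b)))
    return pairs

# Feed it the prefixes from both sides of the tunnel:
print(find_overlaps(["10.20.0.0/16", "10.20.5.0/24", "192.168.1.0/24"]))
# [('10.20.0.0/16', '10.20.5.0/24')]
```

Run it over the union of corporate, home, and VPN-pushed prefixes; any pair it prints is a candidate for a black hole.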
Another frequent problem is route duplication by the VPN client: for example, OpenVPN’s redirect-gateway def1 pushes 0.0.0.0/1 and 128.0.0.0/1 (overriding the default route without replacing it), while someone else has already set a default route. The system picks one, but return traffic can slip out the other path. Third scenario: routes to internal resources exist, but the server lacks a return route. The packet goes inside, but the reply takes a roundabout way via the ISP and is dropped. That’s asymmetry: the request arrives, but the reply takes a different path and never makes it back.
VPN Client and Server: Who Controls Routes and When
Different clients behave differently. WireGuard relies on AllowedIPs, which acts as both a filter and a routing table. Add 0.0.0.0/0 and you get a full tunnel; narrow prefixes give split tunneling. OpenVPN often uses redirect-gateway def1, the route-nopull option, and server-pushed routes. IKEv2/IPsec leverages traffic selectors and policies, and with BGP you can announce prefixes dynamically. Servers may enforce routes on clients or delegate control to the local side.
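For the WireGuard case, the standard `address_exclude` helper can generate an AllowedIPs list that tunnels everything except one subnet (say, the home LAN); a sketch, with the excluded prefix purely illustrative:

```python
from ipaddress import ip_address, ip_network

def allowed_ips_without(exclude, universe="0.0.0.0/0"):
    # Split `universe` into the prefixes that cover everything except `exclude`.
    return sorted(ip_network(universe).address_exclude(ip_network(exclude)))

def covered(addr, nets):
    return any(ip_address(addr) in n for n in nets)

allowed = allowed_ips_without("192.168.1.0/24")   # keep the home LAN local
print(len(allowed))                                # 24 prefixes
print(covered("8.8.8.8", allowed))                 # True  -> goes through the tunnel
print(covered("192.168.1.5", allowed))             # False -> stays on the LAN
```

Joining the resulting prefixes with commas gives you a paste-ready AllowedIPs value; excluding a /24 from 0.0.0.0/0 yields exactly 24 prefixes, one per flipped bit.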
In large infrastructures, VPN servers interface with SD-WAN, PBR, and firewall policies. Segment routes come via BGP or static settings; clients get only necessary subnets. It’s vital to know who’s "in charge" in your topology: the client deciding traffic flow or the server/controller enforcing rules. That affects where to look for root causes. Sometimes it’s easier to tweak client behavior (like disabling auto-metric and setting explicit params) than to "break" server policy.
Basic Diagnostics: Where to Start
Network Tests: Ping, Traceroute, MTR, and DNS Checks
Start simple. Ping the internal resource’s IP — if it works, base connectivity’s good. Ping by name tests DNS. If IP pings succeed but names don’t, check resolvers, split-horizon DNS, or DNS server order. Traceroute or tracepath (Linux) reveals which interface and path traffic actually follows. MTR is great for long routes and flaky paths, showing delays and packet loss simultaneously.
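That first "IP works, name doesn’t" split can be automated; a sketch where the resolver and connector are injectable so the logic can be exercised offline (hostnames in real use would be your internal targets):

```python
import socket

def classify(host, port=443, resolver=socket.getaddrinfo, connector=None):
    # Step 1: DNS. Step 2: TCP reachability. Failures are reported separately,
    # mirroring the "ping by IP, then by name" diagnostic order.
    try:
        infos = resolver(host, port, proto=socket.IPPROTO_TCP)
    except OSError:
        return "dns-failure"
    addr = infos[0][4][0]
    if connector is None:
        def connector(a, p):
            socket.create_connection((a, p), timeout=3).close()
    try:
        connector(addr, port)
        return f"ok ({addr})"
    except OSError:
        return f"connect-failure ({addr})"
```

Run it against an internal hostname after connecting: `dns-failure` points at resolvers or split-horizon DNS, `connect-failure` at routes or firewalls.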
Verify where internet traffic flows. Try traceroute to 8.8.8.8 or another public IP. If after VPN connects the route breaks and the first hop is inside the tunnel, you have a full tunnel — expected. But if no full tunnel exists and internet still "disappears," suspect DNS or MTU issues. Simple check: load a small page, then a heavy one. Consistent stalls on "heavy" pages? Remember: that could be MTU or PMTUD blocking.
Routing Tables: Windows, Linux, macOS — Finding Discrepancies
On Windows, use route print and Get-NetRoute; if needed, add Get-NetIPInterface to see interface metrics. Compare who owns 0.0.0.0/0, what specific routes exist, and VPN vs local network interface metrics. Often, simply disabling auto-metric and manually setting priorities can get traffic back on track. Also check IPv6 tables: route print -6 and Get-NetRoute -AddressFamily IPv6.
On Linux, look at ip route show, ip -6 route. If you suspect PBR, run ip rule list and check multiple tables (ip route show table 100, etc.). Pay attention to rule priorities and fwmark policies. Sometimes an app marks packets with fwmark, directing traffic down unexpected paths. On macOS, netstat -rn, route -n get <address>, networksetup -listallnetworkservices and scutil --dns help you see resolver order and interface priorities. Often the issue is the VPN interface getting added but "service order" not updating, so the system stubbornly picks Wi-Fi.
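When comparing tables before and after the tunnel comes up, a tiny parser makes the diff mechanical; a sketch that assumes plain `ip route show` output (attributes beyond via/dev/metric are ignored):

```python
def parse_ip_route(text):
    # Turn "ip route show" lines into dicts: destination plus any
    # via/dev/metric attributes present on the line.
    routes = []
    for line in text.strip().splitlines():
        parts = line.split()
        route = {"dst": parts[0]}
        for key in ("via", "dev", "metric"):
            if key in parts:
                route[key] = parts[parts.index(key) + 1]
        if "metric" in route:
            route["metric"] = int(route["metric"])
        routes.append(route)
    return routes

sample = """default via 192.168.1.1 dev wlan0 metric 600
10.0.0.0/8 via 10.8.0.1 dev wg0 metric 50
10.8.0.0/24 dev wg0 scope link"""

for route in parse_ip_route(sample):
    print(route)
```

Capture the output before and after connecting, parse both, and diff the lists: the routes that appeared or changed metric are usually your suspects.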
Packet Capture: Wireshark, tcpdump, and Built-in Tools
If routing tables don’t give clear answers, grab a sniffer. On Linux: tcpdump -i wg0 host target_address, or tcpdump -i any port 53 for DNS. Observe where traffic actually flows, whether replies arrive, and watch TTL changes en route. On current Windows builds, pktmon and the traditional Event Viewer cover a lot, but Wireshark remains king: filter on the VPN interface and destination. If you see SYN but no SYN-ACK, check return paths and the firewall.
Another tip — test PMTUD: set the DF bit and send large packets. If they stall mid-route, ICMP Fragmentation Needed is likely blocked. DNS chaos demands separate debugging: scutil --dns (macOS) reveals which domains go to which resolver; resolvectl status (Linux) shows the server actually used. Sometimes just reordering resolvers or adding conditional forwarding for internal domains dispels the magic.
Metrics and Priorities: How They Work on Windows, Linux, and macOS
Windows: auto-metric, InterfaceMetric, and RouteMetric
Windows auto-priority can be generous but not always smart. Faster interfaces often get lower metrics, so the system “decides” they’re more important. VPN interfaces have virtual speeds and odd metrics. The best practice: disable auto-metric on the VPN interface and explicitly set InterfaceMetric (say, 5 or 15, depending on design). Then manage RouteMetric for specific routes: lower numbers make routes preferred.
Verify via Get-NetIPInterface and adjust with Set-NetIPInterface -InterfaceMetric. For routes use New-NetRoute or Set-NetRoute with RouteMetric. If your VPN pushes a default route but you want split tunneling, use client policies like OpenVPN’s route-nopull and add explicit routes for subnets. In corporate Always On VPN and modern clients, you can configure include/exclude rules to protect internet while sending only needed prefixes through the tunnel.
Linux: Priorities, Policy-Based Routing, and Multiple Tables
Linux routing metrics tell only part of the story. With ip rule you get several routing tables, and rule priority decides which table handles a packet. Powerful but risky: you can craft complex rules by source, fwmark, or TOS — but also accidentally isolate apps. If the VPN client adds its own table and a high-priority rule, all traffic might tunnel even though the main table’s default route still points at the internet.
Practical recipe: run ip rule list, then ip route show table main and the others. Check for conflicting rules and verify the VPN table has correct return paths. IPv6 works similarly: ip -6 rule. For WireGuard, note that AllowedIPs both filters traffic and creates routes. Break it into precise prefixes for split tunneling. When using iptables/nftables, apply marks and tables carefully and document rule order — otherwise in a month no one will recall why browsers route differently from command-line tools.
macOS: Service Order, ifscope, and Resolver Priorities
On macOS, routing obeys the service order: higher-ranked network services win. Configure it via the UI or networksetup. Routes may also be scoped to a specific interface (ifscope), pinning a destination to that interface. For diagnostics, route -n get <address> shows the chosen interface and gateway. If the VPN should be "main" for certain subnets, raise its priority and set precise routes.
DNS on macOS deserves extra care: scutil --dns reveals split scenarios where internal domains resolve via corporate DNS, and others through public resolvers. If order’s off, you get mysterious failures: IP access works but names don’t. Fix with search domain config, reordering resolvers, and clear mapping of which interface handles which domain. By 2026 many corporate VPN clients auto-configure per-domain rules, but manual checks still pay off.
Gateways, NAT, and Asymmetric Routing
Default Gateway: Default Route Hijacking and Kill Switches
When a VPN captures 0.0.0.0/0, that’s expected for full tunnels. But some clever setups replace one default with two halves: 0.0.0.0/1 and 128.0.0.0/1 — splitting the world and sending both through the tunnel smoothly. Risks arise if the local default isn’t disabled, making routes flip randomly based on metrics. The result is chaotic internet access. Better to explicitly set priorities or enable a kill switch that blocks traffic outside VPN. Keep in mind: kill switches can make the internet vanish if the tunnel drops.
Double gateways and multi-WAN spice things up: two providers but one VPN means return traffic might exit the wrong way. On routers, solve this with policy routing and marks; on hosts, use careful metric configuration and ensure symmetry. Vital: packets must return the way they came, or stateful firewalls rightly drop "foreign" replies. Logs will show odd entries and confusion: "ping works but apps don’t?"
NAT-T, Hairpinning, and Return Path Symmetry
IPsec over NAT (NAT-T) is standard. But if your client sits behind carrier-grade NAT and the server uses a strict firewall, you need keepalives, consistent outgoing ports, and gentle timeouts. Hairpin NAT — when accessing an internal server by its external address — often breaks with VPN: client tunnels in, server replies outside, losing the return path. The fix: local DNS entries for internal domains and avoiding hairpin where it’s unnecessary.
ECMP and link load balancing can cause asymmetry: packets of a session take different paths. Internal firewalls sometimes dislike this and cut connections. When VPN traverses multiple providers, enable stickiness by source or 5-tuple, and double-check return routes match. Symmetry is key to robust TCP, especially with inspection along the path.
One-Way Traffic: rp_filter, Return Routes, and Firewalls
On Linux, rp_filter may drop packets if expected return routes don’t match actual ones. Complex PBR setups make this painful: request goes via table 100 through VPN, reply tries main internet route—the kernel blocks it. Fix by setting rp_filter to loose or restoring symmetry. Windows and macOS hosts have spoof protection too, and firewalls might discard suspicious streams if they detect route mismatches.
Check firewalls and app inspections: SSL-VPNs, proxies over 443, DPI—all can interfere and unexpectedly drop unusual fragments. Sometimes disabling "smart" inspection temporarily reveals if it’s the culprit. If so, create proper exceptions for VPN traffic, then re-enable inspections with refined rules.
Split Tunneling vs Full Tunnel: How to Choose and Configure Without Headaches
When Split Tunneling Is Your Best Friend
Split tunneling saves bandwidth, reduces latency to public services, and eases load on VPN concentrators. In 2026 it’s especially crucial: videoconferencing, CDNs, and SaaS all want local breakout. A simple example: only 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and internal domains go via the VPN, while the rest of the internet goes direct. Users happy, admins happy — as long as routes and DNS are configured right. Risks? Weaker control of external traffic, requiring local DLP and filtering.
Proper split means exact prefixes and clean DNS. Set conditional domain forwarding so internal names don’t leak to public resolvers. In WireGuard, carefully list AllowedIPs. In OpenVPN, disable redirect-gateway and push specific routes. In IKEv2, correctly define selectors and include lists. Consider exceptions for banks, government, and sensitive services that must always or never use tunnel per corporate policy.
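At its core, the include-list decision is just a containment check; a sketch using the RFC1918 prefixes from the example above:

```python
from ipaddress import ip_address, ip_network

TUNNEL_PREFIXES = [ip_network(p) for p in
                   ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def via_tunnel(dst):
    # True when dst falls inside the split-tunnel include list.
    addr = ip_address(dst)
    return any(addr in net for net in TUNNEL_PREFIXES)

print(via_tunnel("10.20.5.7"))     # True  -> VPN
print(via_tunnel("142.250.80.14")) # False -> direct breakout
```

The same function doubles as a config lint: run your critical destinations through it after every include-list change and make sure each lands on the side you intended.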
When Full Tunnel Is the Right and Safe Choice
Full tunnel fits where compliance and security trump speed: critical data, tight regulations, strict boundaries. You grab all traffic through the VPN, enable perimeter filtering and inspection, and get centralized control. It’s a no-surprise path if you have enough capacity and MTU is tuned well. Plus, no DNS leaks or local policy collisions. By 2026 many SSL-VPNs use QUIC over UDP maintaining solid speeds even with full tunnels.
Downsides: heavier load on concentrators and potential latency spikes. A middle ground is "smart full": all traffic tunnels, but gateway permits local breakout for categories like video, CDN. Or use a SASE architecture: clients connect to nearest points applying policies and releasing traffic locally. Capacity planning and monitoring metrics upfront are essential: a saturated tunnel quickly annoys users.
Design Patterns: Include/Exclude Lists, DNS Split, and PAC Files
Clearly define include and exclude lists. Include-lists suit split: you know exactly what networks go via VPN. Exclude-lists fit full, to cut noisy categories. For DNS, use split-horizon: corporate zones resolved internally, others via public servers, preferably with DoH/DoQ where allowed. Another trick: PAC proxy files to route web apps properly when mixing scenarios.
Document these setups as playbooks: "To add new SaaS, apply these rules; when a new VPC appears, add this prefix and check return routes." Save hours later. And add tests: small sets of curl, dig, traceroute run automatically after config changes—your safety net for outages.
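Those post-change tests fit in one small runner; the commands and targets below are placeholders for your environment, and the runner function is injectable so the logic can be exercised without touching the network:

```python
import subprocess

CHECKS = [
    # (name, command) -- targets are examples, substitute your own
    ("internal dns", ["dig", "+short", "portal.corp.example"]),
    ("internal web", ["curl", "-sS", "-m", "5", "-o", "/dev/null", "https://portal.corp.example"]),
    ("public path",  ["curl", "-sS", "-m", "5", "-o", "/dev/null", "https://example.com"]),
]

def run_checks(checks, runner=subprocess.run):
    # A check passes when its command exits 0; everything else is recorded.
    results = {}
    for name, cmd in checks:
        try:
            proc = runner(cmd, capture_output=True, timeout=10)
            results[name] = "ok" if proc.returncode == 0 else f"fail rc={proc.returncode}"
        except Exception as exc:
            results[name] = f"error: {exc}"
    return results
```

Wire it into whatever applies your configs: run the checks, print the dict, and refuse to proceed if any internal check fails.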
Practical Cases: Home Routers, Clouds, and Remote Work
RFC1918 Conflicts: Everyone Uses 10.0.0.0/8 and Nobody’s to Blame
An employee connects from home using 10.0.0.0/24 locally, while company uses 10.0.0.0/8 internally. Routing tables have specific 10.10.20.0/24 via VPN, and broader 10.0.0.0/8 via local gateway. If the more general route has lower metric, it wins and bypasses the tunnel. Diagnose with route print or ip route, then ping internal IPs. Fix by raising local route metric, adding more specific VPN prefixes, or, as a last resort, NAT at endpoints to avoid overlap.
Long-term, moving away from "DIY" 10.0.0.0/8 to neat addressing and segmentation is better. In 2026 many migrate to well-documented blocks stored centrally. For slow transitions, use policy routing and SNAT at gateways: force traffic to troublesome subnets through VPN, local otherwise. Don’t forget return routes: servers need to know how to reply to clients from odd ranges.
Clouds: AWS, Azure, GCP — P2S, BGP, and VPC Routing
A classic pain point: different addressing across multiple clouds. VPC peering, transit hubs, firewalls — and you add P2S VPN for staff. If routes aren’t cleanly announced, some subnets won’t be visible client-side. Solution: centralized route control—BGP where possible, or static prefix export from cloud to VPN concentrators with filters. Also monitor client priorities: if client gets default route via VPN, make sure cloud’s return packets use the same concentrator.
Another case: overlapping CIDRs between clouds. Fix either by readdressing over time or temporary NAT. In critical cases, enable hop-by-hop tracing: from client to cloud IP and back. Use MTR to internal cloud IP, then tcpdump on tunnel interface and cloud firewall to catch packet loss. Once you see the full picture, solutions become clear: adjust announcements, tweak metrics, or repair return routes.
Mobile and IPv6-Only Networks: NAT64, DNS64, and CGNAT
Mobile providers often offer IPv6-only with NAT64/DNS64 to reach IPv4. VPN over such stacks works but has quirks. If your VPN ignores IPv6, traffic might leak outside the tunnel over v6 and some services act strangely. Solution: full IPv6 support in VPN — add prefixes, verify routes, enable filtering. Also configure DNS so internal IPv4 resources resolve correctly even with DNS64 in play.
CGNAT behind clients breaks some tunnels if they have aggressive timeouts and no keepalives. In WireGuard, set PersistentKeepalive; in IKEv2, check DPD (dead peer detection) and SA lifetimes. If the VPN supports QUIC over 443, try it — it often traverses better. And if apps work by IP but not by hostname, verify split DNS: the wrong resolver answers outside the VPN even if corporate DNS is correct. A subtle but common trap.
2026 Tools: Observability, Telemetry, and New Tricks
eBPF and Streaming Telemetry: Seeing Traffic End-to-End
By 2026, eBPF is mainstream not just in clusters but on workstations. It shows which process created a socket, which route was chosen, and where packets got lost. Tools like Cilium Hubble for servers and light agents on hosts help catch tricky PBR cases and asymmetry. Why is this useful? We finally see that the browser’s traffic goes through VPN, while update tools connect straight to the internet because fwmark and table 200 grabbed the stream.
Windows sees progress with pktmon integration and network logs. macOS has better per-app profiles, and on Linux bpftrace scripts highlight “who’s going where.” Add centralized dashboards: tunnel latency, MTU errors, split/full traffic share, top domains. With visualization, the “it doesn’t work” talk turns into “at 11:42 yesterday 30% of clients lost PMTUD on Far East link.”
Synthetic Tests and Health Checks: Don’t Wait for Fire
Set up synthetic probes: pings to key subnets, HTTPS to internal portals, DNS queries to zones—run from various points under different policies. Let tests run every minute and alert on anomalies. Client-side, a light agent holds hosts and targets list. Concentrators expose health check APIs showing tunnel states, creation time, auth errors. Properly tuned alerts save nerves and time.
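One detail worth copying from monitoring practice: alert on consecutive failures, not single lost probes. A minimal sketch of that debouncing logic:

```python
def alert_indices(samples, threshold=3):
    # Fire once when a probe fails `threshold` times in a row;
    # a single lost packet should not page anyone.
    streak, alerts = 0, []
    for i, ok in enumerate(samples):
        streak = 0 if ok else streak + 1
        if streak == threshold:
            alerts.append(i)
    return alerts

probes = [True, False, False, True, False, False, False]
print(alert_indices(probes))  # [6] -- only the second outage is long enough
```

Feed it the per-minute pass/fail stream from each probe target; tuning the threshold is how you trade alert latency against noise.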
Also implement "canary" routing: a small group of test clients gets configs early. Failures affect fewer users. This DevOps staple works great in networking. Add change logs showing who approved updates and when. Rollbacks become one click. Transparency isn’t luxury—it’s protection from human error.
Smart Assistance and Tips: From LLMs to Client Advisors
Not everyone loves "AI everywhere," but in real life it helps. A console helper analyzing ip route and traceroute outputs to spot route conflicts is a lifesaver at 3 AM. It runs locally without external calls. For example, it notices that the only route covering 10.20.5.0/24 is a broad 10.20.0.0/16 via the local gateway and suggests adding the specific prefix through the tunnel. Or it notices internal.corp DNS points at a public resolver and recommends conditional forwarding.
Many VPN clients in 2026 come with built-in checks: automatic MTU diagnosis, DNS leak tests, validating split lists before applying. If a client warns, listen up. These catch issues we otherwise find only in production after complaints. And yes, enable verbose logs. When logs are silent, we guess. When logs speak, we get facts.
Security, Performance, and Fine Tuning
MTU, MSS Clamping, and PMTUD Black Holes
Too large an MTU in a tunnel leads to strange hangs. Pages load halfway then freeze. Fix by choosing correct MTU and enabling MSS clamping for TCP. On Linux, this is an nftables/iptables rule to lower MSS to a safe range (say 1360–1380 for most UDP tunnels). Check PMTUD: if ICMP is blocked en route, the smart mechanism fails. Hard-setting MSS or allowing ICMP on firewalls often helps. Run A/B tests before and after—results usually clear.
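The arithmetic behind those numbers is simple; in this sketch the 80-byte overhead is an assumption (roughly WireGuard over an IPv6 underlay) and should be replaced with your tunnel's actual encapsulation cost:

```python
def safe_mss(link_mtu=1500, tunnel_overhead=80, inner_headers=40):
    # MSS = (link MTU - encapsulation overhead) - inner IPv4 (20) + TCP (20) headers.
    return link_mtu - tunnel_overhead - inner_headers

print(safe_mss())         # 1380 -- inside the conservative 1360-1380 range
print(safe_mss(1500, 60)) # 1400 -- a lighter encapsulation over an IPv4 underlay
```

Use the result as the clamp value in your nftables/iptables MSS rule, then confirm with a capture that SYN packets actually advertise it.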
QUIC and HTTP/3 over UDP behave differently with MTU sensitivity but issues remain. Lost fragments and blocked large datagrams degrade connections. Heuristic: start with conservative MTU and raise it as needed, not the opposite. Always document settings in playbooks to avoid forgetting what worked.
DNS: Split-Horizon, DoH/DoQ, and Resolver Order
DNS can make or break your week. If internal domains resolve through public resolvers, expect NXDOMAIN or worse. Use split-horizon: corporate zones go to internal DNS, others to public servers, preferably with DoH/DoQ as policies allow. On Windows check interface DNS order, on macOS use scutil --dns, on Linux resolvectl. If your VPN client can assign domains to resolvers, enable that.
Fighting DNS leaks is standard in 2026. Many clients check where queries actually go. Regularly run tests: internal zones must query through tunnel, public as per policy. Don’t forget caches—they can hide issues. Clearing cache and retrying is a simple useful step.
IPv6-First, ULA, and Happy Eyeballs
IPv6 isn’t a guest anymore—it’s the host. Ignoring IPv6 in VPN causes policy bypass and unpredictable behavior. Add routes for ULA and global IPv6, ensure filters allow needed ports and protocols. Check Happy Eyeballs: apps pick v4 or v6 based on latency. If v6 goes outside the tunnel and v4 inside, expect chaos. Solution: one policy—either both stacks tunnel or use clear split with controlled DNS and routing.
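Happy Eyeballs ordering (RFC 8305) roughly prefers IPv6 and then interleaves address families; a deliberately simplified sketch that shows why a broken stack shifts traffic instead of stalling it:

```python
def interleave_families(addrs):
    # Prefer the first IPv6 result, then alternate families so the client
    # can fall back quickly if one stack is black-holed.
    v6 = [a for a in addrs if ":" in a]
    v4 = [a for a in addrs if ":" not in a]
    ordered = []
    while v6 or v4:
        if v6:
            ordered.append(v6.pop(0))
        if v4:
            ordered.append(v4.pop(0))
    return ordered

print(interleave_families(["1.2.3.4", "2001:db8::1", "5.6.7.8", "2001:db8::2"]))
# ['2001:db8::1', '1.2.3.4', '2001:db8::2', '5.6.7.8']
```

The consequence for VPN design: if the v6 candidates bypass the tunnel while v4 goes inside, the client will quietly prefer the untunneled path whenever it answers faster.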
IPv6 networking drops NAT as we know it, so asymmetry issues become clearer. Carefully configure return routes. Remember large v6 MTUs are a plus, but only if PMTUD works well. Otherwise you return to "page loads then stalls" symptoms. Keep your checklist close.
Checklists, Playbooks, and Automation
"Don’t Panic" Checklist: Quick Steps in 10 Minutes
1. Test connectivity by IP and by name.
2. Traceroute to internal and external addresses.
3. Review the routing table and interface metrics.
4. Check DNS resolvers and the split configuration.
5. Inspect MTU and try lowering MSS.
6. Capture packets on the VPN and local interfaces.
7. Verify return routes from the server.
8. Temporarily disable "smart" inspection and reassess behavior.
9. Compare VPN client and server configs.
10. Document findings and fixes.
This checklist may seem simple but saves hours. Follow steps in order, no skipping. Sometimes the fix is step two, other times step nine. Main thing: keep track and note results. One month later, you’ll thank past you for those notes.
Playbooks for Windows, Linux, and macOS
Windows: disable auto-metric on VPN interface, set InterfaceMetric manually, check RouteMetric for conflicting subnets. Diagnose with route print and PowerShell. DNS: prioritize interfaces and correct resolvers for internal domains. Linux: review ip rule and tables, order priorities, configure fwmark if needed. WireGuard requires careful AllowedIPs. Set MSS clamping and verify PMTUD. macOS: service order via networksetup, scutil --dns for resolvers, route -n get for interface choice. Always collect VPN client logs and packet captures.
Don’t forget change templates: YAML files listing subnets, metrics, domains, and rules. Store in Git, run reviews, test on canaries. When problems arise, rollback is a single commit away. This is standard practice and works brilliantly in networking. Automation won’t replace brains but frees them for what truly matters.
GitOps for Routes: Safe Checking and Deployment
Infrastructure as code reached routing too. Keep your prefix and exclusion lists in a repo. Open Pull Requests trigger automated synthetic tests in staging, then on canary groups. If all’s good, deploy to all clients or VPN servers. If not, rollback and investigate. No more "forgot to update on Pete’s box while Mary’s works." Transparent and repeatable.
Add static checks: CIDR validator, ban on overlapping prefixes without flags, verify new routes don’t "break" users’ internet. And keep an approval log: who authorized what. You turn chaos into a managed process. Users stop being beta testers on production changes.
FAQ: Quick Answers to Common Questions
Why Does Internet Disappear After Connecting to VPN?
Most often, the VPN client hijacks the default route (full tunnel), but the return path or kill switch is missing or blocking traffic. Check routing table for 0.0.0.0/0, 0.0.0.0/1 and 128.0.0.0/1 and their metrics. Run traceroute to a public IP: if first hop is inside tunnel, internet should flow through VPN. If not, examine DNS (resolver might point to internal servers unreachable externally) or MTU (large packets stuck). Quick tests: reduce MSS, temporarily switch to public DNS, check if strict kill switch is on, and restore routing symmetry.
How to Fix Conflicting Identical Subnets at Client and Company?
Three options: 1) company-side readdressing (reliable but slow), 2) temporary NAT for conflicting subnet on VPN edge (quick but complex), 3) policy-based routing with explicit routes for desired prefixes and raised metric on the general route at client. Start with diagnosis: route print or ip route to see which route wins. Add more specific prefixes to VPN config to override broad ones. Check server return routes and firewall filters. Register the conflict in address registry to fix permanently, not just treat symptoms monthly.
Split Tunneling or Full Tunnel — What to Choose?
If security and control are priorities, go full tunnel. If performance and saving bandwidth, especially for public services, matter more, choose split. Hybrid approach: full with local breakout at gateway or SASE-style where nearest point enforces policies and releases internet traffic. Don’t overlook DNS and MTU—wrong setup in split causes invisible internal services or leaks. Ideally, pilot on canaries, measure metrics, then roll out widely. Blind choices usually mean rework.
Why Do Pings Work but Websites Don’t Load?
Ping uses ICMP; websites use TCP/UDP over HTTP(S). If ICMP passes but TCP hangs, check MTU and MSS—likely large segments are cut mid-path and PMTUD fails due to blocked ICMP Fragmentation Needed messages. Another cause: DNS—access by IP works but name resolution fails? Verify resolver paths and whether queries bypass VPN. Third: firewalls or SSL inspection blocking unexpected traffic (e.g., QUIC). Use a sniffer—if you see SYN without SYN-ACK, investigate return routes and server filters.
How to Set Network Interface and Route Priorities?
On Windows, disable auto metric and set InterfaceMetric for VPN interface, then assign RouteMetric on key routes. Check with Get-NetRoute and route print. On Linux, review ip route and ip rule: rule priorities may redirect traffic to different tables. On macOS, adjust service orders via networksetup and verify with route -n get. Across all systems: lower metric and longer prefix beats the rest. Document your configurations to prevent future magic surprises.
How to Diagnose IPv6-Only Problems?
First, ensure your VPN supports IPv6 and has relevant routes (ULAs and global). Run ping6/tracepath6 to internal IPv6; check ip -6 route or route print -6. Look over DNS: internal domains’ AAAA records must resolve via corporate resolver. MTU is critical in v6; ICMPv6 blocking breaks connections. If an app prefers IPv6 and bypasses VPN due to Happy Eyeballs, configure policy so both stacks align: either both tunnelled or v6 disabled on specific routes. VPN client logs and packet capture provide the final word.