---
title: 'Tunneled IPv6 To Your Entire LAN'
date: '2020-09-12T19:57:44+00:00'
author: dewdude
layout: page
list: bullet
permalink: /dev/2020/SEP/12-tunneledipv6.php
---

While most of the major ISPs are starting to support IPv6, some of us on the old Ma Bell leftovers are still waiting. Sure, they'll run a fiber-optic line all the way to my house and pump me more bandwidth than I know what to do with; but if you want that line to carry IPv6, you're out of luck.

While IPv6 tunneling was most popular in the very early days, when no ISP offered IPv6 at all, it's still very much a thing, and a lot of people who require IPv6 still use it. Granted, most of the IPv6-only stuff I see isn't anything important; but it was something I'd wanted to play around with, and in fact I did on a couple of occasions. Doing a single tunnel connection to just my PC wasn't what I considered "complex enough" to satisfy my need to learn something, though. I wanted to provide the tunnel connection to my entire LAN: full automatic configuration of every IPv6-capable device.

My first attempt was met with some success a couple of years ago. I actually wrote a blog entry on the old pickmy.org about it, since there were a number of things I wound up having to feel my way through. Two years later, when I got the Xen setup, I was happy to be able to go back, look at it, and build a new tunnel-router. The second version was even better than the first, which had been plagued by instability and by my attempts to provide a network-wide firewall before realizing that was a thing ip6tables would do. This same machine would go on to act as the router between my main LAN and the storage's private LAN, as well as a network bridge for some wired segments after I ran out of switch ports.

But then I screwed up. I accidentally nuked the VM's disk when I powered it down to make some RAM adjustments, broke my *entire* network trying to re-bridge my Ethernet segments, climbed to where the server hides, dropped in a network switch, actually had to use the console on the XenServer to kill the VM that was breaking my network, and then got to build a new VM to replace the one I nuked. I figured that was as good a time as any to document my entire process, streamline it a little, and do a new write-up.

***Update:*** *Due to some issues with what I think is HE, I started having more problems with this setup, so I took it down. 26-1-2021*

***Update:*** *Verizon has finally enabled native IPv6 on my FiOS connection.*

### VM Specifications & Duties

The VM in Xen was provisioned with the following setup:

- 10 vCPU (this is a 12-core hypervisor)
- 1GB RAM
- 10GB HDD
- 4 network interfaces:
  - eth0 – connection to main LAN (192.168.1.0/24)
  - eth1 – connection to Synology LAN (10.1.1.0/24)
  - eth2 – connection to Ethernet segment #1
  - eth3 – connection to Ethernet segment #2

The OS is straight-up Ubuntu Server 20.04 with just an OpenSSH install to start. The VM itself will be providing only networking-related tasks:

- making the connection to the IPv6 tunnel end-point
- routing our IPv6 /64 subnet to our LAN using stateless autoconfiguration
- providing a firewall for the /64 subnet
- bridging eth0, eth2 & eth3 (br0)
- routing between the main LAN and the Synology LAN
- providing DHCP for the Synology (which I'll explain below)

The Synology itself is just plugged in to network port #3 (Network 2) of the PowerEdge R610, and all of the other VMs are attached to the network bridge Xen provides for that port. My ultimate goal was to avoid a bottleneck in my network, so that a VM serving data from the Synology could pump it out one gigabit interface as fast as it could suck it in from another gigabit interface.

Rather than setting a static assignment in my Synology, it instead gets its network assignment from a DHCP server configured with a pool containing a single IP; the Synology is the only device on that network configured to request a DHCP assignment. I did this as a way of lazy future-proofing: if I'm ever in a situation where I have just enough time to yank the Synology before getting out of dodge, I won't have to fight as much (in theory) to get access to it again later.

### Initial Setup & Configuration

After you boot into your fresh installation, it will probably tell you there are updates to install. So let's just do the usual thing of updating everything:

```
sudo apt-get update
sudo apt-get upgrade
```

With the system now updated, we're going to start installing some things and writing some configuration files. If all you want is a router for your tunneled IPv6, then a lot of this stuff won't apply to you. [You can skip down for the IPv6 stuff.](#ipv6) The vast majority of the installs and configurations I do right now are just to get the IPv4 side of things going. (My music is stored on the Synology; priorities, you know.)

Surprisingly, this is all I have to install for my IPv4 configuration; the rest of the work is writing all the config files:

```
sudo apt-get install bridge-utils isc-dhcp-server
```

The isc-dhcp-server uses two config files: one lives at `/etc/dhcp/dhcpd.conf` and the other at `/etc/default/isc-dhcp-server`. Neither one is going to be very complicated. `/etc/dhcp/dhcpd.conf` comes with a lot of examples commented out and a few default settings left in. You can comment out everything you don't need; I usually just rm the file and start a blank one:

```
/etc/dhcp/dhcpd.conf

default-lease-time 600;
max-lease-time 7200;

subnet 10.1.1.0 netmask 255.255.255.0 {
  range 10.1.1.2 10.1.1.2;
}
```

When I said a single-IP pool, I literally meant it. Everything is pretty much hard-coded in its configuration to expect the Synology at 10.1.1.2, so we'll make sure it lives at 10.1.1.2. I know someone out there is probably cringing at this, but I'll reiterate: *the Synology is the only device on that network configured for DHCP. All the other VMs have straight-up static assignments.*
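If the one-IP pool still bothers you, the more conventional dhcpd approach is a host reservation keyed to the MAC address; something like this sketch would do the same job (the MAC is the Synology's, as seen in the lease listing further down):

```
# Alternative to the single-IP pool: a fixed reservation, so only
# the Synology's MAC can ever be handed 10.1.1.2.
host synology {
  hardware ethernet 00:11:32:b5:9a:51;
  fixed-address 10.1.1.2;
}
```

Either way gets you a box that always lives at 10.1.1.2. The second config file is even simpler: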
```
/etc/default/isc-dhcp-server

INTERFACES=eth1
```

I can't remember what else is in /etc/default/isc-dhcp-server, but the main thing you want here is to set the interface the server will listen on. In my case, this is eth1, the virtual interface attached to Xen's "Network 2" bridge. You obviously need to replace this with your own network port if you're bothering to do this. (You probably don't.)
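Before moving on, it doesn't hurt to make sure dhcpd can actually parse what you just wrote; the daemon has a test mode for exactly this (a quick sanity check on top of my original process):

```
# Parse the config without starting the daemon (-t = test, -cf = config file)
sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf

# Restart the service so it picks up both files
sudo systemctl restart isc-dhcp-server
```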
#### Netplan Configuration

Somewhere in /etc/netplan you should have at least one .yaml file. Open it up in your editor of choice and configure all the interfaces:

```
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      addresses: [10.1.1.1/24]
    eth2:
      dhcp4: no
    eth3:
      dhcp4: no
  bridges:
    br0:
      macaddress: 3a:c6:d3:bd:37:7d
      interfaces:
        - eth0
        - eth2
        - eth3
      dhcp4: yes
```

As part of my streamlined process, I also went ahead and added my HE tunnel in a new .yaml file. So don't be surprised when I repeat this later for the folks who skipped my IPv4 setup:

```
network:
  version: 2
  tunnels:
    he-ipv6:
      mode: sit
      remote: 216.66.22.2
      local: 192.168.1.253
      addresses:
        - "2001:470:7:3c2::2/64"
      gateway6: "2001:470:7:3c2::1"
```

I'll explain this further in the IPv6 setup section; since I was building "the whole shebang" in one go, I didn't completely break my setup into sections. At this point I also edited /etc/sysctl.conf to enable v4 and v6 packet forwarding, but that's because my next step was the ultimate way of applying settings:

```
sudo reboot
```

I rebooted for no other reason than just in case the system wanted it after the initial batch of updates. It would be a good idea to run `sudo netplan apply` to make sure your configuration works before rebooting; .yaml can be a pain. I was pretty sure I had the spacing right and figured if I botched the config, I'd either fix it after boot or just start over. I've done this entirely too many times; things came right up.

I ran a couple of network tests, but I'll just show my checks for the IPv4 stuff; no reason to be even more redundant. First thing I did was check my network interfaces, followed by DHCP and v4 routing:

```
dewdude@ipv6:~$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 3a:c6:d3:bd:37:7d brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 16:77:54:ec:a1:22 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.1/24 brd 10.1.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1477:54ff:feec:a122/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether a6:52:b8:fc:5d:18 brd ff:ff:ff:ff:ff:ff
5: eth3: mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether ca:2d:2a:74:58:a9 brd ff:ff:ff:ff:ff:ff
6: br0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:c6:d3:bd:37:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.253/24 brd 192.168.1.255 scope global dynamic br0
       valid_lft 86314sec preferred_lft 86314sec
    inet6 fe80::38c6:d3ff:febd:377d/64 scope link
       valid_lft forever preferred_lft forever
7: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
8: he-ipv6@NONE: mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/sit 192.168.1.253 peer 216.66.22.2
    inet6 2001:470:7:3c2::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::c0a8:1fd/64 scope link
       valid_lft forever preferred_lft forever

dewdude@ipv6:~$ dhcp-lease-list
To get manufacturer names please download http://standards.ieee.org/regauth/oui/oui.txt to /usr/local/etc/oui.txt
Reading leases from /var/lib/dhcp/dhcpd.leases
MAC                IP        hostname  valid until          manufacturer
00:11:32:b5:9a:51  10.1.1.2  synology  2020-09-12 01:56:32  -NA-

D:\Downloads>ping 10.1.1.2

Pinging 10.1.1.2 with 32 bytes of data:
Reply from 10.1.1.2: bytes=32 time=2ms TTL=62
Reply from 10.1.1.2: bytes=32 time=1ms TTL=62
```
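Since bridge-utils is already installed, brctl gives a quick second look at the bridge itself; just a sanity check on top of the above:

```
# br0 should list eth0, eth2 and eth3 as member interfaces
brctl show br0
```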
The Synology has its IP via DHCP, and kernel routing is gladly sending my packets bound for 10.1.1.2 over to it. Before I get on with the IPv6 stuff, I'm going to add some rules to my firewall and flip it on:

```
sudo ufw allow from 192.168.1.0/24
sudo ufw route allow from 192.168.1.0/24 to 10.1.1.0/24
sudo ufw enable
```

I could have just told it to allow everything to/from on the v4 side, since the machine is behind NAT; but this works too. I'm not even sure how much I need these rules; but look, they can't hurt. As long as my SMB and SSH sessions kept working after running enable, that was a win. Now I get to repeat some of the IPv6 stuff and go off my notes, since you (unlike me) won't be bringing your tunnel up with a reboot.

## IPv6 Related Setup

If you're one of the people who skipped over my IPv4 stuff, you should thank the other readers for putting up with some of the walls of redundant text they're about to see. When I set this up recently, I did it in an order that mixed some of the IPv4 and IPv6 steps together. OK, actually, the only thing I did early was configure the IPv6 tunnel interface.

We're going to be using a Hurricane Electric IPv6 tunnel with just the supplied /64 subnet they give me. I could go with the /48 and route that, but I really, seriously, have no idea what I would do with that many IP addresses. I totally get the whole "do it to do it" and "do it for learning"; I'll get around to it some day. Right now, I'll just stick with routing the /64 to my LAN.

### Netplan Configuration

```
sudo nano /etc/netplan/99-he-tunnel.yaml
```

While you could put this in your default configuration, it's probably a good idea to put it in a separate file that gets applied last, ensuring your regular interfaces (and thus your internet connection) are configured before the tunnel tries to come up.

```
network:
  version: 2
  tunnels:
    he-ipv6:
      mode: sit
      remote: 216.66.22.2
      local: 192.168.1.253
      addresses:
        - "2001:470:7:3c2::2/64"
      gateway6: "2001:470:7:3c2::1"
```

What's really nice is that this is literally the example HE gives for netplan. I remember back in 2018, when I first tried this, their instructions required a bit of tweaking to get working. One thing I need to point out is that my router, despite being some ISP-supplied consumer thing, fully supports Protocol 41 forwarding. In cases where you can forward Protocol 41 to the machine acting as the tunnel client, you use your actual LAN IP as the `local` address instead of your public one. Obviously, the IPv6 address and gateway you use will be whatever HE assigns you.

```
sudo netplan apply
```

Assuming it doesn't complain about indentation or something, the tunnel should appear when you take a peek at your network devices:

```
dewdude@ipv6:~$ ip addr
.....a bunch of non-v6 stuff some people read earlier.....
7: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
8: he-ipv6@NONE: mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/sit 192.168.1.253 peer 216.66.22.2
    inet6 2001:470:7:3c2::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::c0a8:1fd/64 scope link
       valid_lft forever preferred_lft forever
```

Obviously device order doesn't matter; this machine just happens to have four interfaces, lo, and a bridge ahead of the tunnel. But if you see something similar to this, then you should have IPv6 on that machine.
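If you want a second opinion before testing, the v6 routing table should now have a default route pointing out the tunnel; a quick extra check, not something my notes originally bothered with:

```
# Should print something like: default via 2001:470:7:3c2::1 dev he-ipv6 ...
ip -6 route show default
```

If the HE gateway shows up there, a ping should confirm things end to end: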
```
dewdude@ipv6:~$ ping -c 4 google.com
PING google.com(iad30s07-in-x0e.1e100.net (2607:f8b0:4004:804::200e)) 56 data bytes
64 bytes from iad30s07-in-x0e.1e100.net (2607:f8b0:4004:804::200e): icmp_seq=1 ttl=121 time=4.79 ms
64 bytes from iad30s07-in-x0e.1e100.net (2607:f8b0:4004:804::200e): icmp_seq=2 ttl=121 time=4.23 ms
64 bytes from iad30s07-in-x0e.1e100.net (2607:f8b0:4004:804::200e): icmp_seq=3 ttl=121 time=4.76 ms
64 bytes from iad30s07-in-x0e.1e100.net (2607:f8b0:4004:804::200e): icmp_seq=4 ttl=121 time=5.00 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 4.234/4.696/4.998/0.281 ms
```

Looks like our tunnel is working. Now, before we can actually route it, we need to make the kernel forward packets; both IPv4 and IPv6 forwarding are turned off by default. There are numerous ways to do this; what I do is edit `/etc/sysctl.conf` and uncomment the relevant lines:

```
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
#  Enabling this option disables Stateless Address Autoconfiguration
#  based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1
```

Reload the file to make the changes take effect instantly:

```
sudo sysctl -p
```

Now, part of this setup is providing a network-wide firewall for IPv6, similar to the NAT effect we've grown so used to on IPv4. Earlier in my setup, I defined a couple of rules for IPv4 so local traffic can always get to/through this machine. I didn't mention it at the time, but I also pre-added a bunch of IPv6 rules from my notes:

```
sudo ufw route allow from 2001:470:8:3c3::/64
sudo ufw allow from 2001:470:8:3c3::/64
sudo ufw allow from fe80::/10
sudo ufw allow from fe00::/7 port 546 to fe00::/7 port 547 proto udp
```

Obviously the first two rules are so our /64 subnet can route through the machine. The third allows link-local traffic and the fourth lets DHCPv6 through (ports 546/547 are the DHCPv6 client and server); beyond that, all I really know is that my old recovered blog post told me routing didn't work with ufw enabled without them. I seem to recall ufw did everything else needed for a workable stateful firewall covering every device using IPv6. You can still open ports to specific machines, or even make a machine wide open; you just have to look up what your ufw rule syntax would be. For example:
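Something like this sketch opens SSH to a single machine behind the router (the ::100 host address is made up; substitute the real suffix of the target box):

```
# Allow inbound SSH through the router to one specific host on the /64
sudo ufw route allow proto tcp from any to 2001:470:8:3c3::100 port 22
```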
If I remember correctly, at this point you should be able to manually configure a static IPv6 address out of your /64 on a client, specify the link-local address of the router as your gateway, specify an IPv6 DNS server, and get some data moving. I recall doing this on my Windows machine before continuing to the next step and seeing traffic move. You can try that if you want; if it doesn't work, switch that client back to auto-config and continue with the setup.

```
sudo apt-get install radvd
...
Sep 11 22:26:54 ipv6 radvd[1635]: can't open /etc/radvd.conf: No such file or directory
Sep 11 22:26:54 ipv6 radvd[1635]: Insecure file permissions, but continuing anyway
Sep 11 22:26:54 ipv6 radvd[1635]: exiting, failed to read config file
Sep 11 22:26:54 ipv6 systemd[1]: radvd.service: Control process exited, code=exited, status=1/FAILURE
Sep 11 22:26:54 ipv6 systemd[1]: radvd.service: Failed with result 'exit-code'.
Sep 11 22:26:54 ipv6 systemd[1]: Failed to start Router advertisement daemon for IPv6.
```

When you install radvd, it tries to start the service and fails due to the lack of a configuration; it doesn't even create a blank radvd.conf or anything in /etc. Not a problem:

```
sudo nano /etc/radvd.conf

interface br0 {
    AdvSendAdvert on;
    MaxRtrAdvInterval 30;
    AdvOtherConfigFlag on;
    prefix 2001:470:8:3c3::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
    RDNSS 2620:0:ccc::2 2620:0:ccd::2 {
    };
};
```

Replace br0 with whatever interface you want to advertise on, and replace the /64 prefix with whatever you're assigned (or the /48, if you took that). The RDNSS entries point to OpenDNS, but you can specify whichever ones you want.

```
sudo systemctl enable radvd
sudo systemctl start radvd
sudo systemctl status radvd

● radvd.service - Router advertisement daemon for IPv6
     Loaded: loaded (/lib/systemd/system/radvd.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2020-09-11 22:30:53 EDT; 10s ago
```

It was probably enabled during the install, but I'll just run it again. As we can see, it's up and running. In my experience, any IPv6-capable device on the network should magically start showing connectivity on v6; I know my laptop did. I literally watched the network status change the moment the service started.

```
D:\Downloads>ping google.com

Pinging google.com [2607:f8b0:4004:808::200e] with 32 bytes of data:
Reply from 2607:f8b0:4004:808::200e: time=6ms
Reply from 2607:f8b0:4004:808::200e: time=6ms
Reply from 2607:f8b0:4004:808::200e: time=6ms
Reply from 2607:f8b0:4004:808::200e: time=7ms

Ping statistics for 2607:f8b0:4004:808::200e:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 6ms, Maximum = 7ms, Average = 6ms
```

If you look at the network configuration on your devices, you might notice what looks like a fustercluck of addresses; and that's after you get over the fact that they're 128 bits represented in hex. Windows, and a number of other things, will typically have multiple IPv6 addresses assigned from your /64 subnet.

In stateless autoconfiguration, the MAC address is used as part of the IP address: the 48-bit MAC is split in half, the 16 bits ff:fe get inserted in the middle, one bit in the first octet gets flipped, and the resulting 64 bits become the "back half" of your IPv6 address. This sounds neat until you realize that while it helps eliminate the chance of a duplicate 64-bit suffix, it also makes you more identifiable than normal. Even if I were to get an entirely different prefix, all anyone would have to do is look at the last 64 bits to know: "oh, that guy got a new prefix."
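To make the derivation concrete, here it is for this router's own eth1, whose MAC and resulting link-local address both appear in the `ip addr` output earlier; a few lines of bash, purely as a sketch:

```
#!/bin/bash
# EUI-64: split the MAC, wedge ff:fe into the middle, and flip the
# universal/local bit (second-lowest bit of the first octet).
mac="16:77:54:ec:a1:22"
IFS=: read -r a b c d e f <<< "$mac"
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))    # 16 -> 14
printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$a" "$b" "$c" "$d" "$e" "$f"
# Prints fe80::1477:54ff:feec:a122 -- the link-local we saw on eth1.
# (Sketch only; it skips the zero-compression a real stack would do.)
```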
But IPv6 easily supports multiple addresses on an interface, and with something like 18 quintillion IPs in my subnet… we can afford to waste them. So some operating systems will generate a new random suffix, add that address to the interface, and use it when talking to the outside world. At one point I had about six different "Temporary IPv6 Addresses" on my laptop because Windows wasn't cleaning them out properly. I will, of course, also mention that you can still set a static IPv6 address, and it will happily co-exist with the stateless and "temporary" ones; so you can assign something full of words spelled in hex and likely never have it conflict with a stateless-configured IP.

```
sudo apt-get install wide-dhcpv6-server
```

You absolutely don't need a DHCPv6 server in a stateless configuration, unless you want to support older things that don't follow RFC 6106 and so can't learn DNS servers from router advertisements. For stuff like that, you need the DHCPv6 server strictly to serve DNS server information. If you want to do a full stateful configuration, I'm sure there are tutorials out there.

While I don't know if this is something I actually need to worry about, I'm going to do it anyway for no other reason than backwards compatibility:

```
/etc/wide-dhcpv6/dhcp6s.conf

option domain-name-servers 2620:0:ccc::2 2620:0:ccd::2;
```

```
sudo systemctl start wide-dhcpv6-server
```

At this point, I'm done. My LAN has the IPv6 tunnel again, and the IPv4 stuff the VM was providing is going again too. Later that night I made my way back over and moved the LAN segments from the emergency switch back into the R610.

And that VM I broke that broke my network? I never figured out what exactly happened. I reassigned its interfaces to virtual networks, undid the configuration that broke the network, and then put it back on the network as normal. All in all, the entire process was only a 2.5-hour ordeal, with about an hour of that spent getting extra stressed after it became a LOT more difficult to get into the hypervisor. I wound up having to load iDRAC, fight with Java's security, and kill the network-breaking VM over the virtual console. Actually fixing everything after I restored network connectivity took maybe 45 minutes.