> In a nutshell, an IPv4x packet is a normal IPv4 packet, just with 128‑bit addresses. The first 32 bits of both the source and target address sit in their usual place in the header, while the extra 96 bits of each address (the “subspace”) are tucked into the first 24 bytes of the IPv4 body. A flag in the header marks the packet as IPv4x, so routers that understand the extension can read the full address, while routers that don’t simply ignore the extra data and forward it as usual.
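The quoted layout is concrete enough to sketch. Below is a guess at what a parser for such a packet might look like; the exact offsets (source subspace first, then destination, in the first 24 body bytes) are assumptions read off the description above, not a real spec:

```python
import ipaddress
import struct

def parse_ipv4x(header: bytes, body: bytes):
    """Reconstruct the two 128-bit IPv4x addresses per the quoted scheme:
    the high 32 bits sit in the usual IPv4 header slots, and the low 96 bits
    (the "subspace") of source then destination occupy the first 24 body
    bytes. The field layout is hypothetical."""
    # Offsets 12 and 16 are the real IPv4 header src/dst positions.
    src_hi, dst_hi = struct.unpack_from("!4s4s", header, 12)
    src_lo, dst_lo = body[:12], body[12:24]  # 96 bits each
    return (ipaddress.IPv6Address(src_hi + src_lo),
            ipaddress.IPv6Address(dst_hi + dst_lo))
```

A legacy router would keep reading only the 4-byte fields at offsets 12 and 16 and route on those, ignoring the subspace bytes entirely.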
So you have to ship new code to every 'network element' to support IPv4x. Just like with IPv6.
So you have to update DNS with new resource record types ("A" is hard-coded to 32 bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (And their DNS idea won't work, or at least won't work any differently than IPv6: a lot of legacy code did not have room in its data structures for multiple reply types. Sure, you'd get the "A" record, but unless you updated the code to ask for the "AX" record (for IPv4x addresses) you could never get the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
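The fallback trap described here is easy to sketch. The record-selection logic below is purely illustrative ("AX" is the made-up record type from the comment; nothing like it exists in real DNS):

```python
def pick_address_records(rrsets: dict, client_is_updated: bool):
    """Select which records a stub resolver hands to the application.
    Un-updated code never asks for the hypothetical 'AX' type, so it can
    only ever see the 32-bit 'A' answer -- the same trap as A vs AAAA."""
    if client_is_updated and "AX" in rrsets:
        return "AX", rrsets["AX"]
    return "A", rrsets.get("A", [])
```

An updated client sees the long address; legacy code sees only the short one, exactly as with A-only IPv6 clients today.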
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6.
A single residential connection that gets a single IPv4 address also gets to use all the /96 'behind it' with this IPv4x proposal? People complain about the "wastefulness" of /64s now, and this is even more so (to the tune of 32 bits). You'd probably be better served with pushing the new bits to the other end… like…
If you put part of the address in the body space, you can't encrypt the entire body.
IPv6 adoption has been linear for the last two decades. Currently, 48% of Google traffic is IPv6.[1] It was 30% in 2020. That's low, because Google is blocked in China. Google sees China as 6% IPv6, but China is really around 77%.
Sometimes it takes a long time to convert infrastructure. Half the Northeast Corridor track is still on 25Hz. There's still some 25Hz power around Niagara Falls. San Francisco got rid of the last PG&E DC service a few years ago. It took from 1948 to 1994 to convert all US freight rail stock to roller bearings.[2] European freight rail is still using couplers obsolete and illegal in the US since 1900. (There's an effort underway to fix this. Hopefully it will go better than Eurocoupler from the 1980s. Passenger rail uses completely different couplers, and doesn't uncouple much.)[3]
This should also bring to mind the technological leapfrogging effect in stages of development, which China has clearly taken advantage of.
Yes, I was wondering if I was missing something reading the hypothetical: this still splits the Internet into two incompatible (but often bridged etc.) subnetworks, one on the v4 side, one on the v4x side, right?
It just so happens that, unlike for v6, v4 and v4x have some "implicit bridges" built-in (i.e. between everything in v4 and everything in v4x that happens to have the last 96 bits unset). Not sure if that actually makes anything better or just kicks the can down the road in an even more messy way.
> everything in v4x that happens to have the last 96 bits unset
That's pretty much identical to 6in4 and similar proposals.
The Internet really needs a variant of the "So, you have an anti-spam proposal" meme that used to be popular. Yes, it kills fresh ideas in the bud sometimes, but it also helps establish a cultural baseline for what is constructive discussion.
Nobody needs to hear about the same old ideas that were subsumed by IPv6 because they required a flag day, delayed address exhaustion by only about six months, or exploded routing tables to impossible sizes.
If you have new ideas, let's hear them, but the discussion around v6 has been on constant repeat since before it was finalized and that's not useful to anyone.
I feel like the greatest vindication of v6 is that I’m reading the same old arguments served over a quietly working v6 connection more often than not. While people were busy betting on the non-adoption of v6, it just happened.
This wasn't a proposal, but an alternate history: the world where the people who wished for "IPv4 but with extra address space" get their way. By the end I come down on being happy that we're in the IPv6 world, but wishing interoperability could be slicker.
> It just so happens that, unlike for v6, v4 and v4x have some "implicit bridges" built-in (i.e. between everything in v4 and everything in v4x that happens to have the last 96 bits unset).
See perhaps:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
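The quoted mapping is mechanical; for instance, with Python's `ipaddress` module it is a few lines:

```python
import ipaddress

def to_6to4_prefix(v4: str) -> ipaddress.IPv6Network:
    """Build the 6to4 /48 prefix for a global IPv4 address by appending
    its 32 bits to 2002::/16, per RFC 3056."""
    packed = ipaddress.IPv4Address(v4).packed   # 4 bytes
    prefix = b"\x20\x02" + packed + bytes(10)   # 2002:VVVV:VVVV::
    return ipaddress.IPv6Network((ipaddress.IPv6Address(prefix), 48))

print(to_6to4_prefix("192.0.2.4"))  # 2002:c000:204::/48
```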
It's not automatic, there were many proposed and utilized mechanisms for autodetecting translation servers and so on. By now though, if you want IPv6, you order real IPv6, and don't need some translation.
Yes, but the compatibility is very, very easy to support for hardware vendors, software, sysadmins, etc. Some things might need a gentle stroke (mostly just enlarging a single bitfield), but after that everything just works: hardware, software, websites, operators.
A protocol is a social problem, and ipv6 fails exactly there.
What stymies IPv6 is human laziness more than anything else. It's not hard to set up. Every network I run has been dual stack for 10 years now, with minimal additional effort. People are just too lazy to put forth even a minimal effort when they believe that there's no payoff to it.
> What stymies IPv6 is human laziness more than anything else. It's not hard to set up.
I think the biggest barrier to IPv6 adoption is that this is just categorically untrue and people keep insisting that it isn't, reducing the chance that I'd make conscious efforts to try to grok it.
I've had dozens of weird network issues in the last few years that have all been solved by simply turning off IPv6. From hosts taking 20 seconds to respond, to things not connecting 40% of the time, DHCP leases not working, devices not able to find the printer on the network, everything simply works better on IPv4, and I don't think it's just me. I don't think these sort of issues should be happening for a protocol that has had 30 years to mature. At a certain point we have to look and wonder if the design itself is just too complicated and contributes to its own failure to thrive, instead of blaming lazy humans.
> People are just too lazy to put forth even a minimal effort when they believe that there's no payoff to it.
For me just disabling IPv6 has given the biggest payoff. Life is too short to waste time debugging obscure IPv6 problems that still routinely pop up after over 30 years of development.
Ever since OpenVPN silently routed IPv6 over clearnet I've just disabled it whenever I can.
This goes the other direction too. I just this second fixed a problem with incredibly slow SSH connections because a host lookup which returned an IPv4 address instantly was waiting 10+ seconds for an IPv6 response which would never come.
Now I'm sure I can fix DNSmasq to do something sensible here, but the defaults didn't even break - they worked in the most annoying way possible where had I just disabled IPv6 that would've fixed the entire problem right away.
I'm confused by the argument that replacing equipment is something that is always possible. It doesn't matter that it's easy to support by updating or replacing the hardware - a lot of hardware isn't going to be updated or replaced.
ISPs are used to this though, and tunnel a lot of packets. If you have DSL at home, your ISP doesn't have a router in every edge cabinet - your DSL router sets up a layer-2 point-to-point tunnel to the ISP's nearest BRAS (broadband remote access server) in a central location. All IP routing happens there. Because it's a layer-2 tunnel it looks like your router is directly connected to the BRAS, even though there are many devices in between. I don't know how it's done on CATV and fiber access networks.
If an ISP uses an MPLS core, every POP establishes a tunnel to every other POP. IP routing happens only at the source POP as it chooses which pre-established tunnel to use.
If an ISP is very new, it likely has an IPv6-only core, and IPv4 packets are tunneled through it. If an ISP is very old, with an IPv4-only core, it can do the reverse and tunnel IPv6 packets through IPv4. It can even use private addresses for the intermediate nodes as they won't be seen outside the network.
No, in this hypothetical, routers that don't know about IPv4x will still route based on the top 32 bits of the address which is still in the same place for IPv4 packets. If your machine on your desk and the other machine across the internet both understand IPv4x, but no other machines in the middle do, you'll still get your packets across.
Well no, all the routers on your subnet need to understand it.
So let’s say your internet provider owns x.x.x.x. It receives a packet directed to you at x.x.x.x.y.y… and forwards it to your network, but your local router has old software and treats all packets to x.x.x.x.* as directed to it. You never receive any messages directed to you, even though your computer would recognise IPv4x.
Your local machine isn't on the IPv4 internet if it doesn't have a globally routable IPv4 address.
Your home router that sits on the end of a single IPv4 address would need to know about IPv4x, but in this parallel world you'd buy a router that does.
You are missing the point: updating "network elements" was never the problem. The Linux kernel has had IPv6 support since 2.6. RedHat got IPv6 in 2008. Nginx got it in 2010. And yet there are plenty of IPv4-only systems out there. Why?
Software updates scale _very well_ - once author updates, all users get the latest version. The important part is sysadmin time and config files - _those_ don't scale at all, and someone needs to invest effort in every single system out there.
That's where IPv6 really dropped the ball, by making dual-stack the default. In IPv4x, there is no dual-stack.
I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
I have my small office network which runs on a mix of IPv4 and IPv4x addresses. Most Windows/Linux machines are on IPv4x, but that old network printer and security controller still have IPv4 addresses (with the router translating responses). It still all works together. There is only one firewall rule set, there is only one monitoring tool, etc... My ACL list on NAS server has mix of IPv4 and IPv4x in the same list...
So this is a very stark contrast to IPv6 mess, where you have to bring up a whole parallel network, setup a second router config, set up a separate firewall set, make a second parallel set of addresses, basically setup a whole separate network - just to be able to bring up a single IPv6 device.
(Funny enough, I bet one _could_ accelerate IPv6 deployment a lot by having a standard that _requires_ 6to4/4to6/NAT64 technology in each IPv6 network... but instead the IPv6 supporters went for an all-or-nothing approach.)
> Software updates scale _very well_ - once author updates, all users get the latest version. The important part is sysadmin time and config files - _those_ don't scale at all, and someone needs to invest effort in every single system out there.
With IPv6 the router needs to send out RAs. That's it. There's no need to do anything else with IPv6. "Automatic configuration of hosts and routers" was a requirement for IPng:
When I was with my last ISP I turned on IPv6 on my Asus router; it got an IPv6 WAN connection and a prefix delegation from my ISP, and my devices (including my Brother printer) started getting IPv6 addresses. The Asus had a default-deny firewall, so all incoming IPv6 connections were blocked. I had to do zero configuration on any of the devices (laptops, phones, IoT, etc).
> I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
So if you cannot connect via >32b addresses you fall back to 32b addresses?
> I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
A French ISP deployed this across their network of four million subscribers in five months (2007-11 to 2008-03).
> There is only one firewall rule set, there is only one monitoring tool, etc... My ACL list on NAS server has mix of IPv4 and IPv4x in the same list...
If an (e.g.) public web server has public address (say) 2.3.4.5 to support legacy IPv4-only devices, but also has 2.3.4.5.6.7.8.9 to support IPv4x devices, how can you have only one firewall rule set?
> So this is a very stark contrast to IPv6 mess, where you have to bring up a whole parallel network, setup a second router config, set up a separate firewall set, make a second parallel set of addresses, basically setup a whole separate network - just to be able to bring up a single IPv6 device.
Having 10.11.12.13 on your PC as well as 10.11.12.13.14.15.16 as per IPv4x is "a second parallel set of addresses".
It is running a whole separate network because your system has the address 10.11.12.13 and 10.11.12.13.14.15.16. You are running dual-stack because you support connection from 32-bit-only un-updated, legacy devices and >32b updated devices. This is no different than having 10.11.12.13 and 2001:db8:dead:beef::10:11:12:13.
IPv6 is a parallel system. It exists with IPv4. You don't need to stop using IPv4 - ever - if you don't want to. You can have both the chicken and egg together as long as is needed.
Wouldn't this proposal not require isps to do anything? They already assign every user a unique ipv4 address. Then, with this proposal, if I want to have a bunch of computers behind that single ipv4 ip, I could do it without relying on NAT tricks
> Wouldn't this proposal not require isps to do anything? They already assign every user a unique ipv4 address.
The reason there's an IPv4 address shortage is because ISPs assign every user a unique IPv4 address. In this alternative timeline, ISPs would have to give users less-than-an-IPv4 address, which probably means a single IPv4x address if we're being realistic and assuming that ISPs are taking the path of least resistance.
> Who owns all these new addresses? You do. If you own an IPv4 address, you automatically own the entire 96‑bit subspace beneath it. Every IPv4 address becomes the root of a vast extended address tree. It has to work this way because any router that doesn’t understand IPv4x will still route purely on the old 32‑bit address. There’s no point assigning part of your subspace to someone else — their packets will still land on your router whether you like it or not.
So the folks that just happen to get in early on the IPv4 address land rush (US, Western world) now also get to grab all this new address space?
What about any new players? This particular aspect of the idea seems to reward incumbents, unlike IPv6, where new players (and countries and continents) that weren't online early get a chance to get equal footing in the expanded address space.
> The new players would each get a /24 and everyone would say that's "enough".
From where?
All then-existing IPv4 addresses would get all the bits behind them. There would, at the time, still be IPv4 addresses available that could be given out, and as people got them they would also get the extended "IPv4x" addresses associated with them.
But at some point IPv4 addresses would all be allocated… along with all the extended addresses 'behind' them.
Then what?
The extended IPv4x addresses are attached to the legacy IPv4 addresses they are 'prefixed' by, so once the legacy bits are assigned, so are the new bits. If someone comes along post-legacy-IPv4 exhaustion, where do new addresses come from?
You're in the exact same situation as we are now: legacy code is stuck with 32-bit-only addresses, new code is >32 bits… just like with IPv6. Great, you managed to purchase/rent a legacy address range… but you still need a translation box for non-updated code… like with CG-NAT and IPv6.
So under this IPv4x proposal one gets an IPv4 /24, and receives a whole bunch of extended address space 'for free'.
But right now you can get an IPv4 /24 (as you say), but you can get an IPv6 allocation 'for free' as we speak.
In both cases legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destinations (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the IPv4x and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
Essentially. I imagined this hypothetical happening decades ago, when there were still a few /8s unallocated. I suggested that those would be set aside for IPv4x only.
Yeah, and the value of IPv4 address space would plummet, and there would be no reason for any company to own a /8. Clawing back address space would involve a few emails and a few months to get network configs ready.
If it became universally adopted, then there wouldn't be much need for owning crazy amounts of ipv4 addresses, so the price of addresses would drop. If this proposal was not adopted universally, then we would be pretty much in the same situation as we are with ipv4 addresses
And at the same time the address format and IP header are extended, effectively still splitting one network into two (one of which is a superset of the other)?
A fundamentally breaking change remains a breaking change, whether you have the guts to bump your version number or not.
If the extra stuff is mandatory for global reachability then, again, it’s conceptually a mandatory part of the header, no matter where you actually put it or what you call it.
> The central idea is that a IPv4x packet is still a globally routable IPv4 packet.
That's cool and all, but end-user edge routers are absolutely going to have to be updated to handle "IPv4x". Why? Because the entire point of IPvNext is to address address-space exhaustion, so their ISP will stop giving them IPv4 addresses.
This means that the ISP is also going to have to update significant parts of their systems to handle "IPv4x" packets, because they're going to have to handle customer site address management. The only thing that doesn't have to change is the easiest part of the system to get changed... the core routers and associated infrastructure.
Yes. The router in your home would absolutely need to support IPv4x if you wanted to make use of the extended address space, just like how in the real world your home router needs to support NAT if you want to make use of shared IP.
> The router in your home would absolutely need to support IPv4x if you wanted to make use of the extended address space...
No. The router in your home would need to support IPv4x, or you would get no Internet connection. Why? Because IPv4x extends the address space "under" each IPv4 address -thus- competing with it for space. ISPs in areas with serious address pressure sure as fuck aren't going to be giving you IPv4 addresses anymore.
As I mentioned, similarly, ISPs will need to update their systems to handle IPv4x, because they are -at minimum- going to be doing IPv4x address management for their customers. They're probably going to -themselves- be working from IPv4x allocations. Maybe each ISP gets knocked down from several v4 /16s or maybe a couple of /20s to a handful of v4 /32s to carve up for v4x customer sites.
Your scheme has the adoption problems of IPv6, but even worse because it relies on reclaiming and repurposing IPv4 address space that's currently in use.
> the easiest part of the system to get changed... the core routers and associated infrastructure.
Is that really the easy bit to change? ISPs spend years trialling new hardware and software in their core. You go through numerous cheapo home routers over the lifetime of one of their chassis. You'll use whatever non-name box they send you, and you'll accept their regular OTA updates too, else you're on your own.
When you're adding support for a new Internet address protocol that's widely agreed to be the new one, it absolutely is. Compared to what end-users get, ISPs buy very high quality gear. The rate of gear change may be lower than at end-user sites but because they're paying far, far more for the equipment, it's very likely to have support for the new addressing protocol.
Consumer gear is often cheap-as-possible garbage that has had as little effort put into it as possible. [0] I know that long after 2012, you could find consumer-grade networking equipment that did not support (or actively broke) IPv6. [1] And how often do we hear complaints of "my ISP-provided router is just unreliable trash, I hate it", or stories of people saving lots of money by refusing to rent their edge router from their ISP? The equipment ISPs give you can also be bottom-of-the-barrel crap that folks actively avoid using. [2]
So, yeah, the stuff at the very edge is often bottom-of-the-barrel trash and is often infrequently updated. That's why it's harder to update the equipment at edge than the equipment in the core. It is way more expensive to update the core stuff, but it's always getting updated, and you're paying enough to get much better quality than the stuff at the edge.
[0] OpenWRT is so, so popular for a reason, after all.
[1] This was true even for "prosumer" gear. I know that even in the mid 2010s, Ubiquiti's UniFi APs broke IPv6 for attached clients if you were using VLANs. So, yeah, not even SOHO gear is expensive enough to ensure that this stuff gets done right.
[2] You do have something of a point in the implied claim that ISPs will update their customer rental hardware with IPv6 support once they start providing IPv6 to their customer. But. Way back when I was so foolish as to rent my cable modem, I learned that I'd been getting a small fraction of the speed available to me for years because my cable modem was significantly out of date. It required a lucky realization during a support call to get that update done. So, equipment upgrades sometimes totally fall through the cracks even with major ISPs.
I entirely disagree. Due to a combination of ISPs sticking with what they know and refusing to update (because of the huge time/cost in validating it), and vendors minimising their workloads/risk exposure and only updating what they "have to". The vendors have a lot of power here and these big new protocols are just more work.
In addition, smaller ISPs have virtually no say in what software/features they get. They can ask all they want, they have little power. It takes a big customer to move the needle and get new features into these expensive boxes. It really only happens when there's another vendor offering something new, and therefore a business requirement to maintain feature parity else lose big-customer revenue. So yeh, if a new protocol magically becomes standard, only then would anyone bother implementing and supporting it.
I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship, the boxes are cheap, and just plug and play. They're relatively simple and easy to validate for 99% of usecases. If your internet stops working (because you didn't get the new hw/sw), they ship you a replacement, 2 days later it's fixed.
But I will just say, and slightly off topic of this thread, the lack of multiple extension headers in this proposed protocol instantly makes it more attractive to implement compared to v6.
> I entirely disagree. Due to a combination of ISPs sticking with what they know and refusing to update... and vendors minimising their workloads/risk exposure and only updating what they "have to"...
You misunderstand me, though the misunderstanding is quite understandable given how I phrased some of the things.
I expect the updating usually occurs when buying new kit, rather than on kit that's deployed... and that that purchasing happens regularly, but infrequently. I'm a very, very big proponent of "If it's working fine, don't update its software load unless it fixes a security issue that's actually a concern.". New software often brings new trouble, and that's why cautious folks do extensive validation of new software.
My commentary presupposed that
[Y]ou're adding support for a new Internet address protocol that's widely agreed to be *the* new one
which I'd say counts as something that a vendor "has to" implement.
> I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship...
I expect enough people don't use the ISP-rented equipment that it's -in aggregate- actually not much easier to update edge equipment. That's what I was trying to get at with talking about "ISP-provided routers & etc are crap and not worth the expense".
On the other hand, consumer routers route in software, which is easily updated. Core routers with multi-terabit-per-second connections use specialized ASICs to handle all that traffic, which can never be updated.
> On the other hand, consumer routers route in software, which is easily updated.
Sure. On the other other hand, companies going "Is this a security problem that's going to cost us lots of money if we don't fix it? No? Why the fuck should I spend money fixing it for free, then? It can be a headline feature in the new model." means that -in practice- they aren't so easily updated.
If everyone in the consumer space made OpenWRT-compatible routers, switches, and APs, then that problem would be solved. But -for some reason- they do not and we still get shit like [0].
It seems like this really only helps intermediate routers.
All endpoints need to upgrade to IPv4x before anyone can reasonably use it. If I have servers on IPv4x, clients can reach my network fine, but they then can't reach individual servers. Clients need to know IPv4x to reach IPv4x servers.
Similarly, IPv4x clients talking to IPv4 servers do what? Send an IPv4x packet with the remaining IPv4x address bits zeroed out? Nope, a v4 server won't understand it. So they're sending an IPv4 packet, and the response gets back to your network but doesn't know how to make the last mile back to the IPv4x client?
I desperately wish there was a way to have "one stack to rule them all", whether that is IPv4x or IPv4 mapped into a portion of IPv6. But there doesn't seem to be an actually workable solution to it.
In my view, the problem largely comes from the way the Internet has grown. Many of these concepts developed together with the Internet, and IPv4 was the protocol that evolved with them.
I see many ISPs deploying IPv6 but still following the same design principles they used for IPv4. In reality, IPv6 should be treated as a new protocol with different capabilities and assumptions.
For example, dynamic IP addresses are common with IPv4, but with IPv6 every user should ideally receive a stable /64 prefix, with the ability to request additional prefixes through prefix delegation (PD) if needed.
Another example is bring-your-own IP space. This is practically impossible for normal users with IPv4, but IPv6 makes it much more feasible. However, almost no ISPs offer this. It would be great if ISPs allowed technically inclined users to announce their own address space and move it with them when switching providers.
You're correct, but the issue is that static IPv6 isn't even available as an option, at least in my experience with two ISPs in my country. It may be different in other places.
It's also a privacy issue; in fact, dynamic prefixes are mandatory in some European countries, because otherwise you'd be easily tracked by your address. But it's also mandated that you can get a static one if you ask.
I personally feel that IPv6 is one of the clearest cases of second-system syndrome. What we needed was more address bits. What we got was a nearly total redesign-by-committee with many elegant features but difficult backwards compatibility.
Which IPv6 “gratuitous” features (i.e. anything other than the decision to make a breaking change to address formats and accordingly require adapters) would you argue made adoption more difficult?
IPv6 gets a lot of hate for all the bells and whistles, but on closer examination, the only one that really matters is always “it’s a second network and needs me to touch all my hosts and networking stack”.
Don’t like SLAAC? Don’t use it! Want to keep using DHCP instead? Use DHCPv6! Love manual address configuration? Go right ahead! It even makes the addresses much shorter. None of that stuff is essential to IPv6.
In fact, in my view TFA makes a very poor case for a counterfactual IPv4+ world. The only thing it really simplifies is address space assignment.
It's not that they loaded it up with features, it's that elegance was prized over practicality.
Simplifying address space assignment is a huge deal. IPv4+ allows the leaves of the network to adopt IPv4+ when it makes sense for them. They don't lose any investment in IPv4 address space, they don't have to upgrade to all IPv6 supporting hardware, there's no parallel configuration. You just support IPv4 on the terminals that want or need it, and on the network hardware when you upgrade. It's basically better NAT that eventually disappears and just becomes "routing".
> They don't lose any investment in IPv4 address space
What investment? IP addresses used to be free until we started running out, and I don't think anything of value would be lost for humanity as a whole if they became non-scarce again.
> they don't have to upgrade to all IPv6 supporting hardware
But they do, unless you're fine with maintaining an implicitly hierarchical network (or really two) forever.
> It's basically better NAT
How is it better? It also still requires NAT for every 4x host trying to reach a 4 only one, so it's exactly NAT.
>> They don't lose any investment in IPv4 address space
> What investment? IP addresses used to be free
Well they're not now, so it's an investment. Any entity that has IP addresses doesn't want its competition to get IP addresses, even when this leads to bad outcomes overall.
It doesn't work like this. SLAAC is a standards-compliant way of distributing addresses, so you MUST support it unless you're running a very specific isolated setup.
Most people using Android will come to your home and ask "do you have WiFi here?"
> In what universe does implementing DHCP-PD but not 'regular' DHCPv6 make any kind of sense?
Their policy makes a lot of sense. It's hindering IPv6 deployment, but it is preventing ISPs from allocating less than a /64 to customers. It has nothing to do with standards, actually.
DHCP-PD makes a lot of sense though, because if an ISP is willing to give you a prefix, they are by default nice guys.
This is about client devices on home and corporate networks connecting to (e.g.) Wifi, and not about ISP connections and addresses on the WAN port of your home router.
Why should my Pixel 10 send out DHCP-PD packets when it connects to Wifi, but not DHCPv6?
Having a real address, a link-local address, and a unique local address, and the requirement to use the right one in each circumstance
The removal of ARP and of broadcast, and the enforcement of multicast
The almost-required removal of NAT and the quasi-religious dislike of it from many network people. Instead of simply src-NATting your traffic behind ISP1 or ISP2, you are supposed to have multiple public IPs and somehow make your end devices choose the best routing rather than your router.
All of these were choices made in addition to simply expanding the address scope.
In fairness, aside from whining about the minority attitude towards NAT [0] the person you're replying to absolutely met your definition of "gratuitous":
(i.e. anything other than the decision to make a breaking change to address formats and accordingly require adapters)
I (and I expect the fellow you're replying to) believe that if you're going to have to rework ARP to support 128-bit addresses, you might as well come up with a new protocol that fixes things you think are bad about ARP.
And if the fellow you're replying to doesn't know that broadcast is another name for "all-hosts multicast", then he needs to read a bit more.
[0] Several purity-minded fools wanted to pretend that IPv6 NAT wasn't going to exist. That doesn't mean that IPv6 doesn't support NAT... NAT is and has always been a function of the packet mangling done by a router that sits between you and your conversation partner.
In my opinion the redesign of IPv6 was perfectly fine. The IPv6 headers are significantly simpler than those of IPv4 and much easier to process at great speed.
There was only one mistake, but it was huge and all backwards compatibility problems come from it. The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
IPv6 added very few features, but it mostly removed or simplified the IPv4 features that were useless.
> The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
Like
> Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.[5]
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
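For what it's worth, both mappings quoted above are easy to check with Python's standard `ipaddress` module. A quick sketch, using only the example addresses from the quotes:

```python
import ipaddress

# IPv4-mapped IPv6: the v4 address sits in the low 32 bits of ::ffff:0:0/96.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
print(mapped)               # ::ffff:c000:280  (Python renders the low bits in hex)
print(mapped.ipv4_mapped)   # 192.0.2.128

# 6to4: append the 32-bit v4 address to 2002::/16 to get a /48 prefix.
v4 = ipaddress.IPv4Address("192.0.2.4")
sixto4 = ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))
print(sixto4)               # 2002:c000:204::/48
```

So 192.0.2.4 really does yield the 2002:c000:204::/48 prefix described in the quote, leaving 16 bits of subnet and 64 bits of host space.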
> The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
The entire IPv4 address space is included in the IPv6 address space, in fact it's included multiple times depending on what you want to do with it. There's one copy for representing IPv4 addresses in a dual-stack implementation, another copy for NAT64, a different copy for a different tunneling mechanism, etc.
There are several ways to map the IPv4 address space into the IPv6 address space, going right back to the first IPv6 addressing architecture RFC. Every compatibility protocol added a new one.
IPv6 added IPSEC which was backported to IPv4.
IPv6 tried to add easy renumbering, which didn't work and had to be discarded.
IPv6 added scoped addresses which are halfbaked and limited. Site-scoped addresses never worked and were discarded; link-scoped addresses are mostly used for autoconfiguration.
IPv6 added new autoconfiguration protocols instead of reusing bootp/DHCP.
Or did you mean something else? You still need a dual stack configuration though, there's nothing getting around that when you change the address space. Hence "happy eyeballs" and all that.
> You still need a dual stack configuration though, there's nothing getting around that when you change the address space
Yes there is, at least outside of the machine. All you need to do is have an internal network (100.64/10, 169.254/16, wherever) local to the machine. If your machine is on, say, 2001::1, then when an application attempts to listen on an IPv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. This can be even more hidden than things like internal Docker networks.
Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
When the packet arrives at a router with v6 and v4 on (assume your v4 address is 2.2.2.2), that does a 6:4 translation, just like a router does v4:v4 nat
The packet then runs over the v4 network until it reaches 1.0.0.1 with a source of 2.2.2.2, and a response is sent back to 2.2.2.2 where it is de-NATted to a destination of 2001::1 and source of ::ffff:100:1
That way you don't need to change any application unless you want to reach IPv6-only devices, you don't need to run separate IPv4 and IPv6 stacks on your routers, and you can migrate easily, with no more overhead than a typical 4-to-4 NAT for RFC 1918 devices.
Likewise you can serve on your ipv6 only devices by listening on 2001::1 port 80, and having a nat which port forwards traffic coming to 2.2.2.2:80 to 2001::1 port 80 with a source of ::ffff:(whatever)
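The host-side half of this already exists today: with IPV6_V6ONLY disabled, a single AF_INET6 socket accepts IPv4 clients, which show up as ::ffff: mapped addresses. A minimal sketch (assumes a Linux-style dual-stack loopback; the port is ephemeral):

```python
import socket

# One v6 listening socket; v4 clients appear as IPv4-mapped addresses.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))                # all interfaces, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# Connect over plain IPv4...
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, peer = srv.accept()
print(peer[0])                     # typically ::ffff:127.0.0.1 on Linux
conn.close(); cli.close(); srv.close()
```

The server never opened an IPv4 socket, yet it is reachable from v4; that's the same trick, just confined to one host instead of being done network-wide.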
(using colons as a delimiter wasn't great either; you end up with http://[2001::1]:80/, which is horrible)
> ...you end up with http://[2001::1]:80/ which is horrible
That is horrible, but you do no longer have any possibility of confusion between an IP address and a hostname/domain-name/whatever-it's-called. So, yeah, benefits and detriments.
> Your network then has a route to ::ffff:0:0/96 via a gateway...
I keep forgetting about IPv4-mapped addresses. Thanks for reminding me of them with this writeup. I should really get around to playing with them some day soon.
Could have used 2001~1001~~1 instead of 2001:1001::1, which looks weird today, but wouldn't have if it had been chosen all those years ago.
(unless : as an ipv6 separator predates its use as a separator for tcp/udp ports, in which case tcp/udp should have used ~. Other symbols are available)
So, I bothered to play around with these addresses. I find myself a little confused by what you wrote.
> If you machine is on say 2001::1, then when an application attempts to listen on an ipv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. ...
> Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
What's the name of this translation mechanism that you're talking about? It seems to be the important part of the system.
I ask because when I visit [0] in Firefox on a Linux system with both globally-routable IPv6 and locally-routable IPv4 addresses configured, I see a TCP conversation with the remote IPv4 address 192.168.2.2. When I remove the IPv4 address (and the IPv4 default route) from the local host, I get immediate failures... neither v4 nor v6 traffic is made.
When I add the route it looks like you suggested I add
ip route add ::ffff:0:0/96 dev eth0 via <$DEFAULT_IPV6_GATEWAY_IP>
I see the route in my routing table, but I get exactly the same results... no IPv4 or IPv6 traffic.
Based on my testing, it looks like this is only a way to represent IPv4 addresses as IPv6 addresses, as ::ffff:192.168.2.2 gets translated into ::ffff:c0a8:202, but the OS uses that to create IPv4 traffic. If your system doesn't have an IPv4 address configured on it, then this doesn't seem to help you at all. What am I missing?
> I ask because when I visit [0] in Firefox on a Linux system with both globally-routable IPv6 and locally-routable IPv4 addresses configured, I see a TCP conversation with the remote IPv4 address 192.168.2.2. When I remove the IPv4 address (and the IPv4 default route) from the local host, I get immediate failures... neither v4 nor v6 traffic is made.
Yes, that's the failure of ipv6 deployment.
Imagine you have two vlans, one ipv4 only, one ipv6 only. There's a router sitting across both vlans.
VLAN1 - ipv6 only
Router 2001::1
Device A 2001::1234
VLAN2 - ipv4 only
Router 192.168.1.1
Device B 192.168.1.2
Device A pings 192.168.1.2, the OS converts that transparently to ::ffff:192.168.1.2, it sends it to its default router 2001::1
That router does a 6>4 translation, converting the destination to 192.168.1.2 and the source to 192.168.1.1 (or however it's configured)
It maintains the protocol/port/address in its state as any ipv4 natting router would do, and the response is "unnatted" as an "established connection" (with connection also applying for icmp/udp as v4 nat does today)
An application on Device A has no need to be ipv6 aware. The A record in DNS which resolves to 192.168.1.2 is reachable from device A despite it not having a V4 address. The hard coded IP database in it works fine.
Now if Device B wants to reach Device A, it uses traditional port forwarding on the router, where 192.168.1.1:80 is forwarded to [2001::1234]:80, with source of ::ffff:192.168.1.2
With this in place, there is no need to update any applications, and certainly no need for dual stack.
The missing bits are the lack of common NAT64/NAT46 support -- I don't believe it's built into the normal Linux network chain like v4 NAT is -- and the lack of transparent upgrading of v4 handling at the OS level.
You will certainly need to update applications, because they won't be able to connect to v6 addresses otherwise. 464xlat only helps you connect to v4 addresses. It just means that updating _all_ of your applications is no longer a prerequisite of turning v4 off on your network.
Ah. So, you're saying that what you describe doesn't actually exist. That the best you can currently do is stuff like [0] and [1], where the IPv4 or IPv6 client uses v4 or v6 addresses (respectively) and an intermediary sets up a fake destination IP on both ingress and egress and does the v4 <-> v6 address translation.
If so, that was not at all clear from your original comment.
It does exist though. The OS part is 464xlat and the router part is NAT64. You can try the second part out by setting your DNS server to one of the ones listed on https://nat64.net/, which will work with hostnames. To get IP literals to work you need 464xlat, which is currently a bit annoying to set up on Linux.
(Note that using the servers provided by nat64.net is equivalent to using an open proxy, so you probably don't want it for general-purpose use. You would probably want either your ISP to run the NAT64 (equivalent to CGNAT), or to run it on your own router (equivalent to NAT).)
The standard prefix for NAT64 is 64:ff9b::/96, although you can pick any unused prefix for it. ::ffff:0:0/96 is the prefix for a completely different compatibility mechanism that's specifically just for allowing an application to talk to the kernel's v4 stack over AF_INET6 sockets (as you figured out). It was a confusing choice of prefix to use to describe NAT64.
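The NAT64 synthesis itself (RFC 6052, with the well-known prefix) is just "OR the 32-bit v4 address into the low 32 bits of the prefix". A sketch:

```python
import ipaddress

# RFC 6052: embed an IPv4 address in the final 32 bits of the
# well-known NAT64 prefix 64:ff9b::/96.
NAT64 = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(v4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(NAT64.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(synthesize("1.1.1.1"))     # 64:ff9b::101:101
print(synthesize("192.0.2.4"))   # 64:ff9b::c000:204
```

This is what a DNS64 server does when it fabricates AAAA records for v4-only destinations; the NAT64 router reverses the mapping on the way out.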
Everyone is saying this but... what are the new features, actually? There are a couple of cleanups to the header, removal of fragmentation, and a bunch of things like SLAAC you don't have to use if you don't want to?
Note though that I'm not proposing IPv4x as something we should work towards now. Indeed, I come down on the side of being happy that we're in the IPv6 world instead of this alternative history.
This sounds a lot like what we have in 6to4 (for 25+ years now), where nodes behind two ipv4 derived prefixes can automatically talk to each other p2p, and use a gateway to communicate with the rest of the v6 internet.
You can configure it statically but there used to be the anycast address 192.88.99.1 and the idea was that you'll get routed to the nearest one by magic of BGP. It was retired once native IPv6 deployment took off.
I think the intention was to use normal internet (anycast) routing to send it to the closest translator - which would be your ISP, or the nearest ISP that supports IPv6, or a tier 1 network which is happy to have extra traffic traversing its network unnecessarily since they get paid for all of it. (The same reason HE runs the free tunnel broker)
That was one of the problems with 6to4. If there was no gateway or it was overloaded or there was a gateway but you couldn't reach it because of a weird firewall, all your IPv6 packets would be silently dropped and you'd have no idea why. And this was before happy eyeballs so your computer might default to broken IPv6.
> Who owns all these new addresses? You do. If you own an IPv4 address, you automatically own the entire 96‑bit subspace beneath it. Every IPv4 address becomes the root of a vast extended address tree.
Huh:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
I actually disagree: that's the road taken. NAT is practically this. When you're behind a NAT, you're effectively using a 64-bit address space. Two more layers of NAT, and you can have 128-bit address space. "The first part" of the address is a globally routable IPv4 address, and the rest is kept by the routers on the path tracking NAT connection states.
And NAT needed zero software changes. That's why it's won. It brought the benefits of whatever extension protocol with existing mechanisms of IPv4.
IPv6 isn't an alternative to IPv4, it's an alternative to all IPv4xes.
I'm glad this exists, because it demonstrates there are ways to make a next-gen IP interoperable with IPv4, and that while IPng had interoperability as part of its assessment criteria, it was effectively nixed because they didn't want to go with the other proposals at the time, which had some consideration for it.
I tend to give much greater weight to the lack of a credible, ready-to-deploy asteroid response for Earth defense, 41 years after consensus was reached on the KT boundary and 70 years since the 'space age' began.
Motivation for retiring IPv4 completely would NOT be to make the world a better, more routable place. It would be to deliberately obsolete old products in order to sell new ones.
My fantasy road-not-taken for IPv4 is one where it originally used 36 bit addressing like the PDP-10. 64 billion addresses would be enough that we probably wouldn't have had the address exhaustion crisis in the first place, though routing would still get more complicated as most of the world's population (and many devices) started communicating over IP networks.
A flat address space causes problems tracking it. Think of the IPv4 space, which is fragmenting towards having a completely separate route for every /24 (the smallest unit). Longer addresses enable levels of aggregation, which is good for routing, hence 128 bits.
Even better if it had used 48 bit addressing like Ethernet. 32 bits is almost the worst possible option since it was big enough to seem inexhaustible initially while not actually being big enough.
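For scale, a trivial check of the widths being compared here:

```python
# Address-space sizes for the widths discussed: 32 bits ran out,
# 36 or 48 might have lasted longer, 128 is what IPv6 chose.
for bits in (32, 36, 48, 128):
    print(f"{bits}-bit: {2**bits:,} addresses")
```

2^36 is about 69 billion and 2^48 about 281 trillion, versus 2^32's 4.3 billion, so either would have pushed exhaustion out considerably, though neither leaves the aggregation headroom of 128 bits.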
There are many things wrong with this analogy, but the most important ones seem to be:
- NAT gateways are inherently stateful (per connection) and IP networks are stateless (per host, disregarding routing information). So even if you only look at the individual connection level, disregarding the host/connection layering violation, the analogy breaks.
- NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
Until you have a stateful firewall, which any modern end network is going to have
> NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
If 192.168.0.1 and 0.2 both hide behind 2.2.2.2 and talk to 1.1.1.1:80 then they'll get private source IPs and source ports hidden behind different public source ports.
Unless your application requires the source port to not be changed, or indeed embeds the IP address in higher layers (active mode ftp, sip, generally things that have terrible security implications), it's not really a problem until you get to 50k concurrent connections per public ipv4 address.
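That 4-tuple keying is easy to sketch. A toy model (all names hypothetical; a real NAT also tracks protocol, timeouts, and state):

```python
import itertools

# Toy NAPT: map (src_ip, src_port, dst_ip, dst_port) to a fresh public
# source port, so two LAN hosts can reach the same server:port at once.
PUBLIC_IP = "2.2.2.2"
_next_port = itertools.count(49152)          # ephemeral port range
out_map = {}  # 4-tuple -> public source port
in_map = {}   # (public port, dst_ip, dst_port) -> (src_ip, src_port)

def translate_out(src_ip, src_port, dst_ip, dst_port):
    key = (src_ip, src_port, dst_ip, dst_port)
    if key not in out_map:
        pub = next(_next_port)
        out_map[key] = pub
        in_map[(pub, dst_ip, dst_port)] = (src_ip, src_port)
    return (PUBLIC_IP, out_map[key], dst_ip, dst_port)

# Both LAN hosts use source port 40000 to reach 1.1.1.1:80; the NAT
# hides them behind the same public IP but distinct public ports.
a = translate_out("192.168.0.1", 40000, "1.1.1.1", 80)
b = translate_out("192.168.0.2", 40000, "1.1.1.1", 80)
print(a, b)
```

The ~50k concurrent-connection ceiling mentioned above falls out of this directly: the ephemeral port counter is the shared resource, per public IP.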
In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
> Until you have stateful firewall, which any modern end network is going to have
Yes, but it's importantly still a choice. Also, a firewall I administer, I can control. One imposed onto me by my ISP I can’t.
> not really a problem until you get to 50k concurrent connections per public ipv4 address.
So it is in fact a big problem for CG-NATs.
> In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
No, I know what I'm complaining about. Stateful firewall traversal via hole punching is trivial on v6 without port translation, but completely implementation dependent on v4 with NAT in the mix, to just name one thing. (Explicit "TCP hole punching" would also be trivial to specify; it's a real shame we haven't already, since it would take about a decade or two for mediocre CPE firewalls to get the memo anyway.)
Having global addressing is also just useful in and of itself, even without global reachability.
Only if they're under provisioned. If my home really needed tens of thousands I'd provision another ipv4 address, but it doesn't -- at the moment I have a mere 121 active connections in my firewall.
The cost of a firewall is far more than the cost of an ipv4 address, which are available for about $20 each.
> Having global addressing is also just useful in and of itself, even without global reachability.
Except that doesn't happen, as most locations will not be BGP peering and advertising their own /48 (routing tables would melt)
Instead if you change your ISP, you change your IP address. Unless you use private ips in the fc00:: range, which is no different to using rfc1918 addresses for the vast majority of users
Which is the exact problem any other IPv4 "extended" proposal would have hit. But the practical reality is that the port number really was the only freely available field in the IPv4 header to reasonably extend into. Almost everything else had ossified middleboxes doing something dumb with it. (And we've seen from NAT/hole-punching/etc. how even port numbers had a lot of middlebox assumptions to overcome, and we aren't using the full 16 bits there either. A lot of the safest traffic has to be > 10,000, a constraint on 14 of those 16 bits.)
There was never 64-78 bits in the IPv4 header unconstrained enough to extend IPv4 in place even if you accepted the CGNAT-like compromise of routing through IPv4 "super-routers" on the way to 128-bit addresses. Extending address size was always going to need a version change.
The workarounds for IPv4 address exhaustion were a major contributing factor to today's Internet being basically unable to reliably handle traffic that isn't TCP or UDP. Protocol ossification and widespread tolerance of connections that were effectively only usable for WWW has led to the Internet as a whole almost losing an entire layer of the network stack.
No. Which doesn’t prove the technology has not been adopted. The internet also consists of much more than public-facing websites. So what’s your point?
My point is that we're still dependent on IPv4. For all the progress IPv6 has made, no-one is willing to switch IPv4 off yet. Until we do, we're still constrained by all the problems IPv4 has.
Plenty of people are switching v4 off. Facebook run basically all of their datacenters without v4. T-Mobile USA use only v6 on their network. Thread only supports v6 in the first place
There are plenty of other places doing the same thing, but these examples alone should be sufficient to disprove "no-one is willing to turn v4 off".
Not a single discussion here of his SixGate proposal (but many mischaracterizations of IPv4x as a "proposal", when it was rather an alternate history). So HN.
What's the advice on ULAs? On my internet-connected VLANs, I have a -er- site-local IPv4 subnet, a unique local IPv6 subnet, and a global IPv6 subnet. This works just fine.
Does the "advice" boil down to "You should NEVER use ULAs and ALWAYS use GUAs!" and is given by the same very, very loud subset of people who seemed to feel very strongly that IPv6 implementations should explicitly make it impossible to do NAT?
It's not. It's simple, understandable, straightforward. Only natting to a single address is flawed, but also understandable, because they want to charge you for a prefix.
> It's not [fucked up]. It's simple, understandable, straightforward.
Things that are fucked up can also be simple, understandable, and straightforward.
Unless you're claiming that DHCPv6 is not simple, understandable, and straightforward... in which case:
DHCPv4 is "Give me an IP address, please.". DHCPv6 is "Give me an IP address, please. And also give me what I need for all of my directly-connected friends to have one, too, if you don't mind.".
If your edge router supports IPv6, it almost certainly can make a DHCPv6-PD request and handle advertising the assigned prefix on its LAN side.
Because of Google's continued (deliberate?) misunderstanding of what DHCPv6 is for, Android clients don't do anything sane with it. That doesn't mean that DHCPv6 isn't simple.
Again, DHCPv6 is "Please give me an IP address, and maybe also what my directly-attached friends need to get IP addresses.". Simple, straightforward, and easy to understand. Even if it were relevant, Google's chronic rectocranial insertion doesn't change that.
Why would anyone need IPv6 to be incapable of doing NAT?
To answer your question: Who knows? Perhaps you have a shitlord ISP that only provides you with a /128 (such as that one "cloud provider" whose name escapes me). [0] It's a nice tool to have in your toolbox, should you find that you need to use it.
[0] Yes, I'm aware that a "cloud provider" is not strictly an ISP. They are providing your VMs with access to the Internet, so I think the definition fits with only a little stretching-induced damage.
As a network admin I can say that NAT makes everything much harder and that the source and destination IP should stay the same from source to destination whenever possible.
In any scenario where you want to do traffic steering at a network level. Managing multiple network upstreams (e.g. for network failover or load balancing) is a common example that is served well by numerous off-the-shelf routers with IPv4. That's an important feature that IPv6 cannot offer without using NPTv6 or NAT66.
It's conceivable that OSes could support some sort of traffic steering mechanism where the network distributes policy in some sort of dynamic way? But that also sounds fragile and you (i.e. the network operator) still have to cope with the long tail of devices that will never support such a mechanism.
> Managing multiple network upstreams (e.g. for network failover or load balancing) is a common example ... that IPv6 cannot offer without using NPTv6 or NAT66.
I don't think that's true. I haven't had reason to do edge router failover, but I am familiar with the concepts and also with anycast/multihoming... so do make sure to cross-check what I'm saying here with known-good information.
My claim is that the scenario you describe is superior in the non-NATted IPv6 world to that of the NATted IPv4 world. Let's consider the scenario you describe in the IPv4-only world. Assume you're providing a typical "one global IP shared with a number of LAN hosts via IPv4 NAT". When one uplink dies, the following happens:
* You fail over to your backup link
* This changes your external IP address
* Because you're doing NAT, and DHCP generally has no way to talk back to hosts after the initial negotiation, you have no way to alert hosts of the change in external IP address
* Depending on your NAT box configuration, existing client connections either die a slow and painful death, or -ideally- they get abruptly RESET and the hosts reestablish them
Now consider the situation with IPv6. When one uplink dies:
* You fail over to your backup link
* This changes your external prefix
* Your router announces the change by advertising the new prefix and setting the now-dead prefix's preferred lifetime to 0 seconds [0]
* Hosts react to the change by reconfiguring via SLAAC and/or DHCPv6, depending on the settings in the RA
* Existing client connections are still dead, [1] but the host gets to know that their global IP address has changed and has a chance to take action, rather than being entirely unaware
Assuming that I haven't screwed up any of the details, I think that's what happens. Of course, if you have provider-independent addresses [2] assigned to your site, then maybe none of that matters and you "just" fail over without much trouble?
[0] I think this is known as "deprecating" the prefix
[1] I think whether they die slow or fast depends on how the router is configured
> * Hosts react to the change by reconfiguring via SLAAC and/or DHCPv6, depending on the settings in the RA
This is the linchpin of the workflow you've outlined. Anecdotal experience in this area suggests it's not broadly effective enough in practice, not least because of this:
> * Existing client connections are still dead, [1] but the host gets to know that their global IP address has changed and has a chance to take action, rather than being entirely unaware
The old IP addresses (afaiu/ime) will not be removed before any dependent connections are removed. In other words, the application (not the host/OS) is driving just as much as the OS is. Imo, this is one of the core problems with the scenario, that the OS APIs for this stuff just aren't descriptive enough to describe the network reconfiguration event. Because of that, things will ~always be leaky.
> [1] I think whether they die slow or fast depends on how the router is configured
Yeah, and that configuration will presumably be sensitive to what caused the failover. This could manifest differently based on whether upstream A simply has some bad packet loss or whether it went down altogether (e.g. a physical fault).
In any case, this vision of the world misses on at least two things, in my view:
1. Administrative load balancing (e.g. lightly utilizing upstream B even when upstream A is still up)
2. The long tail of devices that don't respond well to the flow you outlined. It's not enough to think of well-behaved servers that one has total control over; you need to think also of random devices with network stacks of... varying quality (e.g. IoT devices)
> The old IP addresses (afaiu/ime) will not be removed before any dependent connections are removed.
I have two reactions to this.
1) Duh? I'm discussing a failover situation where your router has unexpectedly lost its connection to the outside world. You'd hope that your existing connections would fail quickly. The existence of the deprecated IP shouldn't be relevant, because the OS isn't supposed to use it for any new connections.
2) If you're suggesting that network-management infrastructure running on the host will be unable to delete a deprecated address from an interface because existing connections haven't closed, that doesn't match my experience at all. I don't think you're suggesting this, but I'm bringing it up to be thorough.
> ...the OS APIs for this stuff just aren't descriptive enough to describe the network reconfiguration event.
I know that Linux has a system (netlink?) that's descriptive enough for daemons [0] to actively nearly-instantaneously start and stop listening on newly added/removed addresses. I'd be a little surprised if you couldn't use that mechanism to subscribe to "an address has become deprecated" events. I'd also be somewhat surprised if no one had built a nice little library on top of whatever mechanism that is. IDK about other OSes, but I'd be surprised if there weren't equivalents in the BSDs, macOS, and Windows.
> In any case, this vision of the world misses on at least two things, in my view:
> 1. Administrative load balancing...
I deliberately didn't talk about load balancing. I expect that if you don't do that at a layer below IP, then you're either stuck with something obscenely complicated or you're doing something like using special IP stacks on both ends... regardless of what version of IP your clients are using.
> 2. The long tail of devices that don't respond well to the flow you outlined.
Do they respond worse than in the IPv4 NAT world? This and other commentary throughout indicates that you missed the point I was making. That point was that -unlike in the NATted world- the OS and the applications running in it have a way to plausibly be informed of the network addressing change. In the NAT case, they can only infer that shit went bad.
> 1) Duh? I'm discussing a failover situation where your router has unexpectedly lost its connection to the outside world. You'd hope that your existing connections would fail quickly. The existence of the deprecated IP shoudn't be relevant because the OS isn't supposed to use it for any new connections.
Well failover is an administrative decision that can result from unexpectedly losing connection. But it can also be more ambiguous packet loss too, something that wouldn't necessarily manifest in broken connections--just degraded ones.
If upstream A is still passing traffic that simply gets lost further down the line, then there's no particular guarantee that the connection will fail quickly. If upstream A deliberately starts rejecting TCP traffic with RST, then sure, that'll be fine. But UDP and other traffic, no such luck. Whereas QUIC would fare just fine with NAT thanks to its roaming capabilities.
> I know that Linux has a system (netlink?) that's descriptive enough for daemons [0] to actively nearly-instantaneously start and stop listening on newly added/removed addresses. I'd be a little surprised if you couldn't use that mechanism to subscribe to "an address has become deprecated" events. I'd also be somewhat surprised if noone had built a nice little library over top of whatever mechanism that is. IDK about other OS's, but I'd be surprised if there weren't equivalents in the BSDs, Mac OS, and Windows.
Idk, I'll have to take your word for it. Instinctively though, this feels like a situation where the lowest common denominator wins. In other words, average applications aren't going to do any legwork here. The best thing to hope for is for language standard libraries to make this as built-in as possible. But if that exists, I'm extremely unaware of it.
> I deliberately didn't talk about load balancing. I expect that if you don't do that at a layer below IP, then you're either stuck with something obscenely complicated or you're doing something like using special IP stacks on both ends... regardless of what version of IP your clients are using.
I presume you meant a layer above IP? But no, I don't see why this is challenging in a NAT world. At least, I've worked with routers that support this, and it always seemed to Just Work™. I'd naively assume that the router is just modding the hash of the layer 3 addresses or something though.
> Do they respond worse than in the IPv4 NAT world?
I've basically only ever had good experiences in the IPv4 NAT world.
> That point was that -unlike in the NATted world- the OS and the applications running in it have a way to plausibly be informed of the network addressing change. In the NAT case, they can only infer that shit went bad.
I'm certainly sympathetic to this point. And, all things being equal, of course that seems better! If NAT66 were to not offer sufficient practical benefits, then I'd be convinced.
But please bear in mind that this was the original comment I responded to (not yours). Responding to this is where I'm coming from:
30 years on and if I have a machine with an ipv6 only network and run
"ping 1.1.1.1"
it doesn't work.
If stacks had moved to ipv6 only, and the OS and network library do the translation of existing ipv4, I think things would have moved faster. Every few months I try out my ipv6 only network and inevitably something fails and I'm back to my ipv4 only network (as I don't see the benefit of dual-stack, just the headaches)
Sure you'd need a 64 gateway, but then that can be the same device that does your current 44 natting.
There are lots of places that have IPv6-only networks and access IPv4 through NAT64. It makes sense for new company networks that can control what software gets installed.
The main limitation is software that only supports IPv4. This would affect your proposed solution of doing the translation in the stack. There is no way to fix an IPv4-only software that has 32-bit address field.
Yes there is: you have the device the software is on do the translation transparently. The software thinks it's talking to 1.2.3.4 when it's actually talking to ::ffff:1.2.3.4, but the application doesn't need to know that, as the translation is occurring in the network stack (driver, module, whatever).
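For the curious, the mapped form described here is the standard IPv4-mapped IPv6 address (`::ffff:a.b.c.d`), and Python's stdlib can demonstrate the mapping; the helper name is my own, not from any comment above:

```python
# The "::ffff:a.b.c.d" form mentioned above: an IPv4 address embedded in
# the low 32 bits of an IPv6 address, with 0xffff in the next 16 bits.
import ipaddress

def to_mapped(v4: str) -> ipaddress.IPv6Address:
    """Return the IPv4-mapped IPv6 form of a dotted-quad address."""
    return ipaddress.IPv6Address(f"::ffff:{v4}")

addr = to_mapped("1.2.3.4")
assert int(addr) == (0xFFFF << 32) | 0x01020304  # bit layout checks out
assert str(addr.ipv4_mapped) == "1.2.3.4"        # round-trips back to IPv4
```

This is exactly the trick a translating stack can exploit: the application keeps its 32-bit view while the kernel speaks the longer form on the wire.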
> There are lots of places that have IPv6-only networks and access IPv4 through NAT64
I've just deployed a new mostly internal network, and this was my plan.
The network itself worked, but the applications wouldn't. Most required applications could cope, but not all, meaning I need to deploy ipv4 anyway. And that means there's no point in deploying ipv6 as well as ipv4; it just increases the maintenance and security burden for no business benefit.
It's a bit weird how, despite the Linux kernel otherwise having a fairly advanced network stack, the 464xlat and other transition-mechanism situation is not great. There are some out-of-tree modules (jool and nat46) available, but nothing in mainline. Does anyone know why that is?
And if that applies across the board, in 20 years time that might have filtered through to mean ipv4 can be dropped in my company.
I'd rather see this at a lower level than network manager and bodging in with bpf, so it's just a default part of any device running a linux network stack, but I don't know enough about kernel development and choices to know how possible that is in practice.
This should have been supported in the kernel 25 years ago though if the goal was to help ipv6 migration
I've often thought similar thoughts to this because fundamentally, IPv4 should still be enough for us, except that we've chronically wasted lots of it.
The first main issue is that most often we waste an entire IPv4 for things that just have a single service, usually HTTPS and also an HTTP redirector that just replies with a redirect to the HTTPS variant. This doesn't require an entire IPv4, just a single port or two.
We could have solved the largest issue with address exhaustion simply by extending DNS to have results that included a port number as well as an IP address, or if browsers had adopted the SRV DNS records, then a typical ISP could share a single IPv4 across hundreds of customers.
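For illustration, here's roughly what that would look like in a zone file using SRV records as they exist today. The names and port split are invented, and browsers never actually adopted SRV for HTTP(S), so this is purely a sketch of the idea:

```
; Hypothetical: two ISP customers share one public IPv4 address, and the
; SRV port tells the client which customer it wants.
; Record format: priority weight port target.
_https._tcp.alice.example.  3600 IN SRV 10 0 40443 shared.isp.example.
_https._tcp.bob.example.    3600 IN SRV 10 0 40444 shared.isp.example.
shared.isp.example.         3600 IN A   203.0.113.7
```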
The second massive waste of IPv4 space is BGP being limited to /24. In the days of older routers when memory was expensive and space was limited, limiting to /24 makes sense. Now, even the most naive way of routing - having a byte per IP address specifying what the next hop is - would fit in 4GB of RAM. Sure, there is still a lot of legacy hardware out there, but if we'd said 10 years ago that the smallest BGP announcements would reduce from /24 to /32, 1 bit per year, so giving operators time to upgrade their kit, then we'd already be there by now. They've already spent the money on getting IPv6 kit which can handle prefixes larger than this, so it would have been entirely possible.
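The arithmetic behind that "naive routing table in RAM" claim does check out:

```python
# Back-of-envelope check: one byte of next-hop index per possible IPv4
# address gives a flat 2^32-entry table - no prefixes or tries needed.
ADDRESSES = 2 ** 32      # the entire IPv4 address space
ENTRY_BYTES = 1          # next-hop index, i.e. up to 256 distinct upstreams
table_gib = ADDRESSES * ENTRY_BYTES / 2 ** 30
print(table_gib)  # 4.0
```

A real router would want more than 256 next hops and per-entry metadata, but even at 4 bytes per entry it's only 16 GiB, well within modern hardware.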
And following on from the BGP thing is that often this is used to provide anycast, so that a single IPv4 can be routed to the geographically closest server. And usually, this requires an entire /24, even though often it's only a single port on a single IPv4 that's actually being used.
Arguably, we don't even need BGP for anycast anyway. Again, going back to DNS, if the SRV record was extended to include an approximate location (maybe even just continent, region of continent, country, city) where each city is allocated a hierarchical location field split up roughly like ITU did for phone numbers, then the DNS could return multiple results and the browser can simply choose the one(s) that's closest, and gracefully fall back to other regions if they're not available. Alternatively, the client could specify their geo location during the request.
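A hedged sketch of how a client might use such geo-tagged SRV results. The hierarchical location codes, hostnames, and ports are all invented; the point is just that the client can prefer the result sharing the longest location prefix with its own:

```python
# Hypothetical client-side selection for the "geo field in SRV" idea above.

def pick_closest(results, client_loc):
    """results: list of (host, port, location) tuples; location is a
    hierarchical tuple like (continent, region, country)."""
    def shared_prefix(loc):
        n = 0
        for a, b in zip(loc, client_loc):
            if a != b:
                break
            n += 1
        return n
    # Prefer the server whose location shares the longest prefix with ours;
    # a server in another region still works as a graceful fallback.
    return max(results, key=lambda r: shared_prefix(r[2]))

servers = [
    ("eu-west.example", 40443, ("europe", "west", "nl")),
    ("us-east.example", 40443, ("namerica", "east", "us")),
    ("eu-north.example", 40443, ("europe", "north", "se")),
]
print(pick_closest(servers, ("europe", "west", "fr"))[0])  # eu-west.example
```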
So, basically, all of that can be done with IPv4 as it currently exists, just using DNS more effectively.
We also have massive areas of IPv4 that are currently wasted. Over 6% of the space (a full /4, i.e. sixteen /8s) is in the 240.0.0.0/4 range that's marked as "reserved for future use" and which many software vendors (e.g. Microsoft) have made the OS return errors if it's used. Why? This is crazy. We could, and should, make use of this space, and specifically for usages where ports are better used, so that companies can share a single IPv4 at the ISP level.
Another 6% is reserved for multicast, but nowadays almost nothing on the public IPv4 internet uses it and multicast is only supported on private networks. But in any case, 225.0.0.0/8-231.0.0.0/8 and 234.0.0.0/8-238.0.0.0/8 (collectively 12 /8s, or 75% of the multicast block) are reserved and should never have been used for any purpose. This too could be re-purposed to alleviate pressure on IPv4 space.
Finally, there are still many IPv4 /24s or larger that are effectively being hoarded by companies knowing they can make good money from renting them out or selling them later. Rather than being considered an asset, we should be charging an annual fee to keep hold of these ranges and turn them into a liability instead, as that would encourage companies with a large allocation that they don't need to release them back.
The other main argument against IPv4 is NAT, but actually I see that as a feature. If services actually had port number discovery via DNS, then forwarding specific ports to the server that deals with them is an obvious thing to do, not something exceptional. The majority of machines don't even want incoming connections from a security point of view, and most firewalls will block incoming IPv6 traffic apart from to designated servers anyway. The "global routing" promised by IPv6 isn't actually desired for the most part; the only benefit is that when it is wanted you have the same address for the service everywhere. The logical conclusion from that is that IPv4 needs a sensible way of allocating a range of ports to individual machines rather than stopping at the IP address.
When you then look at IPv6 space, it initially looks vast and inexhaustible, but once you realise that the smallest routable prefix with BGP is /48, it should be apparent that it suffers from essentially the same constraints as IPv4. All of "the global internet" is in 2001::/16, which effectively gives 32 bits of assignable space. Exactly the same as IPv4. Even more, IPv6 space is usually given out in /44 or /40 chunks, which means it's going to be exhausted at almost the same rate as IPv4 given out in /24 chunks. So much additional complexity for little extra gain, although I will concede that as 2003::/16 to 3ffe::/16 isn't currently allocated there is room to expand, as long as routers aren't baking in the assumption that all routable prefixes are in 2001::/16 as specified.
TLDR: browsers should use SRV to look up ports as well as addresses, and SRV should return geo information so clients can choose the closest server to them. If we did that, the IPv4 space is perfectly large enough because a single IPv4 address can support hundreds or thousands of customers that use the same ISP. Effectively a /32 IPv4 address is no different to a /40 IPv4 prefix, and the additional bits considered part of the address in IPv6 could be encoded in the port number for IPv4.
I think that's meant to be covered by the "IPv4x when we can. NAT when we must" part, in particular "ISPs used carrier‑grade NAT as a compatibility shim rather than a lifeline: if you needed to reach an IPv4‑only service, CGNAT stepped in while IPv4x traffic flowed natively and without ceremony."
It seemed strange that the need for CGNAT wasn't mentioned until after the MIT story. The "Nothing broke" claim in that story seems unlikely; I was on a public IP at University at the end of the 90s and if I'd suddenly been put behind NAT, some things I did would have broken until the workarounds were worked out.
> "ISPs used carrier‑grade NAT as a compatibility shim rather than a lifeline: if you needed to reach an IPv4‑only service, CGNAT stepped in while IPv4x traffic flowed natively and without ceremony."
What's the difference between that and dual stack v4/v6, though? Other than not needing v6 address range assignments, of course.
Try an IPv6-only VPS and see how quickly something breaks for you.
Dual-stack fails miserably when the newer stack is incompatible with the older one. With a stack that extends the old stack, you always have something to fallback to.
To replace something, you embrace it and extend it so the old version can be effectively phased out.
> Try an IPv6-only VPS and see how quickly something breaks for you.
Who's arguing for that? That would be completely non-viable even today, and even with NAT64 it would be annoying.
> Dual-stack fails miserably when the newer stack is incompatible with the older one.
Does it? All my clients and servers are dual stack.
> With a stack that extends the old stack, you always have something to fallback to.
Yes, v4/v6 dual stack is indeed great!
> To replace something, you embrace it and extend it so the old version can be effectively phased out.
Some changes unfortunately really are breaking. Sometimes you can do a flag day, sometimes you drag out the migration over years or decades, sometimes you get something in between.
We'll probably be done in a few more decades, hopefully sooner. I don't see how else it could have realistically worked, other than maybe through top-down decree, which might just have wasted more resources than the transition we ended up with.
I don't see IPv4 going away within the next fifty years. I'd not be surprised for it to last for the next hundred+ years. I expect to see more and more residential ISPs provide their customers with globally-routable IPv6 service and put their customers behind IPv4 CGNs (or whatever the reasonable "Give the customer's edge router a not-globally-routable IPv4 address, but serve its traffic with IPv6 infrastructure" mechanism to use is). That IPv4 space will get freed up to use in IPv4-only publicly-facing services in datacenters.
There's IPv4-only software out there, and I expect that it will outlive everyone who's reading this site today. That's fine. What matters is getting proper IPv6 service to every Internet-connected site on (and off) the planet.
With you on “IPv6 only will become a thing for many clients”, but servers (or at least load balancers) will absolutely not stay v4-reachable only.
They’re already not. For example, I believe you won’t get an iOS app approved for distribution by Apple these days if it doesn’t work on v6-only clients.
> With you on “IPv6 only will become a thing for many clients"...
That's not what I said. I said that having a globally-routable IPv4 address assigned to a LAN's edge router will stop being a thing. Things like CGN (or some other sort of translation system) will be the norm for all residential users.
> ...but servers (or at least load balancers) will absolutely not stay v4-reachable only.
Some absolutely will. There's a lot of software and hardware out there that's chugging along doing exactly what the entity that deployed it needs it to do... but -for one of a handful of reasons- will never, ever be updated ever again. This is fine. The absolute best thing any programmer can do is to create a system that one never has to touch ever again.
> That's not what I said. I said that having a globally-routable IPv4 address assigned to a LAN's edge router will stop being a thing. Things like CGN (or some other sort of translation system) will be the norm for all residential users.
That's still what I would call a v6-only (with translation mechanisms) client deployment. Sorry for being imprecise on the "with translation mechanisms" part.
> Some absolutely will.
Very few, in my prediction. We're already seeing massive v6 + CG-NAT-only deployments these days, and the NAT part is starting to have worse performance characteristics: Higher latency because the NATs aren't as geographically distributed as the v6 gateway routers, shorter-lived TCP connections because IP/port tuples are adding a tighter resource constraint than connection tracking memory alone etc.
This, and top-down mandates like Apple's "all apps must work on v6 only phones", is pushing most big services to become v6 reachable.
At some point, some ISP is going to decide that v6 only (i.e. without translation mechanisms) Internet is "enough" for their users. Hackers will complain, call it "not real Internet" (and have a point, just like I don't consider NATted v4 "real Internet"!), but most profit-oriented companies will react by quickly providing rudimentary v6 connectivity via assigning a v6 address to their load balancer and setting an AAAA record.
I agree that v4 only servers will stick around for decades, just like there are still many non-Internet networks out there, but v4 only reachability will become a non-starter for anything that humans/eyeballs will want to access. And at some point, the fraction of v4-only eyeballs will become so small that it'll start becoming feasible to serve content on v6 only. At that point, v4 will be finally considered "not the real Internet" too.
Sure, I agree. I'm not sure how you got the notion that I thought a large percentage of systems out there will never get IPv6 support. There's a lot of solid systems out there that just fucking run. They're a small percentage of all of the deployed machines in the world.
> That's still what I would call a v6-only (with translation mechanisms) client deployment.
When people say "IPv6 only", they mean "Cannot connect to IPv4 systems". IMO, claiming it means anything else is watering down the definition into meaninglessness. Consider it in the context of what someone means when they envision a future where the Internet is "IPv6 only", so we don't need to deal with the "trouble" and "headache" of running both v4 and v6.
Yeah, it's my understanding that that's been the situation for a great many folks in the Asia/Pacific part of the world for a while now. Lots and lots of hosts, but not much IPv4 space allocated.
The question was about forwarding as I understand it, not address resolution, and there simply won't be any forwarding, since the 32 bit only sending host won't be able to address the 128 bit receiving one.
In 100 years, ipv4 will be recognized as one of the great discoveries, like calculus. ipv6 is a misnomer, really: it's a separate, and lesser, protocol. Much like other second systems, it was too ambitious and not pragmatic enough.
Rather than looking down on IPv4, we should admire how incredible its design was. Its elegance, intuitiveness, and resourcefulness have all led to it outlasting every prediction of its demise.
What is described here is basically just CIDR plus NAT which is...what we actually have.
At the time IPv6 was being defined (I was there) CIDR was just being introduced and NAT hadn't been deployed widely. If someone had raised their hand and said "I think if we force people to use NAT and push down on the route table size with CIDR, I think it'll be ok" (nobody said that iirc), they would have been rejected because sentiment was heavily against creating a two-level network. After all having uniform addressing was pretty much the reason internet protocols took off.
> In a nutshell, an IPv4x packet is a normal IPv4 packet, just with 128‑bit addresses. The first 32 bits of both the source and target address sit in their usual place in the header, while the extra 96 bits of each address (the “subspace”) are tucked into the first 24 bytes of the IPv4 body. A flag in the header marks the packet as IPv4x, so routers that understand the extension can read the full address, while routers that don’t simply ignore the extra data and forward it as usual.
So you have to ship new code to every 'network element' to support IPv4x. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (And their DNS idea won't work—or won't work differently than IPv6: a lot of legacy code did not have room in data structures for multiple reply types. Sure, you'd get the "A", but unless you updated the code to get the "AX" record (for ipv4X addresses) you could never get to the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6.
A single residential connection that gets a single IPv4 address also gets to use all the /96 'behind it' with this IPv4x proposal? People complain about the "wastefulness" of /64s now, and this is even more so (to the tune of 32 bits). You'd probably be better served with pushing the new bits to the other end… like…
* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
If you put part of the address in the body space, you can't encrypt the entire body.
IPv6 adoption has been linear for the last two decades. Currently, 48% of Google traffic is IPv6.[1] It was 30% in 2020. That's low, because Google is blocked in China. Google sees China as 6% IPv6, but China is really around 77%.
Sometimes it takes a long time to convert infrastructure. Half the Northeast Corridor track is still on 25Hz. There's still some 25Hz power around Niagara Falls. San Francisco got rid of the last PG&E DC service a few years ago. It took from 1948 to 1994 to convert all US freight rail stock to roller bearings.[2] European freight rail is still using couplers obsolete and illegal in the US since 1900. (There's an effort underway to fix this. Hopefully it will go better than Eurocoupler from the 1980s. Passenger rail uses completely different couplers, and doesn't uncouple much.)[3]
[1] https://www.google.com/intl/en/ipv6/statistics.html
[2] https://www.youtube.com/watch?v=R-1EZ6K7bpQ
[3] https://rail-research.europa.eu/european-dac-delivery-progra...
This should also bring to mind that there's a technological leapfrogging effect in stages of development, which it seems clear that China has taken advantage of.
https://en.wikipedia.org/wiki/Leapfrogging
Yes, I was wondering if I was missing something reading the hypothetical: this still splits the Internet into two incompatible (but often bridged etc.) subnetworks, one on the v4 side, one on the v4x side, right?
It just so happens that, unlike for v6, v4 and v4x have some "implicit bridges" built-in (i.e. between everything in v4 and everything in v4x that happens to have the last 96 bits unset). Not sure if that actually makes anything better or just kicks the can down the road in an even more messy way.
> everything in v4x that happens to have the last 96 bits unset
That's pretty much identical to 6in4 and similar proposals.
The Internet really needs a variant of the "So, you have an anti spam proposal" meme that used to be popular. Yes, it sometimes nips fresh ideas in the bud, but it also helps establish a cultural baseline for what is constructive discussion.
Nobody needs to hear about the same old ideas that were subsumed by IPv6 because they required a flag day, delayed address exhaustion only about six months, or exploded routing tables to impossible sizes.
If you have new ideas, let's hear them, but the discussion around v6 has been on constant repeat since before it was finalized and that's not useful to anyone.
I feel like the greatest vindication of v6 is that I’m reading the same old arguments served over a quietly working v6 connection more often than not. While people were busy betting on the non-adoption of v6, it just happened.
—Sent from my IPv6 phone
> The Internet really needs a variant of the "So, you have an anti spam proposal" meme that used to be popular.
For those unfamiliar:
* https://craphound.com/spamsolutions.txt
Public reluctance to accept weird new forms of money has turned out to be a myth.
I don't think that's true.
There are a ton of weird coins around, sure, but no-one is using them as money.
I still have to stump up actual dollars backed by a government if I want to buy a coffee.
This wasn't a proposal, but an alternate history. The world where the people who wished for IPv4 but with extra address space get their way. By the end I come down on being happy with we're in the IPv6 world, but wishing interoperability could be slicker.
> It just so happens that, unlike for v6, v4 and v4x have some "implicit bridges" built-in (i.e. between everything in v4 and everything in v4x that happens to have the last 96 bits unset).
See perhaps:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
* https://en.wikipedia.org/wiki/6to4
Have an IPv4 address? Congratulations! You get an entire IPv6 /48 for free.
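The 6to4 mapping quoted above can be computed directly: append the 32-bit IPv4 address to 2002::/16 to get that host's /48 prefix. A stdlib sketch (the function name is mine):

```python
# Build the 6to4 /48 prefix for a given IPv4 address, per the quoted rule.
import ipaddress

def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
    """2002::/16 with the 32-bit IPv4 address in bits 16..47 of the prefix."""
    v4_int = int(ipaddress.IPv4Address(v4))
    prefix_int = (0x2002 << 112) | (v4_int << 80)  # place v4 right after 2002:
    return ipaddress.IPv6Network((prefix_int, 48))

print(sixtofour_prefix("192.0.2.4"))  # 2002:c000:204::/48
```

That matches the worked example in the quote (Python just suppresses the leading zero in the `0204` group).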
I might be interpreting this wrong, but doesn't IPv6 also have an "implicit bridge" for IPv4?
If it does that's great, but why couldn't I connect to IPv6-only services back when my ISP was IPv4 only?
It's one-way: v6 is aware of v4, but not the other way around.
It's not automatic, there were many proposed and utilized mechanisms for autodetecting translation servers and so on. By now though, if you want IPv6, you order real IPv6, and don't need some translation.
IPv6 had an implicit bridge called 6to4 but it was phased out because it wasn't that reliable.
> Just like with IPv6.
Yes, but the compatibility is very, very easy to support for hardware vendors, software, sysadmins, etc. Some things might need a gentle stroke (mostly just enlarging a single bitfield), but after that everything just works: hardware, software, websites, operators.
A protocol is a social problem, and ipv6 fails exactly there.
What slowed ipv6 wasn’t the design of ipv6, it was the invention of NAT and CGNAT.
Even still. The rollout is still progressing, and new systems like Matter are IPv6 only.
What stymies IPv6 is human laziness more than anything else. It's not hard to set up. Every network I run has been dual stack for 10 years now, with minimal additional effort. People are just too lazy to put forth even a minimal effort when they believe that there's no payoff to it.
> What stymies IPv6 is human laziness more than anything else. It's not hard to set up.
I think the biggest barrier to IPv6 adoption is that this is just categorically untrue and people keep insisting that it isn't, reducing the chance that I'd make conscious efforts to try to grok it.
I've had dozens of weird network issues in the last few years that have all been solved by simply turning off IPv6. From hosts taking 20 seconds to respond, to things not connecting 40% of the time, DHCP leases not working, devices not able to find the printer on the network: everything simply works better on IPv4, and I don't think it's just me. I don't think these sorts of issues should be happening with a protocol that has had 30 years to mature. At a certain point we have to look and wonder if the design itself is just too complicated and contributes to its own failure to thrive, instead of blaming lazy humans.
Yes, if you can’t properly setup a network now then IPv6 won’t help you. That isn’t IPv6’s fault.
> People are just too lazy to put forth even a minimal effort when they believe that there's no payoff to it.
For me just disabling IPv6 has given the biggest payoff. Life is too short to waste time debugging obscure IPv6 problems that still routinely pop up after over 30 years of development.
Ever since OpenVPN silently routed IPv6 over clearnet I've just disabled it whenever I can.
This goes the other direction too. I just this second fixed a problem with incredibly slow SSH connections because a host lookup which returned an IPv4 address instantly was waiting 10+ seconds for an IPv6 response which would never come.
Now I'm sure I can fix DNSmasq to do something sensible here, but the defaults didn't even break - they worked in the most annoying way possible where had I just disabled IPv6 that would've fixed the entire problem right away.
Dual stack has some incredibly stupid defaults.
I'm confused by the argument that replacing equipment is something that is always possible. It doesn't matter that it's easy to support by updating or replacing the hardware - a lot of hardware isn't going to be updated or replaced.
ISPs are used to this though, and tunnel a lot of packets. If you have DSL at home, your ISP doesn't have a router in every edge cabinet - your DSL router sets up a layer-2 point-to-point tunnel to the ISP's nearest BRAS (broadband remote access server) in a central location. All IP routing happens there. Because it's a layer-2 tunnel it looks like your router is directly connected to the BRAS, even though there are many devices in between. I don't know how it's done on CATV and fiber access networks.
If an ISP uses an MPLS core, every POP establishes a tunnel to every other POP. IP routing happens only at the source POP as it chooses which pre-established tunnel to use.
If an ISP is very new, it likely has an IPv6-only core, and IPv4 packets are tunneled through it. If an ISP is very old, with an IPv4-only core, it can do the reverse and tunnel IPv6 packets through IPv4. It can even use private addresses for the intermediate nodes as they won't be seen outside the network.
No, in this hypothetical, routers that don't know about IPv4x will still route based on the top 32 bits of the address, which are still in the same place as in IPv4 packets. If the machine on your desk and the other machine across the internet both understand IPv4x, but no other machines in the middle do, you'll still get your packets across.
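To make the layering concrete, here is an entirely hypothetical sketch of the IPv4x packet described in the article. IPv4x does not exist; the flag value, protocol number, and field choices below are invented purely to illustrate the point that the top 32 bits of each address stay exactly where legacy routers expect them, while the extra 96 bits ride in the first 24 bytes of the body:

```python
# Invented IPv4x packet builder: a normal 20-byte IPv4-style header carries
# the top 32 bits of each 128-bit address; the two 96-bit "subspaces" are
# the first 24 bytes of the body. Legacy routers would treat those 24 bytes
# as ordinary payload and route on the header addresses alone.
import struct

def build_ipv4x(src128: int, dst128: int, payload: bytes) -> bytes:
    src_hi, src_lo = src128 >> 96, src128 & ((1 << 96) - 1)
    dst_hi, dst_lo = dst128 >> 96, dst128 & ((1 << 96) - 1)
    IPV4X_FLAG = 0x8000  # pretend one reserved header bit marks IPv4x
    header = struct.pack(
        "!BBHHHBBHII",
        (4 << 4) | 5,             # version=4, IHL=5 (20-byte header)
        0,                        # DSCP/ECN
        20 + 24 + len(payload),   # total length incl. subspace bytes
        0,                        # identification
        IPV4X_FLAG,               # flags/fragment field carrying our flag
        64,                       # TTL
        253,                      # protocol (an experimental value)
        0,                        # checksum (left zero in this sketch)
        src_hi, dst_hi,           # the usual 32-bit address slots
    )
    subspaces = src_lo.to_bytes(12, "big") + dst_lo.to_bytes(12, "big")
    return header + subspaces + payload
```

An IPv4x-unaware router reads only `header` and forwards on `src_hi`/`dst_hi`; only the two endpoints need to understand the 24 subspace bytes.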
Well no, all the routers on your subnet need to understand it.
So let’s say your internet provider owns x.x.x.x. It receives a packet directed to you at x.x.x.x.y.y…, forwards it to your network, but your local router has old software and treats all packets to x.x.x.x.* as directed to it. You never receive any messages directed to you, even though your computer would recognise IPv4x.
It would be a clusterfuck.
Your local machine isn't on the IPv4 internet if it doesn't have a globally routable IPv4 address.
Your home router that sits on the end of a single IPv4 address would need to know about IPv4x, but in this parallel world you'd buy a router that does.
How would anything on the internet know about x.x.x.x.y.y…?
Your computer knows it’s connected to an old router because DHCP gave it an x.x.x.x address and not x.x.x.x... so it knows it’s running in old v4 mode.
And it can still send outbound to a v4x address that it knows about.
You are missing the point: updating "network elements" was never the problem. The Linux kernel has had IPv6 support since 2.6. RedHat got IPv6 in 2008. Nginx got it in 2010. And yet there are plenty of IPv4-only systems out there. Why?
Software updates scale _very well_ - once the author updates, all users get the latest version. The important part is sysadmin time and config files - _those_ don't scale at all, and someone needs to invest effort in every single system out there.
That's where IPv6 really dropped the ball by making dual-stack the default. In IPv4x, there is no dual-stack.
I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
I have my small office network which is on a mix of IPv4 and IPv4x addresses. Most Windows/Linux machines are on IPv4x, but that old network printer and security controller still have IPv4 addresses (with the router translating responses). It still all works together. There is only one firewall rule set, there is only one monitoring tool, etc... My ACL on the NAS server has a mix of IPv4 and IPv4x entries in the same list...
So this is a very stark contrast to the IPv6 mess, where you have to bring up a whole parallel network: set up a second router config, set up a separate firewall rule set, make a second parallel set of addresses - basically set up a whole separate network - just to be able to bring up a single IPv6 device.
(Funny enough, I bet one _could_ accelerate IPv6 deployment a lot by having a standard that _requires_ 6to4/4to6/NAT64 technology in each IPv6 network... but instead the IPv6 supporters went with an all-or-nothing approach.)
"I bet one _could_ accelerate IPv6 deployment a lot by have a standard ..."
ahem
https://owl.billpg.com/sixgate/
"The good thing about standards is that there are so many to choose from."
> Software updates scale _very well_ - once author updates, all users get the latest version. The important part is sysadmin time and config files - _those_ don't scale at all, and someone needs to invest effort in every single system out there.
With IPv6 the router needs to send out RAs. That's it. There's no need to do anything else with IPv6. "Automatic configuration of hosts and routers" was a requirement for IPng:
* https://datatracker.ietf.org/doc/html/rfc1726#section-5.8
When I was with my last ISP, I turned IPv6 on my Asus router; it got an IPv6 WAN connection and a prefix delegation from my ISP, and my devices (including my Brother printer) started getting IPv6 addresses. The Asus had a default-deny firewall, so all incoming IPv6 connections were blocked. I had to do zero configuration on any of the devices (laptops, phones, IoT, etc).
> I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
So if you cannot connect via >32b addresses you fall back to 32b addresses?
* https://en.wikipedia.org/wiki/Happy_Eyeballs
> I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
* https://en.wikipedia.org/wiki/IPv6_rapid_deployment
A French ISP deployed this across their network of four million subscribers in five months (2007-11 to 2008-03).
> There is only one firewall rule set, there is only one monitoring tool, etc... My ACL list on NAS server has mix of IPv4 and IPv4x in the same list...
If an (e.g.) public web server has public address (say) 2.3.4.5 to support legacy IPv4-only devices, but also has 2.3.4.5.6.7.8.9 to support IPv4x devices, how can you have only one firewall rule set?
> So this is a very stark contrast to IPv6 mess, where you have to bring up a whole parallel network, setup a second router config, set up a separate firewall set, make a second parallel set of addresses, basically setup a whole separate network - just to be able to bring up a single IPv6 device.
Having 10.11.12.13 on your PC as well as 10.11.12.13.14.15.16 as per IPv4x is "a second parallel set of addresses".
It *is* running a whole separate network, because your system has the address 10.11.12.13 and 10.11.12.13.14.15.16. You are running dual-stack because you support connections from 32-bit-only, un-updated legacy devices and >32b updated devices. This is no different than having 10.11.12.13 and 2001:db8:dead:beef::10:11:12:13.
The advantage, as I see it, is that this could be done incrementally. Every new router/firmware/OS could add support, until support is ubiquitous.
Contrast this with IPv6, which is a completely new system and thus has a chicken-and-egg problem.
IPv6 is a parallel system. It exists with IPv4. You don't need to stop using IPv4 - ever - if you don't want to. You can have both the chicken and egg together as long as is needed.
At some point IPv4 addresses will cost too much.
That is how v6 worked though. Every router and consumer device supports v6 and has for a very long time now. The holdup ended up being ISPs.
Today it seems most ISPs support it but have it behind an off by default toggle.
Wouldn't this proposal not require ISPs to do anything? They already assign every user a unique IPv4 address. Then, with this proposal, if I want to have a bunch of computers behind that single IPv4 address, I could do it without relying on NAT tricks.
> Wouldn't this proposal not require isps to do anything? They already assign every user a unique ipv4 address.
The reason there's an IPv4 address shortage is because ISPs assign every user a unique IPv4 address. In this alternative timeline, ISPs would have to give users less-than-an-IPv4 address, which probably means a single IPv4x address if we're being realistic and assuming that ISPs are taking the path of least resistance.
If that happens then the user can only communicate with hosts supporting IPv4x and you're back to the IPv6 issue
There aren’t enough IPv4 addresses to give everyone one. That is why ISPs use CGNAT to hide multiple customers behind one IP address.
Something that just uses IPv4 won’t work without making the extra layer visible. That may not have been apparent then but it is now.
> Who owns all these new addresses? You do. If you own an IPv4 address, you automatically own the entire 96‑bit subspace beneath it. Every IPv4 address becomes the root of a vast extended address tree. It has to work this way because any router that doesn’t understand IPv4x will still route purely on the old 32‑bit address. There’s no point assigning part of your subspace to someone else — their packets will still land on your router whether you like it or not.
So the folks that just happen to get in early on the IPv4 address land rush (US, Western world) now also get to grab all this new address space?
What about any new players? This particular aspect of the idea seems to reward incumbents, unlike IPv6, where new players (and countries and continents) that weren't online early get a chance at equal footing in the expanded address space.
The new players would each get a /24 and everyone would say that's "enough".
> The new players would each get a /24 and everyone would say that's "enough".
From where?
All then-existing IPv4 addresses would get all the bits behind them. There would, at the time, still be IPv4 addresses available that could be given out, and as people got them they would also get the extended "IPv4x" addresses associated with them.
But at some point IPv4 addresses would all be allocated… along with all the extended addresses 'behind' them.
Then what?
The extended IPv4x addresses are attached to the legacy IPv4 addresses they are 'prefixed' by, so once the legacy bits are assigned, so are the new bits. If someone comes along post-legacy-IPv4 exhaustion, where do new addresses come from?
You're in the exact same situation as we are now: legacy code is stuck with 32-bit-only addresses, new code is >32-bits… just like with IPv6. Great you managed to purchase/rent a legacy address range… but you still need a translation box for non-updated code… like with CG-NAT and IPv6.
ISPs can still get /24 of IPv4 today for free even after it "ran out". This comes from space that was set aside before runout. https://www.arin.net/participate/policy/nrpm/#4-10-dedicated...
This IPv4x thing is bullshit but we should be accurate about how it would play out.
So under this IPv4x proposal one gets an IPv4 /24, and receives a whole bunch of extended address space 'for free'.
But right now you can get an IPv4 /24 (as you say), but you can get an IPv6 allocation 'for free' as we speak.
In both cases legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destination (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the IPv4x and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
Essentially. I imagined this hypothetical happening decades ago when there were still a few /8s unallocated. I suggested that those would be set aside for IPv4x only.
Yeah, and the value of IPv4 address space would plummet, and there would be no reason for any company to own a /8. Clawing back address space would involve a few emails and a few months to get network configs ready.
If it became universally adopted, then there wouldn't be much need for owning crazy amounts of ipv4 addresses, so the price of addresses would drop. If this proposal was not adopted universally, then we would be pretty much in the same situation as we are with ipv4 addresses
> The Version field must remain 4.
And at the same time the address format and IP header are extended, effectively still splitting one network into two (one of which is a superset of the other)?
A fundamentally breaking change remains a breaking change, whether you have the guts to bump your version number or not.
The central idea is that an IPv4x packet is still a globally routable IPv4 packet. The extra stuff all goes in the body of the packet.
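For concreteness, here is a minimal sketch of the layout TFA describes: the high 32 bits of each 128-bit address in the normal IPv4 header fields, the remaining 96 bits of each ("subspace") leading the body. This is purely hypothetical code, since IPv4x is a thought experiment, not a real protocol:

```python
import struct

def build_ipv4x(src: bytes, dst: bytes, payload: bytes) -> bytes:
    """Hypothetical IPv4x packet: a plain 20-byte IPv4 header plus 24 bytes
    of 'subspace' at the start of the body. Sketch only; the checksum is
    left at 0 and the header flag marking the packet as IPv4x is omitted."""
    assert len(src) == len(dst) == 16  # 128-bit addresses
    total = 20 + 24 + len(payload)
    header = struct.pack(
        ">BBHHHBBH4s4s",
        0x45,      # Version stays 4, IHL 5 (20-byte header)
        0,         # TOS
        total,     # total length
        0, 0,      # ID, flags/fragment offset
        64, 6, 0,  # TTL, protocol (TCP), checksum placeholder
        src[:4],   # legacy 32-bit source: what un-updated routers route on
        dst[:4],   # legacy 32-bit destination
    )
    # The extra 96 bits of each address lead the body.
    return header + src[4:] + dst[4:] + payload

pkt = build_ipv4x(bytes(range(16)), bytes(range(16, 32)), b"hello")
assert pkt[12:16] == bytes([0, 1, 2, 3])   # old routers see only these 4 bytes
assert pkt[20:32] == bytes(range(4, 16))   # source 'subspace' in the body
```

An IPv4x-unaware router would forward this on `pkt[16:20]` alone; an aware one would also read the 24 subspace bytes.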
If the extra stuff is mandatory for global reachability then, again, it’s conceptually a mandatory part of the header, no matter where you actually put it or what you call it.
> The central idea is that a IPv4x packet is still a globally routable IPv4 packet.
That's cool and all, but end-user edge routers are absolutely going to have to be updated to handle "IPv4x". Why? Because the entire point of IPvNext is to address address space exhaustion, their ISP will stop giving them IPv4 addresses.
This means that the ISP is also going to have to update significant parts of their systems to handle "IPv4x" packets, because they're going to have to handle customer site address management. The only thing that doesn't have to change is the easiest part of the system to get changed... the core routers and associated infrastructure.
Yes. The router in your home would absolutely need to support IPv4x if you wanted to make use of the extended address space, just like how in the real world your home router needs to support NAT if you want to make use of shared IP.
> The router in your home would absolutely need to support IPv4x if you wanted to make use of the extended address space...
No. The router in your home would need to support IPv4x, or you would get no Internet connection. Why? Because IPv4x extends the address space "under" each IPv4 address -thus- competing with it for space. ISPs in areas with serious address pressure sure as fuck aren't going to be giving you IPv4 addresses anymore.
As I mentioned, similarly, ISPs will need to update their systems to handle IPv4x, because they are -at minimum- going to be doing IPv4x address management for their customers. They're probably going to -themselves- be working from IPv4x allocations. Maybe each ISP gets knocked down from several v4 /16s or maybe a couple of /20s to a handful of v4 /32s to carve up for v4x customer sites.
Your scheme has the adoption problems of IPv6, but even worse because it relies on reclaiming and repurposing IPv4 address space that's currently in use.
> the easiest part of the system to get changed... the core routers and associated infrastructure.
Is that really the easy bit to change? ISPs spend years trialling new hardware and software in their core. You go through numerous cheapo home routers over the lifetime of one of their chassis. You'll use whatever no-name box they send you, and you'll accept their regular OTA updates too, else you're on your own.
> Is that really the easy bit to change?
When you're adding support for a new Internet address protocol that's widely agreed to be the new one, it absolutely is. Compared to what end-users get, ISPs buy very high quality gear. The rate of gear change may be lower than at end-user sites but because they're paying far, far more for the equipment, it's very likely to have support for the new addressing protocol.
Consumer gear is often cheap-as-possible garbage that has had as little effort put into it as possible. [0] I know that long after 2012, you could find consumer-grade networking equipment that did not support (or actively broke) IPv6. [1] And how often do we hear complaints of "my ISP-provided router is just unreliable trash, I hate it", or stories of people saving lots of money by refusing to rent their edge router from their ISP? The equipment ISPs give you can also be bottom-of-the-barrel crap that folks actively avoid using. [2]
So, yeah, the stuff at the very edge is often bottom-of-the-barrel trash and is often infrequently updated. That's why it's harder to update the equipment at edge than the equipment in the core. It is way more expensive to update the core stuff, but it's always getting updated, and you're paying enough to get much better quality than the stuff at the edge.
[0] OpenWRT is so, so popular for a reason, after all.
[1] This was true even for "prosumer" gear. I know that even in the mid 2010s, Ubiquiti's UniFi APs broke IPv6 for attached clients if you were using VLANs. So, yeah, not even SOHO gear is expensive enough to ensure that this stuff gets done right.
[2] You do have something of a point in the implied claim that ISPs will update their customer rental hardware with IPv6 support once they start providing IPv6 to their customer. But. Way back when I was so foolish as to rent my cable modem, I learned that I'd been getting a small fraction of the speed available to me for years because my cable modem was significantly out of date. It required a lucky realization during a support call to get that update done. So, equipment upgrades sometimes totally fall through the cracks even with major ISPs.
> but it's always getting updated,
I entirely disagree, due to a combination of ISPs sticking with what they know and refusing to update (because of the huge time/cost of validating it), and vendors minimising their workload/risk exposure and only updating what they "have to". The vendors have a lot of power here, and these big new protocols are just more work.
In addition, smaller ISPs have virtually no say in what software/features they get. They can ask all they want; they have little power. It takes a big customer to move the needle and get new features into these expensive boxes. It really only happens when there's another vendor offering something new, and therefore a business requirement to maintain feature parity or lose big-customer revenue. So yeah, if a new protocol magically becomes standard, only then would anyone bother implementing and supporting it.
I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship, the boxes are cheap, and just plug and play. They're relatively simple and easy to validate for 99% of usecases. If your internet stops working (because you didn't get the new hw/sw), they ship you a replacement, 2 days later it's fixed.
But I will just say, and slightly off topic of this thread, the lack of multiple extension headers in this proposed protocol instantly makes it more attractive to implement compared to v6.
> I entirely disagree. Due to a combination of ISPs sticking with what they know and refusing to update... and vendors minimising their workloads/risk exposure and only updating what they "have to"...
You misunderstand me, though the misunderstanding is quite understandable given how I phrased some of the things.
I expect the updating usually occurs when buying new kit, rather than on kit that's deployed... and that that purchasing happens regularly, but infrequently. I'm a very, very big proponent of "If it's working fine, don't update its software load unless it fixes a security issue that's actually a concern.". New software often brings new trouble, and that's why cautious folks do extensive validation of new software.
My commentary presupposed that the new addressing protocol is widely agreed to be the new one, which I'd say counts as something that a vendor "has to" implement.

> I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship...
I expect enough people don't use the ISP-rented equipment that it's -in aggregate- actually not much easier to update edge equipment. That's what I was trying to get at with talking about "ISP-provided routers & etc are crap and not worth the expense".
On the other hand, consumer routers route in software, which is easily updated. Core routers with multi-terabit-per-second connections use specialized ASICs to handle all that traffic, which can never be updated.
> consumer routers route in software, which is easily updated
You must have had much better experiences with firmware update policies for embedded consumer devices than me.
> On the other hand, consumer routers route in software, which is easily updated.
Sure. On the other other hand, companies going "Is this a security problem that's going to cost us lots of money if we don't fix it? No? Why the fuck should I spend money fixing it for free, then? It can be a headline feature in the new model." means that -in practice- they aren't so easily updated.
If everyone in the consumer space made OpenWRT-compatible routers, switches, and APs, then that problem would be solved. But -for some reason- they do not and we still get shit like [0].
[0] <https://www.youtube.com/watch?v=KsiuA5gOl1o>
It seems like this really only helps intermediate routers.
All endpoints need to upgrade to IPv4x before anyone can reasonably use it. If I have servers on IPv4x, clients can reach my network fine, but they then can't reach individual servers. Clients need to know IPv4x to reach IPv4x servers.
Similarly, IPv4x clients talking to IPv4 servers do what? Send an IPv4x packet with the remaining IPv4x address bits zeroed out? Nope, a v4 server won't understand it. So they're sending an IPv4 packet, and the response gets back to your network but doesn't know how to get the last mile back to the IPv4x client?
I desperately wish there was a way to have "one stack to rule them all", whether that is IPv4x or IPv4 mapped into a portion of IPv6. But there doesn't seem to be an actually workable solution to it.
In my view, the problem largely comes from the way the Internet has grown. Many of these concepts developed together with the Internet, and IPv4 was the protocol that evolved with them.
I see many ISPs deploying IPv6 but still following the same design principles they used for IPv4. In reality, IPv6 should be treated as a new protocol with different capabilities and assumptions.
For example, dynamic IP addresses are common with IPv4, but with IPv6 every user should ideally receive a stable /64 prefix, with the ability to request additional prefixes through prefix delegation (PD) if needed.
Another example is bring-your-own IP space. This is practically impossible for normal users with IPv4, but IPv6 makes it much more feasible. However, almost no ISPs offer this. It would be great if ISPs allowed technically inclined users to announce their own address space and move it with them when switching providers.
Dynamic v6 is likely a business and billing issue rather than a technical one. They want to sell you the static IP like they do with v4.
You're correct, but the issue is that static IPv6 isn’t even available as an option—at least in my experience with two ISPs in my country. It may be different in other places.
It's also a privacy issue, in fact it's mandatory in some European countries because otherwise you'd be easily tracked by your address, but it's also mandated you can get a static one if you ask.
Similar discussion from a couple of months ago: https://news.ycombinator.com/item?id=46468625
I personally feel that IPv6 is one of the clearest cases of second system syndrome. What we needed was more address bits. What we got was a nearly total redesign-by-committee with many elegant features but had difficult backwards compatibility.
https://en.wikipedia.org/wiki/Second-system_effect
Which IPv6 “gratuitous” features (i.e. anything other than the decision to make a breaking change to address formats and accordingly require adapters) would you argue made adoption more difficult?
IPv6 gets a lot of hate for all the bells and whistles, but on closer examination, the only one that really matters is always “it’s a second network and needs me to touch all my hosts and networking stack”.
Don’t like SLAAC? Don’t use it! Want to keep using DHCP instead? Use DHCPv6! Love manual address configuration? Go right ahead! It even makes the addresses much shorter. None of that stuff is essential to IPv6.
In fact, in my view TFA makes a very poor case for a counterfactual IPv4+ world. The only thing it really simplifies is address space assignment.
It's not that they loaded it up with features, it's that elegance was prized over practicality.
Simplifying address space assignment is a huge deal. IPv4+ allows the leaves of the network to adopt IPv4+ when it makes sense for them. They don't lose any investment in IPv4 address space, they don't have to upgrade to all IPv6-supporting hardware, and there's no parallel configuration. You just support IPv4+ on the terminals that want or need it, and on the network hardware when you upgrade. It's basically better NAT that eventually disappears and just becomes "routing".
> They don't lose any investment in IPv4 address space
What investment? IP addresses used to be free until we started running out, and I don't think anything of value would be lost for humanity as a whole if they became non-scarce again.
> they don't have to upgrade to all IPv6 supporting hardware
But they do, unless you're fine with maintaining an implicitly hierarchical network (or really two) forever.
> It's basically better NAT
How is it better? It also still requires NAT for every 4x host trying to reach a 4 only one, so it's exactly NAT.
> that eventually disappears
Driven by what mechanism?
>> They don't lose any investment in IPv4 address space
> What investment? IP addresses used to be free
Well they're not now, so it's an investment. Any entity that has IP addresses doesn't want its competition to get IP addresses, even when this leads to bad outcomes overall.
And would you say this is a dynamic worth preserving by policy?
> Don’t like SLAAC? Don’t use it!
It doesn't work like this. SLAAC is a standard compliant way of distributing addresses, so you MUST support it unless you're running a very specific isolated setup.
Most people using Android will come to your home and ask "do you have WiFi here?"
> Most people using Android will come to your home and ask "do you have WiFi here?"
The Android implementation of IPv6 completely boggles my mind. They have completely refused to implement DHCPv6 since 2012:
* https://issuetracker.google.com/issues/36949085
But months after client-side DHCP-PD was made an RFC they're implementing that?
* https://android-developers.googleblog.com/2025/09/simplifyin...
In what universe does implementing DHCP-PD but not 'regular' DHCPv6 make any kind of sense?
>In what universe does implementing DHCP-PD but not 'regular' DHCPv6 make any kind of sense?
Their policy makes a lot of sense. It's hindering ipv6 deployment, but it is preventing ISPs from allocating less than /64 to customers. It has nothing to do with standards actually.
Dhcp-pd makes a lot of sense though, because if an isp is willing to give you a prefix, they are by default nice guys.
This is about client devices on home and corporate networks connecting to (e.g.) Wifi, and not about ISP connections and addresses on the WAN port of your home router.
Why should my Pixel 10 send out DHCP-PD packets when it connects to Wifi, but not DHCPv6?
Because they only implement the methods which force ISPs to give you /64.
You really think they’re doing all of this as some elaborate “all or nothing” v6 deployment bargaining chip?
Yes. I'm not an insider, of course.
Their reasoning seems to be that it enables use cases like a smartphone delegating v6 addresses to wearables etc.
Which is fine, I guess, but still doesn’t explain their refusal to implement regular DHCPv6 for so long.
Having a real address, a link-local address, and a unique local address, and the requirement to use the right one in each circumstance
The removal of arp and removal of broadcast, the enforcement of multicast
The almost-required removal of NAT and the quasi-religious dislike from many network people. Instead of simply src-natting your traffic behind ISP1 or ISP2, you are supposed to have multiple public IPs and somehow make your end devices choose the best routing rather than your router.
All of these were choices made in addition to simply expanding the address scope.
> Having both a real address, a link-local address, and a unique local address, and the requirement to use the right one in each circumstance
Only use the real one then (unless you happen to be implementing ND or something)!
> The removal of arp and removal of broadcast, the enforcement of multicast
ARP was effectively only replaced by ND, no? Maybe there are many disadvantages I'm not familiar with, but is there a fundamental problem with it?
> The almost-required removal of NAT
Don't like that part? Don't use it, and do use NAT66. It works great, I use it sometimes!
In fairness, aside from whining about the minority attitude towards NAT [0], the person you're replying to absolutely met your definition of "gratuitous":
I (and I expect the fellow you're replying to) believe that if you're going to have to rework ARP to support 128-bit addresses, you might as well come up with a new protocol that fixes things you think are bad about ARP. And if the fellow you're replying to doesn't know that broadcast is another name for "all-hosts multicast", then he needs to read a bit more.
[0] Several purity-minded fools wanted to pretend that IPv6 NAT wasn't going to exist. That doesn't mean that IPv6 doesn't support NAT... NAT is and has always been a function of the packet mangling done by a router that sits between you and your conversation partner.
In my opinion the redesign of IPv6 was perfectly fine. The IPv6 headers are significantly simpler than those of IPv4 and much easier to process at great speed.
There was only 1 mistake, but it was huge and all backwards compatibility problems come from it. The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
IPv6 added very few features, but it mostly removed or simplified the IPv4 features that were useless.
> The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
Like
> Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.[5]
* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
?
Also:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
* https://en.wikipedia.org/wiki/6to4
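Both embeddings quoted above are directly visible in Python's standard `ipaddress` module, which can extract the IPv4 address back out of either form:

```python
import ipaddress

# IPv4-mapped (::ffff:0:0/96): the IPv4 address sits in the low 32 bits.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
print(mapped.ipv4_mapped)  # the embedded IPv4Address, 192.0.2.128

# 6to4: appending the IPv4 address to 2002::/16 yields that host's /48 prefix.
v4 = ipaddress.IPv4Address("192.0.2.4")
prefix = "2002:%04x:%04x::/48" % (int(v4) >> 16, int(v4) & 0xFFFF)
print(prefix)  # 2002:c000:0204::/48, as in the Wikipedia example
print(ipaddress.IPv6Address("2002:c000:204::1").sixtofour)  # recovers 192.0.2.4
```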
> The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
The entire IPv4 address space is included in the IPv6 address space, in fact it's included multiple times depending on what you want to do with it. There's one copy for representing IPv4 addresses in a dual-stack implementation, another copy for NAT64, a different copy for a different tunneling mechanism, etc.
There are several ways to map the IPv4 address space into the IPv6 address space, going right back to the first IPv6 addressing architecture RFC. Every compatibility protocol added a new one.
IPv6 added IPSEC which was backported to IPv4.
IPv6 tried to add easy renumbering, which didn’t work and had to be discarded.
IPv6 added scoped addresses which are halfbaked and limited. Site-scoped addresses never worked and were discarded; link-scoped addresses are mostly used for autoconfiguration.
IPv6 added new autoconfiguration protocols instead of reusing bootp/DHCP.
> The IPv4 32-bit address space should have been included in the IPv6 address space,
That's ... exactly how IPv6 works?
Look at the default prefix table at https://en.wikipedia.org/wiki/IPv6_address#Default_address_s... .
Or did you mean something else? You still need a dual stack configuration though, there's nothing getting around that when you change the address space. Hence "happy eyeballs" and all that.
> You still need a dual stack configuration though, there's nothing getting around that when you change the address space
Yes there is, at least outside of the machine. All you need to do is have an internal network (100.64/16, 169.254/16, wherever) local to the machine. If your machine is on, say, 2001::1, then when an application attempts to listen on an IPv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. This can be even more hidden than things like internal docker networks.
Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
When the packet arrives at a router with v6 and v4 on (assume your v4 address is 2.2.2.2), that does a 6:4 translation, just like a router does v4:v4 nat
The packet then runs over the v4 network until it reaches 1.0.0.1 with a source of 2.2.2.2, and a response is sent back to 2.2.2.2 where it is de-natted to a destination of 2001::1 and source of ::ffff:100:1
That way you don't need to change any application unless you want to reach IPv6-only devices, you don't need to run separate IPv4 and IPv6 stacks on your routers, and you can migrate easily, with no more overhead than a typical 44 nat for rfc1918 devices.
Likewise you can serve on your ipv6 only devices by listening on 2001::1 port 80, and having a nat which port forwards traffic coming to 2.2.2.2:80 to 2001::1 port 80 with a source of ::ffff:(whatever)
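The per-packet address rewriting described above is just arithmetic on the low 32 bits. A sketch of both directions (the function names are mine, not a real API):

```python
import ipaddress

def v4_to_mapped(v4: str) -> ipaddress.IPv6Address:
    """OS-side step: rewrite an IPv4 destination into the ::ffff:0:0/96 range."""
    return ipaddress.IPv6Address("::ffff:" + v4)

def mapped_to_v4(v6: str) -> ipaddress.IPv4Address:
    """Router-side step: recover the original IPv4 destination (low 32 bits)."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

print(v4_to_mapped("1.0.0.1"))       # ::ffff:100:1, as in the comment above
print(mapped_to_v4("::ffff:100:1"))  # 1.0.0.1
```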
(using colons as a deliminator wasn't great either, you end up with http://[2001::1]:80/ which is horrible)
> ...you end up with http://[2001::1]:80/ which is horrible
That is horrible, but you do no longer have any possibility of confusion between an IP address and a hostname/domain-name/whatever-it's-called. So, yeah, benefits and detriments.
> Your network then has a route to ::ffff:0:0/96 via a gateway...
I keep forgetting about IPv4-mapped addresses. Thanks for reminding me of them with this writeup. I should really get around to playing with them some day soon.
Using almost anything but : would have been fine.
Could have used 2001~1001~~1 instead of 2001:1001::1, which looks weird today, but wouldn't have done if that had been chosen all those years ago.
(unless : as an ipv6 separator predates its use as a separator for tcp/udp ports, in which case tcp/udp should have used ~. Other symbols are available)
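The brackets exist precisely because `:` got overloaded: a URL parser can only split host from port once the address is bracketed. A quick standard-library check:

```python
from urllib.parse import urlsplit

# Without brackets, a trailing ":80" would be indistinguishable from the
# address's own groups; brackets let the parser separate host and port.
u = urlsplit("http://[2001::1]:80/")
print(u.hostname, u.port)  # 2001::1 80
```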
So, I bothered to play around with these addresses. I find myself a little confused by what you wrote.
> If you machine is on say 2001::1, then when an application attempts to listen on an ipv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. ...
> Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
What's the name of this translation mechanism that you're talking about? It seems to be the important part of the system.
I ask because when I visit [0] in Firefox on a Linux system with both globally-routable IPv6 and locally-routable IPv4 addresses configured, I see a TCP conversation with the remote IPv4 address 192.168.2.2. When I remove the IPv4 address (and the IPv4 default route) from the local host, I get immediate failures... neither v4 nor v6 traffic is made.
When I add the route it looks like you suggested I add, I see the route in my routing table, but I get exactly the same results... no IPv4 or IPv6 traffic.

Based on my testing, it looks like this is only a way to represent IPv4 addresses as IPv6 addresses, as ::ffff:192.168.2.2 gets translated into ::ffff:c0a8:202, but the OS uses that to create IPv4 traffic. If your system doesn't have an IPv4 address configured on it, then this doesn't seem to help you at all. What am I missing?
[0] <http://[::ffff:192.168.2.2]/>
You make NAT46 part of the OS network stack.
You make NAT64 part of the typical router.
> I ask because when I visit [0] in Firefox on a Linux system with both globally-routable IPv6 and locally-routable IPv4 addresses configured, I see a TCP conversation with the remote IPv4 address 192.168.2.2. When I remove the IPv4 address (and the IPv4 default route) from the local host, I get immediate failures... neither v4 nor v6 traffic is made.
Yes, that's the failure of ipv6 deployment.
Imagine you have two vlans, one ipv4 only, one ipv6 only. There's a router sitting across both vlans.
VLAN1 - ipv6 only
Router 2001::1
Device A 2001::1234
VLAN2 - ipv4 only
Router 192.168.1.1
Device B 192.168.1.2
Device A pings 192.168.1.2, the OS converts that transparently to ::ffff:192.168.1.2, it sends it to its default router 2001::1
That router does a 6>4 translation, converting the destination to 192.168.1.2 and the source to 192.168.1.1 (or however it's configured)
It maintains the protocol/port/address in its state as any ipv4 natting router would do, and the response is "unnatted" as an "established connection" (with connection also applying for icmp/udp as v4 nat does today)
An application on Device A has no need to be ipv6 aware. The A record in DNS which resolves to 192.168.1.2 is reachable from device A despite it not having a V4 address. The hard coded IP database in it works fine.
Now if Device B wants to reach Device A, it uses traditional port forwarding on the router, where 192.168.1.1:80 is forwarded to [2001::1234]:80, with source of ::ffff:192.168.1.2
With this in place, there is no need to update any applications, and certainly no need for dual stack.
The missing bits are the lack of common 64/46 natting -- I don't believe it's built into the normal linux network chain like v4 nat is, and the lack of transparent upgrading of v4 handling on an OS level.
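The stateful translation described above can be sketched in a few lines. This is a toy model, not real NAT64 (no timers, no RFC 6052 prefix handling); the addresses mirror the VLAN example and the port pool is made up:

```python
import ipaddress

class Nat64:
    """Toy stateful 6->4 translator, mirroring the VLAN example above.
    Hypothetical sketch: no timers, no RFC 6052 prefix, made-up port pool."""

    def __init__(self, v4_side="192.168.1.1"):
        self.v4_side = v4_side    # router's address on the v4 VLAN
        self.next_port = 40000    # start of an arbitrary NAT port pool
        self.out = {}             # flow tuple -> translated source port
        self.back = {}            # (proto, translated port) -> v6 origin

    def outbound(self, proto, src6, sport, dst6, dport):
        # Device A addressed ::ffff:192.168.1.2; recover the embedded v4 target.
        dst4 = str(ipaddress.IPv6Address(dst6).ipv4_mapped)
        key = (proto, src6, sport, dst4, dport)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[(proto, self.next_port)] = (src6, sport)
            self.next_port += 1
        # The packet leaves as plain IPv4, sourced from the router.
        return (self.v4_side, self.out[key], dst4, dport)

    def inbound(self, proto, src4, sport, dport):
        # The reply hits NAT state and is "unnatted" back toward the v6 host.
        src6, orig_sport = self.back[(proto, dport)]
        return ("::ffff:" + src4, sport, src6, orig_sport)

nat = Nat64()
print(nat.outbound("tcp", "2001::1234", 51000, "::ffff:192.168.1.2", 80))
# ('192.168.1.1', 40000, '192.168.1.2', 80)
print(nat.inbound("tcp", "192.168.1.2", 80, 40000))
# ('::ffff:192.168.1.2', 80, '2001::1234', 51000)
```

The state lookup in `inbound` is why the full (source IP, source port, destination IP, destination port) tuple matters, just as it does for v4 NAT.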
You will certainly need to update applications, because they won't be able to connect to v6 addresses otherwise. 464xlat only helps you connect to v4 addresses. It just means that updating _all_ of your applications is no longer a prerequisite of turning v4 off on your network.
Ah. So, you're saying that what you describe doesn't actually exist. That the best you can currently do is stuff like [0] and [1], where the IPv4 or IPv6 clients use v4 or v6 addresses (respectively) and an intermediary sets up a fake destination IP on both ingress and egress and does the v4 <-> v6 address translation.
If so, that was not at all clear from your original comment.
[0] <https://docs.fortinet.com/document/fortigate/7.6.1/administr...>
[1] <https://docs.fortinet.com/document/fortigate/7.6.1/administr...>
It does exist though. The OS part is 464xlat and the router part is NAT64. You can try the second part out by setting your DNS server to one of the ones listed on https://nat64.net/, which will work with hostnames. To get IP literals to work you need 464xlat, which is currently a bit annoying to set up on Linux.
(Note that using the servers provided by nat64.net is equivalent to using an open proxy, so you probably don't want it for general-purpose use. You would probably want either your ISP to run the NAT64 (equivalent to CGNAT), or to run it on your own router (equivalent to NAT).)
The standard prefix for NAT64 is 64:ff9b::/96, although you can pick any unused prefix for it. ::ffff:0:0/96 is the prefix for a completely different compatibility mechanism that's specifically just for allowing an application to talk to the kernel's v4 stack over AF_INET6 sockets (as you figured out). It was a confusing choice of prefix to use to describe NAT64.
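The difference between the two prefixes is easy to see with Python's stdlib `ipaddress` module (the helper name `nat64_embed` is mine; RFC 6052 also defines embedding positions other than the low 32 bits):

```python
import ipaddress

# NAT64 (RFC 6052): the IPv4 address occupies the low 32 bits of an
# address under the well-known prefix 64:ff9b::/96.
def nat64_embed(v4, prefix="64:ff9b::"):
    base = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(v4)))

print(nat64_embed("192.0.2.4"))  # 64:ff9b::c000:204

# ::ffff:0:0/96 "v4-mapped" addresses, by contrast, never appear on the
# wire: they let an AF_INET6 socket drive the kernel's own IPv4 stack.
print(ipaddress.IPv6Address("::ffff:192.0.2.4").ipv4_mapped)  # 192.0.2.4
```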
Everyone is saying this but... what are the new features, actually? There are a couple of cleanups to the header, removal of fragmentation, and a bunch of things like SLAAC you don't have to use if you don't want to?
Thanks for reading and commenting everyone!
Note though that I'm not proposing IPv4x as something we should work towards now. Indeed, I come down on the side of being happy that we're in the IPv6 world instead of this alternative history.
Reminds me of https://cr.yp.to/djbdns/ipv6mess.html
Which has been discussed previously: https://hn.algolia.com/?q=The+IPv6+mess
This sounds a lot like what we have in 6to4 (for 25+ years now), where nodes behind two ipv4 derived prefixes can automatically talk to each other p2p, and use a gateway to communicate with the rest of the v6 internet.
Interesting. Who deploys and maintains the gateway?
You can configure it statically but there used to be the anycast address 192.88.99.1 and the idea was that you'll get routed to the nearest one by magic of BGP. It was retired once native IPv6 deployment took off.
Apparently the practical problems were related to people haplessly firewalling it out (ref. https://labs.ripe.net/author/emileaben/6to4-why-is-it-so-bad...)
I think the intention was to use normal internet (anycast) routing to send it to the closest translator - which would be your ISP, or the nearest ISP that supports IPv6, or a tier 1 network which is happy to have extra traffic traversing its network unnecessarily since they get paid for all of it. (The same reason HE runs the free tunnel broker)
That was one of the problems with 6to4. If there was no gateway or it was overloaded or there was a gateway but you couldn't reach it because of a weird firewall, all your IPv6 packets would be silently dropped and you'd have no idea why. And this was before happy eyeballs so your computer might default to broken IPv6.
> Who owns all these new addresses? You do. If you own an IPv4 address, you automatically own the entire 96‑bit subspace beneath it. Every IPv4 address becomes the root of a vast extended address tree.
Huh:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
* https://en.wikipedia.org/wiki/6to4
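That construction is mechanical enough to show in a few lines of Python (stdlib only; note Python compresses the leading zero, printing 2002:c000:204::/48 for the quote's 2002:c000:0204::/48):

```python
import ipaddress

# 6to4 (RFC 3056): place the 32-bit IPv4 address right after the
# 2002::/16 prefix, i.e. in bits 80..111 of the 128-bit address.
def sixto4_prefix(v4):
    base = int(ipaddress.IPv6Address("2002::"))
    v6 = base | (int(ipaddress.IPv4Address(v4)) << 80)
    return ipaddress.IPv6Network((v6, 48))

print(sixto4_prefix("192.0.2.4"))  # 2002:c000:204::/48
```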
I actually disagree: that's the road taken. NAT is practically this. When you're behind a NAT, you're effectively using a 64-bit address space. Two more layers of NAT, and you can have 128-bit address space. "The first part" of the address is a globally routable IPv4 address, and the rest is kept by the routers on the path tracking NAT connection states.
And NAT needed zero software changes. That's why it won. It brought the benefits of any such extension protocol using IPv4's existing mechanisms.
IPv6 isn't an alternative to IPv4, it's an alternative to all IPv4xes.
"IPv6 does that."
Are you sure about that? Until a few years ago my residential ISP was IPv4 only. I definitely couldn't connect to an IPv6-only service back then.
I'm glad this exists, because it demonstrates there are ways to make a next-gen IP interoperable with IPv4, and that while IPng had interoperability as part of its assessment, it completely nixed it because they didn't want to go with the other proposals at the time, which had some consideration for it.
I tend to give the lack of a credible, ready-to-deploy asteroid response for Earth defense (41 years after consensus was reached on the KT boundary, 70 years since the 'space age' began) much greater weight.
Motivation for retiring IPv4 completely would NOT be to make the world a better, more routable place. It would be to deliberately obsolete old products in order to sell new ones.
Their hypothetical approach of extending IPv4 reminds me of how TLS v1.3 masquerades as v1.2 in order to pass through meddling middleboxes safely.
Similar discussion from 10 years ago: https://news.ycombinator.com/item?id=10854570
We should basically all call our ISPs and ask for ipv6 to be implemented.
My fantasy road-not-taken for IPv4 is one where it originally used 36 bit addressing like the PDP-10. 64 billion addresses would be enough that we probably wouldn't have had the address exhaustion crisis in the first place, though routing would still get more complicated as most of the world's population (and many devices) started communicating over IP networks.
36-bit is still too small for the future. We are at 30 billion connected devices and will probably hit 64 billion in a decade or two.
The most likely alternative would have been 64-bit. That's big enough that it could have worked for a long time.
A flat address space causes problems tracking it. Think of the IPv4 space, which is fragmenting towards having a completely separate route for every /24 (the smallest unit). Longer addresses enable levels of aggregation, which is good for routing; hence 128 bits.
Even better if it had used 48 bit addressing like Ethernet. 32 bits is almost the worst possible option since it was big enough to seem inexhaustible initially while not actually being big enough.
IPv4 has evolved: it is now effectively a 48-bit address, written IP:PORT.
There are many things wrong with this analogy, but the most important ones seem to be:
- NAT gateways are inherently stateful (per connection) and IP networks are stateless (per host, disregarding routing information). So even if you only look at the individual connection level, disregarding the host/connection layering violation, the analogy breaks.
- NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
There's actually a kind of stateless NAT where each host gets a source port range. I believe it was involved in one of the IPv6 transition ideas.
> IP networks are stateless
Until you have stateful firewall, which any modern end network is going to have
> NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
If 192.168.0.1 and 0.2 both hide behind 2.2.2.2 and talk to 1.1.1.1:80 then they'll get private source IPs and source ports hidden behind different public source ports.
Unless your application requires the source port to not be changed, or indeed embeds the IP address in higher layers (active mode ftp, sip, generally things that have terrible security implications), it's not really a problem until you get to 50k concurrent connections per public ipv4 address.
In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
> Until you have stateful firewall, which any modern end network is going to have
Yes, but it's importantly still a choice. Also, a firewall I administer, I can control. One imposed onto me by my ISP I can’t.
> not really a problem until you get to 50k concurrent connections per public ipv4 address.
So it is in fact a big problem for CG-NATs.
> In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
No, I know what I'm complaining about. Stateful firewall traversal via hole punching is trivial on v6 without port translation, but completely implementation dependent on v4 with NAT in the mix, to just name one thing. (Explicit "TCP hole punching" would also be trivial to specify; it's a real shame we haven't already, since it would take about a decade or two for mediocre CPE firewalls to get the memo anyway.)
Having global addressing is also just useful in and of itself, even without global reachability.
> So it is in fact a big problem for CG-NATs.
Only if they're under-provisioned. If my home really needed tens of thousands of connections I'd provision another ipv4 address, but it doesn't -- at the moment I have a mere 121 active connections in my firewall.
The cost of a firewall is far more than the cost of an ipv4 address, which are available for about $20 each.
> Having global addressing is also just useful in and of itself, even without global reachability.
Except that doesn't happen, as most locations will not be BGP peering and advertising their own /48 (routing tables would melt)
Instead if you change your ISP, you change your IP address. Unless you use private ips in the fc00:: range, which is no different to using rfc1918 addresses for the vast majority of users
Routing of this additional /16 is more tricky and non-uniform though. NAT, hole-punching, all that.
Which is the exact problem any other IPv4 "extended" proposal would have hit. But the practical reality is that the port number really was the only freely available set of bits in the IPv4 header to reasonably extend into. Almost everything else had ossified middleboxes doing something dumb with it. (And we've seen from NAT/hole-punching/etc. how even port numbers had a lot of assumptions to overcome from middleboxes, and we aren't using a full /16 there either. A lot of the safest traffic has to be > 10,000, a constraint on 14 of those 16 bits.)
There were never 64-78 bits in the IPv4 header unconstrained enough to extend IPv4 in place, even if you accepted the CGNAT-like compromise of routing through IPv4 "super-routers" on the way to 128-bit addresses. Extending the address size was always going to need a version change.
DNS SRV records actually can identify a port, so for "many" uses it would be transparent.
I've rarely seen it used in practice, but it's in theory doable.
You're thinking of TCP or UDP. IP does not have ports.
The workarounds for IPv4 address exhaustion were a major contributing factor to today's Internet being basically unable to reliably handle traffic that isn't TCP or UDP. Protocol ossification and widespread tolerance of connections that were effectively only usable for WWW has led to the Internet as a whole almost losing an entire layer of the network stack.
“IPv6 is waiting for adoption”
A major website sees over 46 percent of its traffic over ipv6. A major mobile operator has a network that runs entirely over ipv6.
This is not “waiting for adoption” so I stopped reading there.
https://www.google.com/intl/en/ipv6/statistics.html
https://www.internetsociety.org/deploy360/2014/case-study-t-...
You'd happily deploy a website for use by the general public on IPv6-only?
No. Which doesn’t prove the technology has not been adopted. The internet also consists of much more than public-facing websites. So what’s your point?
My point is that we're still dependent on IPv4. For all the progress IPv6 has made, no-one is willing to switch IPv4 off yet. Until we do, we're still constrained by all the problems IPv4 has.
Plenty of people are switching v4 off. Facebook run basically all of their datacenters without v4. T-Mobile USA use only v6 on their network. Thread only supports v6 in the first place
There are plenty of other places doing the same thing, but these examples alone should be sufficient to disprove "no-one is willing to turn v4 off".
I use Devouring Goats on your straw man. It's Super Effective!
To be less glib: IPv6 is well-adopted. It's not universally adopted.
Until you would happily deploy a service on IPv6 only, I suggest that you're still dependent on IPv4.
I'll repeat myself:
Not a single discussion here of his SixGate proposal (but many mischaracterizations of IPv4x as a "proposal", when it was rather an alternate history). So HN.
le sigh
Thanks though. Your comment really cheered me up.
> And yes, this whole piece was a sneaky way to get you to read my SixGate proposal.
Too sneaky, apparently. I suggest putting something at the top mentioning it ... then even folks with very short attention spans will see it.
IPv6 is fine. The advice on ULAs is garbage. The purpose of a protocol is to provide utility, not proscription.
What's the advice on ULAs? On my internet-connected VLANs, I have a -er- site-local IPv4 subnet, a unique local IPv6 subnet, and a global IPv6 subnet. This works just fine.
Does the "advice" boil down to "You should NEVER use ULAs and ALWAYS use GUAs!" and is given by the same very, very loud subset of people who seemed to feel very strongly that IPv6 implementations should explicitly make it impossible to do NAT?
Why would IPv6 ever need NAT?
This is how the majority of ipv6 is deployed where I live.
The router in a coffee shop gives you an ULA, and NATs everything to a single globally routable public ipv6 address.
That's completely f'd of course (and unlike any deployment i've seen).
It's not. It's simple, understandable, straightforward. Only natting to a single address is flawed, but also understandable, because they want to charge you for a prefix.
> It's not [fucked up]. It's simple, understandable, straightforward.
Things that are fucked up can also be simple, understandable, and straightforward.
Unless you're claiming that DHCPv6 is not simple, understandable, and straightforward... in which case:
DHCPv4 is "Give me an IP address, please.". DHCPv6 is "Give me an IP address, please. And also give me what I need for all of my directly-connected friends to have one, too, if you don't mind.".
Dhcpv6 is not simple because it's not universally available. A lot of devices don't support it and likely never will.
If your edge router supports IPv6, it almost certainly can make a DHCPv6-PD request and handle advertising the assigned prefix on its LAN side.
Because of Google's continued (deliberate?) misunderstanding of what DHCPv6 is for, Android clients don't do anything sane with it. That doesn't mean that DHCPv6 isn't simple.
Again, DHCPv6 is "Please give me an IP address, and maybe also what my directly-attached friends need to get IP addresses.". Simple, straightforward, and easy to understand. Even if it were relevant, Google's chronic rectocranial insertion doesn't change that.
If a protocol can be misunderstood (especially deliberately), it means that it isn't simple.
Why would anyone need IPv6 to be incapable of doing NAT?
To answer your question: Who knows? Perhaps you have a shitlord ISP that only provides you with a /128 (such as that one "cloud provider" whose name escapes me). [0] It's a nice tool to have in your toolbox, should you find that you need to use it.
[0] Yes, I'm aware that a "cloud provider" is not strictly an ISP. They are providing your VMs with access to the Internet, so I think the definition fits with only a little stretching-induced damage.
As a network admin I can say that NAT makes everything much harder and that the source and destination IP should stay the same from source to destination whenever possible.
Sure. I agree.
That doesn't help when your shitbag ISP has given you exactly one /128.
In any scenario where you want to do traffic steering at a network level. Managing multiple network upstreams (e.g. for network failover or load balancing) is a common example that is served well by numerous off-the-shelf routers with IPv4. That's an important feature that IPv6 cannot offer without using NPTv6 or NAT66.
It's conceivable that OSes could support some sort of traffic steering mechanism where the network distributes policy in some sort of dynamic way? But that also sounds fragile and you (i.e. the network operator) still have to cope with the long tail of devices that will never support such a mechanism.
> Managing multiple network upstreams (e.g. for network failover or load balancing) is a common example ... that IPv6 cannot offer without using NPTv6 or NAT66.
I don't think that's true. I haven't had reason to do edge router failover, but I am familiar with the concepts and also with anycast/multihoming... so do make sure to cross-check what I'm saying here with known-good information.
My claim is that the scenario you describe is superior in the non-NATted IPv6 world to that of the NATted IPv4 world. Let's consider the scenario you describe in the IPv4-only world. Assume you're providing a typical "one global IP shared with a number of LAN hosts via IPv4 NAT". When one uplink dies, the following happens:
* You fail over to your backup link
* This changes your external IP address
* Because you're doing NAT, and DHCP generally has no way to talk back to hosts after the initial negotiation, you have no way to alert hosts of the change in external IP address
* Depending on your NAT box configuration, existing client connections either die a slow and painful death, or -ideally- they get abruptly RESET and the hosts reestablish them
Now consider the situation with IPv6. When one uplink dies:
* You fail over to your backup link
* This changes your external prefix
* Your router announces the prefix change by announcing the new prefix and also that the now-dead one's valid lifetime is 0 seconds [0]
* Hosts react to the change by reconfiguring via SLAAC and/or DHCPv6, depending on the settings in the RA
* Existing client connections are still dead, [1] but the host gets to know that their global IP address has changed and has a chance to take action, rather than being entirely unaware
Assuming that I haven't screwed up any of the details, I think that's what happens. Of course, if you have provider-independent addresses [2] assigned to your site, then maybe none of that matters and you "just" fail over without much trouble?
[0] I think this is known as "deprecating" the prefix
[1] I think whether they die slow or fast depends on how the router is configured
[2] ...whether IPv4 or IPv6...
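The prefix-deprecation step above can be sketched in radvd terms. A minimal radvd.conf fragment (the interface name and 2001:db8 prefixes are illustrative; real deployments also contend with RFC 4862's two-hour floor on shrinking a remaining valid lifetime via unauthenticated RAs):

```
interface eth0 {
    AdvSendAdvert on;
    # New prefix delegated via the backup uplink
    prefix 2001:db8:b::/64 {
        AdvValidLifetime 86400;
        AdvPreferredLifetime 14400;
    };
    # Prefix from the failed uplink: advertising it with lifetime 0
    # tells hosts to deprecate it and stop using it for new connections
    prefix 2001:db8:a::/64 {
        AdvValidLifetime 0;
        AdvPreferredLifetime 0;
    };
};
```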
> * Hosts react to the change by reconfiguring via SLAAC and/or DHCPv6, depending on the settings in the RA
This is the linchpin of the workflow you've outlined. Anecdotal experience in this area suggests it's not broadly effective enough in practice, not least because of this:
> * Existing client connections are still dead, [1] but the host gets to know that their global IP address has changed and has a chance to take action, rather than being entirely unaware
The old IP addresses (afaiu/ime) will not be removed before any dependent connections are removed. In other words, the application (not the host/OS) is driving just as much as the OS is. Imo, this is one of the core problems with the scenario, that the OS APIs for this stuff just aren't descriptive enough to describe the network reconfiguration event. Because of that, things will ~always be leaky.
> [1] I think whether they die slow or fast depends on how the router is configured
Yeah, and that configuration will presumably be sensitive to what caused the failover. This could manifest differently based on whether upstream A simply has some bad packet loss or whether it went down altogether (e.g. a physical fault).
In any case, this vision of the world misses on at least two things, in my view:
1. Administrative load balancing (e.g. lightly utilizing upstream B even when upstream A is still up
2. The long tail of devices that don't respond well to the flow you outlined. It's not enough to think of well-behaved servers that one has total control over; need to think also of random devices with network stacks of...varying quality (e.g. IOT devices)
> The old IP addresses (afaiu/ime) will not be removed before any dependent connections are removed.
I have two reactions to this.
1) Duh? I'm discussing a failover situation where your router has unexpectedly lost its connection to the outside world. You'd hope that your existing connections would fail quickly. The existence of the deprecated IP shouldn't be relevant because the OS isn't supposed to use it for any new connections.
2) If you're suggesting that network-management infrastructure running on the host will be unable to delete a deprecated address from an interface because existing connections haven't closed, that doesn't match my experience at all. I don't think you're suggesting this, but I'm bringing it up to be thorough.
> ...the OS APIs for this stuff just aren't descriptive enough to describe the network reconfiguration event.
I know that Linux has a system (netlink?) that's descriptive enough for daemons [0] to actively, nearly instantaneously start and stop listening on newly added/removed addresses. I'd be a little surprised if you couldn't use that mechanism to subscribe to "an address has become deprecated" events. I'd also be somewhat surprised if no one had built a nice little library over top of whatever mechanism that is. IDK about other OS's, but I'd be surprised if there weren't equivalents in the BSDs, Mac OS, and Windows.
> In any case, this vision of the world misses on at least two things, in my view:
> 1. Administrative load balancing...
I deliberately didn't talk about load balancing. I expect that if you don't do that at a layer below IP, then you're either stuck with something obscenely complicated or you're doing something like using special IP stacks on both ends... regardless of what version of IP your clients are using.
> 2. The long tail of devices that don't respond well to the flow you outlined.
Do they respond worse than in the IPv4 NAT world? This and other commentary throughout indicates that you missed the point I was making. That point was that -unlike in the NATted world- the OS and the applications running in it have a way to plausibly be informed of the network addressing change. In the NAT case, they can only infer that shit went bad.
[0] ...like BIND and NTPd...
> 1) Duh? I'm discussing a failover situation where your router has unexpectedly lost its connection to the outside world. You'd hope that your existing connections would fail quickly. The existence of the deprecated IP shoudn't be relevant because the OS isn't supposed to use it for any new connections.
Well failover is an administrative decision that can result from unexpectedly losing connection. But it can also be more ambiguous packet loss too, something that wouldn't necessarily manifest in broken connections--just degraded ones.
If upstream A is still passing traffic that simply gets lost further down the line, then there's no particular guarantee that the connection will fail quickly. If upstream A deliberately starts rejecting TCP traffic with RST, then sure, that'll be fine. But UDP and other traffic, no such luck. Whereas QUIC would fare just fine with NAT thanks to its roaming capabilities.
> I know that Linux has a system (netlink?) that's descriptive enough for daemons [0] to actively nearly-instantaneously start and stop listening on newly added/removed addresses. I'd be a little surprised if you couldn't use that mechanism to subscribe to "an address has become deprecated" events. I'd also be somewhat surprised if noone had built a nice little library over top of whatever mechanism that is. IDK about other OS's, but I'd be surprised if there weren't equivalents in the BSDs, Mac OS, and Windows.
Idk, I'll have to take your word for it. Instinctively though, this feels like a situation where the lowest common denominator wins. In other words, average applications aren't going to do any legwork here. The best thing to hope for is for language standard libraries to make this as built-in as possible. But if that exists, I'm extremely unaware of it.
> I deliberately didn't talk about load balancing. I expect that if you don't do that at a layer below IP, then you're either stuck with something obscenely complicated or you're doing something like using special IP stacks on both ends... regardless of what version of IP your clients are using.
I presume you meant a layer above IP? But no, I don't see why this is challenging in a NAT world. At least, I've worked with routers that support this, and it always seemed to Just Work™. I'd naively assume that the router is just modding the hash of the layer 3 addresses or something though.
> Do they respond worse than in the IPv4 NAT world?
I've basically only ever had good experiences in the IPv4 NAT world.
> That point was that -unlike in the NATted world- the OS and the applications running in it have a way to plausibly be informed of the network addressing change. In the NAT case, they can only infer that shit went bad.
I'm certainly sympathetic to this point. And, all things being equal, of course that seems better! If NAT66 were to not offer sufficient practical benefits, then I'd be convinced.
But please bear in mind that this was the original comment I responded to (not yours). Responding to this is where I'm coming from:
> Why would IPv6 ever need NAT?
30 years on and if I have a machine with an ipv6 only network and run
"ping 1.1.1.1"
it doesn't work.
If stacks had moved to ipv6 only, and the OS and network library do the translation of existing ipv4, I think things would have moved faster. Every few months I try out my ipv6 only network and inevitably something fails and I'm back to my ipv4 only network (as I don't see the benefit of dual-stack, just the headaches)
Sure you'd need a 64 gateway, but then that can be the same device that does your current 44 natting.
There are lots of places that have IPv6-only networks and access IPv4 through NAT64. It makes sense for new company networks that can control what software gets installed.
The main limitation is software that only supports IPv4. This would affect your proposed solution of doing the translation in the stack. There is no way to fix an IPv4-only software that has 32-bit address field.
Yes there is: you have the device the software is on do the translation transparently. The software thinks it's talking to 1.2.3.4; it's actually talking to ::ffff:1.2.3.4, but the application doesn't need to know that, as the translation is occurring in the network stack (driver, module, whatever).
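Half of that transparency already exists in every dual-stack kernel: an AF_INET6 socket bound to :: will accept plain IPv4 connections and report the peer as a v4-mapped address. A loopback sketch (Python stdlib only; assumes the host has IPv6 enabled):

```python
import socket
import threading

# Dual-stack listener: bound to "::" with V6ONLY off, it accepts plain
# IPv4 connections and reports the peer as a v4-mapped IPv6 address.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))
srv.listen(1)
port = srv.getsockname()[1]

def connect_v4():
    # A pure IPv4 client, oblivious to the listener being an IPv6 socket.
    socket.create_connection(("127.0.0.1", port)).close()

t = threading.Thread(target=connect_v4)
t.start()
conn, peer = srv.accept()
print(peer[0])  # typically ::ffff:127.0.0.1
conn.close()
srv.close()
t.join()
```

This is the kernel's side of the bargain; the missing piece the comment asks for is doing the reverse mapping for v4-only applications without their knowledge.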
> There are lots of places that have IPv6-only networks and access IPv4 through NAT64
I've just deployed a new mostly internal network, and this was my plan.
The network itself worked, but the applications wouldn't. Most required applications could cope, but not all, meaning I need to deploy ipv4, and that means there's little point in deploying ipv6 as well as ipv4; it just increases the maintenance and security burden for no business benefit.
This works if you have 464xlat turned on. It's mostly used by phones though.
It's a bit weird how, despite the Linux kernel having an otherwise fairly advanced network stack, the 464xlat and other transition-mechanism situation is not great. There are some out-of-tree modules (jool and nat46) available, but nothing in mainline. Does anyone know why that is?
NetworkManager just recently got CLAT! https://gitlab.freedesktop.org/NetworkManager/NetworkManager...
Issue for CLAT in systemd-networkd: https://github.com/systemd/systemd/issues/23674
And if that applies across the board, in 20 years' time that might have filtered through to mean ipv4 can be dropped in my company.
I'd rather see this at a lower level than network manager and bodging in with bpf, so it's just a default part of any device running a linux network stack, but I don't know enough about kernel development and choices to know how possible that is in practice.
This should have been supported in the kernel 25 years ago though if the goal was to help ipv6 migration
I agree. Someone was working on that, though the work seems quite stale now: https://codeberg.org/IPv6-Monostack/ipxlat-net-next
I've often thought similar thoughts to this because fundamentally, IPv4 should still be enough for us, except that we've chronically wasted lots of it.
The first main issue is that most often we waste an entire IPv4 for things that just have a single service, usually HTTPS and also an HTTP redirector that just replies with a redirect to the HTTPS variant. This doesn't require an entire IPv4, just a single port or two.
We could have solved the largest issue with address exhaustion simply by extending DNS to have results that included a port number as well as an IP address, or if browsers had adopted the SRV DNS records, then a typical ISP could share a single IPv4 across hundreds of customers.
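SRV records (RFC 2782) already carry exactly that: priority, weight, port, and target, with selection being roughly "lowest priority wins, weight spreads load". A simplified sketch (the records are hypothetical, and the deterministic tie-break stands in for the weighted randomization real clients do):

```python
# Sketch of SRV-style service selection (RFC 2782 semantics, simplified).
# Records are (priority, weight, port, target); lowest priority wins, and
# weight is meant for randomized load-spreading among equal priorities --
# reduced here to a deterministic sort for clarity.
def pick_srv(records):
    return min(records, key=lambda r: (r[0], -r[1]))

# Hypothetical answers for _https._tcp.example.com, sharing one ISP address:
records = [
    (10, 60, 8443, "a.example-isp.net."),
    (10, 20, 9443, "b.example-isp.net."),
    (20, 0, 443, "backup.example-isp.net."),
]
prio, weight, port, target = pick_srv(records)
print("connect to %s:%d" % (target, port))  # connect to a.example-isp.net.:8443
```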
The second massive waste of IPv4 space is BGP being limited to /24. In the days of older routers when memory was expensive and space was limited, limiting to /24 makes sense. Now, even the most naive way of routing - having a byte per IP address specifying what the next hop is - would fit in 4GB of RAM. Sure, there is still a lot of legacy hardware out there, but if we'd said 10 years ago that the smallest BGP announcements would reduce from /24 to /32, 1 bit per year, so giving operators time to upgrade their kit, then we'd already be there by now. They've already spent the money on getting IPv6 kit which can handle prefixes larger than this, so it would have been entirely possible.
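The 4 GB figure checks out for the naive scheme described (one next-hop byte per possible /32):

```python
# Naive flat routing table: one byte of next-hop index per IPv4 address.
entries = 2**32              # every possible /32
table_bytes = entries * 1    # 1-byte next-hop id each
print(table_bytes / 2**30)   # 4.0 (GiB)
```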
And following on from the BGP thing is that often this is used to provide anycast, so that a single IPv4 can be routed to the geographically closest server. And usually, this requires an entire /24, even though often it's only a single port on a single IPv4 that's actually being used.
Arguably, we don't even need BGP for anycast anyway. Again, going back to DNS, if the SRV record was extended to include an approximate location (maybe even just continent, region of continent, country, city) where each city is allocated a hierarchical location field split up roughly like ITU did for phone numbers, then the DNS could return multiple results and the browser can simply choose the one(s) that's closest, and gracefully fall back to other regions if they're not available. Alternatively, the client could specify their geo location during the request.
So, basically, all of that can be done with IPv4 as it currently exists, just using DNS more effectively.
We also have massive areas of IPv4 that are currently wasted. A full 6.25% of the space is the 240.0.0.0/4 range that's marked as "reserved for future use" and which many software vendors (e.g. Microsoft) have made the OS return errors if it's used. Why? This is crazy. We could, and should, make use of this space, specifically for usages where ports are better used, so that companies can share a single IPv4 at the ISP level.
Another 6.25% is reserved for multicast, but nowadays almost nothing on the public IPv4 internet uses it; multicast is only really supported on private networks. In any case, 225.0.0.0/8-231.0.0.0/8 and 234.0.0.0/8-238.0.0.0/8 (collectively 12 /8s, or 75% of the multicast block) are reserved and have never been used for any purpose. This too could be re-purposed to alleviate pressure on IPv4 space.
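The shares are straightforward to check with the stdlib `ipaddress` module; each /4 is one sixteenth of the 2^32 space:

```python
import ipaddress

total = 2**32
reserved = ipaddress.ip_network("240.0.0.0/4").num_addresses
multicast = ipaddress.ip_network("224.0.0.0/4").num_addresses

print(reserved / total)   # 0.0625 -> 6.25% "reserved for future use"
print(multicast / total)  # 0.0625 -> 6.25% multicast

# 12 of the 16 /8s inside 224.0.0.0/4 were never assigned a purpose:
print(12 * 2**24 / multicast)  # 0.75 -> 75% of the multicast block
```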
Finally, there are still many IPv4 /24s or larger that are effectively being hoarded by companies that know they can make good money from renting them out or selling them later. Rather than treating these ranges as an asset, we should charge an annual fee to keep hold of them and turn them into a liability instead, as that would encourage companies to release large allocations they don't need.
The other main argument against IPv4 is NAT, but I actually see that as a feature. If services had port-number discovery via DNS, then forwarding specific ports to the server that deals with them would be an obvious thing to do, not something exceptional. From a security point of view, the majority of machines don't even want incoming connections, and most firewalls block incoming IPv6 traffic except to designated servers anyway. The "global routing" promised by IPv6 isn't actually desired for the most part; the only benefit is that when it is wanted, you have the same address for the service everywhere. The logical conclusion is that IPv4 needs a sensible way of allocating a range of ports to individual machines rather than stopping at the IP address.
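Allocating port ranges to machines behind one shared address isn't hypothetical - it's roughly what A+P / MAP-style deployments (RFC 6346, RFC 7597) do. Here's a minimal sketch of the bookkeeping; the block size and layout are invented for illustration:

```python
def port_block(customer_index, block_size=256, first_port=1024):
    """Carve the 16-bit port space of one shared IPv4 address into
    fixed-size blocks, one per customer, skipping the well-known
    ports below 1024. Returns the inclusive (start, end) range."""
    start = first_port + customer_index * block_size
    end = start + block_size - 1
    if end > 65535:
        raise ValueError("shared address exhausted")
    return start, end

# With 256-port blocks, one IPv4 address serves 252 customers:
capacity = (65536 - 1024) // 256
print(capacity)  # 252
```

A CPE router assigned block N would then NAT its LAN onto only those source ports, and inbound forwarding rules become unambiguous per customer.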
When you then look at IPv6 space, it initially looks vast and inexhaustible, but once you realise that the smallest BGP-routable prefix is a /48, it becomes apparent that it suffers from essentially the same constraints as IPv4. All of "the global internet" is in 2001::/16, which effectively gives 32 bits of assignable space - exactly the same as IPv4. Worse, IPv6 space is usually given out in /44 or /40 chunks, which means it will be exhausted at almost the same rate as IPv4 given out in /24 chunks. So much additional complexity for little extra gain - although I will concede that since 2003::/16 through 3ffe::/16 isn't currently allocated, there is room to expand, as long as routers aren't baking in the assumption that all routable prefixes are in 2001::/16.
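The arithmetic behind "effectively 32 bits of assignable space": if the smallest BGP-routable prefix is a /48 and (per the comment's premise) all allocations come from a single /16, the number of distinct routable prefixes equals the number of IPv4 addresses. This only checks the comment's own premise, not how IANA actually allocates from 2000::/3:

```python
# Distinct /48s inside one /16:
routable_v6_prefixes = 2 ** (48 - 16)
ipv4_addresses = 2 ** 32
print(routable_v6_prefixes == ipv4_addresses)  # True

# And /40 allocations from a /16 mirror /24s from the IPv4 /0:
print(2 ** (40 - 16) == 2 ** 24)  # True: same number of chunks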
TLDR: browsers should use SRV to look up ports as well as addresses, and SRV should return geo information so clients can choose the server closest to them. If we did that, the IPv4 space is perfectly large enough, because a single IPv4 address can support hundreds or thousands of customers of the same ISP. Effectively a /32 IPv4 address is no different from a /40 IPv6 prefix: the additional bits considered part of the address in IPv6 can be encoded in the port number for IPv4.
What happens when a legacy host sends a 32-bit address to a 128-bit endpoint? It doesn't have enough information to forward it anywhere.
I think that's meant to be covered by the "IPv4x when we can. NAT when we must" part, in particular "ISPs used carrier‑grade NAT as a compatibility shim rather than a lifeline: if you needed to reach an IPv4‑only service, CGNAT stepped in while IPv4x traffic flowed natively and without ceremony."
It seemed strange that the need for CGNAT wasn't mentioned until after the MIT story. The "Nothing broke" claim in that story seems unlikely; I was on a public IP at University at the end of the 90s and if I'd suddenly been put behind NAT, some things I did would have broken until the workarounds were worked out.
> "ISPs used carrier‑grade NAT as a compatibility shim rather than a lifeline: if you needed to reach an IPv4‑only service, CGNAT stepped in while IPv4x traffic flowed natively and without ceremony."
What's the difference between that and dual stack v4/v6, though? Other than not needing v6 address range assignments, of course.
Try an IPv6-only VPS and see how quickly something breaks for you. Dual-stack fails miserably when the newer stack is incompatible with the older one. With a stack that extends the old stack, you always have something to fallback to.
To replace something, you embrace it and extend it so the old version can be effectively phased out.
> Try an IPv6-only VPS and see how quickly something breaks for you.
Who's arguing for that? That would be completely non-viable even today, and even with NAT64 it would be annoying.
> Dual-stack fails miserably when the newer stack is incompatible with the older one.
Does it? All my clients and servers are dual stack.
> With a stack that extends the old stack, you always have something to fallback to.
Yes, v4/v6 dual stack is indeed great!
> To replace something, you embrace it and extend it so the old version can be effectively phased out.
Some changes unfortunately really are breaking. Sometimes you can do a flag day, sometimes you drag out the migration over years or decades, sometimes you get something in between.
We'll probably be done in a few more decades, hopefully sooner. I don't see how else it could have realistically worked, other than maybe through top-down decree, which might just have wasted more resources than the transition we ended up with.
> We'll probably be done in a few more decades...
I don't see IPv4 going away within the next fifty years. I'd not be surprised for it to last for the next hundred+ years. I expect to see more and more residential ISPs provide their customers with globally-routable IPv6 service and put their customers behind IPv4 CGNs (or whatever the reasonable "Give the customer's edge router a not-globally-routable IPv4 address, but serve its traffic with IPv6 infrastructure" mechanism to use is). That IPv4 space will get freed up to use in IPv4-only publicly-facing services in datacenters.
There's IPv4-only software out there, and I expect that it will outlive everyone who's reading this site today. That's fine. What matters is getting proper IPv6 service to every Internet-connected site on (and off) the planet.
With you on “IPv6 only will become a thing for many clients”, but servers (or at least load balancers) will absolutely not stay v4-reachable only.
They’re already not. For example, I believe you won’t get an iOS app approved for distribution by Apple these days if it doesn’t work on v6-only clients.
> With you on “IPv6 only will become a thing for many clients"...
That's not what I said. I said that having a globally-routable IPv4 address assigned to a LAN's edge router will stop being a thing. Things like CGN (or some other sort of translation system) will be the norm for all residential users.
> ...but servers (or at least load balancers) will absolutely not stay v4-reachable only.
Some absolutely will. There's a lot of software and hardware out there that's chugging along doing exactly what the entity that deployed it needs it to do... but - for one of a handful of reasons - will never, ever be updated again. This is fine. The absolute best thing any programmer can do is to create a system that one never has to touch again.
> That's not what I said. I said that having a globally-routable IPv4 address assigned to a LAN's edge router will stop being a thing. Things like CGN (or some other sort of translation system) will be the norm for all residential users.
That's still what I would call a v6-only (with translation mechanisms) client deployment. Sorry for being imprecise on the "with translation mechanisms" part.
> Some absolutely will.
Very few, in my prediction. We're already seeing massive v6 + CG-NAT-only deployments these days, and the NAT part is starting to have worse performance characteristics: Higher latency because the NATs aren't as geographically distributed as the v6 gateway routers, shorter-lived TCP connections because IP/port tuples are adding a tighter resource constraint than connection tracking memory alone etc.
This, and top-down mandates like Apple's "all apps must work on v6 only phones", is pushing most big services to become v6 reachable.
At some point, some ISP is going to decide that v6 only (i.e. without translation mechanisms) Internet is "enough" for their users. Hackers will complain, call it "not real Internet" (and have a point, just like I don't consider NATted v4 "real Internet"!), but most profit-oriented companies will react by quickly providing rudimentary v6 connectivity via assigning a v6 address to their load balancer and setting an AAAA record.
I agree that v4 only servers will stick around for decades, just like there are still many non-Internet networks out there, but v4 only reachability will become a non-starter for anything that humans/eyeballs will want to access. And at some point, the fraction of v4-only eyeballs will become so small that it'll start becoming feasible to serve content on v6 only. At that point, v4 will be finally considered "not the real Internet" too.
> Very few, in my prediction.
Sure, I agree. I'm not sure how you got the notion that I thought a large percentage of systems out there will never get IPv6 support. There's a lot of solid systems out there that just fucking run. They're a small percentage of all of the deployed machines in the world.
> That's still what I would call a v6-only (with translation mechanisms) client deployment.
When people say "IPv6 only", they mean "Cannot connect to IPv4 systems". IMO, claiming it means anything else is watering down the definition into meaninglessness. Consider it in the context of what someone means when they envision a future where the Internet is "IPv6 only", so we don't need to deal with the "trouble" and "headache" of running both v4 and v6.
> We're already seeing massive v6 + CG-NAT-only deployments these days...
Yeah, it's my understanding that that's been the situation for a great many folks in the Asia/Pacific part of the world for a while now. Lots and lots of hosts, but not much IPv4 space allocated.
It can't even address a 128 bit endpoint, so nothing would happen.
Sure it can: the DNS server returns the A record if your client doesn't understand AX. It just won't work.
Honestly, this backwards compatibility thing seems even worse than IPv6 because it would be so confusing. At least IPv6 is distinctive on the network.
The question was about forwarding as I understand it, not address resolution, and there simply won't be any forwarding, since the 32-bit-only sending host won't be able to address the 128-bit receiving one.
In 100 years, IPv4 will be recognised as one of the great discoveries, like calculus. IPv6 is a misnomer, really - it's a separate, and lesser, protocol. Much like other second systems, it was too ambitious and not pragmatic enough.
Rather than looking down on IPv4, we should admire how incredible its design was. Its elegance, intuitiveness, and resourcefulness have all led to it outlasting every prediction of its demise.
Em-dashes in this article.
What is described here is basically just CIDR plus NAT which is...what we actually have.
At the time IPv6 was being defined (I was there), CIDR was just being introduced and NAT hadn't been deployed widely. If someone had raised their hand and said "I think if we force people to use NAT and push down on the route table size with CIDR, it'll be ok" (nobody said that, iirc), they would have been rejected, because sentiment was heavily against creating a two-level network. After all, having uniform addressing was pretty much the reason internet protocols took off.