• There is no Plan B: why the IPv4-to-IPv6 transition will be ugly
    94 replies, posted
[img]http://static.arstechnica.net//public/v6/styles/light/images/masthead/logo.png[/img] [url=http://arstechnica.com/business/news/2010/09/there-is-no-plan-b-why-the-ipv4-to-ipv6-transition-will-be-ugly.ars]Source[/url] [release][img]http://static.arstechnica.net/assets/2010/01/ipv4_space._ars-thumb-640xauto-11566.png[/img]

Twenty years ago, the fastest Internet backbone links were 1.5Mbps. Today we argue whether that's a fast enough minimum to connect home users. In 1993, 1.3 million machines were connected to the Internet. By this past summer, that number had risen to 769 million, and this only counts systems that have DNS names. The notion of a computer that is not connected to the Internet is patently absurd these days. But all of this rapid progress is going to slow in the next few years. The Internet will soon be sailing in very rough seas: it's about to run out of addresses, and it needs to be gutted and reconfigured for continued growth in the second half of the 2010s and beyond. Originally, the idea was that this upgrade would happen quietly in the background, but over the past few years it has become clear that the change from the current Internet Protocol version 4, which is quickly running out of addresses, to the new version 6 will be quite a messy affair.

Legacy problems

Across the computing industry, we spend enormous amounts of money and effort on keeping older, "legacy" systems running. The examples range from huge and costly to small and merely annoying: planes circle in holding patterns, burning precious fuel, because air traffic control can't keep up on systems less powerful than a smartphone; WiFi networks don't reach their top speeds because an original 802.11 (no letter), 2Mbps system could show up; you never know. So when engineers dream, we dream of leaving all of yesterday's technology behind and starting from scratch. But such clean breaks are rarely possible.
For instance, the original 10 megabit Ethernet specification allows for 1500-byte packets. Filling up 10Mbps takes about 830 of those 1500-byte packets per second. Then Fast Ethernet came along at 100Mbps, but the packet size remained the same so that 100Mbps Ethernet gear could be hooked up to 10Mbps Ethernet equipment without compatibility issues. Fast Ethernet needs 8300 packets per second to fill up the pipe. Gigabit Ethernet needs 83,000, and 10 Gigabit Ethernet needs almost a million packets per second (well, 830,000). For each faster Ethernet standard, switch vendors need to pull out even more stops to process an increasingly outrageous number of packets per second, running the CAMs that store the forwarding tables at insane speeds that demand huge amounts of power. The need to connect antique NE2000 cards meant sticking to 1500 bytes for Fast Ethernet, and then the need to talk to those rusty Fast Ethernet cards meant sticking to 1500 bytes for Gigabit Ethernet, and so on. At each point, the next step makes sense, but the entire journey ends up looking irrational.

The problem in the middle

Of course, change does manage to happen. We went from 10Mbps to 10Gbps Ethernet, from wired to wireless, and from a Web that was barely able to show blinking text to one running all manner of applications. We even gained the DNS and TCP congestion control only in the late 1980s. But the reason we were able to change all of these technologies is that they sit either above or below the Internet Protocol in the network stack. Network protocols are built as "stacks," where a number of layers each provide a part of the required functionality. The famous OSI reference model has seven layers, but the TCP/IP stack has only four.
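The per-generation figures fall straight out of the 1500-byte frame size. A quick back-of-the-envelope check, counting only back-to-back full-size frames and ignoring Ethernet framing overhead:

```python
# Packets per second needed to saturate each Ethernet generation,
# assuming back-to-back maximum-size 1500-byte frames.
FRAME_BITS = 1500 * 8  # 12,000 bits per full-size frame

for name, bits_per_sec in [("10Mbps", 10e6), ("100Mbps", 100e6),
                           ("1Gbps", 1e9), ("10Gbps", 10e9)]:
    pps = bits_per_sec / FRAME_BITS
    print(f"{name} Ethernet: ~{pps:,.0f} packets/sec")
```

This reproduces the article's rounded numbers: roughly 830, 8300, 83,000, and 830,000 packets per second.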
Starting from the bottom and moving up: the (data)link layer knows how to send packets through cables or the air; the network layer knows about routing and addressing, allowing packets to find their way through the network; the transport layer makes multi-packet communications work; and finally, the application layer makes applications work over the network. These layers map to OSI layers 2, 3, 4, and 7, respectively. Each of these layers has many different protocols to choose from, except the network layer, which has only IP. Hence the stack looks like an hourglass, with IP as the waist.

[img]http://static.arstechnica.com/09-29-2010/tcp_ip_stack.png[/img]

Ethernet operates at the datalink layer (and also on OSI layer 1, the physical layer). It supports complex networks of its own, but Ethernet networks are nonetheless limited in size and scope, and can be upgraded with relative ease. The transport layer, where TCP and UDP live, has some upgradability issues, but in principle, routers don't look beyond the network layer, so they don't care whether a packet is TCP, UDP, or some other *P. This means that changing to a new transport protocol just involves updating the end systems that send and receive the packets; the routers in the middle don't care. (Firewalls do care, hence the complications in practice.) The same is true for application protocols like HTTP, FTP, or SMTP. If your browser decides to start downloading webpages using FTP, that's between the browser and the remote server; the rest of the network doesn't care. But in contrast to the other layers of the stack, IP is everywhere. The source of the packet needs to create a valid IP packet, which all the routers along the way must process in order to send the packet down the right path. And the destination must be able to decode the IP packet. So changing the Internet Protocol means changing all hosts and all routers.
A brief history of Internet protocol transitions

In the early 1990s, it became obvious that the 32-bit addresses in the existing Internet Protocol were too small to allow for continued growth of the Internet beyond the first years of the 21st century. 4.3 billion possible addresses (of which about 3.7 billion are actually usable) only gets you so far in a world with 6, 7, or even 10 billion people. So the IETF leadership decided to adopt the OSI/ITU-T CLNP protocol to replace IP. What now? Yes, really: the Connectionless Network Protocol. CLNP is basically IP from a parallel universe where everything has a different name, such as Network Service Access Point (NSAP) for address. NSAPs are variable length with a maximum of 160 bits, which would be a nice increase over the existing 32-bit IP addresses. And due to earlier, mostly unsuccessful, government mandates, major vendors had already implemented CLNP. But the IETF constituency, suffering from a bad case of not-invented-here syndrome, wouldn't have it. So around 1995, the IP Next Generation effort resulted in a new version of the IP protocol to fix the address length limitation. The IETF took advantage of this opportunity to fix some other limitations of the existing IP version 4 (nobody knows what happened to versions 1, 2, and 3), but it exercised restraint: complaints about the new protocol are evenly balanced between "they changed too much" and "they didn't fix enough." As a result, the way in which IPv6 (don't ask what happened to version 5) interacts with Ethernet or other lower layers is rather different from IPv4, DHCP is significantly overhauled, and there's stateless autoconfiguration to configure the now 128-bit addresses. But other than that, IPv6 is still IP, so it would be fairly straightforward to transition from IPv4 to IPv6.

All of this has happened before

Interestingly, the Internet already lived through a transition from one protocol to another.
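The scale gap between the two address spaces is easy to quantify. A quick check of the raw sizes (a sketch; the lower "usable" figure reflects reserved, multicast, and private ranges, which aren't modeled here):

```python
# Raw sizes of the IPv4 and IPv6 address spaces.
ipv4_total = 2 ** 32    # 32-bit addresses
ipv6_total = 2 ** 128   # 128-bit addresses

print(f"IPv4: {ipv4_total:,} addresses")    # about 4.3 billion
print(f"IPv6: {ipv6_total:.1e} addresses")  # about 3.4 x 10^38
```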
In the 1970s, the ARPANET used NCP, the Network Control Program. NCP was full of quaint notions, such as the ability to contact a remote router and inquire whether an earlier message was received at the other end or not. (A bit like sending a letter to the mail carrier asking whether a letter you sent to a friend earlier was delivered.) Such complexities are hard to square with a lean, mean, and especially fast network, so a shiny new protocol was developed around 1980: IP/TCP. (Today, we call it TCP/IP, or IPv4, or just IP, as the TCP part is now implied.) RFC 801 describes the transition from NCP to TCP/IP. Basically, the two protocols coexisted during 1982, and as of January 1, 1983, NCP went the way of the dodo, with only TCP/IP left. All told, they needed a year to transition a network with about a hundred nodes and three main applications: telnet, FTP, and mail. So it can hardly be a surprise that doing the same (transitioning from the soon-to-be-obsolete IPv4 to the new IPv6) less than two decades later, for a network with hundreds of millions of nodes and hundreds, if not thousands, of application types, would take a good long time. The only saving grace is that the IETF started its IP next generation (IPng) effort, which eventually produced IPv6, very early, giving us nearly 20 years between the moment that IPv6 was first standardized in 1995 and the moment that the IPv4 addresses run out, probably in 2012. The bad news is that those 20 years are almost up.

Ships in the night: the IPv6 solution

While IPv4 and IPv6 are very much alike to users and applications, "on the wire" the protocols are completely separate and don't interact. In routing, we call this the "ships in the night" approach. (Hopefully, at least one of them has radar.) The advantage of this design is that there is no need to change existing IPv4 infrastructure; IPv6 is simply added as a new protocol, and all the limitations and mistakes that are part of IPv4 are left behind.
The problem with this approach is that the first person who wants to turn off IPv4 has to wait for the last person to add IPv6. It's like having a cell phone network that is not connected to the landline network. Everyone has to have both types of phones, with the expectation that in the far future, we can turn off the landlines and just use cell phones. And with cell phones, there is actually an advantage to switching: no cord. (Although landlines have some advantages, too.) With IPv6, there are no real advantages to switching for most users, who tend to be unimpressed by technological elegance and future-proofness. All of this makes the transition to IPv6 like the trackstand strategy in match sprint track cycling, where competitors try to get their opponents to take the lead by not moving themselves. Once the opponent is in the lead, the "winner" of the trackstand can take advantage of the opponent's slipstream and keep up with reduced effort. After a decade of trackstanding, the migration towards IPv6 is finally starting to get underway, but unfortunately the progress comes way too late to avoid problems when the IPv4 addresses run out. But we'll come back to that. First, let's have a look at some of the differences between IPv4 and IPv6 that get in the way of an easy transition.

You mean to say that IPv6 is actually different from IPv4???

When IPv6 was developed in the middle of the 1990s, things we take for granted today didn't exist, or weren't widely used. For instance, today pretty much every system that isn't a router or a dedicated server gets its IP address and a bunch of other information through DHCP, the Dynamic Host Configuration Protocol. DHCP is remarkably complex and error-prone compared to the way other protocols from the 1980s, such as IPX and AppleTalk, solve this problem. With DHCP, the client first broadcasts a request. Hopefully, one or more servers see the request for an address and send an IP address offer.
The client then evaluates the offers and takes one server up on its offer. The server must now remember not to give out the same address to another system until the lease runs out, and the client must remember to renew the address lease with the server before the lease expires. With IPX, this process is much simpler, and there's no dependence on server-stored information or timers. Routers broadcast a "network address," and each system creates its own individual address by combining the broadcast network address and the MAC address burned into its Ethernet chip. AppleTalk is very similar, but the addresses are much shorter, so AppleTalk uses a random number rather than a MAC address. It also sends a few packets to determine if some other system already has the chosen address. Easy as pie. When IPv6 was created, the IETF looked at protocols like IPX and AppleTalk and created stateless autoconfig, which is basically the IPX approach. Later, privacy concerns came up over embedding a machine's MAC address in its IPv6 address, so an AppleTalk-like option was added: hold the MAC address, substitute a random number, rinse and repeat every 24 hours to keep the government and other snoops on their toes. But why limit ourselves to two mechanisms for configuring IPv6 addresses (three if you count manual configuration)? DHCP was then re-imagined as DHCPv6. Although the two DHCPs share the same concepts and architecture, their message formats and many operational details are completely different: DHCP is so intimately interwoven with IPv4 that simply making the address fields larger to support IPv6 was pretty much impossible. Also, routers already broadcast (well, multicast, which is more efficient) their existence for the purpose of stateless autoconfiguration, so two critical pieces of information can be learned only from those router advertisements; they are not present in DHCPv6.
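The IPX-style mechanism IPv6 adopted is simple enough to sketch: the host takes the router-advertised /64 prefix and appends an interface identifier derived from its MAC address (flip the universal/local bit, wedge 0xFFFE into the middle; the "modified EUI-64" rule of RFC 4291). A rough illustration using Python's ipaddress module; the prefix and MAC below are made up for the example:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a router-advertised /64 prefix with a MAC-derived
    modified EUI-64 interface identifier (the stateless-autoconfig rule)."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02                                     # flip the universal/local bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert 0xFFFE mid-MAC
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(iid, "big")]

print(slaac_address("2001:db8::/64", "00:1c:42:2e:60:4a"))
# -> 2001:db8::21c:42ff:fe2e:604a
```

The privacy-address variant mentioned above simply replaces the MAC-derived identifier with a fresh random 64-bit value on a timer.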
These two pieces of information are the default gateway/router address and the number of bits in the network prefix (the equivalent of the IPv4 subnet mask). There is a third bit of crucial information that's needed before a host can enjoy network connectivity: the addresses of the local DNS servers. (Remembering those 128-bit IPv6 addresses is such a pain...) This is, of course, something that DHCPv6 can easily provide. There is also a specification, recently upgraded to "standards track" status by the IETF, for putting DNS addresses in router advertisements. But today, routers don't yet do this. The end result is a bit of a mess: all IPv6 systems support stateless autoconfig; Windows Vista and 7 support DHCPv6, but Windows XP and Mac OS X don't; on open source OSes, a DHCPv6 client can usually be installed if one doesn't come with the distribution; and Vista and 7 also use the temporary, random number-derived addresses by default, whereas other OSes don't.

The trouble ahead

Right around this point in the story, many system administrators start becoming very uncomfortable about the transition to IPv6: they can't just set up a big DHCPv6 server that logs all address assignments and call it a day. On networks that need to support anything other than a Windows Vista/7 monoculture, it's infeasible to turn off stateless autoconfig, because then some systems simply wouldn't get an IPv6 address. Worse, the information that stateless autoconfig is not in use must be multicast by routers. Just one rogue router that says otherwise can make all hosts create those unpredictable temporary addresses, and may even siphon off traffic and do all kinds of horrific things to it. Of course, a rogue router sending out unwanted router advertisements isn't any different from a rogue router sending out IPv4 DHCP information, except that enterprise switches know how to deal with the latter, whereas they don't yet know what to do with the former.
However, even if the fixes that are currently in the pipeline materialize, some sysadmins argue that the need to coordinate the information in DHCPv6 with that in router advertisements is too onerous; router people and server people shouldn't be required to talk to each other. Ten years ago, this difference between IPv4 and IPv6 might have been met with a can-do attitude, or perhaps resignation. But the peculiarities of IPv4 are now so ingrained, and memories of the multi-protocol world that existed during the last decades of the previous century have faded so far, that the lack of identicalness between IPv4 and IPv6 could be a real stumbling block today.

Set and forget

A much more fundamental problem with migrating to IPv6 is the "set and forget" issue. If an organization decides to adopt IPv6, it may have to upgrade hardware and software, the ISP must provide IPv6 connectivity and IPv6 addresses, and then the new protocol must be enabled in routers and in the DNS. So far so good. Then some IPv6 pings and IPv6 traceroutes must be performed, and when everything works as it should, everyone goes back to business as usual. But because there are so many players involved in networking, from time to time stuff breaks. This can be the result of hardware failures, broken cables, software updates, reconfigurations... you name it. Usually, ISPs and IT departments are very proactive in fixing these problems, and if they aren't, users are always helpful in pointing out how costly and unacceptable the outage is. So problems tend to be fixed fairly quickly. For IPv4. If something goes wrong with IPv6, nine out of ten times nobody even notices: most communication is over IPv4 anyway. And if a session is initiated over IPv6 but then fails, it's usually retried over IPv4 automatically. This may happen after a significant delay, or within a fraction of a second. Even if users do complain, it's not unheard of for IPv6 issues to get very low priority.
For instance, Mac OS X 10.6 Snow Leopard has a serious bug in its DNS code that makes it ignore IPv6 in the DNS when there is a CNAME record involved, a case which is not uncommon. One year and four point updates later, the bug is still there. A variation on this problem is the one where a new technology is implemented in software, but not deployed yet. Then, when the new feature is finally used, things don't work so well, often because early implementations are problematic. A non-IPv6 example is HTTP pipelining. When your browser tells you "loading 93 of 126" as you go to a website, the original HTTP protocol required a new TCP session for each of those 126 transfers. This introduced a lot of overhead. So in HTTP 1.1, it's possible to keep a TCP session open and reuse it for subsequent HTTP requests, even sending several requests without waiting for the earlier answers. That last trick is called pipelining, and it was implemented incorrectly in Microsoft's IIS 4 Web server software. Once Web browsers started to use the feature, bad things happened. So for many years, browsers had pipelining turned off by default. Something similar happened with BitTorrent and IPv6. Basic BitTorrent operation is to connect to a "tracker" and tell it which file you want to download and which port number others can use to reach you. The tracker then gives you the IP addresses and port numbers of others who are sharing the same file. BitTorrent clients connect to the addresses/ports of their peers, which are given to them by the tracker, and the peers then exchange parts of the file in question directly between them. The original BitTorrent protocol specification allowed for the use of IPv4 or IPv6 addresses, as well as domain names, as referrals from the tracker to the clients/peers. But since most people don't have a fixed domain name at home, and there were no IPv6-capable trackers in the early days of BitTorrent, only IPv4 addresses were used.
After a while, the BitTorrent protocol was amended with the capability to exchange the IPv4 addresses of peers in binary format, saving a lot of overhead in the process. Of course, this broke compatibility with IPv6. Then the biggest (well, certainly the most visible) BitTorrent tracker, the one run by The Pirate Bay, got an IPv6 address. To avoid problems with the updated BitTorrent protocol, The Pirate Bay decided to have BitTorrent clients connect to the tracker twice: once over IPv4 to receive the addresses of IPv4 peers, and once over IPv6 to receive the addresses of IPv6 peers. That strategy works well if the BitTorrent client software supports it, but not so well for existing BitTorrent client apps running on machines that were quietly IPv6-enabled. The latter case was common, as many of these clients were written in high-level languages that don't care about the differences between IPv4 and IPv6. Such a client would only see the IPv6 peers, if any.

DNS whitelisting

An example of the "can't get there from here" problem is the issue with hosts that think they have IPv6 connectivity when in reality it doesn't work. (Some years ago, Google measured these as more than 25 percent of all hosts that have IPv6.) Even though we know how to run IPv4 and we know how to run IPv6, we find ourselves in a world where both protocols exist side by side and interact in unexpected ways. Hosts that think they have IPv6 connectivity will try to connect over IPv6 to servers that have an IPv6 address in the DNS. But if the IPv6 connectivity doesn't actually work, nothing happens for a while, until the application times out and retries another address supplied by the DNS. If this is an IPv4 address, the connection will go through and everything works, but only after an annoying delay. Delays like this cost companies like Google a lot of money, so Google opted to forgo putting the IPv6 addresses of its servers in the DNS.
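The timeout-then-retry behavior described here can be sketched as a sequential connect that prefers IPv6 and falls back to IPv4. This is a simplified sketch: real "happy eyeballs" implementations race both address families in parallel instead, and the two-second timeout is an arbitrary choice for illustration:

```python
import socket

def connect_prefer_v6(host: str, port: int, v6_timeout: float = 2.0) -> socket.socket:
    """Try the host's IPv6 addresses first; if they time out or fail,
    fall back to IPv4. The delay the user sees is up to v6_timeout per
    dead IPv6 address, which is exactly the annoyance described above."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Order the candidate addresses so IPv6 is attempted before IPv4.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, stype, proto, _, sockaddr in infos:
        sock = socket.socket(family, stype, proto)
        if family == socket.AF_INET6:
            sock.settimeout(v6_timeout)  # give up on broken IPv6 quickly
        try:
            sock.connect(sockaddr)
            sock.settimeout(None)
            return sock  # first address that answers wins
        except OSError as err:
            last_error = err
            sock.close()
    raise last_error or OSError("no usable addresses")
```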
They only expose their IPv6 addresses to the DNS servers of ISPs participating in the Google over IPv6 program. So we know where we eventually want to end up: a fully IPv6-enabled network. But getting there requires either suffering timeouts, which generate support calls and lost income, or additional layers of effort and complexity: what if all of the top 100 Web properties required ISPs to get their DNS servers whitelisted the way Google does? The fact that IPv6 tends to deteriorate in the manner described above is a direct result of the decision to create IPv6 from scratch as a separate protocol from IPv4. Suppose instead that IPv6 had been a backward-compatible replacement for IPv4, so that a packet with a 32-bit source address and a 32-bit destination address could be translated from IPv6 to IPv4 without loss of information, and any IPv4 packet could obviously be translated to IPv6 without loss of information. In that situation, organizations could have simply replaced IPv4 with IPv6, so all their traffic would be IPv6, at least until it left the network and had to be translated into IPv4. This way, there would be no "set and forget," and IPv6 would work because it's used for everything. Then, once enough people were on IPv6, it would become possible to use 128-bit addresses. It's hard to tell whether this approach wouldn't have had some issues of its own, but it certainly seems that in its effort to create a clean new protocol from scratch, the IETF set the Internet community up for an uphill battle down the road.

The endgame: NAT, P2P woes, and firewalls

Today, 99 percent of the Internet is still IPv4-only. There is just no way that everything will be IPv6-enabled before we run out of IPv4 addresses in two, maybe three years. So we need to put IPv4 on some kind of life support. Actually, for those of us who have IPv4 addresses today, there really isn't much of a problem in the short term.
It's just that there won't be any addresses for new users a few years from now. But obviously, just serving existing users without any opportunity to grow won't fly with businesses. So the solution is to have users share an IPv4 address. People already do this at home using NAT (Network Address Translation), which makes it possible to go online with an entire household using only one IP address. But that level of frugality won't be enough: soon, ISPs will be doing NAT so that multiple customers share an IPv4 address. Architecturally, NAT is a bad thing. It breaks all kinds of assumptions built into protocols, so it can get in the way of applications. This is especially true for peer-to-peer applications such as BitTorrent, but also for VoIP (including Skype). However, over the years, application builders have figured out ways to work around NAT. One method involves talking to the local home gateway that's performing NAT and asking it to forward incoming connections using a port mapping protocol. This works very well when you run a NAT in your own home, but not so much when your ISP runs a big NAT. Not only will different users compete for the best port numbers, but the commonly used port mapping protocols (UPnP IGD and NAT-PMP) can't talk to a NAT several hops away. Making peer-to-peer applications work may or may not be a high priority for ISPs, who also tend to be in the video content distribution and/or voice calling business. Worse, ISP-provided NAT may actually break the IPv6 tunneling mechanisms that allow people to gain IPv6 connectivity when their ISP doesn't provide it. And deploying these expensive and complex new NAT boxes may take away resources that could otherwise be used to deploy IPv6 in the network.
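NAT-PMP, one of the two port mapping protocols just mentioned, is a tiny binary protocol (RFC 6886): the client sends a 12-byte UDP request to its default gateway on port 5351, asking it to forward an external port inward. A sketch of building such a request; this only constructs the packet, and it illustrates why the scheme breaks behind a carrier-grade NAT, which is not the client's default gateway:

```python
import struct

NATPMP_PORT = 5351  # NAT-PMP requests go to the default gateway on this UDP port

def natpmp_map_request(internal_port: int, external_port: int,
                       lifetime: int = 3600, udp: bool = True) -> bytes:
    """Build an RFC 6886 port-mapping request: version 0, opcode 1 (UDP)
    or 2 (TCP), 16 reserved bits, internal port, suggested external port,
    and requested mapping lifetime in seconds."""
    opcode = 1 if udp else 2
    return struct.pack("!BBHHHI", 0, opcode, 0,
                       internal_port, external_port, lifetime)

pkt = natpmp_map_request(6881, 6881)  # e.g. a BitTorrent client asking for 6881
print(len(pkt), pkt.hex())
# -> 12 000100001ae11ae100000e10
```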
Firewalling

In the early 2000s, Windows shipped with lots of insecure protocols enabled by default, and an unintended side effect of NAT (it prevents all the machines behind a shared IP from receiving incoming connections) actually became a marketable feature in home routers. But since IPv6 has no NAT (nor any need for it), home routers had to recreate the same effect for IPv6 using a stateful firewall. However, NATs only break incoming connections by accident, so applications can work around that pretty easily (see the port mapping method described above). Firewalls, on the other hand, break connectivity on purpose, so it's generally not possible to work around them easily. And there are no port mapping mechanisms for IPv6 (there's no NAT, remember?). All of this means that peer-to-peer protocols such as VoIP solutions and BitTorrent work worse over IPv6 than over IPv4. This situation probably won't be resolved any time soon, as people with "security" in their job title refuse to consider passing through unsolicited incoming packets in any way, shape, or form.

IPv6 NAT

In the meantime, there are heated discussions inside the IETF about whether to specify the dreaded IPv6 NAT after all. The argument against it is architectural disgust. (The IETF also passed on creating an IPv4 NAT specification, at first.) But the argument in favor is that a well-behaved IPv6 NAT wouldn't break as much software as an IPv4 NAT recompiled for 128-bit addresses would. The thing that creates most of the problems with NAT is the address sharing. With IPv6 NAT, it would be possible to create 1-to-1 address mappings rather than the 1-to-many mappings used with IPv4 NAT. So it would still be possible to obfuscate an internal network that uses private addresses, but peer-to-peer applications, if allowed through a firewall, could work with a little extra logic.
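The 1-to-1 flavor of IPv6 NAT being argued over works roughly like a prefix rewrite: swap the network prefix, keep the host bits, so every inside address maps to exactly one outside address and the mapping is reversible. A sketch in the spirit of NPTv6 (RFC 6296), using made-up prefixes and omitting the checksum-neutral adjustment a real NPTv6 translator performs:

```python
import ipaddress

def translate_prefix(addr: str, inside: str, outside: str) -> ipaddress.IPv6Address:
    """Rewrite addr's network prefix from `inside` to `outside`, keeping
    the host bits: a reversible 1-to-1 mapping, unlike the 1-to-many
    port-multiplexing of IPv4 NAT."""
    inside_net = ipaddress.IPv6Network(inside)
    outside_net = ipaddress.IPv6Network(outside)
    host_bits = int(ipaddress.IPv6Address(addr)) & int(inside_net.hostmask)
    return ipaddress.IPv6Address(int(outside_net.network_address) | host_bits)

print(translate_prefix("fd00::1", "fd00::/64", "2001:db8:1::/64"))
# -> 2001:db8:1::1
```

Because no ports are rewritten and no per-connection state is shared between hosts, a peer-to-peer application behind such a mapping only needs to learn its outside address, not fight over port numbers.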
The lack of IPv6 NAT is also cited as a hurdle to IPv6 deployment, as it's one more thing that's different between IPv4 and IPv6, and an IPv6 NAT that's different from IPv4 NAT wouldn't really help in this area. Then again, the only thing that really looks, smells, and tastes like authentic IPv4... is IPv4. At some point, IPv6 needs to be its own protocol.

Plan B

There is no plan B. Despite the long list of issues with IPv6 and its deployment, there are no alternatives. It took us the better part of two decades to get this far with IPv6, and there's no way we can come up with, implement, and deploy an alternative before the lack of IPv4 addresses becomes a serious problem. Address sharing using NAT and some form of IPv4 address redistribution (for pay or otherwise) will allow IPv4 to continue to operate without too much trouble for a few years after the depletion of the various free address pools. However, we're currently putting about 200 million new addresses into use every year, and that's with pretty tough rules about who can get addresses and fairly heavy NAT deployment already in place. Countries in northern and western Europe, which received generous allotments of IPv4 address space when the getting was good in the 1980s, still use more than one IPv4 address per inhabitant and are still growing a little. Even the US, at 1.5 billion IPv4 addresses, continues to put more IPv4 addresses into use every year. But no matter how the IPv4 address space is sliced and diced, it's not going to sustain a global network in a future where countries like China and Brazil are rapidly catching up, and where many countries in the developing world don't yet have anything close to their fair share of addresses. So even though we're still trying to figure out the question, the answer will have to be "IPv6." Some of the real and perceived shortcomings of IPv6 are currently being addressed by the IETF and other bodies.
Even though there is very little IPv6 deployment today, there's already somewhat of a legacy problem: early IPv6 implementations don't support DHCPv6, for instance. But most IPv6-capable systems are still receiving (semi-) automated software updates, so the legacy issues may not be insurmountable this time around. We will have to move through a period where IPv4 is going from good to worse because of increasing layers of NAT, while IPv6 is still in its awkward phase and lacks the secondary features or the easy compatibility that we've grown accustomed to with IPv4. And maybe some systems will be IPv4-only while others are IPv6-only. Even though the transition is not going to be as smooth as we'd hoped, we'll end up with a stronger and more stable Internet, with almost infinite potential for growth after the transition. Just as long as we make sure there's a way to get there from here. [/release]
Known this for a while, gonna enjoy the view, I suggest premium or the ultimate skew.
oh fuck oh fuck were gonna be poor
Source? Otherwise this is hard to take seriously.
Thats a long article...
Nevermind, missed it. Yea.. never heard of this source before.
I understood this less and less as the article devolved into technical stuff I don't understand, but we're in for a bumpy ride right?
[QUOTE=leontodd;25159017]Thats a long article...[/QUOTE] What. Why? [editline]11:03PM[/editline] I remember 3 years back that we were starting to get low on IP-addresses. Hopefully this transition will be done as soon as possible so we don't have to worry about it in the future.
Yea...this is gonna suck balls for networking professionals...
For anyone who doesn't want to read, view the image at the top of the page and then read this: [quote]The problem with this approach is that the first person who wants to turn off IPv4 has to wait for the last person to add IPv6.[/quote]
So basically, [U]everyone[/U] needs to switch?
I love how it opens by saying 1.5Mbs is hardly even acceptable for home usage. Where I live, the very fastest internet available to me is 768Kbs. :frown:
[QUOTE=Haxxer;25159550]So basically, [U]everyone[/U] needs to switch?[/QUOTE] Pretty much. It isn't quite as hard as the article suggests, though. I know Comcast has begun trials of IPv6, so it's starting to get there, and I'm considering switching my home network to IPv6.
Could someone sum this up nicely? I'm interested but don't have the time to read all of that.
This might not be as troublesome as we think. There's always [url=http://en.wikipedia.org/wiki/Teredo_tunneling]Teredo Tunneling[/url] [quote]Teredo is a tunneling protocol designed to grant IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. It defines a way of encapsulating IPv6 packets within IPv4 UDP datagrams that can be routed through NAT devices and on the IPv4 internet.[/quote] [editline]11:36PM[/editline] In fact. There are several [url=http://en.wikipedia.org/wiki/IPv6_transition_mechanisms]IPv6 transition mechanisms[/url]
Isn't this kind of common knowledge? It was talked about quite a bit when I was doing my Computer Science degree.
Maybe after all of this is done someone can finally figure out how to open your NAT for Xbox Live.
[QUOTE=Elexar;25159966]Could someone sum this up nicely? I'm interested but don't have the time to read all of that.[/QUOTE] Short version: We're fucked.
Is there some quick way to check if your router supports IPv6 :ohdear:?
If I run ipconfig on CMD on one of my school's computers it throws up an IPv4 [I]and[/I] IPv6 address, is this normal or is my school network already upgraded?
[QUOTE=Mister_Jack;25160130]Maybe after all of this is done someone can finally figure out how to open your NAT for Xbox Live.[/QUOTE] Quick, easy, most insecure option: enable UPnP on your router. Manual port forwarding safer solution: [url]http://support.xbox.com/support/en/us/nxe/kb.aspx?category=xboxlive&ID=908874&lcid=1033[/url]
[QUOTE=Baldr;25160200]Is there some quick way to check if your router supports IPv6 :ohdear:?[/QUOTE] Pretty much 99% of home routers can do it. The only real problem is in backbone routers and infrastructure. And people still on Windows 98.
[QUOTE=gman003-main;25160336]Pretty much 99% of home routers can do it. The only real problem is in backbone routers and infrastructure. And people still on Windows 98.[/QUOTE] They deserve to be cut off.
Well, we're boned.
I was disappointed to find my router only supported IPv4 after I'd purchased it. Most unfortunately, it has been the most reliable router I've ever had too.
[QUOTE=CoolCorky;25160273]If I run ipconfig on CMD on one of my school's computers it throws up and IPv4 [I]and[/I] IPv6 address, is this normal or is my school network already upgraded?[/QUOTE] Almost all routers today support it, but unfortunately rolling it out across the actual internet is a bit tougher.
The transition will be ugly? It can't be uglier than the address itself. Look at this fucking thing - 3ffe:1900:4545:4:200:f8ff:fe21:67cf Yuck.
[QUOTE=M2k3;25160748]The transition will be ugly? It can't be uglier than the address itself. Look at this fucking thing - 3ffe:1900:4545:4:200:f8ff:fe21:67cf Yuck.[/QUOTE] Letters? Shit, I can barely memorize my own SteamID.
[QUOTE=Kidd;25160908]Letters? Shit, I can barely memorize my own SteamID.[/QUOTE] Then you'll love [url=http://tools.ietf.org/html/rfc1924]RFC 1924[/url] representation: 4)+k&C#VzJ4br>0wv%Y That is a valid address. So is this: 0aZ!~[>jhg#Ed8){|cQ Imagine typing those in. [sp]Note the date of release for RFC 1924. April 1, 1996.[/sp]
Seems like this is going to be a huge pain in the [IMG]http://static.arstechnica.net//public/v6/styles/light/images/masthead/logo.png[/IMG]