The justification for 16 bits of subnetting is a little more pie-in-the-sky, I'll grant you, but given a 64-bit network numbering space, there's really no disadvantage to giving out /48s and very little (or no) advantage to giving out smaller chunks to end-sites, regardless of their residential or commercial nature.
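To put numbers on those 16 bits of subnetting, here's a quick Python sketch (plain arithmetic, nothing assumed beyond the prefix lengths already being discussed in this thread):

    # A /48 leaves 64 - 48 = 16 bits of subnet numbering per end site.
    print(2 ** (64 - 48))   # 65,536 /64 networks in a /48
    print(2 ** (64 - 56))   # 256 /64 networks in a /56, for comparison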
Let's assume that ISPs come in essentially three flavors: MEGA (the Verizons, AT&Ts, Comcasts, etc. of the world), having more than 5 million customers; LARGE, having between 100,000 and 5 million customers; and SMALL, having fewer than 100,000 customers.
Let's assume the worst possible splits and add one nibble to the minimum needed for each ISP and another nibble for overhead.
Further, let's assume that 7 billion people on earth all live in individual households and that each of them runs their own small business, bringing the total customer base worldwide to 14 billion.
If everyone subscribes to a MEGA and each MEGA serves 5 million customers, we need 2,800 MEGA ISPs. Each of those will need 5,000,000 /48s, which would require a /24. Let's give each of those an additional 8 bits for overhead and bad splits and say each of them gets a /16. That's 2,800 out of 65,536 /16s, and we've served every customer on the planet, with a lot of extra overhead, using approximately 4% of the address space.
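If you want to check the MEGA arithmetic yourself, here's a small Python sketch of the same worst-case assumptions (no figures beyond the ones in the paragraph above):

    END_SITES = 14_000_000_000              # 7 billion households + 7 billion small businesses

    mega_isps = END_SITES // 5_000_000      # 2,800 MEGA ISPs
    # 5,000,000 /48s fits in a /24 on a nibble boundary; +8 bits of slack -> a /16 each.
    slash16s = 2 ** 16                      # 65,536 /16s in the whole space
    print(mega_isps, mega_isps / slash16s)  # 2800, ~0.043 -> roughly 4% of all /16s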
Now, let's make another copy of earth and serve everyone on a LARGE ISP with only 100,000 customers each. This requires 140,000 LARGE ISPs, each of whom will need a /28 (100,000 /48s doesn't fit in a /32, so we bump them up to /28). Adding in bad splits and overhead at a nibble each, we give each of them a /20. That's 140,000 /20s out of 1,048,576 total, of which we used 44,800 for the MEGA ISPs, leaving us with 863,776 /20s still available. We've now managed to burn approximately 18% of the total address space, and we've served the entire world twice.
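The LARGE case works out the same way; a self-contained sketch of that paragraph's arithmetic, counted in /20s:

    END_SITES = 14_000_000_000

    mega_in_20s = (END_SITES // 5_000_000) * 16   # each MEGA /16 is 16 /20s -> 44,800
    large_isps  = END_SITES // 100_000            # 140,000 LARGE ISPs, a /20 each
    slash20s    = 2 ** 20                         # 1,048,576 /20s in the whole space

    used = mega_in_20s + large_isps               # 184,800 /20s
    print(slash20s - used)                        # 863,776 /20s still available
    print(used / slash20s)                        # ~0.176 -> roughly 18%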
Finally, let us serve every customer in the world using a SMALL ISP. Let's assume that each SMALL ISP serves only about 5,000 customers. For 5,000 customers, we would need a /32. Backing that off two nibbles for bad splits and overhead, we give each one a /24.
This will require 2,800,000 /24s. (I realize lots of ISPs serve fewer than 5,000 customers, but those ISPs also don't serve a total of 14 billion end sites, so I think, in terms of averages, this is not an unreasonable place to throw the dart.)
There are 16,777,216 /24s in total, and we've already used the equivalent of 2,956,800 of them for the MEGA and LARGE ISPs, so adding the 2,800,000 for the SMALL ISPs brings our total utilization to 5,756,800 /24s.
We have now built three complete copies of the Internet, with some really huge assumptions about the number of households and businesses added in, and we still have used only roughly 34% of the total address space, including nibble-boundary round-ups and everything else.
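And the combined tally for all three copies, counted in /24s (again, just re-deriving the figures above):

    END_SITES = 14_000_000_000

    mega  = (END_SITES // 5_000_000) * 256   # 2,800 /16s   = 716,800 /24s
    large = (END_SITES // 100_000)   * 16    # 140,000 /20s = 2,240,000 /24s
    small =  END_SITES // 5_000              # 2,800,000 /24s, one per SMALL ISP

    used = mega + large + small              # 5,756,800 /24s
    print(used, used / 2 ** 24)              # out of 16,777,216 total: ~0.343 -> roughly 34%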
I propose the following: Let's give out /48s for now. If we manage to hit either of the following two conditions in less than 50 years, I will happily (assuming I am still alive when it happens) assist in efforts to shift to more restrictive allocations.
Condition 1: If any RIR fully allocates more than 3 /12s worth of address space total
Condition 2: If we somehow manage to completely allocate all of 2000::/3
I realize that Condition 2 is almost impossible without meeting Condition 1 much, much earlier, but I put it there just in case.
If we reach a point where EITHER of those conditions becomes true, I will be happy to support more restrictive allocation policy. In the worst case, we have roughly 3/4 of the address space still unallocated when we switch to more restrictive policies. In the case of Condition 1, we have a whole lot more. (At most, we've used roughly 15[1] of the 512 /12s in 2000::/3, or less than 0.4% of the total address space.)
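For the record, here's the Condition 1 worst case recomputed (assuming 5 RIRs each hitting the 3-/12 trigger; double the RIR count for the scenario in the footnote):

    rirs, trigger = 5, 3
    used_12s  = rirs * trigger      # 15 /12s
    in_2000_3 = 2 ** (12 - 3)       # 512 /12s inside 2000::/3
    total_12s = 2 ** 12             # 4,096 /12s in all of IPv6

    print(used_12s / in_2000_3)     # ~0.029 of 2000::/3
    print(used_12s / total_12s)     # ~0.0037 -> under 0.4% of the total space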
My bet is that we can completely roll out IPv6 to everyone, with every end-site getting a /48, and still not burn more than 0.4% of the total address space.
If anyone can prove me wrong, then I'll help to push for more restrictive policies. Until then, let's just give out /48s and stop hand-wringing about how wasteful it is. Addresses that sit in the free pool beyond the end of the useful life of a protocol are also wasted.
Owen
[1] This figure could go up if we add more RIRs. However, even if we double it, we move from roughly 0.4% to 0.8% utilization risk with 10 RIRs.
>
>
>
>
> -----
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
>
> ----- Original Message -----
>
> From: "Daniel Corbe" <corbe at corbe.net>
> To: "Mike Hammett" <nanog at ics-il.net>
> Cc: "Mark Andrews" <marka at isc.org>, "North American Network Operators' Group" <nanog at nanog.org>
> Sent: Saturday, December 19, 2015 10:55:03 AM
> Subject: Re: Nat
>
> Hi.
>
>> On Dec 19, 2015, at 11:41 AM, Mike Hammett <nanog at ics-il.net> wrote:
>>
>> "A single /64 has never been enough and it is time to grind that
>> myth into the ground. ISP's that say a single /64 is enough are
>> clueless."
>>
>>
>>
>> LLLLOOOOOOLLLLL
>>
>>
>> A 100 gallon fuel tank is fine for most forms of transportation most people think of. For some reason we built IPv6 like a fighter jet requiring everyone have 10,000 gallon fuel tanks... for what purpose remains to be seen, if ever.
>>
>>
>
> You're being deliberately flippant.
>
> There are technical reasons why a single /64 is not enough for an end user. A lot of it has to do with the way autoconfiguration works. The lower 64 bits of the IP address are essentially host entropy. EUI-64 (for example) is a 64-bit number derived from the MAC address of the NIC.
>
> The requirement for the host portion of the address to be 64 bits long isn't likely to change, which means a /64 is the smallest possible prefix that can be assigned to an end user, and it limits said end user to a single subnet.
>
> Handing out a /56 or a /48 allows the customer premises equipment to have multiple networks behind it. It's a good practice, and there's certainly enough address space available to support it.
>
>
>
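P.S. On the quoted EUI-64 point: for anyone who wants to see it concretely, here's a minimal Python sketch of how a 64-bit interface identifier is derived from a 48-bit MAC (per RFC 4291 Appendix A: insert ff:fe in the middle and flip the universal/local bit; the MAC and function name below are just illustrative):

    def eui64_interface_id(mac: str) -> str:
        # Split the MAC, insert 0xFFFE in the middle, flip the U/L bit.
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                             # toggle universal/local bit
        eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # 8 bytes = 64 bits
        return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))

    # Example (made-up MAC): 00:25:96:12:34:56 -> 0225:96ff:fe12:3456,
    # which becomes the host half of whatever /64 the router advertises.
    print(eui64_interface_id("00:25:96:12:34:56"))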