MTU
- Subject: MTU
- From: ler762 at gmail.com (Lee)
- Date: Fri, 22 Jul 2016 18:45:49 -0400
On 7/22/16, Phil Rosenthal <pr at isprime.com> wrote:
>
>> On Jul 22, 2016, at 1:37 PM, Grzegorz Janoszka <Grzegorz at Janoszka.pl>
>> wrote:
>> What I noticed a few years ago was that BGP convergence was faster
>> with a higher MTU.
>> A full BGP table load took half as long at MTU 9192 as at 1500.
>> Of course, BGP has to be allowed to use the higher MTU.
>>
>> Anyone else observed something similar?
>
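Worth noting: to actually get a big MSS on the BGP session you generally
have to enable path MTU discovery, otherwise the TCP MSS can stay down
around 536 bytes.  A sketch from memory -- check your platform's docs:

  ! Cisco IOS, global: let TCP sessions (including BGP) do PMTUD
  ip tcp path-mtu-discovery

  # Junos: per-protocol equivalent
  set protocols bgp mtu-discovery
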
> I have read about others experiencing this, and did some testing a few
> months back -- my experience was that for low-latency links, there was a
> measurable but not huge difference. For high-latency links, with Juniper
> at least, the difference was negligible, because the TCP window size
> is hard-coded at something small (16384?), so that ends up being the limit
> rather than the TCP slow-start behavior that a larger MTU helps with.
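Back-of-the-envelope, assuming that 16384-byte window figure is right:
TCP throughput tops out around window/RTT, so on an 80 ms path that is
16384 bytes / 0.08 s = ~200 KB/s (~1.6 Mb/s) regardless of MTU.  That
would explain why a bigger MTU barely moves the needle on high-latency
links.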
I think the Cisco default window size is 16 KB, but you can change it with
  ip tcp window-size NNN
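e.g. (the value and neighbor address below are just illustrations):

  ip tcp window-size 65535

and, if I remember right, you can see what a session actually negotiated
in the sndwnd/rcvwnd fields of
  show ip bgp neighbors 192.0.2.1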
Lee
>
> With that said, we run MTU at >9000 on all of our transit links and all of
> our internal links, with no problems. Make sure to test by sending pings
> with do-not-fragment set at the maximum size configured, and without
> do-not-fragment at just slightly larger than the maximum size, to
> make sure there are no configuration mismatches due to vendor
> differences.
>
> Best Regards,
> -Phil Rosenthal
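For the record, the tests Phil describes look something like this on IOS
(9192 is just an example size; IOS counts the IP header in 'size'):

  ping 198.51.100.1 size 9192 df-bit    <- should succeed at the configured MTU
  ping 198.51.100.1 size 9193           <- no DF, one byte over: should still
                                           succeed via fragmentation

and roughly this from a Linux box (payload = 9192 - 20 IP - 8 ICMP):

  ping -M do -s 9164 198.51.100.1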
- References:
- MTU
- From: baldur.norddahl at gmail.com (Baldur Norddahl)
- MTU
- From: bill at herrin.us (William Herrin)
- MTU
- From: Grzegorz at Janoszka.pl (Grzegorz Janoszka)
- MTU
- From: pr at isprime.com (Phil Rosenthal)