The NBN Questions again ...
Hello MK People,

I now have several CCR 36-core routers deployed, with Q-in-Q interfaces numbering in the 4,000 range. They're in a single bridge group, and DHCP broadcasts and IPv6 multicasts have become a major issue.

A single DHCP reply generates an IPv4 broadcast to 4000 interfaces in the bridge group. The group members are all on the same split horizon, so no port-to-port traffic occurs. When IPv6 ND fires up some discoveries, again N*4000+ packets go out. Given these are 600 bytes, at N*4000 PPS, N being the number of open requests or discoveries, that's multiples of 10 Mbps of traffic going out.

I'm loath to redo the bridge concept; it would mean burning up a /30 per site for "their own subnet" space. PPPoE has been used as an answer to urgent support cases, but for 10,000 sites this is just not possible. Currently, I have a bridge rate limit in place to "slow down" (a la tar pit) floods of DHCP requests.

Where to from here?

Is there some new or interesting Cisco-like "unnumbered" scheme possible?

Can I try to move IPv6 to the individual interfaces, and drop IPv6 across the bridge?

Can I filter out all bridge DHCP and attach 4000 DHCP servers to the interfaces? (No joke.)

--
http://about.me/terry.sweetser
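[For reference, a bridge-filter "tar pit" of the kind described above might look roughly like this. This is a sketch only, not Terry's actual rules; the rate values are assumptions, and the exact limit syntax may vary by RouterOS version:]

/interface bridge filter
# sketch: pass DHCP requests toward the router at a limited rate ...
add chain=input mac-protocol=ip ip-protocol=udp dst-port=67 limit=50/1s,100 action=accept comment="DHCP at limited rate (placeholder values)"
# ... and drop the excess above the limit
add chain=input mac-protocol=ip ip-protocol=udp dst-port=67 action=drop comment="tar-pit DHCP floods"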
Hi Terry,

This may not be useful to you depending on the products that you use, but I am aware of someone who wrote their own DHCP server in Perl and modified the DHCP response to specify the CPE MAC address as the destination for the response instead of the broadcast address. It appears to work well. If you are not too tied to your DHCP product, it may be the simplest option since it would not require any changes to your current network design. It doesn't solve the IPv6 ND issue, however...

Regards,
Philip
A single DHCP reply generates an IPv4 broadcast to 4000 interfaces in the bridge group.
Not wanting to be mean, but why did you put 4000 hosts into a single broadcast domain? That's really your root problem.

That said, a DHCP reply should not be broadcast under normal circumstances; it should be unicast to the discoverer. Only the initial discover from a client should be broadcast. On some systems this is configurable, however; maybe your DHCP server has been set to broadcast its responses.

As far as IPv6 goes, DHCP works very differently. As long as your switches support MLD snooping and don't just flood all multicast packets out all ports, you should see massively reduced "broadcast" traffic compared to IPv4.
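[One quick way to check which case applies is to sniff DHCP replies on the CCR itself. A hedged example; the interface name is a placeholder:]

# sketch: watch DHCP server->client traffic (client port 68)
/tool sniffer quick interface=bridge1 ip-protocol=udp port=68

[Replies addressed to 255.255.255.255 mean the server, or the client's broadcast flag, is forcing broadcast; replies addressed to the offered IP mean normal unicast.]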
When IPv6 ND fires up some discoveries, again N*4000+ packets go out.
About the only stuff that goes to all hosts should be RA packets from routers. What other packets are you seeing to all hosts? There should be nothing except packets to ff02::1. I'm starting to wonder if your hardware supports MLD; it sounds as if multicast is being flooded out all ports.
Can I try to move IPv6 to the individual interfaces, and drop IPv6 across the bridge?
What "interfaces" are we talking about here? Router interfaces? If this is a routed network, you should certainly not be seeing floods of DHCP traffic... neither IPv6 multicast nor IPv4 broadcast should make it across a subnet boundary. I clearly don't have a good enough handle on your topology.

Regards, K.

--
Karl Auer (kauer@nullarbor.com.au)
http://www.nullarbor.com.au
work +61 2 64957435
mobile +61 428 957160
GPG fingerprint: 6D59 8AE6 810D 44E3 7626 7040 4DD6 F89F 3053 4774
Old fingerprint: 9DCA 0903 BCBD 0647 BCCC 2FA7 A35C 57A1 ACF9 00BB
Hi Terry,

Others I know, and have helped with this, have gone down the PPPoE path. It's not pretty either, but it works and is pretty reliable, and splitting CVCs across multiple routers manages the load OK. I think DHCP will always give problems somewhere, and connecting too many AVCs into a bridge is a nightmare.

Regards,
Paul
On 29/08/2016 2:53 PM, Terry Sweetser (SkyMesh) wrote:

A single DHCP reply generates an IPv4 broadcast to 4000 interfaces in the bridge group.
You're going to need a unicast-reply DHCP server; this is what is used by cable ISPs. Not sure how to fix the IPv6 side, as I've not seen how this is done by cable providers.

Mike
Hi,

Question: how did you configure this? "The group members are all on the same split horizon, so no port to port traffic occurs." I presume you're saying you have clients on the same broadcast domain, but configured so that they can't talk to each other?

Alex
Alex - refer to this: http://wiki.mikrotik.com/wiki/Manual:MPLSVPLS#Split_horizon_bridging
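[In RouterOS bridge terms, the approach on that page amounts to putting every customer-facing port into the same horizon group, so the bridge never forwards frames between them. A sketch; the bridge and interface names are placeholders:]

/interface bridge port
add bridge=bridge1 interface=AVC0000XYZXYZ horizon=1

[Frames received on a port with horizon=1 are never sent out other ports with horizon=1; ports with no horizon set, such as an upstream port, are unaffected.]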
Thanks
I have an update! On two moderately loaded CCRs, I have deployed a non-bridged solution. All of the Q-in-Q sub-interfaces are now their own broadcast domains. As I use static IP addresses, it is possible to assign a specific IP address to each AVC:
948 address=180.181.80.1/32 network=180.181.86.111 interface=AVC0000XYZXYZ actual-interface=AVC0000XYZXYZ
Yes, that works ... (the route below is auto-generated, the ARP entry comes from the DHCP "Assign ARP" feature.)
[terry@CCR2.NSW] /ip route> print detail where dst-address="180.181.86.111/32"
Flags: X - disabled, A - active, D - dynamic, C - connect, S - static, r - rip, b - bgp, o - ospf, m - mme, B - blackhole, U - unreachable, P - prohibit
 0 ADC  dst-address=180.181.86.111/32 pref-src=180.181.80.1 gateway=AVC0000XYZXYZ
        gateway-status=AVC0000XYZXYZ reachable distance=0 scope=10
[terry@CCR2.NSW] /ip route>
[terry@CCR2.NSW] /ip arp> print detail where address="180.181.86.111"
Flags: X - disabled, I - invalid, H - DHCP, D - dynamic, P - published, C - complete
 0 HDC  address=180.181.86.111 mac-address=10:BE:F5:CD:93:4B interface=AVC0000XYZXYZ published=no
[terry@CCR2.NSW] /ip arp>
The sub-interface has proxy-arp turned on, so the AVC gets to ARP for all of 180.181.80.0/20 without issue. (The CCR hands out its own MAC address; problem solved.)

Right now, I have 950 interfaces, 450 active clients and 2% CPU load on the busiest CCR. Almost every single byte is moved using FP (fast path).

So far, the only issue is: I need to wait around 1 second between adding an interface and making a /ip/dhcp-server on the interface -- the DHCP instance will go "red", aka invalid, if added too fast.

I have some concerns about ARP poisoning given the CCR now has proxy-arp running, but otherwise so far so good.

---
http://about.me/terry.sweetser
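[For anyone automating this, a sketch of per-AVC provisioning as a RouterOS script, including the one-second pause Terry mentions. The AVC name, VLAN ID and addresses are placeholders, not production values:]

# sketch: provision one AVC sub-interface plus its single-address DHCP server
:local avc "AVC0000XYZXYZ"
:local cust "180.181.86.111"
/interface vlan add interface=NBN-NNI-CVC name=$avc vlan-id=123 arp=proxy-arp
/ip address add interface=$avc address=180.181.80.1/32 network=$cust
/ip pool add name=("POOL-" . $avc) ranges=$cust
# give the new interface a moment to settle, or the DHCP instance comes up invalid ("red")
:delay 1s
/ip dhcp-server add name=("DHCP-" . $avc) interface=$avc address-pool=("POOL-" . $avc) add-arp=yes disabled=no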
Nice solution Terry, and I think even better of you for sharing and working with people on the list. Those are pretty great stats!

Regards,
Paul
Hi Terry,

That's fantastic - I love seeing this kind of creative solution. Just to clarify - are you copying the PPPoE IP addressing method by assigning a /32 to the CPE and specifying the network address as the gateway? So you would have a separate IP pool and DHCP server running on each interface. I'd be very interested to see a more complete snip of the config if you are willing to share.

Does this seem to work on all CPEs, or do you only work with particular brands? I vaguely remember trying something like this a while back, but only Mikrotik CPEs seemed happy with it.

Regards,
Philip
Hi Philip,
[terry@CCR2.NSW] /ip dhcp-server network> export
# sep/06/2016 13:03:15 by RouterOS 6.36.2
# software id = CG4T-VJTQ
#
/ip dhcp-server network
add address=180.181.80.0/20 gateway=180.181.80.1 netmask=20 ntp-server=180.181.127.1
[terry@CCR2.NSW] /ip dhcp-server network>
As above, the netmask stays nice and short: 4,000 IP addresses per NBN CVC. Proxy-arp happily handles the issue of getting from AVC to AVC. A '/32' on an ethernet port for the WAN IP address is very much a dead end on all CPE I have tested.

Also of note, the IPv6 setup and multicast load from this deployment has been far leaner. (IPv6 is trivial: add an address to the sub-interface with advertise=yes, add a DHCPv6 server with a pool set up just for the sub-interface, and voila: static IPv6 addresses and PD.)

---
http://about.me/terry.sweetser
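[A sketch of that per-AVC IPv6 recipe; the 2001:db8 prefixes and pool sizes are documentation placeholders, not the real addressing:]

# sketch: one advertised /64 per AVC, plus a per-AVC pool for prefix delegation
/ipv6 address add interface=AVC0000XYZXYZ address=2001:db8:0:1::1/64 advertise=yes
/ipv6 pool add name=PD-AVC0000XYZXYZ prefix=2001:db8:100::/56 prefix-length=60
/ipv6 dhcp-server add name=DHCP6-AVC0000XYZXYZ interface=AVC0000XYZXYZ address-pool=PD-AVC0000XYZXYZ

[With advertise=yes the RA gives the CPE its WAN address via SLAAC, and the DHCPv6 server hands out a delegated prefix from the per-interface pool, which is what makes the result effectively static.]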
Hi Terry,

I think I understand now - you put the 180.181.80.1/32 IP on every AVC interface with the customer IP as the network address, and the DHCP server on each one has a pool with a single IP address but shares the Network object. Something like this:

[common to all AVCs]
/ip dhcp-server network
add address=180.181.80.0/20 gateway=180.181.80.1 netmask=20 ntp-server=180.181.127.1

[example AVC]
/interface vlan
add interface=NBN-NNI-CVC name=AVC27 vlan-id=27 arp=proxy-arp
/ip address
add interface=AVC27 address=180.181.80.1/32 network=180.181.81.27
/ip pool
add name=POOL-AVC27 ranges=180.181.81.27
/ip dhcp-server
add address-pool=POOL-AVC27 disabled=no interface=AVC27 lease-time=30m name=DHCP-AVC27 add-arp=yes

Do I understand correctly?
Hi Philip,

That looks like a good config -- my only difference is I use RADIUS, and the auth is based on Option 82.

---
http://about.me/terry.sweetser
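[For completeness, the RADIUS hookup on the DHCP side is roughly as follows. A sketch; the server address and secret are placeholders, and keying on Option 82 happens on the RADIUS server:]

# sketch: point the DHCP servers at RADIUS; RouterOS should pass the Option 82
# Agent-Circuit-Id / Agent-Remote-Id fields through in its Access-Requests
/radius add service=dhcp address=10.0.0.5 secret=CHANGEME
/ip dhcp-server set [find] use-radius=yes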
Thanks Terry. And thanks for sharing this, I have no doubt that it will prove useful in the future.
participants (6)
- Alex Samad - Yieldbroker
- Karl Auer
- Mike O'Connor
- Paul Julian
- Philip Loenneker
- Terry Sweetser (SkyMesh)