Multicast lab 3: Any-Source Multicast with anycast RP

After giving a two-day training on multicast technology to a customer, I take the opportunity, while my lab and configurations are still ready, to share with you a series of five multicast configuration examples, together with some tests and troubleshooting. These examples are based on the labs I used to prepare for the CCIE R&S practical exam.

Content of the posts

 

Each post includes:

  1. A network drawing
  2. The configuration details
  3. Tests and debug outputs
  4. When possible, a failover test and debug outputs
  5. Some basic troubleshooting

 

Any-source multicast (ASM) with anycast RP

 

Lab design and description

The network design is the same for every scenario:

I use four Cisco Catalyst 3650 switches running IOS-XE, connected with layer-3 point-to-point interfaces to build a small routed network. OSPF runs between the switches to advertise all the networks: the point-to-point links, the loopback0 interfaces, and VLAN10 of SW-1, where the multicast source is located.
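As a reference, here is a minimal sketch of this base setup on SW-1, before any multicast configuration. The interface number and the 10.10.12.1 address match the outputs shown later in this post; the subnet masks and the Vlan10/Loopback0 addresses are assumptions following the addressing scheme visible in those outputs. The same pattern applies to the other point-to-point interfaces:

! Illustrative sketch only: masks and the Vlan10/Loopback0 addresses are assumed
SW-1(config)#interface GigabitEthernet1/0/2
SW-1(config-if)#no switchport
SW-1(config-if)#ip address 10.10.12.1 255.255.255.0
SW-1(config-if)#ip ospf 1 area 0.0.0.0
SW-1(config)#interface Vlan10
SW-1(config-if)#ip address 10.10.10.1 255.255.255.0
SW-1(config-if)#ip ospf 1 area 0.0.0.0
SW-1(config)#interface Loopback0
SW-1(config-if)#ip address 1.1.1.1 255.255.255.255
SW-1(config-if)#ip ospf 1 area 0.0.0.0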

For the multicast source, I use a laptop running two VLC instances, streaming video traffic to 233.1.2.3 and 234.1.2.3 respectively, both on port 30001.

The receiver will be simulated by the loopback0 interface of SW-4.

(Network diagram: the four switches, the VLC source on VLAN10 of SW-1, and the receiver on loopback0 of SW-4.)

 

In this scenario, we add a new loopback IP (5.5.5.5) on both SW-2 and SW-3 to provide RP redundancy: the two switches advertise the same anycast RP address.

Then, we must configure a Multicast Source Discovery Protocol (MSDP) session between the loopback0 addresses of SW-2 (2.2.2.2) and SW-3 (3.3.3.3), so that both RPs keep consistent information about the active sources.

 

 

Configuration

First, we need to enable PIM on the interfaces between the four switches, on the interface where the source is located (VLAN10 of SW-1), and on the interface facing the clients or receivers (in this case, the loopback0 of SW-4). For this, we use the ip pim sparse-mode command.

Do not forget to also enable ip multicast-routing, as shown below:

SW-1,2,3,4:
SW-X(config)#ip multicast-routing

SW-1,2,3,4: on the ptp interfaces, VLAN10 of SW-1, lo0 of SW-4: 
SW-X(config-if)#ip pim sparse-mode
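As a quick sanity check (outputs not shown here), the PIM-enabled interfaces and the PIM adjacencies can then be verified with:

SW-X#show ip pim interface
SW-X#show ip pim neighbor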

 

Then, we add a loopback5 interface on both SW-2 and SW-3. We advertise the network 5.5.5.5/32 into OSPF and enable ip pim sparse-mode on each:

SW-2/3#config t
Enter configuration commands, one per line. End with CNTL/Z.
SW-2/3(config)#int lo5
SW-2/3(config-if)#ip address 5.5.5.5 255.255.255.255
SW-2/3(config-if)#ip ospf 1 area 0.0.0.0
SW-2/3(config-if)#ip pim sparse-mode
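At this point, both SW-2 and SW-3 advertise 5.5.5.5/32 into OSPF. To see which of the two anycast instances a given switch prefers, we can simply look at the unicast route towards 5.5.5.5 (output omitted here):

SW-4#show ip route 5.5.5.5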

 

Now, we configure the MSDP session between SW-2 and SW-3. Both the connect-source and the originator-id point to loopback0, so that each RP is identified by its unique address (2.2.2.2 or 3.3.3.3) rather than by the shared anycast address:

SW-2(config)#ip msdp peer 3.3.3.3 connect-source lo0
SW-2(config)#ip msdp originator-id loopback 0 
SW-3(config)#ip msdp peer 2.2.2.2 connect-source lo0
SW-3(config)#ip msdp originator-id loopback 0

 

After a few seconds, we should see these messages in the logs of SW-2 and SW-3:

%MSDP-5-PEER_UPDOWN: Session to peer 3.3.3.3 going up
%MSDP-5-PEER_UPDOWN: Session to peer 2.2.2.2 going up

 

Finally, we must configure the RP. For this, we can use static RP, Auto-RP, or bootstrap router (BSR). Here, I configure a static RP on the four switches and, as in the previous labs, I use an ACL to restrict it to the group 233.1.2.3 only (the override keyword makes the static mapping take precedence over any dynamically learned RP):

SW-1(config)#access-list 11 permit 233.1.2.3
!
SW-1(config)#ip pim rp-address 5.5.5.5 11 override

 

 

Tests and debugging

 

Before any IGMP join

First of all, let’s check the static RP mapping on the four switches:

SW-X#show ip pim rp mapping 
PIM Group-to-RP Mappings

Acl: 11, Static-Override
RP: 5.5.5.5 (?)

 

Good, now let’s check the MSDP session status on SW-2 and SW-3 with the command show ip msdp summary:

SW-2#show ip msdp summary 
MSDP Peer Status Summary
Peer Address     AS    State    Uptime/   Reset SA    Peer Name
                                Downtime  Count Count
3.3.3.3          ?     Up       00:07:45  0     1     ?

SW-3#show ip msdp summary 
MSDP Peer Status Summary
Peer Address     AS    State    Uptime/   Reset SA    Peer Name
                                Downtime  Count Count
2.2.2.2          ?     Up       00:08:53  0     0     ?

 

We can see many more details with the command show ip msdp peer:

SW-2#show ip msdp peer 
MSDP Peer 3.3.3.3 (?), AS ?
  Connection status:
    State: Up, Resets: 0, Connection source: Loopback0 (2.2.2.2)
    Uptime(Downtime): 00:07:42, Messages sent/received: 8/9
    Output messages discarded: 0
    Connection and counters cleared 00:08:42 ago
  SA Filtering:
    Input (S,G) filter: none, route-map: none
    Input RP filter: none, route-map: none
    Output (S,G) filter: none, route-map: none
    Output RP filter: none, route-map: none
  SA-Requests: 
    Input filter: none
  Peer ttl threshold: 0
  SAs learned from this peer: 1
  Number of connection transitions to Established state: 1
  Input queue size: 0, Output queue size: 0
  MD5 signature protection on MSDP TCP connection: not enabled
  Message counters:
    RPF Failure count: 0
    SA Messages in/out: 5/0
    SA Requests in: 0
    SA Responses out: 0
    Data Packets in/out: 0/0

 

 

IGMP join

Now, let’s join the group 233.1.2.3 on the loopback0 interface of SW-4.

SW-4#config t
SW-4(config)#int lo0
SW-4(config-if)#ip igmp join-group 233.1.2.3
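To confirm that the join is registered locally, the IGMP group membership can be checked (output omitted here):

SW-4#show ip igmp groups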

 

At this point, in the multicast routing table of SW-4, we can see that the multicast traffic is coming through SW-3:

SW-4#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 233.1.2.3), 00:02:41/stopped, RP 5.5.5.5, flags: SJCL
  Incoming interface: GigabitEthernet1/0/3, RPF nbr 10.10.34.3
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:11/00:02:48

(10.10.10.10, 233.1.2.3), 00:02:41/00:00:18, flags: LJT
  Incoming interface: GigabitEthernet1/0/3, RPF nbr 10.10.34.3
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:11/00:02:48

We can also see traffic statistics with the command show ip mroute count:

SW-4#show ip mroute count 
Use "show ip mfib count" to get better response time for a large number of mroutes.

IP Multicast Statistics
3 routes using 1544 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 233.1.2.3, Source count: 1, Packets forwarded: 23200, Packets received: 23200
RP-tree: Forwarding: 4/0/0/0, Other: 4/0/0
Source: 10.10.10.10/32, Forwarding: 23196/146/348/404, Other: 23196/0/0

 

On SW-3, we can see that the multicast route (10.10.10.10, 233.1.2.3) is advertised to SW-2 via MSDP, because it has the “A” (Candidate for MSDP Advertisement) flag:

SW-3#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 233.1.2.3), 04:20:30/00:03:19, RP 5.5.5.5, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1/0/4, Forward/Sparse, 00:00:10/00:03:19

(10.10.10.10, 233.1.2.3), 02:24:25/00:02:34, flags: TA
  Incoming interface: GigabitEthernet1/0/1, RPF nbr 10.10.13.1
  Outgoing interface list:
    GigabitEthernet1/0/4, Forward/Sparse, 00:00:10/00:03:19

 

We can also see that SW-2 has received the source-active (SA) information from SW-3 via MSDP:

SW-2#show ip msdp sa-cache 
MSDP Source-Active Cache - 1 entries
(10.10.10.10, 233.1.2.3), RP 3.3.3.3, AS ?,00:12:16/00:05:35, Peer 3.3.3.3

 

 

Failover test

Now, let’s test the redundancy of this setup. We know SW-4 uses SW-3 to reach the RP, because from SW-4’s point of view the route to 5.5.5.5 via SW-3 has a better OSPF metric (a 1 Gb/s link instead of a 100 Mb/s link towards SW-2); this can be verified as shown below.
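For example, the RPF neighbor used by SW-4 to reach the anycast RP address can be displayed with (output omitted here):

SW-4#show ip rpf 5.5.5.5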

So, let’s shut down the interface between SW-4 and SW-3 to simulate a failure on that link.

I could also shut down the loopback5 interface of SW-3, but in that case the multicast traffic would keep arriving through SW-3, because SW-4 has already joined the shortest-path tree towards the source.

As soon as I shut down Gi1/0/3 on SW-4, here are the PIM messages we can see with the debug ip pim 233.1.2.3 command on the switches:

  • SW-4 loses the PIM neighbor SW-3, then sends a PIM join message to SW-2 (Gi1/0/2) for the group 233.1.2.3:
SW-4:
PIM(0): Neighbor 10.10.34.3 (GigabitEthernet1/0/3) timed out
%PIM-5-NBRCHG: neighbor 10.10.34.3 DOWN on interface GigabitEthernet1/0/3 non DR
Changing source interface for RP 5.5.5.5 register encap tunnel from GigabitEthernet1/0/3 to Loopback0
%OSPF-5-ADJCHG: Process 1, Nbr 3.3.3.3 on GigabitEthernet1/0/3 from FULL to DOWN, Neighbor Down: Interface down or detached
%LINK-5-CHANGED: Interface GigabitEthernet1/0/3, changed state to administratively down
%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/3, changed state to down
PIM(0): Insert (10.10.10.10,233.1.2.3) join in nbr 10.10.24.2's queue
PIM(0): Building Triggered (*,G) Join / (S,G,RP-bit) Prune message for 233.1.2.3
PIM(0): Insert (*,233.1.2.3) join in nbr 10.10.24.2's queue
PIM(0): Building Join/Prune packet for nbr 10.10.24.2
PIM(0):  Adding v2 (5.5.5.5/32, 233.1.2.3), WC-bit, RPT-bit, S-bit Join
PIM(0):  Adding v2 (10.10.10.10/32, 233.1.2.3), S-bit Join
PIM(0): Send v2 join/prune to 10.10.24.2 (GigabitEthernet1/0/2)
  • SW-2 receives the PIM join message, adds itself as the RP for the group 233.1.2.3, and sends a PIM join message to SW-1:
PIM(0): Received v2 Join/Prune on GigabitEthernet1/0/4 from 10.10.24.4, to us
PIM(0): Join-list: (*, 233.1.2.3), RPT-bit set, WC-bit set, S-bit set
PIM(0): Check RP 5.5.5.5 into the (*, 233.1.2.3) entry
PIM(0): Adding register decap tunnel (Tunnel0) as accepting interface of (*, 233.1.2.3).
PIM(0): Add GigabitEthernet1/0/4/10.10.24.4 to (*, 233.1.2.3), Forward state, by PIM *G Join
PIM(0): Adding register decap tunnel (Tunnel0) as accepting interface of (10.10.10.10, 233.1.2.3).
PIM(0): Insert (10.10.10.10,233.1.2.3) join in nbr 10.10.12.1's queue
PIM(0): Update GigabitEthernet1/0/4/10.10.24.4 to (10.10.10.10, 233.1.2.3), Forward state, by PIM *G Join
PIM(0): Join-list: (10.10.10.10/32, 233.1.2.3), S-bit set
PIM(0): Update GigabitEthernet1/0/4/10.10.24.4 to (10.10.10.10, 233.1.2.3), Forward state, by PIM SG Join
PIM(0): Building Join/Prune packet for nbr 10.10.12.1
PIM(0):  Adding v2 (10.10.10.10/32, 233.1.2.3), S-bit Join
PIM(0): Send v2 join/prune to 10.10.12.1 (GigabitEthernet1/0/1)
  • SW-3 sends prune messages to SW-1:
%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/4, changed state to down
%LINK-3-UPDOWN: Interface GigabitEthernet1/0/4, changed state to down
PIM(0): Neighbor 10.10.34.4 (GigabitEthernet1/0/4) timed out
%PIM-5-NBRCHG: neighbor 10.10.34.4 DOWN on interface GigabitEthernet1/0/4 DR
%OSPF-5-ADJCHG: Process 1, Nbr 4.4.4.4 on GigabitEthernet1/0/4 from FULL to DOWN, Neighbor Down: Interface down or detached
PIM(0): Prune GigabitEthernet1/0/4/233.1.2.3 from (*, 233.1.2.3) - deleted
PIM(0): Prune GigabitEthernet1/0/4/233.1.2.3 from (10.10.10.10/32, 233.1.2.3)
PIM(0): Insert (10.10.10.10,233.1.2.3) prune in nbr 10.10.13.1's queue - deleted
PIM(0): Building Join/Prune packet for nbr 10.10.13.1
PIM(0): Adding v2 (10.10.10.10/32, 233.1.2.3), S-bit Prune
PIM(0): Send v2 join/prune to 10.10.13.1 (GigabitEthernet1/0/1)
  • And SW-1 receives the prune message from SW-3 and the join message from SW-2:
PIM(0): Received v2 Join/Prune on GigabitEthernet1/0/3 from 10.10.13.3, to us
PIM(0): Prune-list: (10.10.10.10/32, 233.1.2.3) 
PIM(0): Prune GigabitEthernet1/0/3/233.1.2.3 from (10.10.10.10/32, 233.1.2.3) - deleted

PIM(0): Received v2 Join/Prune on GigabitEthernet1/0/2 from 10.10.12.2, to us
PIM(0): Join-list: (10.10.10.10/32, 233.1.2.3), S-bit set
PIM(0): Add GigabitEthernet1/0/2/10.10.12.2 to (10.10.10.10, 233.1.2.3), Forward state, by PIM SG Join
PIM(0): Received v2 Join/Prune on GigabitEthernet1/0/2 from 10.10.12.2, to us
PIM(0): Join-list: (10.10.10.10/32, 233.1.2.3), S-bit set
PIM(0): Update GigabitEthernet1/0/2/10.10.12.2 to (10.10.10.10, 233.1.2.3), Forward state, by PIM SG Join
PIM(0): Send v2 Data-header Register to 5.5.5.5 for 10.10.10.10, group 233.1.2.3

PIM(0): Received v2 Register-Stop on GigabitEthernet1/0/3 from 5.5.5.5
PIM(0): for source 10.10.10.10, group 233.1.2.3
PIM(0): Clear Registering flag to 5.5.5.5 for (10.10.10.10/32, 233.1.2.3)

PIM(0): Received v2 Join/Prune on GigabitEthernet1/0/2 from 10.10.12.2, to us
PIM(0): Join-list: (10.10.10.10/32, 233.1.2.3), S-bit set
PIM(0): Update GigabitEthernet1/0/2/10.10.12.2 to (10.10.10.10, 233.1.2.3), Forward state, by PIM SG Join

Now, we can see on SW-4 that the multicast traffic is still arriving, but via SW-2:

SW-4#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 233.1.2.3), 00:09:16/stopped, RP 5.5.5.5, flags: SJCL
  Incoming interface: GigabitEthernet1/0/2, RPF nbr 10.10.24.2
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:09:16/00:02:43

(10.10.10.10, 233.1.2.3), 00:09:16/00:01:42, flags: LJT
  Incoming interface: GigabitEthernet1/0/2, RPF nbr 10.10.24.2
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:09:16/00:02:43

SW-4#show ip mroute count 
Use "show ip mfib count" to get better response time for a large number of mroutes.

IP Multicast Statistics
3 routes using 1544 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 233.1.2.3, Source count: 1, Packets forwarded: 72467, Packets received: 72467
RP-tree: Forwarding: 8/0/0/0, Other: 8/0/0
Source: 10.10.10.10/32, Forwarding: 72459/145/345/404, Other: 72459/0/0

 

ICMP Test

I ran the same failover test as above, but this time with a continuous ICMP ping from SW-1 to the group address (233.1.2.3) running at the same time. Here is the output:

SW-1#ping 233.1.2.3 repeat 100
Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 233.1.2.3, timeout is 2 seconds:

Reply to request 0 from 4.4.4.4, 1 ms
Reply to request 1 from 4.4.4.4, 1 ms
Reply to request 2 from 4.4.4.4, 1 ms
Reply to request 3 from 4.4.4.4, 1 ms... <-- Here, I shut down the SW-4 interface towards SW-3. We can see some packets are lost.
Reply to request 7 from 4.4.4.4, 20 ms
Reply to request 8 from 4.4.4.4, 1 ms  <-- Then, we are back to normal response time
Reply to request 9 from 4.4.4.4, 1 ms
Reply to request 10 from 4.4.4.4, 1 ms
Reply to request 11 from 4.4.4.4, 1 ms
Reply to request 12 from 4.4.4.4, 1 ms  <-- Here, I did a "no shut". We can see the recovery is transparent.
Reply to request 13 from 4.4.4.4, 1 ms
Reply to request 14 from 4.4.4.4, 1 ms
Reply to request 15 from 4.4.4.4, 1 ms 
Reply to request 16 from 4.4.4.4, 1 ms
Reply to request 17 from 4.4.4.4, 1 ms
Reply to request 18 from 4.4.4.4, 1 ms
Reply to request 19 from 4.4.4.4, 1 ms

 

 

Troubleshooting

As we use a static RP, most of the troubleshooting methods are the same as in multicast lab 1; please refer to that post. The basic checks are summarized below.
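As a quick reminder, these are roughly the generic checks covered there (adapt the group and source addresses to your own setup):

SW-X#show ip pim neighbor
SW-X#show ip pim rp mapping
SW-X#show ip mroute 233.1.2.3
SW-X#show ip rpf 10.10.10.10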

 

MSDP peer check

Check the status of the MSDP peering with the commands show ip msdp summary and show ip msdp peer.
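If the peering is up but the remote sources are still not learned, the SA cache can be checked and, with care, MSDP debugging can be enabled (outputs omitted here):

SW-2#show ip msdp sa-cache
SW-2#debug ip msdp detail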

 

 

Other posts of this series

 

