Featured

An Overview of Bit Index Explicit Replication (BIER)

Introduction

IP Multicast (IPMC) efficiently forwards one-to-many traffic and is leveraged for services like IPTV or multicast VPN (mVPN) [1]. In this article we explain the basic concept of traditional IPMC, describe its shortcomings, and present Bit Index Explicit Replication (BIER) as a solution.

An IPMC group may correspond to one specific IPTV channel. Packets destined to an IPMC group address are forwarded to all its members. Receivers leverage IGMP/MLD (Internet Group Management Protocol, RFC 3376/Multicast Listener Discovery, RFC 3810) to join an IPMC group. Within an IPMC domain, typical IPMC protocols use in-network traffic replication to ensure that at most a single copy of a packet traverses each link to reach multiple receivers. To that end, they establish one IPMC tree per group, possibly per source, along which the traffic of that group is forwarded. The concept is shown in Figure 1. Examples of such protocols are PIM (Protocol Independent Multicast, RFC 7761), mLDP (Multicast Label Distribution Protocol), and RSVP-TE/P2MP (Resource Reservation Protocol – Traffic Engineering, RFC 3209, Point-to-Multipoint RFC 4875). The IPMC trees require forwarding information on intermediate hops, which we denote as ‘state’ in the following.

Figure 1: Two multicast trees.

Certain IPMC solutions for special use cases with static distribution trees – especially implementations of PIM – have proven to be useful and manageable. Nevertheless, traditional IPMC solutions suffer from limited scalability [1] [2]. Technologies to address these issues have been proposed but they cause further complexity and create new disadvantages.

BIER has been proposed by the IETF and is described in RFC 8279 [3]. The basic idea is to remove the IPMC-group-dependent state and the need for explicit-tree building from devices in the middle of the network to improve the scalability of the IPMC domain. This is achieved by adding a BIER header to IPMC packets. Within such a BIER domain, the packets are forwarded only according to this header.

Shortcomings of Traditional Multicast

Traditional IPMC solutions like PIM, mLDP, or RSVP-TE/P2MP rely on per-group IPMC tree state. This tree state limits scalability in three ways.

P0. Devices have to store state per IPMC group.

P1. The IPMC protocol has to actively create, change, and tear down the IPMC trees whenever IPMC groups start, change, or stop.

P2. In case of a topology change, the forwarding structure may need to change. Thus, the states of all IPMC groups possibly require adaptation. The time needed for that process scales with the number of IPMC groups.

Several additional technologies have been introduced to address these issues, but they come with new disadvantages. Ingress replication is a tunnel-based approach that avoids additional state by utilizing unicast tunnels to build an IPMC tree, at the expense of reduced forwarding efficiency. PMSI (Provider Multicast Service Interfaces, RFC 6513) leverages aggregated trees to carry the traffic of multiple IPMC applications, which causes significant signaling overhead. RSVP-TE/P2MP is a heavyweight approach that reduces convergence time for IPMC with pre-established backup tunnels. All these approaches have to be managed by operators, making traditional IPMC more complex, more expensive, less reliable, and overall challenging to deploy.

Bit Index Explicit Replication (BIER)

BIER proposes a replicating fabric technology which allows an operator to forward IPMC traffic efficiently without the need for explicit IPMC tree state in intermediate devices. In this section, we describe the concept of BIER, explain BIER’s forwarding procedure in detail, and outline how it addresses the previously mentioned shortcomings of traditional IPMC.

Figure 2: Packets enter the BIER domain via Bit-Forwarding Ingress Routers (BFIRs). They construct and push a BIER header onto the packet which holds information for BIER’s forwarding procedure. At the Bit-Forwarding Egress Routers (BFERs), the BIER header is removed.

BIER Concept

The concept of BIER is illustrated in Figure 2. Traffic enters a BIER domain through a Bit-Forwarding Ingress Router (BFIR) and is replicated efficiently to potentially many Bit-Forwarding Egress Routers (BFERs). The BFIR adds a BIER header to the packets. This header contains information about the set of BFERs to which a copy of the packet is to be delivered. The BFERs remove the BIER header from the packets before they leave the BIER domain.

The BIER header is leveraged by all Bit-Forwarding Routers (BFRs) within the BIER domain to efficiently forward the traffic along a tree structure or even any acyclic graph that is determined from the underlay information, normally carried by the IGP (Interior Gateway Protocol). More specifically, the BIER header contains a bit string where each bit corresponds to a specific BFER. The BFIR sets that bit if the corresponding BFER should receive the packet.

A BFR relays and replicates BIER traffic based on that header information and its so-called Bit Index Forwarding Table (BIFT). The BIFT holds the next-hop information for every possible destination (BFER). Therefore, the size of the BIFT is independent of the number of IPMC groups. Real deployments may group the forwarding information for destinations that are reached via the same next-hop. This reduces the number of forwarding entries even further so that it scales with the number of a BFR’s next-hops. The forwarding procedure ensures that a next-hop receives only a single copy of a packet even though the packet’s BIER header indicates multiple destinations with that next-hop. To forward BIER traffic consistently, the BIFTs are commonly configured with shortest path entries towards the BFERs. BIER acquires this information from the IGP topology database of the underlying routing protocol, e.g. ISIS (Intermediate System to Intermediate System) or OSPF (Open Shortest Path First).

BIER Forwarding

In the following, we explain how traffic is forwarded with BIER along a shortest-path tree and illustrate it with an example. Figure 3 shows a network topology together with the shortest-path tree from Node 1 towards all destinations.

Figure 3: Example topology with the shortest-path forwarding tree for Node 1.

The BFERs are numbered, and each number is assigned to a bit position in the bitstring of the BIER header. Counting starts at the least-significant bit of the bitstring: the bitstring ‘000001’ corresponds to Node 1 and ‘100000’ corresponds to Node 6.
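A small Python sketch of this numbering convention (illustrative only, not part of the specification):

```python
def bfer_bitstring(bfr_ids, length=6):
    """Return a bitstring with the bit of each listed BFER set.
    Counting starts at the least-significant bit: BFR-id 1 -> rightmost bit."""
    bits = 0
    for bfr_id in bfr_ids:
        bits |= 1 << (bfr_id - 1)
    return format(bits, "0{}b".format(length))

bfer_bitstring([1])     # '000001'  (Node 1)
bfer_bitstring([6])     # '100000'  (Node 6)
bfer_bitstring([3, 6])  # '100100'  (Nodes 3 and 6)
```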

The BFR needs to ensure that all destinations receive a copy of the packet. To that end, the BFR forwards a copy to each next-hop that is on the path to at least one destination indicated in the BIER header. In our example, we assume that Node 1 receives a packet with a bitstring ‘100100’ in the BIER header, i.e., the bits for Node 3 and Node 6 are activated. Therefore, Node 1 sends a copy of the packet to Node 3 and Node 2.

To prevent duplicates, a BFR clears all bits in the bitstring of a packet’s BIER header that are not reached via the next-hop the packet is forwarded to. This ensures that there is only a single packet on the way towards each desired destination in spite of packet replication. In our example, Node 1 unsets the bit for Node 6 when forwarding the packet to Node 3 (‘000100’) and it unsets the bit for Node 3 when forwarding the packet to Node 2 (‘100000’).
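The two bit operations from this example can be checked with plain bitwise arithmetic (Node 3 corresponds to the third bit, Node 6 to the sixth):

```python
packet = 0b100100              # bits for Node 3 and Node 6 set

to_node3 = packet & ~0b100000  # unset Node 6's bit -> 0b000100
to_node2 = packet & ~0b000100  # unset Node 3's bit -> 0b100000
```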

We now explain how a BFR achieves this forwarding behavior efficiently, using the bitstring of a packet’s BIER header and its BIFT. The BIFT contains for every destination a so-called Forwarding Bit Mask (F-BM) and a next-hop. The F-BM is a bitmask whose bit positions correspond to the same BFERs as the bit positions in the bitstring of a BIER header. Activated bits in the F-BM indicate the BFERs that are reached via the specific next-hop. Therefore, all destinations reached via the same next-hop share the same F-BM. As an example, the BIFT of Node 1 is given in Table 1. For destination Node 3, the next-hop is Node 3 and the corresponding F-BM indicates that only Node 3 is reached via Node 3. For the destinations Node 2, Node 4, Node 5, and Node 6, the next-hop is Node 2 and the F-BM indicates that all these nodes share the next-hop Node 2.

Table 1: BIFT of Node 1

To process a packet efficiently, the BFR creates an internal copy of the bitstring and performs the following algorithm until all bits of that copy are zero. The BFR finds a destination whose bit is set in the internal copy. It looks up the F-BM and next-hop for that destination in the BIFT and constructs a new BIER header by ANDing the packet’s bitstring with the F-BM. Then it sends a copy of the packet with the modified bitstring to that next-hop. Afterwards, the internal copy of the bitstring is ANDed with the complement of the F-BM, which removes all destinations that have been served by the last transmission of the packet.
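A minimal Python sketch of this loop, using the BIFT of Node 1 as described in the text (the `send` callback and the lowest-set-bit destination choice are illustrative assumptions, not mandated by the RFC):

```python
# BIFT of Node 1 (cf. Table 1): destination BFER -> (F-BM, next-hop).
# Bit convention: the least-significant bit corresponds to Node 1.
BIFT = {
    2: (0b111010, 2),  # Nodes 2, 4, 5, 6 share next-hop Node 2 and its F-BM
    3: (0b000100, 3),  # only Node 3 is reached via next-hop Node 3
    4: (0b111010, 2),
    5: (0b111010, 2),
    6: (0b111010, 2),
}

def forward(bitstring, bift, send):
    """Replicate a packet following the BIER forwarding loop of RFC 8279."""
    remaining = bitstring                             # internal copy
    while remaining:
        # Pick a destination still indicated, here the lowest set bit.
        dest = (remaining & -remaining).bit_length()  # 1-based BFR-id
        fbm, next_hop = bift[dest]
        send(next_hop, remaining & fbm)  # copy carries only the served bits
        remaining &= ~fbm                # drop destinations just served

copies = []
forward(0b100100, BIFT, lambda nh, bs: copies.append((nh, bs)))
# copies == [(3, 0b000100), (2, 0b100000)], matching the example above
```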

BIER – A Scalable Multicast Approach

BIER overcomes the previously outlined problems of IPMC. It solves the problem of IPMC-group-dependent state within forwarding devices (P0) by moving this state to the BIER header. In case of changing IPMC groups (P1), only BFIRs require an update, as they construct the BIER header that indicates the destinations of the packet. Finally, the BIFT of every BFR holds forwarding entries for all BFERs in the network in a compact form. In case of a topology change (P2), only that information has to be updated instead of the tree state of potentially many IPMC groups, which takes a long time. As a result, the reconvergence time of BIER is comparable to that of IP unicast rather than to that of traditional IPMC protocols.

By transferring the state from the forwarding devices to the header, the size of the header becomes a scalability issue, as one bit is required for every BFER. With current router technology, 256 bits will be the most commonly used bitstring length because this is equivalent in size to the two IPv6 addresses in every IPv6 header. Longer bitstrings may be supported by future hardware. If there are more than 256 BFERs within the network, BIER supports partitioning the BFERs into subsets. The BIER header contains a field that identifies the subset addressed by a BIER packet. Thus, if an IPMC packet targets BFERs from different subsets, one copy of the packet has to be forwarded per subset.
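RFC 8279 identifies each subset by a set identifier (SI) derived from the BFR-id and the bitstring length (BSL); the mapping can be sketched as:

```python
def si_and_bit(bfr_id, bsl=256):
    """Split a BFR-id into (set identifier, bit position) for a given
    bitstring length, following RFC 8279's convention:
    SI = (BFR-id - 1) // BSL, bit position = (BFR-id - 1) % BSL + 1."""
    si, offset = divmod(bfr_id - 1, bsl)
    return si, offset + 1

si_and_bit(1)    # (0, 1):   first BFER of set 0
si_and_bit(256)  # (0, 256): last BFER that still fits in set 0
si_and_bit(257)  # (1, 1):   first BFER of set 1, needs a separate packet copy
```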

Use Cases

At the beginning of the BIER standardization journey, ten use cases were envisioned as technology drivers [1]. In this section we briefly describe the most prominent use cases, namely various multicast Layer 2/3 VPNs (L2/3VPNs), IPTV media streaming, data center virtualization services, and financial services. We outline problems that occur when these use cases are supported with traditional IPMC approaches and point out how BIER may be used to solve these problems.

Multicast VPN Services

Multicast within VPNs is used for news tickers, broadcast-TV applications, and content delivery networks (CDNs) in general. For signaling in traditional multicast VPN (mVPN) services, PIM, mLDP, RSVP-TE/P2MP, or ingress replication is used. Each implementation offers a trade-off between state and flooding. The Multidirectional Inclusive PMSI (MI-PMSI) relies on flooding frames to all provider edge (PE) routers of the VPN, regardless of whether an IPMC receiver joined behind the PE routers. This results in a rather steady IPMC tree at the expense of flooding. In Selective PMSI (S-PMSI), only PE routers with joined receivers are part of the IPMC tree. S-PMSI reduces flooding with a more dynamic tree, requiring more state on the provider’s core routers (P routers). With ingress replication, the ingress PE router sends multiple copies of the same frame via unicast tunnels to the destinations. This places a high replication burden on ingress routers and a high bandwidth burden on paths.

Requiring IPMC-group-dependent state is a typical problem network operators are faced with (P0). With the introduction of BIER, this problem no longer exists.

IPTV Media Streaming

IPMC is leveraged for IPTV, or Internet video distribution in CDNs. Typical implementations like PIM, mLDP, or RSVP-TE/P2MP generate IPMC-group-dependent state as described in the previous use case. Additionally, such media streaming services may experience extensive subscription changes as every time a user switches a channel, the IPMC groups may have to be adapted. This may cause a high update frequency of IPMC state.

BIER solves the problem of requiring IPMC-group-dependent state (P0). In particular, changes of subscriptions can be managed by reconfiguring BFIRs instead of potentially many devices (P1), so that core routers are not affected.

Data Center Virtualization Services

Virtual eXtensible LAN (VXLAN, RFC 7348) interconnects L2 networks over an L3 infrastructure. It encapsulates L2 frames in UDP and adds a 24-bit ID so that 16 million virtual network instances (VNIs) can be differentiated. Each VNI is an isolated virtual network similar to a VLAN. That technology is used to isolate VLANs of multiple tenants in modern multi-tenant datacenters.

Typically, a tenant interconnects its virtual machines (VMs) over an L3 infrastructure using one or multiple VNIs to logically separate its own traffic and to isolate it from other tenants’ traffic. If a VM is moved from one physical machine to another or even to another datacenter, there is no need to change its IP address as long as the VM remains in the same VNI.

IPMC can be leveraged to distribute broadcast, unknown, and multicast (BUM) traffic over the L3 infrastructure within a single VNI. One or even multiple IPMC groups are needed per tenant and, therefore, the number of IPMC groups may be very large. Thus, this use case faces again the IPMC state problem (P0), causing significant challenges for datacenter switches, data and forwarding planes, as well as for network operation and management. That problem may be solved by leveraging BIER instead of traditional IPMC protocols in the L3 underlay network.

Financial Services

IPMC is used to deliver real-time stock market data to subscribers. Such highly time-dependent data requires fast recalculation of paths in case of a topology change to satisfy latency requirements.

For traditional IPMC, a topology change requires a significant amount of time since potentially many IPMC trees have to be recomputed to restore connectivity and establish new shortest paths.

As BIER relies only on one IPMC-group-independent forwarding structure, its recomputation is significantly faster (P2).

Recent Working Group Achievements

The BIER working group developed BIER and provided several extensions, increasing its applicability and facilitating its deployment. We recap the results of the BIER working group below.

RFC 8279 [3] specifies the BIER architecture. Among other things, it contains information about the BIER domain and its components, explains how the forwarding procedure works, and briefly outlines the advantages of BIER compared to traditional IPMC solutions. RFC 8296 [4] defines the BIER encapsulation in MPLS and non-MPLS networks.

Signaling via PIM through the BIER domain, e.g. for subscriptions of receivers at a sender, is described in [9].

For operation in a real network, BIER devices need to share BIER-related information with each other; for example, BFRs have to advertise their BFR-ids or supported bitstring lengths. BIER leverages link-state routing protocols to perform this distribution. [5], [6], and [7] contain OSPF, ISIS, and BGP extensions for this purpose. The latter is complemented by a document on a BGP link-state extension for BIER [8].

Outlook

With the standardization of BIER, a new charter for the BIER working group [10] has been proposed. The main goal is to generate new experimental RFCs and to move existing experimental RFCs to the Standards Track.

The BIER working group has to define a transition mechanism for BIER. It should describe how BIER could be introduced in existing IPMC networks. This will facilitate the deployment of BIER.

The charter proposes documenting the applicability of BIER and its use cases. A draft for the application of BIER to multicast L3VPN and EVPN is required. Mechanisms for the signaling between ingress and egress routers and improving scalability are also mentioned. Furthermore, a document that clearly discusses the benefits of BIER for specific use cases is desired.

Operation, administration, and management of the BIER domain have to be described. The simplification of IPMC traffic management with BIER is a particular focus and for this purpose management APIs are required.

The BIER working group will continue the work on BIER-TE, an extension to BIER to support traffic engineering (TE). In software-defined networks (SDN), BIER may profit from a controller-based architecture. A controller may calculate the entries of the BIFTs and configure them in the BFRs. It may also instruct the BFIRs with appropriate BIER headers for encapsulation of traffic from specific IPMC groups.

Summary

BIER is a new, innovative mechanism for efficient forwarding and replication of IPMC traffic. It addresses scalability, operational, and performance issues of traditional IPMC solutions. While the latter require per-IPMC-group state and explicit-tree building in the forwarding devices, BIER encodes the destinations of an IPMC group within the packet’s BIER header. The header is created by Bit-Forwarding Ingress Routers (BFIRs) when an IPMC packet enters the BIER domain. BIER scales very well as no IPMC-group-dependent information is required by forwarding nodes in the network core.

The collaboration in the BIER working group benefits from the participation of a large group of different vendors, operators, and researchers. Many companies have invested effort in the standardization of BIER, which underlines its importance for future IPMC solutions. The spirit of the BIER working group is special even within the IETF. New ideas and use cases are always appreciated and discussed, and the community welcomes new members.

[1] Nagendra Kumar, Rajiv Asati, Mach Chen, Xiaohu Xu, Andrew Dolganow, Tony Przygienda, Arkadiy Gulko, Dom Robinson, Vishal Arya, and Caitlin Bestler. BIER Use Cases, January 2018.
[2] G. Shepherd, A. Dolganow, and A. Gulko. Bit Indexed Explicit Replication (BIER) Problem Statement. http://tools.ietf.org/html/draft-ietf-bier-problem-statement, April 2016.
[3] IJsbrand Wijnands, Eric C. Rosen, Andrew Dolganow, Tony Przygienda, and Sam Aldrin. Multicast Using Bit Index Explicit Replication (BIER). RFC 8279, November 2017.
[4] IJsbrand Wijnands, Eric C. Rosen, Andrew Dolganow, Jeff Tantsura, Sam Aldrin, and Israel Meilik. Encapsulation for Bit Index Explicit Replication (BIER) in MPLS and Non-MPLS Networks. RFC 8296, January 2018.
[5] P. Psenak, N. Kumar, I. Wijnands, A. Dolganow, T. Przygienda, J. Zhang, and S. Aldrin. OSPF Extensions For BIER. https://datatracker.ietf.org/doc/draft-ietf-bier-ospf-bier-extensions/, October 2015.
[6] L. Ginsberg, A. Przygienda, S. Aldrin, and J. Zhang. BIER support via ISIS. https://datatracker.ietf.org/doc/draft-ietf-bier-isis-extensions/, October 2015.
[7] Xiaohu Xu, Mach Chen, Keyur Patel, IJsbrand Wijnands, and Tony Przygienda. BGP Extensions for BIER. Technical report, January 2018.
[8] Ran Chen, Zheng Zhang, Vengada Prasad Govindan, and IJsbrand Wijnands. BGP Link-State extensions for BIER. Technical report, February 2018.
[9] Hooman Bidgoli, Andrew Dolganow, Jayant Kotalwar, Fengman Xu, IJsbrand Wijnands, and Mankamana Prasad Mishra. PIM Signaling Through BIER Core. Technical report, February 2018.
[10] Alia Atlas, Tony Przygienda, and Greg Shepherd. Charter for the BIER WG. https://datatracker.ietf.org/doc/charter-ietf-bier/, February 2018.

  • I’m still a little confused as to the format of a BIER header – is it multicast or unicast? It sounds to me that it is (or may as well be) unicast, which would actually solve other issues such as transporting multicast over links which are not multicast-enabled (meaning, we’d only need to multicast-enable sender/receiver links/subnets, not the transit links).

    • Sorry – not just the sender/receiver subnets, but anywhere ‘south’ of the BF(E/I)Rs, I guess. At that point, I presume the normal PIM-SM/RP – or whatever topology is being used – would be in effect (in the mentioned design, the RP could be a unicast address reachable through a BFIR).
      From RFC 8296, as I skim through it, it looks as if the BIER header is still multicast and – as above – I am not certain it needs to be any more than any other encapsulation needs to be: let the BIER mechanisms replicate the BIER packets where necessary, the same as IPMC does today.

    • I suppose that I’ve over-simplified, unless intermediate partners / providers would find it more amenable to establish a unicast (if that’s how it works since I am still unclear as to what a BIER header is constructed as – mcast or ucast) ‘BIER domain’ than just enable multicast.
      In my current thinking, the best that could be achieved is if the BIER header were actually a sub-header (within the payload of a normal unicast packet) that the BFER knows to check for (say, when a packet arrives destined for a dedicated BFER loopback), and the BFIR constructed the encapsulation header so that intermediate systems forwarded packets to the correct BFERs (presumably, setting the BFERs as the destination). This also means that unicast packet (encapsulation of a multicast packet) replication would need to occur at the BFIR if it needed to get that data to disparate BFERs. This is more like the VXLAN design where the next-hop is set to the correct VNI so that the underlay knows where to forward the traffic (or as MPLS does with the shim header).
      With enough replicated packets (unicast wrappers around multicast), I imagine this would not be terribly scalable. However, it might be an option to get this multicast data across unicast-only links.

      • At that point, I suppose I might as well tunnel the ‘interesting’ traffic …

      • An advantage would be that one would not have to hit every L3 interface and enable PIM across the transit networks if the BIER header is unicast. It’s also possible that I missed a detail that BIER essentially does just this for multicast – allows it to route across non-PIM-enabled interfaces, which is all I really want (hitting many known routers to put on a global config is easier than many routers plus hoping to account for all the necessary known interfaces to be PIM-enabled). Whether it’s mcast or ucast, if the BIER domain handles replication where necessary and does not necessitate PIM enablement (what about RP assignments – or is this outside of the BIER domain?), then I think I’m getting a handle on it.

  • Wrt. BIER using unicast or multicast for transmission:

    PIM by default uses layer 2 multicast packets (like Ethernet multicast packets) to pass IP multicast packets from router to router. That did result in a lot of complexity and issues. When mLDP (RFC 6388) was introduced, the notion was that the multicast payload (IP multicast or other) is forwarded between mLDP LSRs primarily as L2 unicast, with L2 multicast only used as an extension (which I think is not regularly deployed). This simplified network adoption of multicast with mLDP in MPLS networks significantly. Most links between routers in MPLS networks are p2p anyhow, and in these environments forwarding as L2 unicast is ideal. When a LAN has three LSRs and a multicast packet would need to be sent by one of them to the two others, this requires two layer 2 unicast packets. This overhead is usually considered insignificant on these fast transit Ethernet links; the simplicity of layer 2 unicast outweighs it.

    Even though BIER is now defined to be transported via MPLS or non-MPLS underlays, most focus so far was put into transporting it across MPLS, so the currently standardized encapsulation for BIER (RFC 8296) assumes by default layer 2 unicast (likely MPLS); see Section 2.1.3, paragraph 2. I do not think we have written up the mechanisms to use layer 2 multicast; it would require similar extensions as for mLDP, except that for BIER we would certainly not want to leverage LDP, and any signaling to select layer 2 multicast would just use IGP extensions. I do not think there is currently sufficient demand for such an extension.

  • Wrt. partial deployment of BIER:

    Deploying BIER in domains does not require enabling PIM on any transit hop, just BIER. This overview article did not have the space left to detail the simplest models of how we imagine BIER to be deployed outside of service providers, but one could consider deploying BIER instead of PIM in enterprises, requiring only BIER with its IGP extensions, no PIM; native IP multicast packets (using layer 2 multicast) would only be seen on the edge Ethernets connecting to senders/receivers.

    Simple BIER deployments today expect that every node in a network is enabled for BIER, like it was necessary for PIM or mLDP. Because BIER only relies on the IGP and IGP extensions, it should be possible though to also do incremental deployments of BIER, where only some routers in the IGP domain are BIER-enabled. Of course you do need at least BIER on the routers connected to sources (BFIR) and to receivers (BFER). I have not checked if all the IGP extensions have been defined so that partial deployment will work without enhancements, but I think we have not described the complete mechanisms for BFRs to discover and select through the IGP non-directly connected BFRs and automatically set up the best remote adjacencies to them. Self-building such overlay distribution trees is a fairly advanced topic, and most service providers would unfortunately prefer to have such mechanisms be directed from a central SDN controller instead of having them be performed autonomously by routers. Therefore I do not think there is enough interest to work on this. Instead I think the BIER, MPLS, SR, and other current/upcoming data models could and will be used to build SDN controller software to enable orchestration of networks with availability of BIER on just a subset of nodes.

    I hope the answer addresses the questions raised, layer 2 unicast or multicast transport of BIER and/or full or partial need for deployment. If not, please feel free to send email and we can publish further answers here after discussing OOB.

    • Thanks for the time taken to answer my questions! I think you bridged the gap for me in my main sticking point: I didn’t catch that all routers (BFRs) would need to be BIER-enabled versus being tricked into thinking the source→destination was BFIR→BFER and just forwarding as normal jumbo Ethernet packets (with the BFER knowing to decapsulate and forward on), similar to a GRE tunnel. If they’re all BIER-enabled, I then presume that there would be some BIER configuration replicating the effects of RP or MSDP behavior (or just leveraging these existing mechanisms) so that routers know when to forward traffic along versus black-holing for unrecognized groups.

    • Thanks: I think you helped me at least in understanding that in a BIER domain all routers have to be BIER enabled. I presumed the BFRs might merely forward BIER-encapsulated jumbo Ethernet frames with a SRC/DST of BFIR/BFER much like the outer encapsulation of GRE tunnels. I need to re-read your responses, but I didn’t pick up on how BIER-enabled routers learn where to route towards multicast groups: i.e., how is legacy RP / MSDP functionality either leveraged, emulated, or alleviated (clearly it’s not flooding, particularly when looking for a source … the replies back to interested receivers I can see following the BIER bit-index).

      • Re-reading and reconsidering, I think I understand how BIER will handle the legacy MSDP / RP work: MP-BGP (BIER), or in the case of OSPF whatever the LSA includes, etc. If I am correct, I then wonder what the meshing strategy would look like between BFRs (I am having trouble deciding if route-reflecting makes any sense, but so far leaning towards ‘no’), but I understand you folks are still working on the immediate need in MPLS (my concern is more for the spine-leaf topology, for corporations rather than providers).

        • Actually, I guess route-reflectors would be fine where needed (for example, spine-leaf), so long as the RR knows to change the bit-index to reflect the down-stream multicast-group BFERs rather than its own bit-index on forwarding BIER-encapsulated packets, or the RR would need to pass along the down-stream peers in his MP-BGP (BIER) advertisements to the RR clients so that the correct bit-index could be constructed by the BFIR/RR-client.