Spot the Issues: Review Questions

1. 

Why is the policy producing hidden routes when applied as an input policy for an EBGP peer?

[edit policy-options]
lab@r6# show policy-statement transit-filter-in
term rfc1918 {
    from {
        route-filter 10.0.0.0/8 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 172.16.0.0/12 orlonger reject;
    }
}
term kill-27-or-longer {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
    }
}
term tag-t2 {
    from neighbor 172.16.0.22;
    then {
        community add trans-2;
    }
}
term nhs {
    then {
        next-hop self;
    }
}


2. 

Your goal is to prepend your AS number to the routes that r3 is sending to T2 only. Will the following EBGP export policy achieve this goal?

[edit policy-options policy-statement prepend-T2]
lab@r3# show
term 1 {
    from protocol bgp;
    to neighbor 172.16.0.18;
    then as-path-prepend "65412 65412";
}


3. 

Why is this configuration not damping any routes?

[edit]
lab@r6# show protocols bgp
group 65002 {
    type internal;
    local-address 10.0.9.6;
    export ibgp;
    neighbor 10.0.9.7;
    neighbor 10.0.3.5;
}
group t2 {
    type external;
    metric-out igp;
    import [ damp transit-filter-in ];
    export no-192-24s;
    remove-private;
    neighbor 172.16.0.22 {
        peer-as 65222;
    }
}

[edit]
lab@r6# show policy-options | find damp
policy-statement damp {
    term 1 {
        from {
            route-filter 200.0.0.0/16 orlonger damping none;
            route-filter 0.0.0.0/0 prefix-length-range /0-/8 damping none;
            route-filter 0.0.0.0/0 prefix-length-range /9-/16 damping low;
            route-filter 0.0.0.0/0 prefix-length-range /17-/32 damping high;
        }
    }
}
. . .
damping none {
    disable;
}
damping high {
    half-life 25;
    reuse 1500;
}
damping low {
    half-life 20;
    reuse 1000;
}


4. 

Why is this policy not setting the local preference value in customer routes to 101?

[edit policy-options]
lab@r4# show policy-statement cust-filter-in
term rfc1918 {
    from {
        route-filter 10.0.0.0/8 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 172.16.0.0/12 orlonger reject;
        route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
    }
}
term kill-27-or-longer {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
    }
}
term tag-c1 {
    from as-path cust-1;
    then {
        community add cust-1;
        accept;
    }
}
term tag-c2 {
    from as-path cust-2;
    then {
        community add cust-2;
        accept;
    }
}
term prefer-cust {
    from as-path [ cust-1 cust-2 ];
    then {
        local-preference 101;
        next policy;
    }
}
term kill-rest {
    then reject;
}


5. 

What is wrong with this EBGP case study configuration for r3?

[edit protocols bgp]
lab@r3# show
advertise-inactive;
remove-private;
group 65000 {
    type internal;
    local-address 10.0.3.3;
    hold-time 180;
    export ibgp;
    neighbor 10.0.6.1;
}
group c-bgp {
    type external;
    multihop;
    local-address 10.0.3.3;
    export ibgp;
    neighbor 10.0.3.4 {
        peer-as 65001;
    }
    neighbor 10.0.3.5 {
        peer-as 65002;
    }
}
group t1-t2 {
    type external;
    damping;
    import [ damp transit-filter-in ];
    export [ no-192-24s prepend ];
    peer-as 65222;
    multipath;
    neighbor 172.16.0.14;
    neighbor 172.16.0.18;
}


6. 

You are testing forwarding paths at the end of this chapter's case study. Do the extra hops in the traceroute indicate some type of BGP problem?

[edit]
lab@r2# run traceroute 130.130.0.1
traceroute to 130.130.0.1 (130.130.0.1), 30 hops max, 40 byte packets
 1  10.0.4.9 (10.0.4.9)  0.354 ms  0.259 ms  0.222 ms
 2  10.0.2.5 (10.0.2.5)  0.281 ms  0.243 ms  0.229 ms
 3  130.130.0.1 (130.130.0.1)  0.189 ms  0.172 ms  0.160 ms


Answers

1. 

Applying next-hop self in an input policy would create a routing loop, so such routes are hidden. You should attempt to overwrite the BGP next hop only as part of an IBGP export policy.
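A minimal sketch of one fix follows: the nhs term is removed from the EBGP import policy and next-hop self is set in r6's IBGP export policy instead. The ibgp policy name is taken from the group 65002 configuration shown in question 3; verify the policy name in your own test bed before applying:

 ## sketch only: assumes r6's IBGP export policy is named ibgp
 [edit policy-options]
 lab@r6# delete policy-statement transit-filter-in term nhs

 [edit policy-options]
 lab@r6# set policy-statement ibgp term nhs then next-hop self

Because the ibgp policy is already applied as an export in the internal peer group, the relocated nhs term takes effect on routes advertised to the IBGP peers without any further application changes.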

2. 

No. The to neighbor match condition is a synonym for from neighbor when used in an import policy and is ignored when used in an export policy. With the 5.2 code release used in this test bed, policies like this resulted in no routes being prepended. To prepend the routes sent to T2 but not T1, you would need to eliminate the to condition and apply the resulting policy as a T2 neighbor-level export.
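One way to achieve the goal is sketched below, using the t1-t2 group and no-192-24s export names from the r3 case study configuration in question 5. Note that a neighbor-level export completely overrides the group-level export list, so any group-level policies must be restated at the neighbor:

 ## sketch only: group and export policy names taken from the question 5 configuration
 [edit policy-options policy-statement prepend-T2]
 lab@r3# delete term 1 to

 [edit protocols bgp group t1-t2]
 lab@r3# set neighbor 172.16.0.18 export [ no-192-24s prepend-T2 ]

The T1 peer at 172.16.0.14 continues to use the group-level export, so its routes are not prepended.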

3. 

The operator has neglected to enable damping with the damping keyword in the BGP stanza. The damping option should be applied either globally or at the t2 peer group level.
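A minimal fix at the peer-group level, using the t2 group name from the configuration shown in the question:

 [edit]
 lab@r6# set protocols bgp group t2 damping

With the damping keyword in place, the per-prefix damping parameters assigned by the damp import policy take effect; without it, the policy is parsed but the damping profiles are never consulted.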

4. 

The accept actions in the tag-c1 and tag-c2 terms are causing matching routes to break out of further policy processing so that they do not encounter the prefer-cust term.
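One possible repair, sketched here with the CLI insert command, is to move the prefer-cust term ahead of the tagging terms and remove its next policy action so that matching routes set the local preference and then fall through to pick up their community tags and accept actions:

 ## sketch only: term names taken from the policy shown in the question
 [edit policy-options policy-statement cust-filter-in]
 lab@r4# delete term prefer-cust then next

 [edit policy-options policy-statement cust-filter-in]
 lab@r4# insert term prefer-cust before term tag-c1

Because a term with no terminating action falls through to the next term, cust-1 and cust-2 routes now receive a local preference of 101 before they are tagged and accepted.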

5. 

The global application of remove-private will break your BGP confederation by removing the private AS numbers used by the member ASs. Care should be taken to apply this option only to EBGP peers when used in a confederation environment.
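A sketch of the corrected placement, using the group names from the configuration shown in the question:

 [edit protocols bgp]
 lab@r3# delete remove-private

 [edit protocols bgp]
 lab@r3# set group t1-t2 remove-private

Applied at the t1-t2 group level, private AS numbers are stripped only from updates sent to the T1 and T2 EBGP peers, leaving the confederation member AS numbers on the c-bgp sessions intact.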

6. 

It is hard to say for sure, but in this case the extra hop is caused by the 10/8 static route that points to r4 as the primary next hop. Similar forwarding inefficiencies will result when routers in a stub or level 1 area decide to install one default route from one ABR/attached router rather than another. Knowing when extra hops indicate a problem and when they are simply the "nature of the beast" is an invaluable skill. These captures prove that BGP is not to blame for the suboptimal routing:

 [edit]
 lab@r2# run show route 130.130.0.1 detail

 inet.0: 111625 destinations, 111628 routes (111622 active, 0 holddown, 5 hidden)
 130.130.0.0/16 (1 entry, 1 announced)
         *BGP    Preference: 170/-101
                 Source: 10.0.3.4
                 Nexthop: 10.0.4.9 via fe-0/0/1.0, selected
                 Protocol Nexthop: 10.0.3.3 Indirect nexthop: 8429198 56
                 State: <Active Int Ext>
                 Local AS: 65001 Peer AS: 65001
                 Age: 1:56:27    Metric: 0    Metric2: 0
                 Task: BGP_65001.10.0.3.4+1035
                 Announcement bits (2): 0-KRT 4-Resolve inet.0
                 AS path: (65000) 65222 I
                 Communities: 65412:101
                 Localpref: 100
                 Router ID: 10.0.3.4

 [edit]
 lab@r2# run show route 10.0.3.3

 inet.0: 111606 destinations, 111609 routes (111603 active, 0 holddown, 5 hidden)
 + = Active Route, - = Last Active, * = Both

 10.0.0.0/8         *[Static/5] 02:13:08
                     > to 10.0.4.9 via fe-0/0/1.0
                     [Static/10] 02:13:59
                     > to 10.0.4.1 via fe-0/0/2.0



