ont 2

  1. QoS must be implemented consistently
    across the entire network.
  2. If data travels over even a small portion of a network where different policies (or no policies) are applied,
    the entire QoS policy is destroyed.
  3. A trust boundary is
    the point within the network where markings such as CoS or DSCP begin to be accepted.
  4. The trust boundary must be implemented
    at one of three locations in a network, as shown:
    • Endpoint or end system
    • Access layer
    • Distribution layer
  5. Trusted endpoints have the capabilities and intelligence to mark
    application traffic to the appropriate CoS and/or DSCP values,
    and to remark traffic.
  6. For trusted endpoints connected to a switch, trust is extended with the
    mls qos trust dscp interface command.
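    A minimal sketch of extending the trust boundary to a trusted endpoint on a Catalyst switch (the interface name is an example):

    ```
    Switch(config)# mls qos
    Switch(config)# interface FastEthernet0/1
    Switch(config-if)# mls qos trust dscp
    ```

    With this configuration, DSCP markings received on that port are accepted rather than rewritten.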
  7. If the endpoint is not trusted, the trust boundary should be at the
    access layer.
  8. classification should be done as close to the
    network edge as possible.
  9. IP phones are trusted devices, while
    PCs are not.
  10. Cisco Discovery Protocol (CDP) is used to discover whether
    the attached device can be trusted.
  11. Network-Based Application Recognition (NBAR)
    is a classification and protocol discovery feature of Cisco IOS
    Software that recognizes a wide variety of applications, including
    web-based applications and client/server applications that dynamically
    assign TCP or UDP port numbers.
  12. NBAR features the ability to guarantee bandwidth
    to critical applications, limit bandwidth to other applications, drop
    selective packets to avoid congestion, and mark packets appropriately
    so that the network and the service provider's network can provide QoS
    from end to end.
  13. NBAR ensures that network bandwidth is used
    efficiently by classifying packets and
    then applying QoS to the classified traffic.
  14. NBAR performs the following two functions:
    Identification of applications and protocols (Layer 4 to Layer 7)

    Protocol discovery
  15. NBAR introduces several new classification features that
    identify applications and protocols from Layer 4 through Layer 7:

    Statically assigned TCP and UDP port numbers

    Non-UDP and non-TCP IP protocols

    Dynamically assigned TCP and UDP port numbers
  16. NBAR includes a Protocol Discovery feature that
    provides an easy way to discover application protocols that are traversing an interface.
  17. Protocol Discovery maintains the following
    per-protocol statistics for enabled interfaces:
    Total number of input and output packets and bytes

    Input and output bit rates
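    Protocol Discovery is enabled per interface; a minimal sketch (the interface name is an example):

    ```
    Router(config)# interface Serial0/0
    Router(config-if)# ip nbar protocol-discovery
    Router(config-if)# end
    Router# show ip nbar protocol-discovery interface Serial0/0
    ```

    The show command displays the per-protocol packet, byte, and bit-rate statistics listed above.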
  18. A Packet Description Language Module (PDLM)
    can be loaded at run time to extend the NBAR list of recognized protocols.
    PDLMs can also be used to enhance an existing protocol-recognition capability.
  19. You must enable Cisco Express Forwarding (CEF) before you configure
    NBAR
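    A sketch of the CEF prerequisite and of loading an external PDLM (the PDLM filename is a placeholder):

    ```
    Router(config)# ip cef
    Router(config)# ip nbar pdlm flash://citrix.pdlm
    ```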
  20. NBAR cannot support the following:
    More than 24 concurrent URLs, hosts, or Multipurpose Internet Mail Extension (MIME)-type matches

    Matching beyond the first 400 bytes in a packet payload

    Multicast and switching modes other than CEF

    Fragmented packets

    URL, host, or MIME classification with secure HTTP

    Packets originating from or destined to the router running NBAR
  21. NBAR is not supported on Fast EtherChannel,
    but is supported on
    Gigabit Ethernet interfaces.
  22. Interfaces configured to use tunneling or encryption
    do not support NBAR; that is, you cannot use NBAR to
    classify output traffic on a WAN link where tunneling or encryption is
    used.
  23. NBAR looks into the TCP/UDP payload itself and classifies packets on content within the
    payload, such as transaction identifier, message type, or other similar data.
  24. HTTP URL matching in NBAR supports most HTTP request methods such as
    GET, PUT, HEAD, POST, DELETE, and TRACE
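    For example, HTTP traffic can be matched by URL substring in a class map (the class name and URL pattern are examples):

    ```
    Router(config)# class-map match-all HTTP-IMAGES
    Router(config-cmap)# match protocol http url "*.jpg|*.jpeg"
    ```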
  25. NBAR Protocol Discovery provides an easy way to discover
    application protocols transmitting on an interface.
  26. NBAR Protocol Discovery can be applied to an interface
    to monitor both input and output traffic.
  27. Modular QoS CLI (MQC)
    provides simple, manually configured classification.
  28. Stateful recognition
    provides deeper packet recognition.
  29. NBAR can classify applications that use:
    Statically assigned TCP and UDP port numbers

    Non-UDP and non-TCP IP protocols

    Dynamically assigned TCP and UDP port numbers negotiated during connection establishment (requires stateful inspection)

    Subport and deep packet inspection classification

    Customized TCP and UDP port numbers mapped to an application
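    Custom port-to-application mapping uses the ip nbar port-map command; for instance, to have NBAR also classify TCP port 8080 as HTTP (the added port number is an example):

    ```
    Router(config)# ip nbar port-map http tcp 80 8080
    ```

    Note that the command replaces the port list for the protocol, so the default port 80 is restated alongside the custom port.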
  30. Packet Description Language Module
    PDLMs allow NBAR to recognize new protocols by matching text
    patterns in data packets without requiring a new Cisco IOS Software
    image or a router reload.

    An external PDLM can be loaded at run time to extend the NBAR list of recognized protocols.

    PDLMs can also be used to enhance an existing protocol-recognition capability.

    PDLMs must be produced by Cisco engineers.
  31. See section 4.2.3.
  32. NBAR statistics are obtained by
    polling Simple Network Management Protocol (SNMP) statistics from the
    NBAR Protocol Discovery (PD) Management Information Base (MIB).
  33. NBAR Protocol Discovery
    Analyzes application traffic patterns in real time and discovers which traffic is running on the network

    Provides bidirectional, per-interface, and per-protocol statistics

    Is an important monitoring tool supported by Cisco QoS management tools:

    Generates real-time application statistics

    Provides traffic distribution information at key network locations
  34. NBAR Protocol Discovery can be applied to
    interfaces and can be used to monitor both input and output traffic.
    It helps in defining QoS classes and policies.
  35. The NBAR feature has two components:
    One component monitors applications traversing a network.

    The other component classifies traffic by protocol.
  37. Steps for Configuring NBAR for Static Protocols
    Required steps:

    Enable NBAR Protocol Discovery.

    Configure a traffic class.

    Configure a traffic policy.

    Attach the traffic policy to an interface.

    Enable PDLM if needed.
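    The steps above can be sketched with MQC (class, policy, interface, and bandwidth values are examples):

    ```
    Router(config)# interface Serial0/0
    Router(config-if)# ip nbar protocol-discovery
    Router(config-if)# exit
    Router(config)# class-map match-all WEB
    Router(config-cmap)# match protocol http
    Router(config-cmap)# exit
    Router(config)# policy-map WAN-EDGE
    Router(config-pmap)# class WEB
    Router(config-pmap-c)# bandwidth 256
    Router(config-pmap-c)# exit
    Router(config-pmap)# exit
    Router(config)# interface Serial0/0
    Router(config-if)# service-policy output WAN-EDGE
    ```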
  38. Steps for Configuring Stateful NBAR for Dynamic Protocols
    Required steps:

    Configure a traffic class.

    Configure a traffic policy.

    Attach the traffic policy to an interface.
  39. The ability of NBAR to classify traffic by protocol and then apply QoS to that traffic uses
    the MQC class map match criteria.
  41. Real-Time Transport Protocol (RTP) consists
    of a data part and a control part.
  42. The data part of RTP is a
    thin protocol providing support for applications with real-time
    properties (such as continuous media [audio and video]), which
    includes timing reconstruction, loss detection, and security and
    content identification.
  43. NBAR RTP payload classification not only allows you to
    statefully identify real-time audio and video traffic, but it also can differentiate on the basis of audio and video codecs to provide more granular QoS.
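    For instance, NBAR RTP payload classification can match only the audio streams (the class name is an example):

    ```
    Router(config)# class-map match-all VOICE-RTP
    Router(config-cmap)# match protocol rtp audio
    ```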
  44. Congestion
    can occur at any point in the network where there are points of
    speed mismatches or aggregation.
  45. Queuing
    manages congestion to provide bandwidth and delay guarantees.
  46. A queuing algorithm is used
    to sort the traffic and then determine some method of prioritizing it onto an output link.
  47. Speed mismatches are the most common reason for
    congestion
  48. Speed Mismatch
    Speed mismatches are the most typical cause of congestion.

    Possibly persistent when going from LAN to WAN.

    Usually transient when going from LAN to LAN.
  49. Aggregation occurs
    in WANs when multiple remote sites feed into a central site.
  50. Queuing is
    a congestion-management mechanism that allows you to control congestion on interfaces.
  51. Queuing is
    • designed to accommodate temporary congestion on an interface of a network device by storing excess
    • packets in buffers until bandwidth becomes available.
  52. Complex queuing generally happens on
    outbound interfaces only. A router queues packets it sends out an interface.
  53. Queuing algorithms:
    FIFO: First-in, first-out; the simplest algorithm

    Priority queuing (PQ): Allows traffic to be prioritized

    Round robin: Allows several queues to share bandwidth

    Weighted round robin (WRR): Allows sharing of bandwidth with prioritization
  54. FIFO
    First packet in is first packet out

    Simplest of all

    One queue

    All individual queues are FIFO
  55. all interfaces except serial interfaces at E1 (2.048 Mbps) and below use
    FIFO by default.
  56. Serial interfaces at E1 (2.048 Mbps) and below
    use weighted fair queuing (WFQ) by default.
  57. Priority Queuing
    Uses multiple queues

    Allows prioritization

    Always empties first queue before going to the next queue:

    Empty queue number 1.

    If queue number 1 is empty, then dispatch one packet from queue number 2.

    If both queue number 1 and queue number 2 are empty, then dispatch one packet from queue number 3.

    Queues number 2 and number 3 may “starve”.
  58. PQ gives priority queues absolute preferential treatment over
    low-priority queues.
  59. A priority list is a set of rules that describe how packets
    should be assigned to priority queues
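    A priority-list sketch that classifies Telnet as high priority and everything else as the default (the list number and protocols are examples):

    ```
    Router(config)# priority-list 1 protocol ip high tcp 23
    Router(config)# priority-list 1 default low
    Router(config)# interface Serial0/0
    Router(config-if)# priority-group 1
    ```

    TCP port 23 is Telnet; the priority-group command attaches the list to the interface.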
  60. Keepalives sourced by the network server are always assigned to the
    high-priority queue
  61. PQ provides absolute preferential treatment to
    high-priority traffic, ensuring that mission-critical traffic traversing various WAN links gets priority treatment
  62. PQ introduces extra
    overhead
  63. Round robin refers to an
    arrangement that involves choosing all elements in a group equally in some rational order, usually starting from the top to the bottom of a list and then starting again at the top of the list and so on.
  64. Round Robin Queuing
    Uses multiple queues

    No prioritization

    Dispatches one packet from each queue in each round:

    One packet from queue number 1

    One packet from queue number 2

    One packet from queue number 3

    Then repeat
  65. Weighted Round Robin Queuing
    Allows prioritization

    Assign a weight to each queue

    Dispatches packets from each queue proportionately to an assigned weight:

    Dispatch up to four from queue number 1.

    Dispatch up to two from queue number 2.

    Dispatch one from queue number 3.

    Go back to queue number 1.
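    On Catalyst switch platforms, WRR weights can be assigned to the egress queues; a sketch (weights and interface are examples, and the number of queues varies by platform):

    ```
    Switch(config)# interface GigabitEthernet0/1
    Switch(config-if)# wrr-queue bandwidth 4 2 1 1
    ```

    Here queue 1 is serviced in proportion 4, queue 2 in proportion 2, and queues 3 and 4 in proportion 1 each.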
  66. The weighted round robin (WRR) algorithm provides
    prioritization capabilities for round-robin queuing.
  67. Drawbacks of WRR queuing:
    It does not allocate bandwidth accurately.
    If the ratio between the byte count and the MTU is too large, WRR queuing will cause long delays.
  68. Problem with WRR:
    Some implementations of WRR dispatch a configurable number of bytes (threshold) from each queue for each round—several packets can be sent in each turn.

    The router is allowed to send the entire packet even if the sum of all bytes is more than the threshold.
  69. Router Queuing Components
    Hardware queue: Uses a FIFO strategy, which is necessary for the
    interface drivers to transmit packets one by one. The hardware queue
    is sometimes referred to as the transmit queue.

    Software queuing system: Schedules packets into the hardware queue
    based on the quality of service (QoS) requirements.
  70. Router queuing is needed because
    The input interface is faster than the output interface.

    The output interface is receiving packets from multiple other interfaces.
  71. The software queue activates
    only when data must wait to be placed into the hardware queue.
  72. The hardware queue (transmit queue) is a
    final interface FIFO queue that holds frames to be immediately transmitted by the physical interface
  73. The Software Queue
    Generally, a full hardware queue indicates interface congestion,
    and software queuing is used to manage it.

    When a packet is being forwarded, the router will bypass the
    software queue if the hardware queue has space in it (no congestion).
  74. Reducing the size of the hardware queue has
    two benefits:
    It reduces the maximum amount of time that packets wait in the FIFO queue before being transmitted.

    It accelerates the use of QoS in Cisco IOS software.
  75. Improper tuning of the hardware queue may produce undesirable results:
    A long transmit queue may result in poor performance of the software queuing system.

    A short transmit queue may result in a large number of interrupts,
    which causes high CPU utilization and low link utilization.
  76. The Hardware Queue
    Routers determine the length of the hardware queue based on the
    configured bandwidth of the interface.

    The length of the hardware queue can be adjusted with the
    tx-ring-limit command.
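    A sketch of adjusting the hardware queue depth (the interface and value are examples; support for this command varies by platform):

    ```
    Router(config)# interface Serial0/0
    Router(config-if)# tx-ring-limit 3
    ```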
  77. Congestion on Software Interfaces
    Subinterfaces and software interfaces (dialers, tunnels, Frame Relay
    subinterfaces) do not have their own separate transmit queue.

    Subinterfaces and software interfaces congest when the transmit queue of their main hardware interface congests.

    The tx-ring state (full, not-full) is an indication of hardware interface congestion.

    The terms “TxQ” and “tx-ring” both describe the hardware queue and are interchangeable.
  78. Weighted Fair Queuing (WFQ)
    A queuing algorithm should share the bandwidth fairly among flows by:

    Reducing response time for interactive flows by scheduling them to the front of the queue

    Preventing high-volume flows from monopolizing an interface
  79. In the WFQ implementation, conversations are sorted into flows and transmitted by the
    order of the last bit crossing its channel
  80. Unfairness is reinstated
    by introducing weight to give proportionately more bandwidth to flows
    with higher IP precedence (lower weight).
  81. WFQ is a dynamic scheduling method that provides
    fair bandwidth allocation to all network traffic.
  82. WFQ applies
    weights to identified traffic, classifies traffic into flows, and
    determines how much bandwidth each flow is allowed, relative to
    other flows.
  83. WFQ allows you to give low-volume traffic, such as Telnet sessions,
    priority over high-volume traffic, such as FTP sessions.
  84. WFQ gives
    concurrent file transfers balanced use of link capacity; that is, when
    multiple file transfers occur, the transfers are given comparable amounts of bandwidth.
  85. The WFQ method works as the default
    queuing mode on serial interfaces configured to run at or below E1
    speeds (2.048 Mbps)
  86. WFQ provides the solution for situations in which it is desirable
    to provide consistent response times to heavy and light network
    users alike, without adding excessive bandwidth.
  87. WFQ can manage duplex data flows.
  88. WFQ classification has to identify
    individual flows
  89. WFQ flow parameters are taken
    from the IP header and the TCP or User Datagram Protocol (UDP) headers:
    Source IP address

    Destination IP address

    Protocol number (identifying TCP or UDP)

    Type of service field

    Source TCP or UDP port number

    Destination TCP or UDP port number
  90. WFQ uses a fixed number of queues. The
    hash function is used to assign a queue to a flow. There are eight
    additional queues for system packets and optionally up to 1000 queues
    for Resource Reservation Protocol (RSVP) flows. The number of dynamic
    queues that WFQ uses by default is based on the interface bandwidth.
    With the default interface bandwidth, WFQ uses 256 dynamic queues
  91. WFQ uses the following two parameters that affect
    the dropping of packets:
    The congestive discard threshold (CDT) is used to start dropping
    packets of the most aggressive flow, even before the hold-queue
    limit is reached.

    The hold-queue limit defines the maximum number of packets that can
    be held in the WFQ system at any time.
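    A sketch of tuning these parameters with the fair-queue and hold-queue commands (the interface and values are examples):

    ```
    Router(config)# interface Serial0/0
    Router(config-if)# fair-queue 64 256 0
    Router(config-if)# hold-queue 1000 out
    ```

    Here 64 is the CDT, 256 is the number of dynamic queues, 0 is the number of RSVP reservable queues, and 1000 is the hold-queue limit.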
  92. There are two exceptions to the WFQ insertion and drop policy:
    If the WFQ system is above the CDT limit, the packet is still
    enqueued if the specific per-flow queue is empty.

    The dropping strategy is not directly influenced by IP precedence.
  93. Implementing WFQ Classification
    A fixed number of per-flow queues is configured.

    A hash function is used to translate flow parameters into a queue number.

    System packets (eight queues) and RSVP flows (if configured) are mapped into separate queues.

    Two or more flows could map into the same queue, resulting in lower per-flow bandwidth.

    Important: The number of queues configured should be significantly larger than the expected number of flows.
  94. Benefits and drawbacks of WFQ
    Benefits:
    WFQ provides simple configuration (no manual classification is
    necessary) and guarantees throughput to all flows. It drops packets
    of the most aggressive flows. Because WFQ is a standard queuing
    mechanism, most platforms and most Cisco IOS versions support WFQ.

    Drawbacks:
    Multiple flows can end up in a single queue.

    WFQ does not allow a network engineer to manually configure
    classification. Classification and scheduling are determined by the
    WFQ algorithm.

    WFQ is supported only on links with a bandwidth less than or equal to 2 Mbps.

    WFQ cannot provide fixed guarantees to traffic flows.
  95. Cisco routers automatically enable WFQ on all interfaces that have a
    default bandwidth of less than 2.048 Mbps
Author
Tjr31
ID
67582
Card Set
ont 2
Description
ont 2
Updated