Download English ISI article No. 15674
Article title

Harmonic buffer management policy for shared memory switches
Article code: 15674
Publication year: 2004
Pages: 22 (PDF)
Source

Publisher: Elsevier - Science Direct

Journal: Theoretical Computer Science, Volume 324, Issues 2–3, 20 September 2004, Pages 161–182

Keywords

Competitive analysis, Shared memory switches, Non-preemptive policies, Buffer management

Abstract

In this paper we consider shared-memory switches. We introduce a novel general non-preemptive buffer management scheme, which considers the queues ordered by their size. We propose a new scheduling policy, based on our general scheme, which we call the Harmonic policy. We analyze the performance of the Harmonic policy by means of competitive analysis and demonstrate that its throughput competitive ratio is at most ln(N) + 2, where N is the number of output ports. We also present a lower bound on the performance of any online deterministic policy. Our simulations show that the Harmonic policy achieves high throughput and easily adapts to changing load conditions.
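As a rough illustration of a size-ordered rule of this kind, the sketch below caps the i-th longest queue at M/(i·H_N), where M is the shared memory size and H_N the N-th harmonic number, so that the caps sum to exactly M. This reading is only suggested by the policy's name and is our assumption, not the paper's exact definition; all identifiers below are illustrative.

    def harmonic_number(n):
        """H_n = 1 + 1/2 + ... + 1/n."""
        return sum(1.0 / k for k in range(1, n + 1))

    class HarmonicStyleSwitch:
        """Hypothetical harmonic-style thresholds (illustration only)."""

        def __init__(self, num_ports, memory_size):
            self.N = num_ports
            self.M = memory_size
            self.H = harmonic_number(num_ports)
            self.queues = [0] * num_ports   # current output queue lengths

        def accept(self, port):
            """Non-preemptively decide whether to admit a packet for `port`."""
            if sum(self.queues) >= self.M:  # shared memory is full
                return False
            tentative = self.queues[:]
            tentative[port] += 1
            # Assumed rule: after insertion, the i-th longest queue may
            # hold at most M / (i * H_N) packets; these caps sum to M.
            for i, qlen in enumerate(sorted(tentative, reverse=True), start=1):
                if qlen > self.M / (i * self.H):
                    return False
            self.queues[port] += 1
            return True

        def transmit(self):
            """One time unit: every non-empty queue sends its head packet."""
            self.queues = [max(0, q - 1) for q in self.queues]

For instance, with N = 2 ports and M = 12 packets of memory, H_2 = 1.5 and the caps are 8 packets for the longest queue and 4 for the other.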

Introduction

The Internet is built around a large variety of transmission and switching systems, and the information transfer is aggregated into packets, which are individually forwarded and switched toward their destinations. The main tasks of a router are to receive a packet from an input port, find its destination port using the routing table, transfer the packet to the output port via the switch fabric, and finally transmit it on the output link. If a burst of packets destined to the same output port arrives, not all packets can be transmitted on the fly, and thus some of them need to be buffered. A critical aspect of the switch architecture is the placement of buffers. In the output queuing (OQ) architecture, packets arriving from input ports immediately cross the switching fabric and join a queue at the switch output port. It is well known that the OQ architecture allows one to maximize throughput and control packet latency accurately. The shared memory (SM) switch architecture is an OQ architecture in which output queues are dynamically allocated from a shared memory pool.

The main benefit of SM switches is their flexibility. At one extreme, they can be used for complete sharing, where the memory is used as one buffer to serve all the output ports. The main benefit of complete sharing is that it takes advantage of statistical multiplexing. However, complete sharing may perform poorly under overload conditions [10]. The problem is that a single output port can take over most of the memory, preventing packets destined for other output ports from gaining access, which causes the total switch throughput to drop. At the other extreme, shared memory can also simulate complete partition, i.e., a dedicated buffer for each output port. In this case the flow of each output port is fully protected from the flows of other output ports. This comes at the cost of underutilization of resources, thus risking higher loss during bursts. The flexibility of shared memory allows one to specify many other policies on the spectrum between complete sharing and complete partition. The goal of a buffer management policy is to achieve as much of the benefits of both schemes while suffering as little as possible from their weaknesses.

Buffer management policies are traditionally classified into two categories, preemptive and non-preemptive, according to whether they utilize the preempt action, in which a packet that has been accepted into the buffer can later be dropped in order to free space for newly arriving packets. The tradeoff here is between ease of implementation and hardware (where non-preemptive policies have an advantage) and higher performance (where preemptive policies have an advantage). Both types of policies have been widely considered in the networking literature. For a good survey of shared-memory buffer management policies the reader can refer to [2].

The main class of non-preemptive scheduling policies consists of static threshold schemes. Irland [10] considers some simple policies of this type. In the sharing with maximum queue lengths (SMXQ) scheme, each output queue has a static bound on its length, and a packet is accepted if there is free space in the buffer and the corresponding bound is not violated. In some schemes, like sharing with a maximum queue and minimum allocation (SMQMA) due to Kamoun and Kleinrock [11], each port always has access to a minimum allocated space; a sketch of this style of admission rule is given below. The main problem of static threshold schemes is that they are not adaptive.
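The following is a minimal, hypothetical rendering of the SMXQ admission check just described (the function and parameter names are ours, not taken from [10]):

    def smxq_accept(queues, port, memory_size, max_len):
        """Admit a packet for `port` iff the shared buffer has free space
        and the port's queue is below its static bound max_len[port]."""
        if sum(queues) >= memory_size:       # no free space in the shared buffer
            return False
        if queues[port] >= max_len[port]:    # static per-port bound reached
            return False
        queues[port] += 1                    # accept the packet
        return True

SMQMA can be read as a refinement of the same check in which a minimum amount of space is additionally kept available for every port.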
When many queues are active and the sum of their thresholds exceeds the buffer capacity, the buffer may fill up completely, even though all queues obey the threshold constraints. Thus, some queues can be starved, which leads to underutilization of the switch. On the other hand, when very few queues are active, they are denied access to the idle buffer space beyond the sum of their thresholds. This creates a higher packet loss rate for the active queues (see [5]).

Another class of non-preemptive policies consists of dynamic threshold schemes. In the Dynamic Threshold (DT) policy due to Choudhury and Hahne [5], the threshold on a queue length at any instant of time is proportional to the current amount of unused buffer space in the switch, multiplied by some constant. Packet arrivals for an output port are blocked whenever the corresponding queue length equals or exceeds the current threshold value. The main idea is that the DT policy deliberately holds a small amount of buffer space in reserve and divides the remaining buffer space equally among the active output queues.

The class of preemptive policies has also been studied extensively. A delayed resolution policy (DRP) was proposed by Thareja and Agrawala in [24]. The DRP does not discard an arriving packet if there is space in the common buffer. If a packet arrives and the common buffer is full, either the arriving packet or some other packet that was already accepted is discarded. The decision to drop a packet from a certain queue can be made based on the state of the system or on different priority classes. Wei et al. [26] propose the Longest Queue Drop (LQD) policy, which drops a packet from the longest queue in the switch when the memory is full. Competitive analysis of preemptive scheduling policies appears in [8], where LQD is shown to be 2-competitive and a lower bound of 4/3 is presented on the competitive ratio of any deterministic online policy. Unfortunately, non-preemptive policies cannot achieve a constant competitive ratio, as we will show in this paper. A comparison of preemptive policies is provided in [16].

Our model is as follows. We assume that all packets are of equal size. When a packet arrives, the buffer management policy determines whether to accept or reject it. Packets cannot be preempted: once a packet is accepted, it is eventually transmitted on the output link. Conceptually, one may think of the packets in the shared memory as partitioned into separate queues destined to distinct output ports. Clearly, the sum of the lengths of the queues is bounded by the size of the shared memory. At every time unit, from every non-empty output queue, the first packet is sent on the corresponding output link. In addition, any number of packets destined to different output ports may arrive. The basic goal of the buffer management policy is to maximize the total number of packets transmitted.

In this work we introduce a general non-preemptive buffer management scheme for shared memory packet switches. The main novel idea is to consider not only the size of a queue as a function of its output port, but also the queues sorted by their current size. For example, our scheme allows one to bound the size of the second largest queue, regardless of the identity of the output port. This way one can specify that there will be at most two large queues while the other queues remain of moderate size.
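To make this concrete, here is a hypothetical rendering of such a size-ordered admission rule (the caps and names are ours): after a tentative insertion, the queue lengths are sorted and compared against a non-increasing vector of per-rank caps, so that, as in the example above, only the two largest ranks receive generous caps.

    def rank_cap_accept(queues, port, memory_size, caps):
        """Admit a packet for `port` iff, after insertion, the i-th longest
        queue does not exceed caps[i-1], whichever port holds that rank.
        `caps` is assumed to have one entry per output port."""
        if sum(queues) >= memory_size:      # shared memory is full
            return False
        tentative = queues[:]
        tentative[port] += 1
        ranked = sorted(tentative, reverse=True)
        if any(q > cap for q, cap in zip(ranked, caps)):
            return False
        queues[port] += 1
        return True

    # Example in the spirit of the text: at most two large queues
    # (cap 40 each) and four moderate ones (cap 10) on a 6-port switch.
    # caps = [40, 40, 10, 10, 10, 10]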
Although we specify the number of large queues, we do not specify their identities; thus, in the above example, any two output ports can take the large allocations. In a nutshell, the scheme allows one to bound both the length of an individual queue and the total length of a subset of queues.

We use competitive analysis [23,4] to study the performance of our policies. In competitive analysis, the online policy is compared with an optimal offline policy OPT, which knows the entire input sequence in advance. The online algorithm A is called c-competitive if, for any input sequence σ of packets, the number of packets sent by OPT is at most c times the number delivered by A, plus a constant independent of σ. The advantage of competitive analysis is that a uniform performance guarantee is provided over all input instances.
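In symbols, with σ denoting the input sequence and β ≥ 0 a constant independent of σ, the definition above reads:

    \mathrm{OPT}(\sigma) \;\le\; c \cdot A(\sigma) + \beta \qquad \text{for every input sequence } \sigma

where OPT(σ) and A(σ) denote the numbers of packets delivered by the offline optimum and by A, respectively. Under this convention, the abstract's result states that the Harmonic policy achieves c at most ln(N) + 2, while, as noted earlier, no deterministic online non-preemptive policy can achieve a constant c.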