Delay bounds for FIFO aggregates: a case study
| Article code | Publication year | English article length |
| 6393 | 2005 | 13-page PDF |
Publisher: Elsevier - Science Direct
Journal: Computer Communications, Volume 28, Issue 3, 24 February 2005, Pages 287–299
In a DiffServ architecture, packets with the same marking are treated as an aggregate at core routers, independently of the flow they belong to. Nevertheless, for the purpose of QoS provisioning, the derivation of upper bounds on the delay of individual flows is of great importance. In this paper, we consider a case study network, composed of a tandem of rate-latency servers traversed by a tagged flow. At each node, the tagged flow is multiplexed into a FIFO buffer with a different interfering flow. The tagged flow and the interfering flows are all leaky-bucket constrained at the network entry. We introduce a novel methodology based on well-known results on FIFO multiplexing from Network Calculus, by means of which we derive an end-to-end delay bound for the tagged flow's traffic. The delay bound precisely assesses the contribution of interfering flows to the delay, and, to the best of our knowledge, it is better than any other applicable result available in the literature. Furthermore, we use the delay bound formula to quantify the level of overprovisioning required to achieve delay bounds comparable to those of a flow-aware architecture.
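Before the aggregate setting, it helps to recall the single-flow building block the paper starts from. The following is a minimal sketch (not taken from the paper; all parameter names are illustrative) of the classic network-calculus delay bound for one leaky-bucket-constrained flow served in isolation by a rate-latency server:

```python
def delay_bound(sigma, rho, R, T):
    """Worst-case delay of a (sigma, rho) leaky-bucket-constrained flow
    through a rate-latency server beta(t) = R * max(t - T, 0), assuming
    the stability condition rho <= R.

    The bound is the maximal horizontal distance between the arrival
    curve alpha(t) = sigma + rho * t and the service curve, which for
    these curve shapes equals T + sigma / R."""
    if rho > R:
        raise ValueError("unstable node: arrival rate exceeds service rate")
    return T + sigma / R

# Example: a flow with a 10 kb burst and 1 Mb/s rate through a node
# guaranteeing 2 Mb/s after a 5 ms latency.
print(delay_bound(sigma=10e3, rho=1e6, R=2e6, T=5e-3))  # -> 0.01 (seconds)
```

The difficulty the paper addresses is that under FIFO aggregation the tagged flow does not see such a clean service curve in isolation: the service left over after interfering traffic must be characterized first.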
In the near future, the Internet will be used as an infrastructure for providing a wide variety of services to its users. New services such as video streaming, IP telephony, and interactive video conferencing, which require real-time constraints, are already being deployed and are likely to gain more widespread diffusion. Different services require different Quality of Service (QoS): for instance, a guaranteed delivery service model, enabling packets to reach their destination within pre-specified time constraints, needs to be developed alongside traditional best-effort delivery.

The Differentiated Services (DS) architecture, proposed within the IETF, aims to provide differentiated QoS within the Internet at a manageable complexity. According to DS, packets from different flows are marked at the DS domain ingress as belonging to a small number of QoS classes, called Behavior Aggregates (BAs), each of which receives a differentiated delivery service within the network. Packets belonging to a BA are then treated at core routers according to a specified per-hop behavior (PHB), independently of the flow they belong to. Currently, the Expedited Forwarding (EF) PHB has been specified for providing delay guarantees. Practical implementations of the EF PHB assume that all EF traffic is shaped and policed at the DS domain ingress and then shares a single FIFO queue at each core router, which is either serviced at the highest priority or guaranteed a large bandwidth share in a weighted fair scheduler. In the latter case, capacity is statically reserved for the aggregate traffic at core routers, whereas appropriate admission control is performed at the DS domain edges to provide specific QoS guarantees to the aggregate. Regarding DiffServ, two different approaches have been developed: relative Differentiated Services, which aims to provide differentiated performance to the different aggregates, without taking explicit per-flow guarantees into account.
On the other hand, absolute Differentiated Services aims to provide end-to-end absolute performance guarantees to single flows, without the need for per-flow state in the core. This approach follows from the consideration that individual Internet users would require per-flow (rather than per-aggregate) end-to-end QoS guarantees. Within this framework, the analytical derivation of end-to-end delay bounds for individual flows is of great importance, since such bounds can be used as the basis for call admission control. In previous work, per-flow delay bounds were derived for a generic network as a function of the utilization factor, the maximum hop count, and the parameters of the ingress shapers, without any assumption on the topology. However, in a network domain employing centralized resource management, it is reasonable to assume that the management entity is aware of all the requests at the domain edge, as well as of the domain topology: this is the case, for example, of the proposed Bandwidth Broker (BB) service. How knowledge of the network topology and current load can be exploited to derive delay bounds useful for careful admission control is still an open issue.

In this paper, we consider a case study network, composed of a tandem of rate-latency servers traversed by a leaky-bucket-constrained tagged flow. The tagged flow is multiplexed into a FIFO buffer with another leaky-bucket-constrained flow at each node. Our purpose is to derive an end-to-end delay bound for the traffic of the tagged flow. To this aim, we build on well-known results on FIFO multiplexing derived by means of network calculus, and we describe a methodology that appropriately combines these results to derive a delay bound for the case study. To the best of our knowledge, the derived delay bound is better than any other applicable result from the literature.
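To see why a better bound matters, consider the crudest applicable alternative for the case-study topology. Under FIFO, a tagged bit leaves a node no later than the last bit of the aggregate backlog ahead of it, so each node contributes at most the aggregate's delay bound, and the tagged flow's burstiness grows downstream by its rate times the local delay. The sketch below (illustrative only; this is not the paper's methodology) sums these per-node bounds, paying the tagged burst again at every node:

```python
def additive_fifo_bound(tagged, nodes):
    """Crude additive end-to-end bound for a tagged flow that shares a
    FIFO queue with one fresh interfering flow at every node.

    tagged: (sigma, rho) leaky-bucket parameters of the tagged flow.
    nodes:  list of (R, T, sigma_i, rho_i), i.e. each node's rate-latency
            guarantee and its local interfering flow's parameters."""
    sigma, rho = tagged
    total = 0.0
    for R, T, sigma_i, rho_i in nodes:
        assert rho + rho_i <= R, "unstable node"
        d = T + (sigma + sigma_i) / R  # delay bound for the whole FIFO aggregate
        total += d
        sigma += rho * d               # output burstiness of the tagged flow
    return total

# Example: two identical nodes, tagged flow (10 kb, 1 Mb/s),
# interfering flow (20 kb, 2 Mb/s) fresh at each node.
bound = additive_fifo_bound((10e3, 1e6), [(10e6, 1e-3, 20e3, 2e6)] * 2)
```

The looseness of such additive bounds, which charge the tagged burst at every hop, is exactly what the equivalent-service-curve methodology of the paper improves upon.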
Furthermore, we show that a possible interpretation of the obtained result is that the so-called 'pay bursts only once' property would no longer hold for the tagged flow in the case study network, reflecting the fact that sometimes paying the burst more than once, but at a higher rate, is better than paying it only once at a lower rate.

The remainder of the paper is organized as follows. Section 2 presents the key concepts and some basic results of network calculus. By showing the limits of current results on delay bounds for FIFO aggregates, Section 3 lays the foundation for our study, whose results are presented in Section 4. Section 5 addresses related work, while Section 6 introduces some numerical results that give insight into the analytical results. Finally, Section 7 draws some conclusions.
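For context, the classic 'pay bursts only once' property refers to the single-flow case: a tandem of rate-latency servers convolves into a single rate-latency server with rate min(R_i) and latency sum(T_i), so the burst term is paid once, at the bottleneck rate. The sketch below (illustrative figures, not from the paper) contrasts this with the naive node-by-node summation:

```python
def pboo_bound(sigma, R_list, T_list):
    """End-to-end bound via convolution of the rate-latency service curves:
    the tandem behaves as one server with rate min(R_i) and latency
    sum(T_i), so D = sum(T_i) + sigma / min(R_i) and the burst is paid once."""
    return sum(T_list) + sigma / min(R_list)

def per_node_sum(sigma, rho, R_list, T_list):
    """Naive bound: propagate the flow node by node, paying the (growing)
    burst at every node, at that node's rate."""
    total = 0.0
    for R, T in zip(R_list, T_list):
        d = T + sigma / R
        total += d
        sigma += rho * d  # burstiness grows across each node
    return total

sigma, rho = 10e3, 1e6
R_list, T_list = [2e6, 4e6], [1e-3, 1e-3]
print(pboo_bound(sigma, R_list, T_list))         # burst paid once, at min rate
print(per_node_sum(sigma, rho, R_list, T_list))  # burst paid at every node
```

In this single-flow example the convolution bound is strictly smaller, as 'pay bursts only once' predicts; the paper's finding is that under FIFO aggregation with interfering flows, the trade-off can tip the other way.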
English conclusion
In this paper, we have derived an end-to-end delay bound for a tagged flow that is FIFO multiplexed with a different interfering flow at each node of a tandem of rate-latency servers. To this aim, we have used the network-calculus concept of an equivalent service curve, deriving an infinite family of equivalent service curves for the tagged flow and choosing the one that gives the lowest upper bound on the end-to-end delay. The obtained result, summarized by Theorem 2, is not very intuitive, but, to the best of our knowledge, it is better than any other applicable result available in the literature.

This paper lays the basis for future work in at least two directions. First, it should be determined whether the bound given by Theorem 2 is tight. To prove tightness, it would be sufficient to find a specific scenario of traffic arrivals, conforming to the leaky-bucket constraints, in which a bit of the tagged flow experiences exactly the delay calculated by Theorem 2; we have not found such a scenario yet. Second, it would be interesting to extend Theorem 2 to a more general network configuration, in which interfering flows are allowed to share the tagged flow's path for any number of consecutive nodes, instead of just one node as in Fig. 3. Regarding this second issue, the methodology we have introduced is not directly applicable.