The Leganet system: freshness-aware transaction routing in a database cluster
|Article code||Publication year||English article length|
|9141||2007||24-page PDF|
The English article contains approximately 15,078 words.
Publisher: Elsevier - Science Direct
Journal : Information Systems, Volume 32, Issue 2, April 2007, Pages 320–343
We consider the use of a database cluster for Application Service Providers (ASPs). In the ASP context, applications and databases can be update-intensive and must remain autonomous. In this paper, we describe the Leganet system, which performs freshness-aware transaction routing in a database cluster. We use multi-master replication and relaxed replica freshness to increase load balancing. Our transaction routing takes into account the freshness requirements of queries at the relation level and uses a cost function that considers both the cluster load and the cost of refreshing replicas to the required level. We implemented the Leganet prototype on an 11-node Linux cluster running Oracle8i. Using experimentation and emulation up to 128 nodes, our validation based on the TPC-C benchmark demonstrates the performance benefits of our approach.
Database clusters now provide a cost-effective alternative to parallel database systems. A database cluster is a cluster of PC servers, each running an off-the-shelf DBMS. A major difference with parallel database systems implemented on PC clusters, e.g., Oracle Real Application Cluster, is the use of a "black-box" DBMS at each node, which avoids expensive data migration. However, since the DBMS source code is not necessarily available and cannot be changed or extended to be "cluster-aware", additional capabilities like parallel query processing must be implemented via middleware.

Database clusters make new businesses like Application Service Providers (ASPs) economically viable. In the ASP model, customers' applications and databases (including data and DBMS) are hosted at the provider site and need to be available, typically through the Internet, as efficiently as if they were local to the customer site. Thus, the challenge for a provider is to fully exploit the cluster's parallelism and load balancing capabilities to obtain a good cost/performance ratio. The typical solution for obtaining good load balancing in a database cluster is to replicate applications and data at different nodes so that users can be served by any of the nodes depending on the current load. This also provides high availability since, in the event of a node failure, other nodes can still do the work. This solution has been used successfully by Web search engines running high-volume server farms (e.g., Google). However, Web search engines are typically read-intensive, which makes it easier to exploit parallelism. In the ASP context, the problem is far more difficult. First, applications and databases must remain autonomous, i.e., remain unchanged when moved to the provider site's cluster and remain under the control of the customers as if they were local, using the same DBMS. Preserving autonomy is critical to avoid the high costs and problems associated with code modification.
Second, applications can be update-intensive, and the use of replication can create consistency problems. For instance, two users at different nodes could generate conflicting updates to the same data, thereby producing an inconsistent database. This is because consistency control is done at each node through its local DBMS. The main solution readily available to enforce global consistency is to use a parallel database system such as Oracle Real Application Cluster or DB2 Parallel Edition. If the customer's DBMS is from a different vendor, this requires heavy migration (rewriting customer applications and converting databases). Furthermore, this hurts the autonomy of applications and databases, which must then be under the control of the parallel database system.

In this paper, we describe a new solution for routing transactions in a database cluster which addresses these problems. This work has been done in the context of the Leg@Net project sponsored by the RNTL, whose objective was to demonstrate the viability of the ASP model for legacy (pharmacy) applications in France. Our solution exploits a replicated database organization. The main idea is to allow the system administrator to control the tradeoff between database consistency and performance when placing applications and databases onto cluster nodes. Databases and applications are replicated at multiple nodes to increase access performance. Application requirements are captured (at compile time) and stored in a shared directory used (at run time) to allocate cluster nodes to user requests. Depending on the users' requirements, we can control database consistency at the cluster level. For instance, if an application is read-only or the required consistency is weak, then it is easy to execute multiple requests in parallel at different nodes.
But if an application is update-intensive and requires strong consistency (e.g., integrity constraint satisfaction), an extreme solution is to run it at a single node and trade performance for consistency. There are important cases where consistency can be relaxed. With lazy replication, transactions can be locally committed and different replicas may get different values. Replica divergence remains until reconciliation. Meanwhile, the divergence must be controlled for at least two reasons. First, since synchronization consists in producing a single history from several diverging ones, the higher the divergence, the more difficult the reconciliation. Second, read-only applications may tolerate reading inconsistent data. In this case, inconsistency reflects a divergence between the values actually read and the values that should have been read in ACID mode. In most approaches (including ours), consistency reduces to freshness: update transactions are globally serialized over the different cluster nodes, so that whenever a query is sent to a given node, it reads a consistent state of the database. In this paper, global consistency is achieved by ensuring that conflicting transactions are executed at each node in the same relative order. However, the consistent state may not be the latest one, since update transactions may be running at other nodes. Thus, the data freshness of a node reflects the difference between the data state of the node and the state it would have if all the running transactions had already been applied to that node.

In this paper, we describe the design and implementation of the Leganet system, which performs freshness-aware transaction routing in a database cluster. We use multi-master replication and relaxed replica freshness to increase load balancing. The Leganet architecture, initially proposed in earlier work, preserves database and application autonomy using non-intrusive techniques that work independently of any DBMS.
The main contribution of this paper is a transaction router which takes into account the freshness requirements of queries at the relation level to improve load balancing. This router uses a cost function that considers not only the cluster load in terms of concurrently executing transactions and queries, but also the estimated time to refresh replicas to the level required by incoming queries. Using the Leganet prototype implemented on an 11-node cluster running Oracle8i, and using emulation up to 128 nodes, our validation based on the TPC-C OLTP benchmark demonstrates the performance benefits of our approach. This paper is organized as follows. Section 2 provides a motivating example for transaction routing with freshness control. Section 3 introduces the basic concepts and assumptions regarding our replication model and freshness model. Section 4 describes the architecture of our database cluster, focusing on the transaction router. Section 5 presents the strategies for transaction routing with freshness control, along with the cost functions used by those strategies. Section 6 describes the Leganet prototype. Section 7 gives a performance evaluation using a combination of experimentation and emulation. Section 8 compares our approach with related work. Section 9 concludes.
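As a rough illustration of this routing idea, the sketch below tracks per-relation staleness at each node and sends a query to the node minimizing load plus estimated refresh cost. The class names, cost weights, and the linear cost model are our own illustrative assumptions, not the paper's actual formulas.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: int = 0  # concurrently executing transactions and queries
    staleness: dict = field(default_factory=dict)  # relation -> missed updates

# Hypothetical calibration constants (seconds); the real system would
# estimate these from observed execution times.
REFRESH_COST_PER_UPDATE = 0.05  # time to replay one missed update
LOAD_COST = 0.2                 # added latency per concurrent request

def routing_cost(node, query_relations, tolerated):
    """Cost = current load + estimated time to refresh the node's replicas
    to the freshness level the query requires (zero if fresh enough)."""
    refresh = 0.0
    for rel in query_relations:
        missed = node.staleness.get(rel, 0)
        if missed > tolerated.get(rel, 0):
            refresh += missed * REFRESH_COST_PER_UPDATE
    return node.load * LOAD_COST + refresh

def route(nodes, query_relations, tolerated):
    """Pick the node with minimal combined load + refresh cost."""
    return min(nodes, key=lambda n: routing_cost(n, query_relations, tolerated))

nodes = [
    Node("n1", load=4, staleness={"stock": 0}),   # busy but fresh
    Node("n2", load=1, staleness={"stock": 30}),  # idle but stale
]
# The fresh node wins here: its load cost (0.8) is below the idle node's
# refresh cost (0.2 + 30 * 0.05 = 1.7).
best = route(nodes, ["stock"], {"stock": 10})
```

Note how relaxing the tolerated staleness shifts the decision: with a tolerance of 40 missed updates instead of 10, the idle node n2 needs no refresh and becomes the cheaper choice.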
Conclusion
In this paper, we described the Leganet system, which performs freshness-aware transaction routing in a database cluster. To optimize load balancing, we use lazy multi-master database replication with freshness control, and build on prior work on relaxing freshness for higher performance. The Leganet system preserves database and application autonomy using non-intrusive techniques that work independently of any DBMS. The main contribution of this paper is a transaction router which takes into account the freshness requirements of queries at the relation level to improve load balancing. It uses a cost function that considers not only the cluster load in terms of concurrently executing transactions and queries, but also the estimated time to refresh replicas to the level required by incoming queries. Our model estimates the freshness of databases updated by autonomous applications at the level of relations, which is accurate enough to improve transaction routing. It works with multi-master replication, which provides the highest opportunities for transaction load balancing. We also proposed two cost-based routing strategies that improve load balancing. The first strategy (CB) assesses the synchronization cost of respecting the staleness tolerated by queries and transactions, and chooses the node with minimal cost. The second strategy (BRT) is a variant with a parameter, Tmax, which represents the maximum response time users can accept for update transactions. It dedicates as many cluster nodes as necessary to ensure that updates are executed in less than Tmax, and uses the remaining nodes for processing queries. We implemented our solution on an 11-node cluster running Oracle8i under Linux. We used this implementation for initial performance experiments and to calibrate an emulation model that deals with larger cluster configurations (up to 128 nodes).
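The node-partitioning idea behind BRT can be sketched as follows. The linear response-time model and the parameter values are illustrative assumptions of ours, not the paper's actual estimator.

```python
def partition_nodes(n_nodes, update_rate, per_txn_time, t_max):
    """Return (update_nodes, query_nodes) for a BRT-style split.

    Assumed model: with k update nodes, each serves about update_rate / k
    concurrent transactions, so the estimated response time is that share
    times per_txn_time. Grow k until the estimate fits under t_max, then
    leave the remaining nodes to process queries.
    """
    for k in range(1, n_nodes + 1):
        est_response = (update_rate / k) * per_txn_time
        if est_response <= t_max:
            return k, n_nodes - k
    return n_nodes, 0  # even all nodes cannot meet t_max

# Hypothetical workload: 40 concurrent updates, 0.5 s each, Tmax = 4 s,
# on the paper's 11-node cluster size.
upd, qry = partition_nodes(n_nodes=11, update_rate=40,
                           per_txn_time=0.5, t_max=4.0)
```

Tightening Tmax dedicates more nodes to updates and leaves fewer for queries, which is why, as the experiments below indicate, BRT only pays off when update transactions matter more than queries.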
First, we showed that, compared with two baseline cost functions (one based on the nodes' current load and the other based on the nodes' freshness), our cost function yields better load balancing and performance. Second, the experiments showed that CB outperforms BRT in the general case, and that BRT should be preferred only when update transactions are more important than queries. Third, our approach scales very well (almost linearly) for clusters up to 32 nodes and shows good scale-up until 96 nodes. Finally, we showed that relaxing freshness has a great impact on transaction processing performance (up to a factor of 5), for both updates and queries, thanks to better load balancing and reduced node synchronization.

In this paper, we made the simplifying assumption of full replication in order to concentrate on the problem of freshness-aware transaction routing. However, we could extend our approach to deal with partial replication, with a mix of partitioned relations (typically the largest relations) and relations replicated over a subset of the cluster nodes. Although this approach does not violate database autonomy, it would require some careful database design. Another improvement we are investigating is asymmetric synchronization, i.e., sending the modified tuples obtained at the initial node of a transaction instead of replaying the whole transaction. As explained in Section 7.6, this solution is not straightforward with black-box DBMSs, since it implies log sniffing. Our experimentation with Oracle's LogMiner tool showed that reading the log takes at least 0.34 s, so we must study carefully under which conditions asymmetric synchronization may be used. Finally, the current Leganet system uses a centralized router, which can obviously be a single point of failure and a performance bottleneck. A solution to this problem is to replicate the router and its metadata at two or more nodes.
Maintaining the consistency of the replicated metadata, for instance using either eager replication or distributed shared-memory software, is an interesting issue and the subject of future work.