Deferred locking with shadow transaction for client–server database management systems
|Article code|Publication year|English article|Word count|
|9118|2006|21-page PDF|12,590 words|
Publisher: Elsevier – Science Direct
Journal : Journal of Systems Architecture, Volume 52, Issue 7, July 2006, Pages 373–393
Data-shipping systems that allow inter-transaction caching raise the need for a transactional cache consistency maintenance (CCM) protocol, because each client is able to cache a portion of the database dynamically. Deferred locking (DL) is a new CCM scheme capable of reducing the communication overhead required for cache consistency checking. Thanks to its low communication overhead, DL can show superior performance, but it tends to exhibit a high transaction abort ratio. To cope with this drawback, we develop the new notion of a shadow transaction: a backup transaction that is kept ready to replace an aborted transaction. This notion and the locking mechanism of DL have been incorporated into deferred locking with shadow transaction (DL–ST). Using a distributed database simulation model, we evaluate the performance of the proposed schemes under a wide range of workloads.
With the proliferation of inexpensive, high-performance workstations and networks, client–server database management systems (DBMSs) have received a great deal of interest in distributed environments. Client–server DBMSs can be categorized into two classes: query-shipping and data-shipping. Most commercial relational DBMSs are based on a query-shipping approach, in which most query processing is performed at the server. In contrast, data-shipping systems transfer data items to clients so that query processing can be performed at the clients. Most distributed object-oriented DBMSs today are based on a data-shipping approach. Since data-shipping systems migrate much of the DBMS functionality from the server to the clients, they can off-load shared server machines. However, they are susceptible to network congestion, which can arise if a high volume of data is requested by clients.

In data-shipping systems, inter-transaction caching, where the cached contents of clients are retained even across transaction boundaries, is an effective technique for improving performance. Inter-transaction caching enables client resources such as memory and CPUs to be used effectively for database-related processing. It also makes it possible to reduce the network traffic and message-processing overhead for both the server and the clients. However, multiple copies of a data item inevitably come to exist, because each client is able to cache a portion of the database dynamically. Therefore, inter-transaction caching raises the need for a transactional CCM protocol, which possesses aspects of both replica management and concurrency control. Since the CCM scheme employed substantially affects the performance of a DBMS, various algorithms have been proposed in the literature. Transactional CCM schemes must ensure that no transaction that accesses a stale data copy is allowed to commit.
According to their policy for guaranteeing this, CCM schemes can be classified into two classes: avoidance-based and detection-based. Avoidance-based schemes ensure that all existing copies at clients are always valid. They usually employ a read-one/write-all (ROWA) protocol for replica management. The nature of ROWA lets them perform read operations on cached copies efficiently, without contacting the server. However, they require a substantial amount of work to ensure that cached pages are valid whenever an updating transaction attempts to commit or to perform a write operation. In addition, these consistency maintenance actions must be performed at one or more remote clients. In contrast, detection-based schemes allow stale data copies to reside in client caches, so each transaction is required to check the validity of any cached copy it accesses sometime prior to its commit. In detection-based schemes, these consistency-checking actions for a single transaction involve only its origin site and the server. They could therefore be more robust against client failure than avoidance-based schemes. Adaptive optimistic concurrency control (AOCC) and caching two-phase locking can be categorized in this class. AOCC can significantly reduce the number of messages required for consistency checking, since each transaction is allowed to access cached copies without any constraint. However, it can induce a high transaction abort ratio, since transactions may read stale data copies. Caching two-phase locking (C2PL), which was designed on the basis of the primary-copy locking (PCL) algorithm, can cope with frequent transaction aborts. C2PL presumes that cached copies in client buffers are not valid, so each client transaction is required to check the validity of cached copies before accessing them.
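As a rough illustration of the detection-based approach, the following sketch models C2PL-style validation: the client presumes its cached copies are stale and contacts the server before every access, paying one round trip per access even when the cached copy turns out to be valid. All names here (CacheServer, C2PLClient, version counters) are illustrative assumptions, not the protocol as specified in the original papers.

```python
class CacheServer:
    """Authoritative copy of the database, keyed by page id."""

    def __init__(self, pages):
        self.pages = pages  # page_id -> (version, data)

    def check_and_fetch(self, page_id, cached_version):
        # Validity check: ship fresh data only if the cached copy is stale.
        version, data = self.pages[page_id]
        if cached_version == version:
            return version, None          # cached copy is still valid
        return version, data              # stale: ship the current copy

    def write(self, page_id, data):
        version, _ = self.pages[page_id]
        self.pages[page_id] = (version + 1, data)


class C2PLClient:
    """C2PL-style client: one server round trip per access."""

    def __init__(self, server):
        self.server = server
        self.cache = {}                   # page_id -> (version, data)
        self.messages = 0                 # round trips to the server

    def read(self, page_id):
        cached_version = self.cache.get(page_id, (0, None))[0]
        self.messages += 1                # check validity on *every* access
        version, data = self.server.check_and_fetch(page_id, cached_version)
        if data is not None:
            self.cache[page_id] = (version, data)
        return self.cache[page_id][1]
```

Even a repeated read of an already-cached page costs a message here, which is exactly the communication overhead the deferred locking scheme described below aims to avoid.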
A PCL-based scheme usually has the following desirable properties: (1) it is simple and easy to implement, (2) it can be relatively robust against client failure, and (3) it can show a low abort ratio. Despite these potential advantages, many studies report that C2PL performs poorly, mainly due to the message overhead of its consistency checking. These observations imply that a PCL-based algorithm could be widely adopted if it were refined with a mechanism to reduce the communication overhead. In this respect, we devise a novel CCM scheme fundamentally on the basis of the PCL algorithm. In the proposed scheme, validity checking of cached copies is deferred until a client–server interaction is inevitably required due to a cache miss; hence we call this approach the deferred locking scheme, DL for short. Although DL is a PCL-based algorithm like C2PL, it differs from C2PL in that consistency checking is performed on each cache miss rather than on each access. DL is capable of reducing client–server interactions significantly by combining a number of lock requests and a data-shipping request into a single message packet. However, it can induce a high transaction abort ratio due to its lazy buffer validation. This may not be a problem for some application domains, where transactions are strictly under program control and user-initiated requests can be redone on transaction abort without further input. However, the high abort rate of DL could make it unsuitable for interactive application domains, since a user is obliged to perform all of his or her activities again when a transaction aborts. For this kind of application domain, a mechanism that minimizes the negative impact of transaction aborts would be very beneficial, even if it imposes some extra overhead on system resources.
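The batching idea behind DL can be sketched as follows: accesses that hit the cache defer their lock/validity requests, which are then piggybacked on the next cache-miss message, so several logical requests cost a single round trip. If the server finds that a deferred access touched a stale copy, the transaction aborts, which is DL's lazy-validation drawback. This is a minimal sketch under assumed names and data structures, not the paper's actual implementation.

```python
class Server:
    """Server that validates a batch of deferred accesses per miss."""

    def __init__(self, pages):
        self.pages = pages  # page_id -> (version, data)

    def miss_request(self, deferred, miss_id):
        # One packet carries all deferred (page_id, version) checks
        # plus the data-shipping request for the missed page.
        for page_id, version in deferred:
            if self.pages[page_id][0] != version:
                return "abort", None      # a stale cached copy was read
        return "ok", self.pages[miss_id]


class DLClient:
    """DL-style client: validity checks only on cache misses."""

    def __init__(self, server):
        self.server = server
        self.cache = {}                   # page_id -> (version, data)
        self.deferred = []                # accesses not yet validated
        self.messages = 0

    def read(self, page_id):
        if page_id in self.cache:
            # Cache hit: no message; defer the lock/validity request.
            version, data = self.cache[page_id]
            self.deferred.append((page_id, version))
            return data
        # Cache miss: ship all deferred requests with the fetch.
        self.messages += 1
        status, page = self.server.miss_request(self.deferred, page_id)
        if status == "abort":
            raise RuntimeError("transaction abort: stale cached copy read")
        self.deferred = []
        self.cache[page_id] = page
        return page[1]
```

Compared with the per-access sketch above, three reads here cost two messages instead of three; the price is that stale reads are discovered late, forcing an abort.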
From this perspective, we develop the new notion of a shadow transaction: a backup transaction that is kept ready to replace an aborted transaction. We then incorporate this notion and the locking mechanism of DL into deferred locking with shadow transaction, DL–ST for short. The remainder of this paper is organized as follows. Section 2 presents various CCM schemes, and Section 3 describes the details of the proposed schemes. Section 4 presents a simulation model for evaluating performance tradeoffs, and Section 5 describes the performance results. Finally, Section 6 offers concluding remarks.
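One way to picture the shadow-transaction idea: while the primary transaction runs, a shadow record of its operations is kept ready, so an abort can be masked by replaying that record instead of asking the user to redo the work. The sketch below is only a schematic of this notion under assumed names (ShadowedTransaction, an execute_op callback); the paper's DL–ST mechanism is tied to its locking protocol and is not reproduced here.

```python
class ShadowedTransaction:
    """Keeps a shadow log of operations so an abort can be replayed
    without further user input."""

    def __init__(self, execute_op):
        self.execute_op = execute_op      # applies a single operation
        self.log = []                     # shadow: recorded operations
        self.results = []

    def run(self, op):
        self.log.append(op)               # record before executing
        self.results.append(self.execute_op(op))
        return self.results[-1]

    def restart_from_shadow(self):
        # On abort, replay the logged operations automatically,
        # sparing the user from re-entering his or her activity.
        self.results = [self.execute_op(op) for op in self.log]
        return self.results
```

The benefit is precisely the one motivated above: in an interactive setting, the wasted user work on abort shrinks to a transparent replay, at the cost of maintaining the shadow state.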
Conclusion
In this paper, we have proposed new CCM schemes, named DL and DL–ST, on the basis of the primary-copy locking algorithm. The primary design philosophy of DL was to optimize performance by minimizing the resulting communication cost. We realized this by combining a number of lock requests and a data-shipping request into a single message packet. However, DL tends to show a high transaction abort ratio due to its lazy buffer validation. This high abort ratio would make DL unsuitable for interactive application domains, since it can induce a large amount of wasted work in user activities. To minimize this negative impact of transaction aborts, we introduced the notion of a shadow transaction. This notion and the locking mechanism of DL have been incorporated into DL–ST. We evaluated the performance of the proposed schemes using a simulation of a client–server DBMS architecture. The simulation results indicate that the proposed schemes provide reasonable or superior performance across a wide range of workloads. Their performance stems from three essential properties: (1) they significantly reduce the number of message exchanges, (2) they keep relatively large effective cache sizes, and (3) they provide enhanced levels of concurrency. The most perceptible advantage of the DL-based schemes is that they outperform O2PL-I and C2PL by a significant margin in high-data-contention environments. This property makes them all the more attractive, since today's database engines are required to support a substantially enhanced level of concurrency due to ever-increasing CPU and I/O processing speeds, which tend to cause a high level of data contention. The DL-based schemes can outperform AOCC in most workloads, except in environments where system resources are sparsely utilized.
Of the two proposed schemes, DL–ST is inferior to DL in low-data-contention environments due to the overhead associated with shadow transactions. However, DL–ST can eliminate a substantial amount of the wasted work in user activities compared to DL. A more obvious advantage of DL–ST is that it outperforms DL in environments where the level of data contention is so severe that even DL cannot withstand it. In the future, we plan to further investigate cache consistency maintenance schemes for hybrid architectures in which data-shipping and query-shipping are combined.