Download English ISI Article No. 8204
Translated Article Title

Genetic algorithm optimized distribution sampling test for modulation classification

English Title
Genetic algorithm optimized distribution sampling test for M-QAM modulation classification
Article Code: 8204
Publication Year: 2013
Length: 14-page PDF
Source

Publisher: Elsevier - Science Direct

Journal: Signal Processing, Available online 7 June 2013

Translated Keywords
Modulation classification - Cognitive radio - Distribution test - Genetic algorithm

English Abstract

With both classification performance and computational complexity in mind, we propose a new optimized distribution sampling test (ODST) classifier for the automatic classification of M-QAM signals. In ODST, signal cumulative distributions are sampled at pre-established locations. The actual sampling process is transformed into a simple counting task for reduced computational complexity. The optimization of the sampling locations is based on theoretical signal models derived under various channel conditions. A Genetic Algorithm (GA) is employed to optimize the distance metrics that use the sampled distribution parameters for the distribution test between signals. The final decision is made based on the distances between the tested signal and the candidate modulations. By using multiple sampling locations on the signal cumulative distributions, the classifier's robustness against signal statistical variance or signal model mismatch is enhanced. An AWGN channel, phase offset, and frequency offset are considered to evaluate the performance of the proposed algorithm. Experimental results show that the proposed method has advantages in both classification accuracy and computational complexity over most existing classifiers.
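
To make the counting-based sampling described above concrete, the following is a minimal Python sketch of an ODST-style decision. It is an illustration under assumptions, not the paper's exact formulation: the sampled feature (received symbol magnitudes), the sampling locations, the reference samples, and the weighted squared distance are placeholders chosen for readability, and `sample_cdf` / `odst_classify` are hypothetical names.

```python
import numpy as np

def sample_cdf(values, locations):
    """Sample the empirical CDF of `values` at the given locations.
    Each sample is a simple count: the fraction of observations <= t."""
    values = np.asarray(values)
    return np.array([np.mean(values <= t) for t in locations])

def odst_classify(signal, references, locations, weights=None):
    """Toy ODST-style decision: compare the sampled distribution of the
    test signal (here, received symbol magnitudes) against pre-computed
    reference samples for each candidate modulation and pick the nearest."""
    test = sample_cdf(np.abs(signal), locations)
    if weights is None:
        weights = np.ones(len(locations))
    distances = {mod: np.sum(weights * (test - ref) ** 2)  # weighted distance
                 for mod, ref in references.items()}
    return min(distances, key=distances.get)               # smallest distance wins
```

In practice the reference samples would come from the theoretical signal models derived under the assumed channel conditions, and the per-location weights from the GA optimization described in the paper.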

English Introduction

Automatic Modulation Classification (AMC) has been an established research topic for many years. The initial applications of AMC were mostly in military electronic warfare, surveillance and threat analysis [1]. The main purpose of AMC is to automatically classify the modulation type of an intercepted signal so that it can be correctly demodulated. Many papers, e.g. [2], [3], [4], [5] and [6], have been published suggesting different solutions for this problem. Recently, as intelligent radio communication systems emerge in modern civilian communication applications, AMC, which is an important component of the adaptive modulation module, has attracted much attention from Cognitive Radio (CR) and Software Defined Radio (SDR) developers, e.g. [7], [8] and [9].

The fundamental task of AMC remains the same, though new challenges arise in current CR and SDR development environments. One obvious difficulty comes from the range of modulation types in use. In recent years, the use of signal modulations has migrated towards Quadrature Amplitude Modulation (QAM) due to its efficient, high-capacity data transmission. The popularity of QAM modulations can easily be verified by their presence in many modern radio communication standards. In IEEE 802.11a [10], BPSK, 4-QAM, 16-QAM and 64-QAM are employed as modulations for many wireless communication applications. In Digital Video Broadcasting Terrestrial (DVB-T) [11], 4-QAM, 16-QAM and 64-QAM are also the universal selections for digital TV broadcasting. In this paper, 4-QAM, 16-QAM and 64-QAM have been chosen for the development of the proposed classifier. Nevertheless, modifications can easily be made to accommodate other QAM schemes or a wider selection of modulations. The classification of QAM modulations poses unique challenges, as most signal features are very similar between different M-ary QAM (M-QAM) modulations. Another challenge for AMC is the demand for accurate classification performance under different channel conditions. In addition to channel effects, short processing time is also of interest for applications that require real-time reconfiguration of the communication system. In short, the goal is to develop a simple AMC classifier that gives accurate and robust classification performance.

Most existing AMC classifiers can be grouped into two categories: likelihood based (ML) classifiers and feature based classifiers. A likelihood based classifier gives the upper bound on classification accuracy under the condition of accurate channel estimation. Wei and Mendel [12] presented the ML method that provides the optimum performance with correct channel estimation. More ML based classifiers, e.g. [13], [14] and [15], have been developed recently to suit different modulations and channel conditions. However, their computational complexity is a major concern, which has led to the development of many sub-optimal approaches with reduced complexity. Wong and Nandi used the Minimum Distance (MD) classifier [16] to reduce the complexity. In a different way, Xu, Su and Zhou approached the complexity reduction problem by storing pre-calculated values in quantized databases to avoid complex operations [17]. These methods have all successfully reduced the complexity to different degrees, at the cost of a certain amount of performance degradation.
To further reduce the computational complexity, algorithms based on distribution tests have been developed and presented in some recent publications. Wang and Wang [18] used the Kolmogorov–Smirnov test (K–S test) [19] to formulate a solution by comparing the test signal's Cumulative Distribution Function (CDF) with the reference modulation CDFs. This method achieved improved performance, especially when only a limited signal length is available. It was pointed out in [20] that the K–S test approach requires the complete construction of signal CDFs, which is relatively complex and has the potential to be simplified. In the same paper, an optimized approach was presented that reduces the complexity of the K–S classifier by analysing the CDFs of two modulations at a single given location. When multiple modulations are considered, multiple locations, each responsible for the classification of two modulations, are used. The classification accuracy is comparable to the K–S classifier and the complexity of the algorithm is reduced significantly. However, it is clear that the information embedded in the CDFs is underutilized and the robustness of this approach can be improved. To overcome these limitations, we have developed the optimized distribution sampling test (ODST) classifier, which conducts simplified distribution tests at multiple optimized sampling locations to strike a balance between simplicity and performance.

A feature based classifier generally consists of a few steps, including feature extraction, feature selection, and classification. A classic example of a feature based AMC method can be found in [21], where multiple features are used with a decision tree classifier and an Artificial Neural Network (ANN) classifier. In recent years, higher-order statistics [5] have proven to be well suited for M-QAM signal classification. Notably, the adoption of machine learning techniques shows great potential for further enhancing the classification accuracy. Support Vector Machines [22], Artificial Neural Networks [23], and Genetic Programming [24] have all been experimented with to select and combine existing features and exploit their full potential. Wong and Nandi [23] proposed to automate the feature selection process with a Genetic Algorithm (GA) and successfully reduced the feature dimensions without degrading the classification performance. As the distribution parameters estimated at the sampling locations can be considered as features, a GA has been employed to optimize the distance metrics of the proposed ODST classifier; a small illustrative sketch of such a weight optimization follows this introduction.

This paper is arranged in the following order. The signal models under different channel conditions are presented in Section 2, followed by a detailed description of the classification strategy and some extensive analysis in Section 3. The experimental setups are explained in Section 4, with detailed results and analysis given in Section 5. The conclusion is drawn at the end.
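
As a rough illustration of how a GA can tune per-location distance weights of the kind mentioned above, here is a small, self-contained real-coded GA sketch in Python. It is not the paper's algorithm: the population size, truncation selection, uniform crossover, mutation scheme and the toy fitness function are all assumptions; in the ODST setting the fitness would be the classification accuracy obtained on labelled training signals with a given weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_optimize_weights(fitness, n_weights, pop_size=40, generations=100,
                        mutation_rate=0.1):
    """Tiny real-coded GA: evolve a population of weight vectors in [0, 1]
    and return the best one found. `fitness(weights)` should return the
    score to maximize, e.g. classification accuracy with those weights."""
    pop = rng.random((pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)[::-1]               # best individuals first
        parents = pop[order[:pop_size // 2]]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_weights) < 0.5         # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(n_weights) < mutation_rate
            child = np.where(mutate, rng.random(n_weights), child)
            children.append(child)
        pop = np.vstack([parents] + children)
    scores = np.array([fitness(w) for w in pop])
    return pop[np.argmax(scores)]

# Toy usage: recover a target weight profile (a stand-in for a real fitness).
target = np.linspace(1.0, 0.0, 8)
best = ga_optimize_weights(lambda w: -np.sum((w - target) ** 2), n_weights=8)
```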

English Conclusion

In this section, results collected from the simulation tests are presented with detailed analysis. The computational complexity is also discussed.

The classification performance under different amounts of additive noise has always been the prime criterion for an AMC solution. In Fig. 4, four different types of AMC classifiers are included. It is clear that ML provides the most accurate classification throughout the SNR range. Excluding the ML classifier, the results show that the proposed ODST classifier has a clear advantage at mid to high SNRs. At 10 dB, the proposed method achieves almost the same accuracy as the ML classifier (98.9%), and 100% classification is achieved at 11 dB. At the same SNR settings, the K–S test provides a successful classification of 95.3%, and perfect classification is achieved at 12 dB. For the cumulant based GP-KNN classifier, it can be seen that its performance is limited by the signal length available for analysis. In the mid and lower range of SNRs, the proposed ODST classifier maintains its advantage over the K–S test. The biggest difference is exhibited at 9 dB, where ODST offers an accuracy of 93.9% and the K–S test offers 88.6%. However, the accuracy advantage is gradually reduced with decreasing SNR until the performance becomes equivalent below 3 dB. On the other hand, the cumulant based GP-KNN classifier shows robust performance at low SNRs, offering better classification performance from 3 dB to 8 dB against ODST and from 3 dB to 9 dB against the K–S test. The performance at SNRs below 3 dB is generally very similar among all classifiers, with only the ML classifier being more than 5% more accurate. Complementary results from ODST for different modulations are listed in Table 2. Performance means and standard deviations are collected from 100 sets of tests, each including 30,000 signal realizations (three modulations times 10,000 signal realizations per modulation); one such simulation trial is sketched below.

In addition to the benchmarking classifiers, several existing classifiers from other literature are listed in Table 3 for performance comparison with ODST. The results for ODST come from experiments conducted under the same specific conditions as each existing classifier. It is clear that the proposed classifier outperforms the K–S classifier [18], the reduced complexity version of the K–S classifier (rcKS) [20], the phase based ML classifier [15], as well as the cumulant based classifiers [5] and [30]. The Minimum Distance (MD) classifier [16], which is a low-complexity version of the ML classifier, presents a similar level of performance at or above 14 dB compared to the proposed ODST classifier. However, with the SNR at or below 10 dB, its classification accuracy is significantly degraded. The comparison between the MD classifier and the ODST classifier at an SNR of 10 dB clearly demonstrates the performance advantage of the proposed method.
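
For context on how such accuracy figures are typically gathered, below is a hedged Python sketch of one Monte Carlo trial: drawing unit-power square M-QAM symbols and passing them through an AWGN channel at a chosen SNR. The normalization and the SNR definition (per symbol, with unit signal power) are common conventions assumed here rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def qam_symbols(M, n):
    """Draw n random symbols from a unit-average-power square M-QAM grid."""
    m = int(np.sqrt(M))
    levels = 2.0 * np.arange(m) - (m - 1)              # e.g. [-3, -1, 1, 3]
    s = rng.choice(levels, n) + 1j * rng.choice(levels, n)
    return s / np.sqrt(2 * np.mean(levels ** 2))       # normalize to E|s|^2 = 1

def awgn(s, snr_db):
    """Add complex white Gaussian noise for the requested per-symbol SNR."""
    noise_power = 10 ** (-snr_db / 10)                 # signal power is 1
    w = np.sqrt(noise_power / 2) * (rng.standard_normal(len(s))
                                    + 1j * rng.standard_normal(len(s)))
    return s + w

# One realization at 10 dB with N = 512 symbols, the length used in the
# paper's phase offset experiments.
r = awgn(qam_symbols(16, 512), snr_db=10.0)
```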
Having analyzed the performance of ODST against other existing AMC classifiers, let us look at the effect of GA optimized weighted decision making on the classification performance. The same experimental setup is used, with the SNR limited to between 0 dB and 10 dB to investigate the effect of GA optimization on low-SNR performance. According to the classification performance in Fig. 5, both GA optimized classifiers follow the performance degradation pattern of the original ODST, with an increase in classification accuracy of 1% to 3% sustained over the SNR range. The biggest performance improvement is seen between SNRs of 7 dB and 10 dB. At 8 dB, GA optimized ODST with analogue weights achieves a classification accuracy of 90.5%, providing the largest performance improvement of 4% compared to the 86.5% accuracy of the original ODST classifier. The reason for this improvement can be explained with the analysis of sampling location quality in Section 3. In Fig. 3, it is clear that some of the sampling locations start to merge and disappear between 7 dB and 10 dB. The performance improvement provided by the GA optimized weights verifies that these sampling locations need to be given lower weights to achieve better classification performance. Between the binary and analogue weights, the analogue weights provide better performance at 8 dB, 9 dB and 10 dB, while being almost equal to the binary weights from 0 dB to 7 dB. Overall, both types of optimized weights improve the classification by a fair amount.

Robustness against a limited signal length is another important quality for good AMC classification. In the experiments, the same four classifiers are tested and compared in Fig. 6. Again, ML excels at all signal lengths from N=100 to N=1000. Excluding the ML classifier, ODST is the best among the remaining classifiers. The largest performance difference of ODST against ML is about 5% at N=100. As the signal length increases, the difference starts to reduce, and at N=600 ODST achieves performance similar to the ML classifier. When compared with the K–S test, ODST shows superior robustness, especially when the signal length is in the range from N=150 to N=500. The biggest advantage of ODST is observed at N=250, where the K–S test returns a classification accuracy of 93.0%, which is 1.7% below ODST's 94.7%. Unfortunately, the cumulant based GP-KNN classifier suffers severely with reduced signal length. However, as its performance improves consistently with increasing signal length, it is clear that, with a large enough signal length, the GP-KNN classifier is still able to achieve an equal level of performance.

For the flat fading channel with unknown phase offset, we have included the original ODST classifier, the original K–S test, and the ODST classifier with EML phase estimation and recovery. The results are presented in Fig. 7. All signals are simulated with a signal length of N=512 and an SNR of 10 dB. With no phase error, the classification accuracy difference between the original ODST and the K–S test coincides with the results in the pure AWGN channel: the original ODST starts with an advantage of 3.4%. As more phase offset is introduced, both classifiers' performance starts to degrade. Nevertheless, ODST sees less degradation before the phase offset reaches θo = 6°. Once again, this illustrates the robustness of ODST compared with the K–S test. The degradation of ODST performance accelerates beyond 6°. At θo = 8.3°, the K–S test surpasses ODST and maintains better performance as the phase offset grows further. This is an understandable phenomenon: ODST relies on an accurate signal model more than the K–S test does, so when the signal model mismatch exceeds a certain level, the distribution tests at different locations become barely capable of making a positive contribution towards an accurate classification. Nevertheless, when ODST is teamed with an accurate phase offset estimation and recovery scheme, this should not be a concern, since the mismatch can be limited to a reasonable amount, as demonstrated by the results from ODST–EML. Regardless of the amount of phase offset experimented with, this classifier delivers a consistent classification accuracy of 98.8%. Under similar conditions, the ML classifier and the GP-KNN classifier both exhibit strong robustness, seeing less than 10% degradation in classification accuracy.
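
The EML phase estimation and recovery scheme referred to above is not reproduced here. As a stand-in, the sketch below applies a constant phase offset and removes it with a generic fourth-power blind estimator, which works for square QAM because E[s⁴] is a negative real number. This is an assumed, simplified alternative rather than the paper's method, and it only resolves the phase modulo 90°.

```python
import numpy as np

def apply_phase_offset(r, theta_deg):
    """Rotate the received symbols by a constant phase offset (in degrees)."""
    return r * np.exp(1j * np.deg2rad(theta_deg))

def fourth_power_phase_estimate(r):
    """Blind phase estimate for square QAM: since E[s^4] is negative real,
    angle(-sum(r^4)) / 4 tracks the channel phase (ambiguous modulo 90 deg)."""
    return np.angle(-np.sum(r ** 4)) / 4.0

def derotate(r):
    """Estimate and remove the phase offset before running the classifier."""
    return r * np.exp(-1j * fourth_power_phase_estimate(r))
```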
As can be seen in Fig. 8, both ODST and the K–S test perform poorly when frequency offset is considered. With a frequency offset of 1×10⁻⁴ to 2×10⁻⁴, the classification accuracy of both classifiers drops significantly. For ODST, the classification accuracy is reduced to 95.5% with a frequency offset of 1×10⁻⁴. As the frequency offset increases to 2×10⁻⁴, the classification performance decreases almost linearly to 77%. The K–S test sees a similar performance degradation; however, it starts with a lower classification accuracy of 92% at a frequency offset of 1×10⁻⁴ and reduces to 77% at a frequency offset of 2×10⁻⁴. The ODST classifier provides about 3.5% better classification accuracy between 1×10⁻⁴ and 1.3×10⁻⁴, and the performance advantage gradually reduces beyond 1.3×10⁻⁴. One cause of this reduced performance is the modulations being used, especially 16-QAM and 64-QAM: with their dense signal constellations, there is little room for any frequency offset. The other reason is the nature of distribution test based classifiers, which rely on a stable signal distribution with little frequency shifting. Even though ODST performs better than the K–S test, it is difficult to claim robustness under channels with frequency offsets. Although the tolerance to frequency offset is limited, some effective blind frequency offset estimation and compensation approaches for QAM modulated signals have been developed (e.g. [31]), which would help to achieve the required level of frequency offset.
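
To make the frequency offset figures concrete, here is a short sketch of how a residual carrier frequency offset is commonly modelled on the symbol stream. The assumption that the quoted offsets (1×10⁻⁴ to 2×10⁻⁴) are normalized to the symbol rate is an interpretation made for this illustration, not stated in the excerpt above.

```python
import numpy as np

def apply_frequency_offset(s, f_norm):
    """Apply a normalized carrier frequency offset (cycles per symbol).
    Each successive symbol is rotated slightly further, smearing the
    constellation and the distributions that ODST and the K-S test rely on."""
    n = np.arange(len(s))
    return s * np.exp(2j * np.pi * f_norm * n)

# With f_norm = 2e-4 and N = 512 symbols, the accumulated rotation is about
# 2 * pi * 2e-4 * 511 ~ 0.64 rad (roughly 37 degrees) across the record.
```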