In this post I want to discuss the deficiencies of practical link adaptation in current wireless systems, and how algorithms that exploit machine learning can solve these problems. First, a little background on link adaptation.
The quality of the medium through which communication occurs, i.e., the wireless channel, is constantly changing. Consider the following figure, courtesy of the Laboratory for Information Technology at Aachen University, which shows the channel quality of a cellular network throughout a rural town in Germany (green=good, red=bad). Reality is actually much worse than this, since the channel also fluctuates due to small-scale fading.
Digital communication parameters used to construct wireless waveforms (e.g., QAM order, FEC rate, the number of spatial streams, etc.) observe a tradeoff between reliability, e.g., the probability of a dropped call in a voice network, and data rate, as a function of the wireless channel quality. Consequently, modern wireless networks get the most “bang for their buck” in today’s spectrum-limited wireless market through link adaptation: the process of selecting digital communication parameters based on real-time channel quality measurements. Link adaptation allows for joint optimization of reliability and data rate tailored to each application.
Most published research makes link adaptation seem like a straightforward problem.
- Measure the wireless channel.
- Extract a link quality metric from the wireless channel.
- Map the link quality metric to digital communication parameters using a look-up-table or rate/reliability formulas.
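The steps above can be sketched in a few lines of code. The SNR thresholds, modulations, and code rates below are hypothetical placeholders; in practice the table would come from the offline analysis, simulations, or measurements discussed next.

```python
# Minimal sketch of look-up-table link adaptation.
# The thresholds and parameter choices here are illustrative only.

# Each entry: (minimum SNR in dB, modulation, FEC code rate)
MCS_TABLE = [
    (25.0, "64-QAM", 3/4),
    (18.0, "16-QAM", 3/4),
    (10.0, "QPSK",   1/2),
    (0.0,  "BPSK",   1/2),
]

def select_mcs(snr_db):
    """Map a measured link quality metric (here, a scalar SNR) to
    digital communication parameters via the look-up table."""
    for threshold, modulation, rate in MCS_TABLE:
        if snr_db >= threshold:
            return modulation, rate
    # Below every threshold: fall back to the most robust entry.
    return MCS_TABLE[-1][1], MCS_TABLE[-1][2]

print(select_mcs(20.3))  # ('16-QAM', 0.75)
```

Note that this scheme only works if the link quality metric is single-dimensional, so that the table rows can be ordered by a scalar threshold — a limitation that becomes important below.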
Look-up-tables/formulas are created offline through analysis, simulations, or measurements. When I attempted to do this on our IEEE 802.11n prototype, Hydra, I found that link adaptation in practice can be very difficult. Here are a few of the lessons I learned.
- System nonlinearities make look-up-tables created through analysis or simulations inaccurate.
- Non-Gaussian noise makes look-up-tables created through analysis or simulations inaccurate.
- Channel estimation error must be accounted for.
- Look-up-tables based on actual over-the-air measurements result in the most accurate adaptation.
- Look-up-tables based on actual over-the-air measurements for one device may result in inaccurate adaptation if placed on a different wireless device.
After my initial testing I was concerned that it might be difficult to use our prototype, even with significant amplifier backoff, to design/evaluate practical link adaptation algorithms. Apparently, however, commercial systems suffer from the same issues. For example, the Roofnet project found that the SNR link quality metric did not consistently reflect the expected reliability of the associated digital communication parameters, even when interference/collisions are not an issue.
Another large problem we discovered is that good link quality metrics for MIMO-OFDM systems just weren’t available. It turns out that analyzing the error rate of practical MIMO-OFDM links is very difficult. Consequently, finding a good, single-dimensional (which is required for look-up-tables) link quality metric that modeled the spatial and frequency selective effects of the channel was also very difficult.
So what do we do? The linear, time-invariant models we have used to create link adaptation rate/reliability expressions or to run link simulations (which may then result in look-up-tables) do not reflect the actual system. One approach is to make our analysis model more complex to include nonlinear and non-Gaussian noise effects. This seemed like a difficult undertaking. Analysis was already difficult with our simplistic linear, time-invariant system and additive Gaussian noise. Using a more complex system model would only lead to more design time for engineers. Moreover, even if we are able to find a good link quality metric, it will likely be multi-dimensional, and look-up-tables aren’t easily created in this case. All of this led us (Professor Heath and I) to machine learning.
Machine learning algorithms allow systems (in our case the link adaptation algorithm) to learn behavior solely from data observations. Hence, as long as we were able to define an accurate link quality metric and pose the problem correctly, machine learning should be able to discover the relationship between the link quality metric and the reliability of the link. First, we created the multi-dimensional ordered SNR link quality metric based on our new expression of packet error rate in MIMO-OFDM systems. Then, with the help of Professor Caramanis, we validated the efficacy of classification algorithms that exploited this new link quality metric for link adaptation. However, all this work was done using system models. To compensate for unique hardware nonidealities, we needed an online algorithm that tuned link adaptation to each transmit/receive device pair. Consequently, Professor Heath and I designed online classifiers that harness training information on-the-fly. These algorithms constantly observe packets transmitted over channels and improve the link adaptation classifier in real time based on these observations.
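To make the idea concrete, here is a toy sketch of online link adaptation as classification — not the exact algorithm from our papers. It predicts packet success for each candidate parameter set with a k-nearest-neighbor rule over a multi-dimensional link quality metric (e.g., ordered per-stream SNRs), and updates itself from live ACK/NACK observations. The class names, rates, and parameters are all illustrative.

```python
import math

class OnlineLinkAdapter:
    """Toy online classifier for link adaptation: choose the
    highest-rate parameter set (MCS) whose predicted packet error
    rate meets a reliability constraint, learning from live
    packet observations."""

    def __init__(self, rates, per_target=0.1, k=5):
        self.rates = rates            # data rate of each MCS index
        self.per_target = per_target  # reliability constraint
        self.k = k                    # neighbors used for prediction
        self.history = []             # (features, mcs, success) tuples

    def _predicted_per(self, features, mcs):
        # Estimate PER for this MCS from the k closest past
        # observations of the link quality metric.
        obs = [(math.dist(features, f), ok)
               for f, m, ok in self.history if m == mcs]
        if len(obs) < self.k:
            return None  # not enough data yet: must explore
        obs.sort(key=lambda x: x[0])
        nearest = obs[:self.k]
        return 1.0 - sum(ok for _, ok in nearest) / self.k

    def select(self, features):
        # Pick the highest-rate MCS whose predicted PER meets the
        # constraint; unexplored MCSs are tried optimistically.
        for mcs in sorted(range(len(self.rates)),
                          key=lambda m: self.rates[m], reverse=True):
            per = self._predicted_per(features, mcs)
            if per is None or per <= self.per_target:
                return mcs
        return 0  # nothing meets the constraint: most robust MCS

    def observe(self, features, mcs, success):
        # Feed back the outcome of each transmitted packet, so the
        # classifier improves in real time.
        self.history.append((features, mcs, success))
```

Because the classifier is trained on observations from this particular transmit/receive pair, device-specific nonidealities are absorbed into the learned decision rule rather than modeled explicitly.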
For example, see the following figure, which shows a plot of throughput and packet error rate for offline and online machine learning in a static wireless channel with our wireless prototype and a packet error rate reliability constraint of 10%. The offline algorithm is not able to tune itself to the unique hardware characteristics of the transmit/receive pair, resulting in lost rate/reliability. The online algorithm, however, discovers the correct digital communication parameters in real time. There have been other rate adaptation algorithms, notably auto-rate fallback (ARF), which adapt online. Unfortunately, they don’t take advantage of explicit link quality metrics and so cannot adapt well in dynamic channel conditions (see the following figure).
The best part of our online learning algorithms for link adaptation is simplicity. The algorithms are installed and we’re done. No complex analysis of the reliability curves. No calibration algorithms to determine amplifier backoff. Additionally, our recent results with support vector machines also show that online learning can be implemented with low memory/processing complexity.
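To illustrate how online learning can run with bounded memory, here is a budgeted online kernel classifier — a stand-in for the support vector machine approach, not the algorithm from the cited paper. The budget caps the number of stored support vectors, so both memory and per-packet processing stay fixed no matter how many packets are observed.

```python
import math

class BudgetedKernelPerceptron:
    """Online kernel classifier with a fixed memory budget. A new
    support vector is stored only on a prediction mistake, and the
    oldest is evicted when the budget is exceeded, bounding both
    memory and per-prediction cost. (Illustrative sketch only.)"""

    def __init__(self, budget=50, gamma=0.5):
        self.budget = budget  # maximum number of support vectors
        self.gamma = gamma    # RBF kernel width parameter
        self.sv = []          # list of (features, label) pairs

    def _kernel(self, a, b):
        # Gaussian (RBF) kernel between two feature vectors.
        return math.exp(-self.gamma * math.dist(a, b) ** 2)

    def predict(self, x):
        # Classify by the sign of the kernel expansion over the
        # stored support vectors.
        score = sum(y * self._kernel(s, x) for s, y in self.sv)
        return 1 if score >= 0 else -1

    def update(self, x, y):
        # Learn online: add a support vector only on a mistake,
        # evicting the oldest one if over budget.
        if self.predict(x) != y:
            self.sv.append((x, y))
            if len(self.sv) > self.budget:
                self.sv.pop(0)
```

In a link adaptation setting, `x` would be the link quality metric for a transmitted packet and `y` whether the packet succeeded at a given parameter set; the budget is what keeps the implementation feasible on a real-time platform.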
For More Information:
R. C. Daniels and R. W. Heath, Jr., “Online Adaptive Modulation and Coding with Support Vector Machines,” to appear in Proceedings of the IEEE European Wireless Conference, Lucca, Italy, April 2010.
R. C. Daniels and R. W. Heath, Jr., “An Online Learning Framework for Link Adaptation in Wireless Networks,” Proceedings of the Information Theory and Applications Workshop, San Diego, CA, February 2009.
R. C. Daniels, C. M. Caramanis, and R. W. Heath, Jr., “Adaptation in Convolutionally-Coded MIMO-OFDM Wireless Systems through Supervised Learning and SNR Ordering,” IEEE Transactions on Vehicular Technology, January 2010.