Publications / Theses
2008
Student: Mahmoud Taherzadeh, PhD |
Title: Lattice-Based Precoding and Decoding in MIMO Fading Systems |
The PDF file of this thesis
Abstract:
In this thesis, different aspects of lattice-based precoding and decoding for the transmission of digital and analog data over MIMO fading channels are investigated:
1) Lattice-based precoding in MIMO broadcast systems: A new viewpoint for adopting lattice reduction in communication over MIMO broadcast channels is introduced. Lattice basis reduction helps us to reduce the average transmitted energy by modifying the region which includes the constellation points. The new viewpoint helps us to generalize the idea of lattice-reduction-aided precoding to the case of unequal-rate transmission, and to obtain analytic results for the asymptotic behavior of the symbol error rate for lattice-reduction-aided precoding and the perturbation technique. Also, the outage probability for both cases of fixed-rate users and fixed sum-rate is analyzed. It is shown that the lattice-reduction-aided method, using the LLL algorithm, achieves the optimum asymptotic slope of the symbol-error-rate curve (called the precoding diversity).
2) Lattice-based decoding in MIMO multiaccess systems and MIMO point-to-point systems: Diversity order and the diversity-multiplexing tradeoff are two important measures of the performance of communication systems over MIMO fading channels. For MIMO multiaccess systems (with single-antenna transmitters) and MIMO point-to-point systems with the V-BLAST transmission scheme, it is proved that lattice-reduction-aided decoding achieves the maximum receive diversity (which is equal to the number of receive antennas). Also, it is proved that naive lattice decoding (which discards the out-of-region decoded points) achieves the maximum diversity in V-BLAST systems. On the other hand, the inherent drawbacks of naive lattice decoding for general MIMO fading systems are investigated. It is shown that using naive lattice decoding for MIMO systems has considerable deficiencies in terms of the diversity-multiplexing tradeoff. Unlike the case of maximum-likelihood decoding, in this case even perfect lattice space-time codes, which have the non-vanishing determinant property, cannot achieve the optimal diversity-multiplexing tradeoff.
3) Lattice-based analog transmission over MIMO fading channels: The problem of finding delay-limited schemes for sending an analog source over MIMO fading channels is investigated in this part. First, the problem of robust joint source-channel coding over an additive white Gaussian noise channel is investigated. A new scheme is proposed which achieves the optimal slope of the signal-to-distortion-ratio (SDR) curve (unlike previously known coding schemes). Then, this idea is extended to MIMO channels to construct lattice-based codes for joint source-channel coding over MIMO channels. Also, similar to the diversity-multiplexing tradeoff, the asymptotic performance of MIMO joint source-channel coding schemes is characterized, and a concept called the diversity-fidelity tradeoff is introduced in this thesis.
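For readers who want to see the lattice-reduction step concretely, the following is a minimal textbook LLL reduction in Python/NumPy (with the usual Lovász parameter delta = 0.75); it is an illustrative sketch, not the implementation used in the thesis.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the columns of B; returns B* and mu."""
    n = B.shape[1]
    Bs = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = B[:, i] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
            Bs[:, i] -= mu[i, j] * Bs[:, j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of the lattice basis given by the columns of B.
    (Recomputing Gram-Schmidt each pass is wasteful but keeps the sketch short.)"""
    B = B.astype(float).copy()
    n = B.shape[1]
    k = 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):          # size-reduce column k
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
                Bs, mu = gram_schmidt(B)
        # Lovasz condition: advance if satisfied, otherwise swap and step back.
        if Bs[:, k] @ Bs[:, k] >= (delta - mu[k, k - 1] ** 2) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B

# Example basis (columns are basis vectors).
B = np.array([[1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], [3.0, 5.0, 6.0]]).T
print(lll_reduce(B))
```

In lattice-reduction-aided precoding, the reduced basis of the effective channel replaces the original one before rounding, which is what shrinks the region containing the constellation points and hence the average transmitted energy.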
|
Student: Alireza Bayesteh, PhD |
Title: Scheduling in Large Scale MIMO Downlink Systems |
The PDF file of this thesis
Abstract:
This dissertation deals with the problem of scheduling in wireless MIMO (Multiple-Input Multiple-Output) downlink systems. The focus is on large-scale systems in which the number of subscribers is large.
In part one, the problem of user selection in the MIMO broadcast channel is studied. An efficient user selection algorithm is proposed and is shown to achieve the sum-rate capacity of the system asymptotically (in terms of the number of users), while requiring (i) a low-complexity precoding scheme (zero-forcing beamforming) at the base station, (ii) a low amount of feedback from the users to the base station, and (iii) a low-complexity search.
Part two studies the problem of the MIMO broadcast channel with partial Channel State Information (CSI) at the transmitter. The necessary and sufficient conditions on the amount of CSI at the transmitter (which is provided via feedback links from the receivers) in order to achieve the sum-rate capacity of the system are derived. The analysis is performed in various signal-to-noise ratio regimes.
In part three, the problem of sum-rate maximization in a broadcast channel with a large number of users, when each user has a stringent delay constraint, is studied. In this part, a new definition of fairness, called short-term fairness, is introduced. A scheduling algorithm is proposed that achieves (i) the maximum sum-rate throughput and (ii) the maximum short-term fairness of the system simultaneously, while satisfying the delay constraint for each individual user with probability one.
In part four, the sum-rate capacity of the MIMO broadcast channel with Rician fading is derived in various scenarios in terms of the value of the Rician factor and the distribution of the specular components of the channel.
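As a rough illustration of greedy user selection with zero-forcing beamforming (a generic heuristic sketch with placeholder power and channel models, not the algorithm proposed in the thesis):

```python
import numpy as np

def zf_sum_rate(H, P=10.0):
    """ZF beamforming sum-rate for channel rows H (one row per selected user),
    with total power P split equally across beams; unit noise power assumed."""
    W = np.linalg.pinv(H)                          # ZF beamformers (columns)
    gains = 1.0 / np.sum(np.abs(W) ** 2, axis=0)   # effective channel gains
    return np.sum(np.log2(1.0 + (P / H.shape[0]) * gains))

def greedy_user_selection(H_all, n_select, P=10.0):
    """Greedily add the user whose inclusion most increases the ZF sum-rate."""
    chosen = []
    for _ in range(n_select):
        best_u, best_rate = None, -np.inf
        for u in range(H_all.shape[0]):
            if u in chosen:
                continue
            rate = zf_sum_rate(H_all[chosen + [u], :], P)
            if rate > best_rate:
                best_u, best_rate = u, rate
        chosen.append(best_u)
    return chosen, best_rate

# Example: 4 TX antennas, 50 single-antenna users, Rayleigh fading.
rng = np.random.default_rng(0)
H_all = (rng.standard_normal((50, 4)) + 1j * rng.standard_normal((50, 4))) / np.sqrt(2)
users, rate = greedy_user_selection(H_all, n_select=4)
print(users, rate)
```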
|
2007
Student: Hamidreza Farmanbar, PhD |
Title: Communication over Channels with Causal Side Information at the Transmitter |
The PDF file of this thesis
Abstract:
This work deals with communication over the AWGN channel with additive discrete interference, where the sequence of interference symbols is known causally at the transmitter. We use Shannon's treatment of channels with side information at the transmitter as a framework to derive an "optimal precoding" scheme and a "channel code design criterion" for the channel with known interference at the transmitter. Communication over Shannon's state-dependent discrete memoryless channel, where the state sequence is known causally at the transmitter, requires encoding over the so-called associated channel, whose input alphabet cardinality is exponential in the number of states. We show that the capacity is achievable by using at most linearly many input symbols of the associated channel. In particular, we consider $M$-ary signal transmission over the AWGN channel with additive $Q$-ary interference where the sequence of i.i.d. interference symbols is known causally at the transmitter. We investigate the problem of maximizing the transmission rate under the uniformity constraint, where the channel input given any current interference symbol is uniformly distributed over the channel input alphabet. For this setting, we propose the general structure of a communication system with optimal precoding. We also investigate the extension of the proposed precoding scheme to a continuous channel input alphabet. We also consider the problem of channel code design with causal side information at the encoder. We derive the code design criterion at high SNR by defining a new distance measure between the input symbols of Shannon's associated channel. For the case of the binary-input channel, i.e., $M=2$, we show that it is sufficient to use only two (out of $2^Q$) input symbols of the associated channel in encoding as far as the distance spectrum of the code is concerned. This reduces the problem of channel code design for the binary-input AWGN channel with known interference at the encoder to the design of binary codes for the binary symmetric channel, where the Hamming distance among codewords is the major factor in the performance of the code.
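The flavor of precoding against interference known at the transmitter can be seen in a simple modulo pre-subtraction (Tomlinson-Harashima style); this baseline sketch is not the Shannon-strategy construction developed in the thesis.

```python
import numpy as np

def thp_precode(x, s, M=4):
    """Pre-subtract the known interference s from the M-PAM symbol x,
    folding the result into [-M, M) so the transmit power stays bounded."""
    return np.mod(x - s + M, 2 * M) - M

def thp_receive(y, M=4):
    """The receiver applies the same modulo to fold the interference out."""
    return np.mod(y + M, 2 * M) - M

# x in {-3,-1,1,3} (4-PAM); interference known at the transmitter only.
rng = np.random.default_rng(1)
x = rng.choice([-3, -1, 1, 3], size=8)
s = rng.uniform(-10, 10, size=8)
t = thp_precode(x, s)                   # transmitted signal, bounded regardless of s
y = t + s                               # channel adds the known interference
print(np.allclose(thp_receive(y), x))   # True in the noiseless case
```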
|
Student: Amin Mobasher, PhD |
Title: Applications of Lattice Codes in Communication Systems |
The PDF file of this thesis
Abstract:
In the last decade, there has been explosive growth in different applications of wireless technology, due to users' increasing expectations for multimedia services. With the current trend, present systems will not be able to handle the required data traffic. Lattice codes have attracted considerable attention in recent years because they provide high-data-rate constellations. In this thesis, the applications of lattice codes in different communication systems are investigated. The thesis is divided into two major parts: the first part focuses on constellation shaping and the problem of lattice labeling, and the second part is devoted to the lattice decoding problem.
In the constellation shaping technique, conventional constellations are replaced by lattice codes that satisfy some geometrical properties. However, a simple algorithm, called lattice labeling, is required to map the input data to the lattice code points. In the first part of this thesis, the application of lattice codes for constellation shaping in Orthogonal Frequency Division Multiplexing (OFDM) and Multi-Input Multi-Output (MIMO) broadcast systems is considered. In an OFDM system, a lattice code with low Peak to Average Power Ratio (PAPR) is desired. Here, a new lattice code with considerable PAPR reduction for OFDM systems is proposed. Due to the recursive structure of this lattice code, a simple lattice labeling method based on the Smith normal form decomposition of an integer matrix is obtained. A selective mapping method in conjunction with the proposed lattice code is also presented to further reduce the PAPR. MIMO broadcast systems are also considered in the thesis. In a multiple-antenna broadcast system, the lattice labeling algorithm should be such that different users can decode their data independently. Moreover, the implemented lattice code should result in a low average transmit energy. Here, a selective mapping technique provides such a lattice code.
Lattice decoding, the focus of the second part of the thesis, concerns the operation of finding the closest point of the lattice code to any point in $N$-dimensional real space. In digital communication applications, this problem is known as the integer least-squares problem, which arises in many areas, e.g., the detection of symbols transmitted over the multiple-antenna wireless channel, the multiuser detection problem in Code Division Multiple Access (CDMA) systems, and the simultaneous detection of multiple users in a Digital Subscriber Line (DSL) system affected by crosstalk. Here, an efficient lattice decoding algorithm based on Semi-Definite Programming (SDP) is introduced. The proposed algorithm is capable of handling any form of lattice constellation for an arbitrary labeling of points. In the proposed methods, the distance minimization problem is expressed as a binary quadratic minimization problem, which is solved by introducing several matrix and vector lifting SDP relaxation models. The new SDP models provide a wealth of trade-offs between the complexity and the performance of the decoding problem.
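For reference, the integer least-squares problem mentioned above is $\min_{x \in \mathbb{Z}^N} \|y - Hx\|^2$. A minimal low-complexity baseline is Babai's nearest-plane (decision-feedback) approximation; the sketch below is illustrative and is not the SDP-based decoder of the thesis.

```python
import numpy as np

def babai_nearest_plane(H, y):
    """Babai's nearest-plane approximation to argmin_x ||y - H x||^2 over
    integer x: QR-decompose H and back-substitute with rounding."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    x = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        x[i] = int(round((z[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]))
    return x

rng = np.random.default_rng(2)
H = rng.standard_normal((6, 6))
x_true = rng.integers(-2, 3, size=6)
y = H @ x_true + 0.01 * rng.standard_normal(6)
print(babai_nearest_plane(H, y))   # typically recovers x_true at this low noise level
```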
|
Student: Mehdi Ansari, PhD |
Title: Studies on Trade-off Between Throughput and Reliability in Wireless Systems |
The PDF file of this thesis
Abstract:
In the first part of the thesis, we study the trade-off between transmission reliability and data rate in the high signal-to-noise ratio regime in ad hoc wireless networks. Bandwidth allocation plays a significant role in this trade-off: dividing the bandwidth reduces the number of users on each band and consequently decreases the interference level; however, it also decreases the data rate. Noting that the interference power is substantially influenced by the network density, this trade-off introduces a measure for appropriate bandwidth allocation among users given the network density. The diversity-multiplexing trade-off is derived for a one-dimensional regular ad hoc network. In the second part of the thesis, we study the performance of point-to-point and broadcast systems with partial channel state information at the transmitter in a time-varying environment. First, the capacity of time-varying channels with periodic feedback to the transmitter is evaluated. It is assumed that the channel state information is perfectly known at the receiver and is fed back to the transmitter at regular time intervals. The system capacity is investigated in two cases: (i) the finite-state Markov channel, and (ii) the additive white Gaussian noise channel with time-correlated fading. In a multiuser scenario, we consider a downlink system in which a single-antenna base station communicates with single-antenna users over a time-correlated fading channel. It is assumed that the channel state information is perfectly known at each receiver, while the rate of channel variations and the fading gain at the beginning of each frame are known to the transmitter. The asymptotic throughput of the scheduling scheme that transmits to the user with the maximum signal-to-noise ratio is examined under variable code rate and/or variable codeword length signaling. It is shown that by selecting a fixed codeword length for all users, the order of the maximum possible throughput (corresponding to quasi-static fading) is achieved.
|
Student: Masoud Ebrahimi, PhD |
Title: Throughput Limits of Wireless Networks With Fading Channels |
The PDF file of this thesis
Abstract:
Wireless networks have been the topic of fundamental research in recent years, with the aim of achieving reliable and efficient communications. However, due to their complexity, many aspects of such configurations remain open problems. The focus of this thesis is to investigate some throughput limits of wireless networks. The network under consideration consists of $n$ source-destination pairs (links) operating in a single-hop fashion. In Chapters 2 and 3, it is assumed that each link can either be active and transmit with a constant power $P$ or remain silent. Also, fading is assumed to be the dominant factor affecting the strength of the channels between transmitter and receiver terminals. The objective is to choose a set of active links such that the throughput is maximized, where the rates of active links are either unconstrained or constrained. For the unconstrained throughput maximization, by deriving an upper bound and a lower bound, it is shown that in the case of Rayleigh fading: (i) the maximum throughput scales like $\log n$, and (ii) the maximum throughput is achievable in a distributed fashion. The upper bound is obtained using probabilistic methods, where the key point is to upper bound the throughput of any random set of active links by a chi-squared random variable. To obtain the lower bound, a threshold-based link activation strategy (TBLAS) is proposed and analyzed. The achieved throughput of TBLAS is larger by a factor of four than what was obtained in previous works with centralized methods and multihop communications. When the active links are constrained to transmit at a constant rate $\lambda$, an upper bound is derived which shows that the number of active links scales at most like $\frac{1}{\lambda} \log n$. It is proved that TBLAS asymptotically almost surely (a.a.s.) yields a feasible solution for the constrained throughput maximization problem. This solution, which is suboptimal in general, performs close to the upper bound for small values of $\lambda$. To improve the suboptimal solution, a double-threshold-based link activation strategy (DTBLAS) is proposed and analyzed based on some results from random graph theory. It is demonstrated that DTBLAS performs very close to the optimum. Specifically, DTBLAS is a.a.s. optimum when $\lambda$ approaches $\infty$ or $0$. The optimality results are obtained in an interference-limited regime. However, it is shown that, by proper selection of the algorithm parameters, DTBLAS also allows the network to operate in a noise-limited regime in which the transmission rates can be adjusted by the transmission powers. The price for this flexibility is a decrease in the throughput scaling law by a factor of $\log \log n$.
In Chapter 4, the problem of throughput maximization by means of power allocation is considered. It is demonstrated that under individual power constraints, in the optimum solution the power of at least one link should take its maximum value. Then, for the special case of $n=2$ links, it is shown that the optimum power allocation strategy for throughput maximization is such that either both links use their maximum power, or one of them uses its maximum power and the other keeps silent.
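The threshold-based activation idea can be sketched as follows; the fading model, threshold choice, and powers below are illustrative assumptions, not the constants derived in the thesis.

```python
import numpy as np

def tblas(direct_gains, threshold):
    """Threshold-based link activation: keep the links whose direct channel
    gain exceeds the threshold; all activated links transmit with power P."""
    return np.flatnonzero(direct_gains > threshold)

def sum_rate(G, active, P=1.0, noise=1.0):
    """Sum-rate of the active links; G[i, j] is the gain from TX j to RX i."""
    rates = []
    for i in active:
        interf = sum(P * G[i, j] for j in active if j != i)
        rates.append(np.log2(1.0 + P * G[i, i] / (noise + interf)))
    return sum(rates)

rng = np.random.default_rng(3)
n = 500
G = rng.exponential(1.0, size=(n, n))   # Rayleigh fading -> exponential power gains
# Illustrative threshold: keeps on average about log n of the strongest links.
active = tblas(np.diag(G), threshold=np.log(n / np.log(n)))
print(len(active), sum_rate(G, active))
```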
|
Student: Abdorreza Heidari, PhD |
Title: Utilizing Channel State Information for Enhancement of Wireless Communication Systems |
The PDF file of this thesis
Abstract:
One of the fundamental limitations of mobile radio communications is the time-varying fading channel. This thesis addresses the efficient use of channel state information to improve communication systems, with a particular emphasis on practical issues such as compatibility with existing wireless systems and low-complexity implementation. The closed-loop transmit diversity technique is used to improve the performance of the downlink channel in MIMO communication systems. For example, the WCDMA standard endorsed by 3GPP adopts a downlink closed-loop scheme based on partial channel state information, known as mode 1 of 3GPP. Channel state information is fed back from the mobile unit to the base station through a low-rate uncoded feedback bit stream. In these closed-loop systems, feedback error and feedback delay, as well as the suboptimal reconstruction of the quantized feedback data, are the usual sources of deficiency. In this thesis, we address the efficient reconstruction of the beamforming weights in the presence of feedback imperfections, by exploiting the residual redundancies in the feedback stream. We propose a number of algorithms for the reconstruction of beamforming weights at the base station, under the constraint of a constant transmit power. The issue of decoding at the receiver is also addressed. In one of the proposed algorithms, channel fading prediction is utilized to combat the feedback delay. We introduce the concept of Blind Antenna Verification, which can substitute for the conventional Antenna Weight Verification process without the need for any training data. The closed-loop mode 1 of 3GPP is used as a benchmark, and the performance is examined within a WCDMA simulation framework. It is demonstrated that the proposed algorithms have substantial gain over the conventional method at all mobile speeds and are suitable for implementation in practice. The proposed approach is applicable to other closed-loop schemes as well. The problem of (long-range) prediction of the fading channel is also considered, which is a key element of many fading-compensation techniques. A linear approach, usually used to model the time evolution of the fading process, does not perform well in long-range prediction applications. We propose an adaptive algorithm using a state-space approach for the fading process based on the sum-of-sinusoids model. Also, to enhance the widely used linear approach, we propose a tracking method for a multi-step linear predictor. Comparing the two methods in our simulations shows that the proposed algorithm significantly outperforms the linear method, for both stationary and non-stationary fading processes, especially for long-range predictions. The robust structure, as well as the reasonable computational complexity, makes the proposed algorithm appealing for practical applications.
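As a baseline, the "linear approach" to multi-step fading prediction can be sketched as a least-squares-fitted linear predictor applied to a sum-of-sinusoids sample path; the state-space algorithm proposed in the thesis is not reproduced here.

```python
import numpy as np

def fit_linear_predictor(h, order, horizon):
    """Least-squares fit of coefficients a so that
    h[t + horizon] ~ a @ h[t - order + 1 .. t]  (multi-step linear prediction)."""
    rows = [h[t - order + 1:t + 1] for t in range(order - 1, len(h) - horizon)]
    X = np.array(rows)
    y = h[order - 1 + horizon:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Jakes-like sum-of-sinusoids fading sample path (illustrative model only).
rng = np.random.default_rng(4)
t = np.arange(2000) * 1e-3
h = sum(np.cos(2 * np.pi * 50 * np.cos(th) * t + rng.uniform(0, 2 * np.pi))
        for th in rng.uniform(0, np.pi, 8)) / np.sqrt(8)

a = fit_linear_predictor(h, order=20, horizon=10)
pred = h[1000 - 20 + 1:1000 + 1] @ a   # predict h[1010] from samples up to t = 1000
print(pred, h[1010])
```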
|
Student: Mohammad Ali Maddah-Ali, PhD |
Title: Communication over MIMO Multi-User Systems: Signalling and Fairness |
The PDF file of this thesis
Abstract:
The employment of multiple-antenna transmitters/receivers in communication
systems is known as a promising solution for providing high-data-rate wireless
links. In multi-user environments, the problems of signaling and fairness
for multi-antenna systems have emerged as challenging problems. This
dissertation deals with these problems in several multi-antenna multi-user
scenarios.
In part one, a simple signaling method for the multi-antenna broadcast channels
is proposed. This method reduces the MIMO broadcast system to a set of parallel
channels. The proposed scheme has several desirable features in terms of: (i)
accommodating users with different numbers of receive antennas, (ii) exploiting
multi-user diversity, and (iii) requiring low feedback rate. The simulation results
and analytical evaluations indicate that the achieved sum-rate is close to the
sum-capacity of the underlying broadcast channel.
In part two, for multiple-antenna systems with two transmitters and two receivers,
a new non-cooperative scenario of data communication is studied in which each
receiver receives data from both transmitters. For such a scenario, a signaling
scheme is proposed which decomposes the system into two broadcast or two multi-access
sub-channels. Using the decomposition scheme, it is shown that this signaling
scenario outperforms the other known non-cooperative schemes in terms of the
achievable multiplexing gain. In particular for some special cases, the achieved
multiplexing gain is the same as the multiplexing gain of the system, where the
full cooperation is provided between the transmitters and/or between the receivers.
Part three investigates the problem of fairness for a class of systems for which a
subset of the capacity region, which includes the sum-capacity facets, forms a
polymatroid structure. The main purpose is to find a point on the sum-capacity
facet which satisfies a notion of fairness among active users. This problem is
addressed both in the case where achieving interior points of the region is
computationally feasible and in the case where it is not.
In part four, $K$-user memoryless interference channels are considered, where each
receiver sequentially decodes the data of a subset of transmitters before it
decodes the data of the designated transmitter. A greedy algorithm is developed
to find the users which are decoded at each receiver and the corresponding decoding
order such that the minimum rate of the users is maximized. It is proven that the
proposed algorithm is optimal.
The results of parts three and four are presented for general channels which
include the multiple-antenna systems as special cases.
|
Student: Seyed Reza Mirghaderi, MASc |
Title: Information Theoretic Aspects of Wireless Networks with Coverage Constraint |
The PDF file of this thesis
Abstract:
A wireless multicast network with a stringent decoding delay constraint and a minimum coverage requirement is characterized when the fading channel state information is available only at the receiver side. In the first part, the optimal expected rate achievable by a random user in a single-antenna system is derived in terms of the minimum multicast requirement under two scenarios: a hard coverage constraint and a soft coverage constraint. In the first case, the minimum multicast requirement is expressed in terms of the multicast outage capacity, while in the second case the expected multicast rate must satisfy the minimum requirement. Also, the optimum power allocation in an infinite-layer superposition code, achieving the highest expected typical rate, is derived. For the MISO case, a suboptimal coding scheme is proposed, which is shown to be asymptotically optimal when the number of transmit antennas grows at least logarithmically with the number of users in the network. In the second part, a joint source-channel coding scheme is introduced, where a multi-resolution Gaussian source code is mapped to a multi-level channel code. In this part, the hard and soft coverage constraints are defined as maximum outage multicast distortion and maximum expected multicast distortion, respectively. In each scenario, the minimum expected distortion of a typical user is derived in terms of the corresponding coverage constraint. The minimization is first performed for finite-state fading channels and is then extended to continuous fading channels.
|
Student: Joseph Pierri, MASc |
Title: Design and Implementation of an OFDM WLAN Synchronizer |
The PDF file of this thesis
Abstract:
With the advent of OFDM for WLAN communications, as exemplified by IEEE 802.11a, it has become imperative to have efficient and reliable synchronization algorithms for OFDM WLAN receivers. The main synchronization challenges are the delay spread and the frequency offset introduced by the wireless channel. In this work, OFDM WLAN synchronization algorithms are investigated in depth, and a complete synchronizer implementation is presented. This synchronizer performs packet detection, frequency offset estimation, and time offset estimation. Competing timing offset estimation algorithms are compared under a variety of channel conditions, with varying delay spreads, frequency offsets, and channel SNRs. The metrics used to select between competing algorithms are statistical variance and incremental hardware complexity. The timing offset estimation algorithms chosen are a dropoff detection algorithm for coarse timing offset estimation and a quantized cross-correlator with a maximum detector for fine timing offset estimation.
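A minimal sketch of the autocorrelation-based packet detection and fractional frequency-offset estimation that such a synchronizer builds on, assuming the period-16 structure of the IEEE 802.11a short preamble at 20 Msample/s (the dropoff detector and quantized cross-correlator of the thesis are not reproduced):

```python
import numpy as np

def packet_detect_metric(r, L=16):
    """Delay-and-correlate metric over a one-period window:
    M[n] = |sum_k r[n+k] r*[n+k+L]|^2 / (sum_k |r[n+k+L]|^2)^2."""
    N = len(r) - 2 * L
    M = np.empty(N)
    for n in range(N):
        c = np.vdot(r[n + L:n + 2 * L], r[n:n + L])   # sum r[n+k] conj(r[n+k+L])
        p = np.sum(np.abs(r[n + L:n + 2 * L]) ** 2)
        M[n] = np.abs(c) ** 2 / p ** 2
    return M

def cfo_estimate(r, n, L=16, fs=20e6):
    """Fractional CFO from the phase of the lag-L autocorrelation at index n."""
    c = np.vdot(r[n:n + L], r[n + L:n + 2 * L])       # sum conj(r[n+k]) r[n+k+L]
    return np.angle(c) / (2 * np.pi * L) * fs

# Synthetic periodic preamble with a 100 kHz offset, noiseless for clarity.
rng = np.random.default_rng(5)
st = rng.standard_normal(16) + 1j * rng.standard_normal(16)
r = np.tile(st, 10) * np.exp(2j * np.pi * 100e3 / 20e6 * np.arange(160))
print(cfo_estimate(r, 0))   # close to 1e5 Hz
```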
|
Student: Hajar Mahdavidoost, MASc |
Title: Characterization of Rate Region and User Removal in Interference Channels with Constrained Power |
The PDF file of this thesis
Abstract:
Channel sharing is known as a key solution to satisfying the increasing demand for spectrally efficient communication. In channel sharing, several users communicate concurrently through a shared wireless medium. In such a scheme, the interference of users with one another is the main source of impairment, and performance evaluation and signaling design in the presence of such interference are known to be challenging problems. In this thesis, a system including $n$ parallel interfering AWGN transmission paths is considered, where the powers of the transmitters are subject to upper bounds. For such a system, we obtain a closed form for the boundary of the rate region based on the Perron-Frobenius eigenvalue of certain non-negative matrices. While the boundary of the rate region for the case of unconstrained power is a well-established result, this is the first such result for the case of constrained power. This result is utilized to develop an efficient user removal algorithm for congested networks. In these networks, it may not be possible for all users to attain a required Quality of Service (QoS). In this case, the solution is to remove some of the users from the set of active ones. The problem of finding the set of removed users with minimum cardinality is claimed to be an NP-complete problem. In this thesis, a novel suboptimal removal algorithm is proposed, which relies on the boundary of the rate region derived in the first part of the thesis. Simulation results show that the proposed algorithm outperforms other known schemes.
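The classical spectral-radius feasibility test behind such removal algorithms can be sketched as follows; the greedy drop rule here is an illustrative heuristic and not the algorithm proposed in the thesis.

```python
import numpy as np

def feasible(G, gamma, noise, p_max):
    """Check whether SINR targets gamma are achievable under power limits.
    G[i, j] is the gain from TX j to RX i. Returns (ok, power vector or None)."""
    n = len(gamma)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                F[i, j] = gamma[i] * G[i, j] / G[i, i]
    if max(abs(np.linalg.eigvals(F))) >= 1.0:    # Perron-Frobenius condition
        return False, None
    u = gamma * noise / np.diag(G)
    p = np.linalg.solve(np.eye(n) - F, u)        # minimal feasible power vector
    return bool(np.all(p <= p_max)), p

def greedy_removal(G, gamma, noise, p_max):
    """Drop one user at a time (largest interference contributor) until the
    remaining users meet their SINR targets (an illustrative heuristic)."""
    users = list(range(len(gamma)))
    while users:
        ok, _ = feasible(G[np.ix_(users, users)], gamma[users], noise, p_max)
        if ok:
            return users
        users.remove(max(users, key=lambda u: np.sum(G[users, u]) - G[u, u]))
    return users
```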
|
2006
Student: Farzaneh Kohandani, PhD |
Title: Application of Non-linear Optimization Techniques in Wireless Telecommunication Systems |
The PDF file of this thesis
Abstract:
Non-linear programming has been used extensively in the design of wireless telecommunication systems. An important criterion in optimization is the minimization of the mean square error. This thesis examines two applications: peak to average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems and wireless airtime traffic estimation. Both applications are of interest to wireless service providers. PAPR reduction is implemented in handheld devices, where low complexity is a major objective. On the other hand, accurate traffic prediction can save wireless service providers considerable cost through better resource management via off-line operations.
High PAPR is one of the major disadvantages of OFDM systems; it results from the large envelope fluctuations of the signal. Our proposed technique to reduce the PAPR is based on constellation shaping: it starts with a larger constellation of points, and then the points with higher energy are removed. The constellation shaping algorithm is combined with peak reduction, with extra flexibilities defined to reduce the signal peak. This method, called MMSE-Threshold, yields a significant improvement in PAPR reduction at low computational complexity.
The peak reduction problem is then formulated as a quadratic minimization problem and optimized by semidefinite programming. The simulation results show that the semidefinite programming algorithm (SDPA) yields a noticeable PAPR improvement over MMSE-Threshold, at the cost of higher complexity. Results are also presented for PAPR minimization using optimization techniques such as hill climbing and simulated annealing. The simulation results indicate that for a small number of sub-carriers, both hill climbing and simulated annealing yield a significant improvement in PAPR reduction, although their complexity can be very large.
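For reference, the quantity being minimized is the ratio of the peak to the mean power of the time-domain OFDM signal; the sketch below merely measures it on an oversampled IFFT and is not the MMSE-Threshold or SDPA algorithm.

```python
import numpy as np

def papr_db(X, oversample=4):
    """PAPR (in dB) of one OFDM symbol with frequency-domain points X,
    computed on an oversampled IFFT to approximate the analog peak."""
    N = len(X)
    Xpad = np.concatenate([X[:N // 2], np.zeros((oversample - 1) * N), X[N // 2:]])
    x = np.fft.ifft(Xpad)
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(6)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
print(papr_db(qpsk))   # typically around 8-11 dB for random QPSK on 64 carriers
```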
The second application of non-linear optimization is airtime data traffic estimation. This is a crucial problem in many organizations and plays a significant role in a company's resource management; even a small improvement in the prediction can yield large savings. Our proposed method is based on the definition of extra parameters for the basic structural model. In the proposed technique, a novel search method that combines maximum likelihood estimation with the mean absolute percentage error of the estimated data is presented. Simulation results indicate a substantial improvement of the proposed technique over the basic structural model and the seasonal autoregressive integrated moving average (SARIMA) package. In addition, this model is capable of updating its parameters when new data become available.
|
Student: Mohammadhadi Baligh, PhD |
Title: Analysis of the Asymptotic Performance of Turbo Codes |
The PDF file of this thesis
Abstract:
Battail shows that an appropriate criterion for the design of
long block codes is the closeness of the normalized weight
distribution to a Gaussian distribution. A subsequent work shows
that iterated products of single parity-check codes satisfy this
criterion. Motivated by these earlier works, in this thesis, we
study the effect of the interleaver on the performance of turbo
codes for large block lengths. A parallel concatenated turbo
code that consists of two or more component codes is considered.
We demonstrate that for large block lengths, the normalized
weights of the systematic and the parity-check sequences become
jointly Gaussian for the typical values
of weight. To optimize the turbo code performance in the
waterfall region which is dominated by high-weight codewords, it
is desirable to reduce the correlation coefficient between the
systematic and the parity weights. It is shown that: (i) the
correlation coefficients are nonnegative, (ii) the correlation
coefficients between the systematic and the two parity weights
tend to zero as the block length increases, and (iii) the
correlation coefficient between the two parity weights tends to
zero as the block length increases for "almost" any random
interleaver. This indicates that for large block lengths, the
optimization of the interleaver has a diminishing effect on the
distribution of high-weight error events, and consequently, on
the error performance in the waterfall region. We show that for
the typical weights, this weight distribution approaches the
average spectrum defined by Poltyrev. We also apply the
tangential sphere bound (TSB) to the Gaussian distribution on the
AWGN channel with BPSK signaling and show that it performs very
close to the capacity for code rates of interest.
We also study
the statistical properties of the low-weight codeword
structures. We prove that for large block lengths, the numbers of
low-weight codewords of these structures are Poisson random
variables. These random variables can be used to evaluate the
asymptotic probability mass function of the minimum distance of
the turbo code among all the possible interleavers. We show that
the numbers of indecomposable low-weight codewords of different
types tend to a set of independent Poisson random variables. We
find the mean and the variance of the union bound in the error
floor region and study the effect of expurgating low-weight
codewords on the performance. We show that the weight
distribution in the transition region between Poisson and
Gaussian follows a negative binomial distribution. We also
calculate the interleaver gain for multi-component turbo codes
based on these Poisson random variables. We show that the
asymptotic error performance for multi-component codes in
different weight regions converges to zero either exponentially
(in the Gaussian region) or polynomially (in the Poisson and
negative binomial regions) with respect to the block length,
with the code-rate and energy values close to the channel
capacity.
|
2005
No graduates from the lab in this year.
2004
Student: Viola Wing Sum Lai, MASc |
Title: ARQ Scheme Using Turbo Codes under Slow Fading Channels |
The PDF file of this thesis
Abstract: We apply an ARQ scheme to a slow fading environment and use turbo
codes as the coding method. If the system can tolerate a certain
delay, an ARQ scheme helps to decrease the probability of bits
being received in error. The concept of different levels of bits in
the signal constellation offering different levels of protection
is discussed in this thesis. We also propose two ARQ
schemes based on this protection concept. In the first method,
the constellation is changed during the second
transmission to take advantage of the different bit protection
levels. The bit probabilities of both transmissions are combined
and sent to the turbo decoder in order to increase the accuracy
of decoding.
The results show that changing the constellation
and combining the probabilities outperforms traditional
retransmission. In the second method, the different levels of bits
are separated into different frames and decoded separately.
The results show that the proposed method increases the
throughput of the system at low signal-to-noise ratio.
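The probability-combining step can be illustrated at the log-likelihood-ratio level: for independent transmissions of the same bits, per-bit LLRs simply add before decoding. The sketch below shows this Chase-style combining for plain BPSK; the constellation switching of the proposed schemes is not reproduced.

```python
import numpy as np

def combine_llrs(llr_tx1, llr_tx2):
    """Chase-style combining: for independent (re)transmissions of the same
    bits, the per-bit log-likelihood ratios simply add before decoding."""
    return llr_tx1 + llr_tx2

# BPSK over AWGN: LLR = 2*y/sigma^2 for each transmission (bit 0 -> +1).
rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 1000)
s = 1 - 2 * bits
sigma = 1.0
y1 = s + sigma * rng.standard_normal(1000)
y2 = s + sigma * rng.standard_normal(1000)
llr = combine_llrs(2 * y1 / sigma**2, 2 * y2 / sigma**2)
print(np.mean((llr < 0) != bits.astype(bool)))   # lower BER than either copy alone
```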
|
Student: Ali Abedi, PhD |
Title: Invariance Properties and Performance Evaluation of Bit Decoding Algorithms |
The PDF file of this thesis
Abstract:
Certain properties of optimal bitwise APP (A Posteriori
Probability) decoding of binary linear block codes are studied.
The focus is on the Probability Density Function (pdf) of the
bit Log-Likelihood-Ratio (LLR). A general channel model with
discrete (not necessarily binary) input and discrete or
continuous output is considered. It is proved that under a set
of mild conditions on the channel, the pdf of the bit LLR of a
specific bit position is independent of the transmitted
codeword. It is also shown that the pdfs of a given bit LLR,
when the corresponding bit takes the values of zero and one, are
symmetric with respect to each other (reflections of one another
about the vertical axis). In the case of channels with
binary inputs, a sufficient condition for two bit positions to
have the same pdf is presented. An analytical method for
approximate performance evaluation of binary linear block codes
using an Additive White Gaussian Noise (AWGN) channel model with
Binary Phase Shift Keying (BPSK) modulation is proposed. The pdf
of the bit LLR is expressed in terms of the Gram-Charlier series
expansion. This expansion requires knowledge of the statistical
moments of the bit LLR. An analytical method for calculating
these moments which is based on some recursive calculations
involving certain weight enumerating functions of the code is
introduced. It is proved that the approximation can be as
accurate as desired, using a sufficient number of terms in the
Gram-Charlier series expansion.
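For reference, the Gram-Charlier A series used in such approximations has the standard form below (the coefficients come from the moments of the bit LLR, which the thesis computes recursively from weight enumerating functions):

```latex
f_{\mathrm{LLR}}(x) \approx \frac{1}{\sigma}\,\phi(z)\left[1
  + \frac{\kappa_3}{6\sigma^{3}}\,\mathrm{He}_3(z)
  + \frac{\kappa_4}{24\sigma^{4}}\,\mathrm{He}_4(z) + \cdots\right],
\qquad z = \frac{x-\mu}{\sigma},
```

where $\phi$ is the standard normal density, $\mathrm{He}_k$ are the probabilists' Hermite polynomials, and $\mu$, $\sigma^2$, $\kappa_3$, $\kappa_4$ are the mean, variance, and higher-order cumulants of the bit LLR.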
A new method for the performance
evaluation of Turbo-Like Codes is presented. The method is based
on estimating the pdf of the bit LLR by using an exponential
model. The moment matching method is combined with the maximum
entropy principle to estimate the parameters of the new model. A
simple method is developed for computing the Probabilities of
the Point Estimates (PPE) for the estimated parameters, as well
as for the Bit Error Rate (BER). It is demonstrated that this
method requires significantly fewer samples than the
conventional Monte-Carlo (MC) simulation.
|
2003
Student: Vahid Garousi, MASc |
Title: Methods to Reduce Memory Requirements of Turbo Codes |
The PDF file of this thesis
Abstract:
Turbo coding is a powerful encoding and decoding
technique that can provide highly reliable data
transmission at extremely low signal-to-noise
ratios. Owing to the computational complexity
of the employed decoding algorithms, such as MAP,
BCJR, and APP, the realization of turbo decoders
usually requires a large amount of memory and
potentially long decoding delay. Therefore, an
efficient memory management strategy becomes one
of the key factors toward successfully designing
turbo decoders. Either the forward or the
backward path metric needs to be stored for the
whole frame. In this thesis, reducing the memory
requirements of turbo decoding algorithms is
approached by two methods. The first method tries
to find a proper quantization scheme for
storing trellis variables in the turbo decoding
stage. Uniform and non-uniform schemes are the
broad classes of quantization schemes studied.
The average distortion measure is computed for
some of the quantization schemes.
The second method proposes a technique to reduce
the storage of the path metrics in the MAP
algorithm's trellis. The novelty of this
technique is to calculate the MAP algorithm's forward
path metric (alpha) in the backward direction by utilizing matrix
inversion. In this way, most path metrics are
calculated as needed with only a small portion of
those metrics drawn from memory. This method
reduces the memory requirement by at least 50%
without significant impact on the system performance,
while incurring a small penalty in computational
complexity.
|
2002
Student: Shahram Yousefi, PhD |
Title: Bounds on the Performance of Maximum-Likelihood Decoded Binary Block Codes in AWGN Interference |
The PDF file of this thesis
Abstract:
The error probability of Maximum-Likelihood (ML)
soft-decision decoded binary block codes rarely
admits nice closed forms. In addition, for long
codes ML decoding becomes prohibitively complex.
Nevertheless, bounds on the performance of ML
decoded systems provide insight into the
effect of system parameters on the overall system
performance as well as a measure of efficiency of
the suboptimum decoding methods used in practice.
In this dissertation, using
Gallager's first bounding technique (involving a
so-called Gallager region) and within the
framework of Tangential Sphere Bound (TSB) of
Poltyrev, a general bound referred to as the
Generalized Tangential Sphere Bound (GTSB) is
developed. The Gallager region is chosen to be a
general Hyper-Surface of Revolution (HSR) which
is optimized to tighten the bound. The search for
the optimal Gallager region is a classical
problem dating back to Gallager's thesis in the
early 1960's. For the random coding case,
Gallager provided the optimal solution in a
closed form while for the non-random case the
problem has been an active area of research in
information theory for many years.
The GTSB is applied to two problems. First, to the
word error probability of BPSK-modulated binary
block codes with ML decoding, and second, to the
nonuniform signaling of binary block codes with
Maximum-A-Posteriori (MAP) decoding. For both,
the optimum HSR is obtained and it is noted that
in the former case the resulting bound is the TSB
of Poltyrev which has been the tightest bound
developed for binary block codes to date. This
terminates the search for a better Gallager
region in the groundwork of the GTSB for the
underlying application.
In the development of the TSB, the union bound, which is
the simplest inequality from the large class of
so-called Bonferroni-type inequalities in
probability theory, is utilized. In this work,
first, a new 2nd-order Bonferroni-type inequality
is proposed, and then, a new upper bound on the
word error probability of sphere codes is
developed within the framework of Gallager's
First Bounding Technique (GFBT) and the new
Bonferroni-type inequality. The resulting bound
in its first version is rather complicated as it
requires the global geometrical properties of the
code.
Subsequently, a second version of the above upper
bound, referred to as the Added-Hyper-Plane (AHP)
bound, is presented. This bound is very simple as
it only requires the spectrum of the code. For
some long binary block codes, improvements over
the TSB of Poltyrev are reported, making the AHP
bound the tightest bound to date on the performance
of ML-decoded binary block codes.
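For orientation, the starting point that the TSB and GTSB tighten is the union bound on the word error probability of an ML-decoded binary block code with BPSK signaling over the AWGN channel:

```latex
P_e \le \sum_{d = d_{\min}}^{n} A_d \, Q\!\left(\sqrt{2 d R E_b / N_0}\right),
```

where $A_d$ is the number of codewords of Hamming weight $d$ and $R$ is the code rate. The Gallager-region bounds above tighten this by conditioning on a well-chosen region of the noise space.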
|
Student: James Yang, MASc |
Title: Statistical Decision Making in Adaptive Modulation and Coding for 3G Wireless Systems |
The PDF file of this thesis
Abstract:
In this thesis, the application of Adaptive
Modulation and Coding (AMC) for 3rd-Generation
(3G) wireless systems is addressed. A new method
for selecting the appropriate Modulation and
Coding Schemes (MCS) according to the estimated
channel condition is proposed.
In this method, a statistical decision making
approach is taken to maximize the average
throughput while maintaining an acceptable Frame
Error Rate (FER). A first-order finite-state
Markov model is used to represent the average
channel Signal-to-Noise Ratio (SNR) in subsequent
frames. The MCS is selected in each state of this
Markov model (among the choices proposed in the
3G standards) to maximize the statistical average
of the throughput in that state. Using this
decision-making approach, a simplified Markov
model with fewer parameters is also proposed.
This model is suitable in AMC systems where
changes in the fading characteristics need to be
accounted for in an adaptive fashion.
Numerical results are presented showing that both
of the proposed models substantially outperform
the conventional techniques that use
threshold-based decision making.
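A toy version of the per-state selection rule reads as follows; the MCS set and the per-state FER table are hypothetical placeholders, not the 3G parameters or the channel model of the thesis.

```python
import numpy as np

# Hypothetical MCS table (label, rate in bits/symbol) and a per-state FER
# table that, in practice, would come from measurement or analysis.
MCS = [("QPSK 1/2", 1.0), ("QPSK 3/4", 1.5), ("16QAM 1/2", 2.0), ("16QAM 3/4", 3.0)]
fer = np.array([[0.02, 0.15, 0.40, 0.80],    # low-SNR state
                [0.01, 0.03, 0.10, 0.35],    # mid-SNR state
                [0.00, 0.01, 0.02, 0.08]])   # high-SNR state

def select_mcs(state):
    """Statistical selection: maximize expected throughput in this state,
    i.e. rate * (1 - FER), rather than comparing SNR against fixed thresholds."""
    throughput = [rate * (1.0 - fer[state][m]) for m, (_, rate) in enumerate(MCS)]
    return MCS[int(np.argmax(throughput))][0]

print([select_mcs(s) for s in range(3)])
```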
|
Student: Nazanin Samavati, MASc |
Title: Serial Concatenation of Turbo Code and Space-Time Block Code (Turbo Space-Time Code) over Rayleigh Fading Channel |
The PDF file of this thesis
Abstract:
This work presents the application of several
important concepts of digital communications,
namely serial concatenation, the turbo principle,
and temporal and antenna diversity, in a wireless
system. It has been shown that employing each of
the above concepts individually improves the
performance of a system.
By properly employing all of them in one
framework, we expect to achieve a system with
better performance than one employing each
concept individually. Serial concatenation of a
turbo code with a space-time block code is
considered, and the turbo principle (iterative
demodulation/decoding) is applied. Simulation
results show the expected performance gain.
|
Student: Farshad Lahouti, PhD |
Title: Quantization and Reconstruction of Sources with Memory |
The PDF file of this thesis
Abstract:
A fundamental problem in telecommunications is
the reliable transmission of a source over a
noisy channel. As an important result of
Shannon's celebrated paper, the problem
can be theoretically separated, without loss of
optimality, into two parts: source coding and
channel coding. However, in practice, due to the
strict design constraints, such as the
limitations on complexity of the systems
involved, the joint design of source and channel
coders has found increasing interest.
This thesis deals with the design of efficient
reliable communication systems with a particular
emphasis on practical issues such as complexity
and delay. A low bit-rate low complexity
Block-based Trellis Quantization (BTQ) scheme is
proposed for the quantization of speech spectrum
in speech coding applications. The proposed
scheme is based on the modeling of the LSF
intraframe dependencies with a trellis structure.
The BTQ search and design algorithms are
discussed and an efficient algorithm for the
index generation is proposed. Extensive
comparisons with several other techniques from
the literature are provided.
Efficient source decoding schemes are presented,
which take advantage of the residual redundancy
in the source coder output stream as a bandwidth
efficient way to combat the noisy channel
degradations. This falls into the category of
joint source channel (de)coding. In this part, a
family of solutions is proposed for the
asymptotically optimum Minimum Mean Squared Error
(MMSE) reconstruction of a source over memoryless
noisy channels when the residual redundancy is
exploited by use of a $\gamma$-order Markov model
($\gamma \ge 1$) and a delay of $\delta$ is allowed in
the decoding process. Considering the same
problem setup, several other simplified MMSE and
maximum a posteriori symbol and sequence decoders
are also presented.
The problem of reconstruction of predictively
quantized signals over a noisy channel is also
considered. As well, methods for the
reconstruction of speech encoded with the
Enhanced Full Rate Codec (EFRC, IS-641) and
transmitted over a noisy channel are presented. A
methodology to efficiently approximate and store
the a priori probabilities characterizing the
residual redundancies is introduced. Numerical
results are presented which demonstrate the
efficiency of the proposed algorithms.
|
2001
Student: Sasan Nikneshan, PhD |
Title: Lattice/Trellis-Based Fixed-Rate Entropy-Constrained Quantization |
The PDF file of this thesis
Abstract:
The fixed-rate entropy-constrained vector
quantizer draws its motivation from the large gap
between the performance of the optimal
entropy-constrained scalar quantizer (ECSQ) and
that of the fixed-rate LMQ for most non-uniform sources,
and tries to bridge this gap while maintaining a
fixed-rate output. Having a fixed-rate output
avoids all problems associated with a variable
rate output such as error propagation and
buffering.
For the task of
codebook search in a fixed-rate
entropy-constrained quantizer, one can use the
dynamic programming approach to achieve an
optimum performance. However, for high dimensions
and rates, the implementation complexity of the
dynamic programming approach is not affordable for
most practical systems.
In this thesis,
we introduce two new low complexity algorithms
for the codebook search in a fixed-rate
entropy-constrained vector quantizer. It is shown
that both schemes offer the same
performance as the dynamic programming
approach while reducing the complexity
substantially (for the most important classes of
sources). We also propose a decoder for the
fixed-rate entropy-constrained vector quantizer
to improve its performance in transmission over a
noisy channel. In order to add quantization
packing gain, we apply the idea of tail-biting to
the trellis coded quantization combined with
fixed-rate entropy-constrained quantization. The
tail-biting trellis significantly reduces the
required block length and also mitigates the
effect of error propagation.
|
Student: Atousa Mohammadi, PhD |
Title: Turbo-codes for wireless applications |
Abstract: This
thesis aims at providing results and insight
towards the application of turbo-codes in digital
communication systems, mainly in three parts. The
first part considers systems of combined
turbo-code and modulation. This section follows
the pragmatic approach of the first proposed such
system. It is shown that by optimizing the
labeling method and/or modifying the puncturing
pattern, improvements of more than 0.5 dB in
signal-to-noise ratio (SNR) are achieved at no
extra cost in energy, complexity, or delay.
Conventional turbo-codes with binary signaling
divide the bit energy equally among the
transmitted turbo-encoder output bits. The second
part of this thesis proposes a turbo-code scheme
with unequal power allocation to the encoder
output bits. It is shown, both theoretically and
by simulation, that by optimizing the power
allocated to the systematic and parity check
bits, improvements of around 0.5 dB can be
achieved over the conventional turbo-coding
scheme.
The
third part of this thesis tackles the question of
"the sensitivity of the turbo-code
performance towards the choice of the
interleaver", which has been raised since the
early studies of these codes. This is the first
theoretical approach taken towards this subject.
The variance of the performance bound over random
interleavers is evaluated. It is
proven that the ratio of the standard deviation
over the mean of the bound is asymptotically
constant (for large interleaver length, N),
decreases with N, and increases with SNR. The
distribution of the bound is also computationally
developed. It is shown that as SNR increases, a
very low percentage of the interleavers deviate
quite significantly from the average bound but
the majority of the random interleavers result in
performances very close to the average. The
contributions of input words of different weights
in the variance of performance bound are also
evaluated. Results show that these contributions
vary significantly with SNR and N. These
observations are important when developing
interleaver design algorithms.
|
Student: Erik Hons, MASc |
Title: Interference cancellation in CDMA |
The PDF file of this thesis
Abstract:
The suitability of general constrained quadratic
optimization as a modeling and solution tool for
wireless communication is discussed. It is found
to be appropriate due to the many quadratic
entities present in wireless communication
models. It is posited that decisions which affect
the minimum mean squared error at the receiver
may achieve improved performance if those
decisions are energy constrained. That theory is
tested by applying a specific type of constrained
quadratic optimization problem to the forward
link of a cellular DS-CDMA system. Through
simulation it is shown that when channel coding
is used, energy constrained methods typically
outperform unconstrained methods. Furthermore, a
new energy-constrained method is presented which
significantly outperforms a similar published
method in several different environments.
|
2000
Student: Pragat Chaudhari, MASc |
Title: Channel modeling in soft output decoding |
The PDF file of this thesis
Abstract:
The modeling of the soft-output decoding of a
binary linear block code using a Binary Phase
Shift Keying (BPSK) modulation system (with
reduced noise power) is the main focus of this
work. With this model, it is possible to provide
bit error performance approximations to help in
the evaluation of the performance of binary
linear block codes. As well, the model can be
used in the design of communications systems
which require knowledge of the characteristics of
the channel, such as combined source-channel
coding. Assuming an Additive White Gaussian Noise
channel model, soft-output Log Likelihood Ratio
(LLR) values are modeled to be Gaussian
distributed. The bit error performance for a
binary linear code over an AWGN channel can then
be approximated using the Q-function that is used
for BPSK systems. Simulation results are
presented which show that the actual bit error
performance of the code is very well approximated
by the LLR approximation, especially for low
signal-to-noise ratios (SNR). A new measure of
the coding gain achievable through the use of a
code is introduced by comparing the LLR variance
to that of an equivalently scaled BPSK system.
Furthermore, arguments are presented which show
that the approximation requires fewer samples
than conventional simulation methods to obtain
the same confidence in the bit error probability
value. This translates into fewer computations
and therefore, less time is needed to obtain
performance results. Other work was completed
that uses a discrete Fourier Transform technique
to calculate the weight distribution of a linear
code. The weight distribution of a code is
defined by the number of codewords which have a
certain number of ones in the codewords. For
codeword lengths of small to moderate size, this
method is faster and provides a more easily implementable
and methodical approach than other methods.
This technique has the added advantage over other
techniques of being able to methodically
calculate the number of codewords of a particular
Hamming weight instead of calculating the entire
weight distribution of the code.
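In code, the Gaussian-LLR approximation described above reduces to fitting the mean and variance of the soft outputs and evaluating a Q-function; the sketch below assumes the all-zero (or known) codeword is transmitted.

```python
import numpy as np
from math import erfc, sqrt

def ber_from_llr_samples(llr):
    """Gaussian-LLR approximation of the bit error rate: fit the mean and
    standard deviation of the LLR samples and evaluate Q(mu/sigma)."""
    mu, sigma = np.mean(llr), np.std(llr)
    return 0.5 * erfc(mu / sigma / sqrt(2))   # Q(x) = erfc(x/sqrt(2))/2

# Sanity check on uncoded BPSK, where LLR = 2*y/sigma_n^2 is exactly Gaussian:
rng = np.random.default_rng(8)
sigma_n = 0.8
y = 1.0 + sigma_n * rng.standard_normal(100000)
print(ber_from_llr_samples(2 * y / sigma_n**2))   # close to Q(1/0.8) ~ 0.106
```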
|
Student: Dwayne Stienstra, MASc |
Title: Interference cancellation in CDMA |
The PDF file of this thesis
Abstract:
The topic of interference cancellation in a coded
Code Division Multiple Access (CDMA) system has
been the focus of much recent research. Earlier
works have studied methods for jointly decoding
all the users, most of which result in
exponential complexity for the corresponding
receiver. In this thesis, a number of different
iterative decoding methods are proposed for the
multi-user interference cancellation in a CDMA
system where Turbo Codes are utilized for forward
error correction. In the proposed decoding
schemes, the individual users are decoded
separately with the operation of iterative
interference cancellation being mixed with the
iterative decoding of Turbo Codes.
These iterative decoders result in only a
marginal increase in the overall complexity as
compared to a conventional single user receiver
utilizing Turbo Codes for forward error
correction. Numerical performance results are
presented showing that in the cases of practical
interest for CDMA applications, the multiple
access interference can be essentially removed at
a reasonable level of complexity. These iterative
decoders achieve similar or better performance,
with a reduction in complexity, as compared to
similar previously published work.
|
1999
Student: Mohsen Ghazel, MASc |
Title: Fractal-based signal compression |
Abstract:
Standard fractal image coding methods seek to
find a contractive fractal transform operator T
that best scales and copies subregions of a
target image I onto smaller subimages. The
attractor of this operator is an approximation to
the image I and can be generated by iteration of
T. For most of the standard fractal-based
schemes, the transform coefficients exhibit some
degree of linear dependence which can be
exploited by using an appropriate vector
quantizer such as the LBG algorithm. Additional
compression can be achieved by lossless Huffman
coding of the quantized coefficients. Although
good fidelity is generally achieved at relatively
high compression ratios, standard fractal schemes
in the spatial domain are not without
limitations. Some of the disadvantages of the
conventional fractal schemes include expensive
computational requirements and blockiness
artifacts in the fractal representation of images.
This inspired the introduction of fractal-wavelet
transforms which operate on the wavelet domain of
images: Wavelet coefficient subtrees are scaled
and copied onto lower subtrees. In spite of their
numerous advantages, these fractal-wavelet
schemes can still be rather restrictive in the
sense that they adopt the standard three-subband
wavelet tree decomposition. This results in a
static representation for each level that varies
significantly from one level to the next. However,
intermediate approximations may be desired,
perhaps for purposes of "bit-budgeting".
We propose and implement an adaptive and
unrestricted fractal-wavelet scheme that adopts a
dynamic partitioning of the wavelet decomposition
tree, resulting in intermediate representation
between the various levels. Some of the benefits
of such a simple and adaptive fractal-wavelet
scheme include: Generating a continuous and
relatively smooth rate distortion curve for
fractal-wavelet schemes, and encoding images at a
pre-defined bit rate or representation tolerance
error. |
|
Student: A. Fazel, MASc |
Title: Coding of LSF parameters |
Abstract: A
switched lattice-based quantizer for the
quantization of speech LSF parameters is
presented. LSF difference values are first
quantized using a scalar quantizer, as well as a
two dimensional vector quantizer, and
subsequently adjusted for a lower bit-rate using
a lattice. Euclidean distance is used as the
distortion measure for the design,
while a spectral distortion measure is used for
the evaluation of the performance. LSF
differences are formed in a closed loop manner
for which a first-order prediction filter is
used. The parameters for the prediction filter
are calculated from an LSF database. Lattice-based
double frame and single frame quantization is
performed for each frame and the one which
results in a lower distortion is chosen.
Numerical results are presented showing an
excellent performance with very low complexity.
The results are compared to the split vector
quantizer using a common database for testing and
training, showing substantial improvement.
Finally, the issue of the robustness to channel
errors is investigated. |
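A minimal Python sketch of the closed-loop first-order predictive quantization described above; the uniform quantizer and the prediction coefficient are placeholder choices standing in for the thesis's scalar/lattice stages.

import numpy as np

def quantize(e, step=0.02):
    """Uniform scalar quantizer standing in for the scalar/lattice stages."""
    return step * np.round(e / step)

def closed_loop_encode(lsf_frames, a=0.85):
    """Form and quantize LSF differences in a closed loop: the first-order
    predictor runs on reconstructed values, so encoder and decoder agree."""
    recon_prev = np.zeros_like(lsf_frames[0])
    residuals = []
    for frame in lsf_frames:
        pred = a * recon_prev             # first-order prediction
        e_q = quantize(frame - pred)      # quantized LSF difference
        residuals.append(e_q)
        recon_prev = pred + e_q           # decoder-side reconstruction
    return residuals

frames = np.cumsum(0.01 * np.ones((5, 10)), axis=0)   # toy "LSF" trajectories
print(closed_loop_encode(frames)[0][:3])

Running the predictor on reconstructed rather than original values is what keeps the encoder and decoder in sync despite quantization error.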
1998
Student:
H. Hind, MASc |
Title: Signal
compression for wireless applications |
Abstract:
Compression schemes suitable for use in
(personal) wireless communication systems are
investigated in this thesis. The aim of the
thesis is to identify techniques which meet the
specific requirements of the personal wireless
communication environment.
A lossless data compression algorithm is
presented. The algorithm is based on a recent
transform technique due to Burrows and Wheeler.
It is shown that the algorithm achieves significantly improved compression compared to the V.42bis algorithm, the industry standard for wireline communication devices.
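For reference, the core of the Burrows-Wheeler transform on which the algorithm is based can be sketched in a few lines of Python; real coders follow the transform with move-to-front and entropy coding stages, which are omitted here.

def bwt(s, end="\0"):
    """Burrows-Wheeler transform: sort all rotations of s and keep the
    last column. The unique terminator makes the transform invertible."""
    s = s + end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(repr(bwt("banana")))   # 'annb\x00aa' -- like characters cluster together

The transform itself compresses nothing; it reorders the data so that runs of similar characters appear together, which the subsequent stages exploit.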
Speech compression is also investigated. In
particular, efficient quantization techniques for
transmitting speech model parameters (the Linear
Prediction Coefficients) are described. It is
shown that quantizing an equivalent set of speech
model parameters, Line Spectral Pairs, is more
efficient than quantizing the Linear Prediction
Coefficients themselves. Various quantization
techniques for Line Spectral Pairs are
investigated. It is shown that an integer lattice
quantization technique, Pyramid Vector
Quantization introduced by Fischer, gives a low
computational complexity quantizer with
competitive performance in terms of the
compression ratio (bits used per speech frame)
and the distortion introduced by the quantizer.
Finally, it is shown that the performance of the Pyramid Vector Quantizer can be improved by using the D_n lattice in place of the Z_n lattice used by Fischer. |
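A minimal sketch of pyramid vector quantization on the integer lattice, where the codebook is the set of integer vectors with L1 norm K; the projection rule below is one simple way to land exactly on the pyramid and is an illustrative assumption, not Fischer's exact procedure.

import numpy as np

def pvq_quantize(x, K):
    """Project x onto the pyramid codebook {y integer : sum(|y_i|) == K}."""
    signs = np.where(x >= 0, 1, -1)
    s = K * np.abs(x) / np.sum(np.abs(x))   # scale magnitudes onto the L1 sphere
    y = np.floor(s).astype(int)
    shortfall = K - int(y.sum())            # flooring left us short of K units
    for i in np.argsort(s - y)[::-1][:shortfall]:
        y[i] += 1                           # give units to the largest remainders
    return signs * y

x = np.array([0.7, -0.2, 0.1, -0.4])
print(pvq_quantize(x, 5))                   # an integer vector with L1 norm 5

Because all codewords lie on the same pyramid, they can be enumerated and indexed combinatorially, which is the source of the quantizer's low complexity.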
|
Student:
Jan Bakus, MASc |
Title: Combined
source and channel coding |
The PDF file of this
thesis Abstract:
A new method of combined source-channel coding
using Turbo-codes is presented. The system
presented is designed to transmit a discrete
time, continuous amplitude signal over an
Additive White Gaussian Noise (AWGN) channel
where a Turbo-code is used for error correction
purposes. The proposed system takes advantage of
the available reliability information (generated
by the Turbo decoder) to improve the quality of
the reconstructed signal. The design rules for a
quantizer and reconstructor to minimize the
overall mean square distortion are presented.
Numerical results are presented showing up to 1
dB improvement in the end-to-end distortion with
respect to a traditional channel optimized
quantizer. This improvement is obtained at the
price of a small increase in the system
complexity.
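The key idea, using the decoder's bit reliabilities to form a conditional-mean (MMSE) reconstruction rather than hard-decoding the quantizer index, can be sketched as follows; the codebook, posteriors, and bit-independence assumption are illustrative, not the thesis design.

import numpy as np
from itertools import product

def soft_reconstruct(codebook, bit_posteriors):
    """MMSE reconstruction: expectation of the reproduction value under the
    decoder's bit posteriors (bits treated as independent for simplicity)."""
    n = int(np.log2(len(codebook)))
    est = 0.0
    for bits in product([0, 1], repeat=n):
        idx = int("".join(map(str, bits)), 2)
        p = np.prod([bit_posteriors[k] if b else 1 - bit_posteriors[k]
                     for k, b in enumerate(bits)])
        est += p * codebook[idx]
    return est

codebook = np.array([-1.5, -0.5, 0.5, 1.5])    # hypothetical 2-bit quantizer
print(soft_reconstruct(codebook, [0.6, 0.5]))  # 0.2, between codebook levels

When the decoder is uncertain, the estimate falls between reproduction levels, which is exactly what a hard decision cannot do and where the distortion gain comes from.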
|
|
Student:
Mark Bingeman, MASc |
Title: Turbo-codes for wireless applications |
The PDF file of this thesis Abstract:
This thesis studies a new method of combining
Turbo Codes with various modulation techniques to
take advantage of inherent tradeoffs between BER
performance, code rate, spectral efficiency and
decoder complexity. Our new method, which we call
Symbol-Based Turbo Codes, parses the parallel
data streams of the Turbo Code encoder into n-bit
symbols and maps each symbol to a point in a
2^n-ary signal set. In the case of Symbol-Based
Turbo Codes with BPSK modulation, the BER
performance can be improved while at the same
time decreasing the decoder complexity as
compared with the traditional Turbo Code.
Symbol-Based Turbo Codes are good candidates for
spread spectrum communication systems such as
CDMA. Specifically, Symbol-Based Turbo Codes with
orthogonal modulation are suitable for a non-coherent up-link, while bi-orthogonal modulation suits a coherent up-link. Furthermore, Symbol-Based
Turbo Codes with either M-ary PSK modulation or
BPSK modulation are suitable for a coherent
down-link.
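A minimal sketch of the symbol-mapping step described above: parse the coded bit stream into n-bit symbols and map each onto a 2^n-ary orthogonal (Walsh) signal set. The stream contents and the choice n = 3 are arbitrary illustrative assumptions.

import numpy as np

def walsh_matrix(n_bits):
    """Hadamard construction of 2^n mutually orthogonal rows."""
    H = np.array([[1]])
    for _ in range(n_bits):
        H = np.block([[H, H], [H, -H]])
    return H

def map_symbols(bits, n_bits=3):
    H = walsh_matrix(n_bits)                      # 8 orthogonal waveforms
    symbols = np.reshape(bits, (-1, n_bits))      # parse into n-bit symbols
    idx = symbols.dot(1 << np.arange(n_bits)[::-1])
    return H[idx]                                 # one waveform per symbol

bits = np.random.randint(0, 2, 24)                # stand-in coded stream
print(map_symbols(bits).shape)                    # (8, 8): 8 symbols of 8 chips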
|
|
Student:
Bernd Schneider, MASc |
Title: Design and
evaluation of concatenated codes |
Abstract:
Recently, C. Berrou, A. Glavieux, and P.
Thitimajshima introduced an advanced class of
concatenated codes named Turbo-codes. These codes
provide outstanding performance and can achieve
near-Shannon limit error correcting performance
with relatively simple component codes and large
pseudo-random interleavers. In this thesis, three
concatenated coding methods are designed and
evaluated. In the first two schemes, concatenated coding systems consisting of Turbo-codes in
serial concatenation with Reed-Muller (RM) codes
are introduced. The first system, called the Turbo-Reed-Muller code, uses a Turbo-code as an outer code and a Reed-Muller code as an inner code. These codes provide up to 0.51 dB improvement over a Turbo-code at a Bit-Error-Rate (BER) of 10^-3 at the 20th iteration. The
second system, called Reed-Muller-Turbo code,
uses a Turbo-code as an inner code and a Reed-Muller code as an outer code. These codes show a performance improvement over the Turbo-code of up to 1 dB at a BER of 10^-5.
Furthermore, a general method for soft-decision decoding of product codes is introduced using the example of a (12,8,3) Hamming code in combination with a single parity check and another (12,8,3) Hamming code. At a BER of 10^-5, this scheme performs 3 dB better than hard-decision decoding and 2.3 dB better than soft-decision decoding of the (12,8,3) Hamming code. |
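The serial-concatenation structure itself is simple to sketch in Python; the toy inner and outer encoders below (repetition and single parity check) merely stand in for the Turbo and Reed-Muller codes studied in the thesis.

import numpy as np

def outer_encode(bits):                 # stand-in for the Turbo encoder
    return np.repeat(bits, 2)           # rate-1/2 repetition

def inner_encode(bits):                 # stand-in for the Reed-Muller encoder
    chunks = bits.reshape(-1, 4)
    parity = chunks.sum(axis=1, keepdims=True) % 2
    return np.hstack([chunks, parity]).ravel()   # (5,4) single parity check

data = np.random.randint(0, 2, 8)
codeword = inner_encode(outer_encode(data))      # serially concatenated
print(len(data), "->", len(codeword))            # 8 -> 20

In the actual systems, an interleaver would typically sit between the two encoders, and decoding would pass soft information iteratively between the two component decoders.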
1997 and before
Student:
A. H. Banihashemi (1997), PhD |
Title: Lattice
decoding by Integer Programming |
Abstract:
The decoding operation is the major obstacle to using lattices in communication applications. There are two general methods for
lattice decoding: i) The integer programming
method based on geometry of numbers, and ii) The
trellis method. This thesis has contributions to
both methods, and provides results which make the
comparison between the two methods possible.
Regarding method (i), Kannan's algorithm, currently the fastest known method for decoding a general lattice, is analyzed. Based on a geometrical interpretation of this algorithm, it is shown to be a special case of a wider category of algorithms, called
recursive cube search (RCS) algorithms. In this
category, we improve Kannan's algorithm, and
establish tight upper and lower bounds on the
decoding complexity of lattices. The lower bounds
prove that the RCS decoding complexity of any
sequence of lattices with possible application in
communications increases at least exponentially
with dimension and coding gain.
Regarding method (ii), we discuss and develop a
universal approach to the construction and
analysis of the trellis diagrams of lattices
using their bases. Based on this approach, we
derive tight upper bounds on the trellis
complexity of lattices, and study the problem of
finding minimal trellis diagrams for lattices.
The upper bounds both improve and generalize the
previously known similar results. Minimal trellis
diagrams for many important lattices are also
constructed. These trellises, which are novel in
many cases, can be employed to efficiently decode
the lattices via the Viterbi algorithm. Moreover,
we establish tight lower bounds on the trellis
complexity of lattices. For many of the obtained
trellises, these lower bounds provide a proof for
minimality.
Finally, we derive some results in lattice theory
with possible application in communications.
These include an upper bound on covering radius
of a lattice in terms of its successive minima,
and an inequality on the coding gain of densest
lattice packings in successive dimensions. |
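As a highly simplified illustration of the recursive-cube-search flavor of lattice decoding, the Python sketch below exhaustively examines the integer coefficient vectors in a small cube around the real-valued solve and keeps the closest lattice point. Kannan's algorithm selects and recurses over the cube far more cleverly; radius = 1 and the example basis are arbitrary assumptions.

import numpy as np
from itertools import product

def cube_search_decode(B, x, radius=1):
    """Closest point of the lattice {Bz : z integer} to x, searching a
    cube of side 2*radius + 1 centered at round(B^{-1} x)."""
    z0 = np.round(np.linalg.solve(B, x)).astype(int)
    best, best_d = None, np.inf
    for offset in product(range(-radius, radius + 1), repeat=len(x)):
        z = z0 + np.array(offset)
        d = np.linalg.norm(B @ z - x)
        if d < best_d:
            best, best_d = B @ z, d
    return best

B = np.array([[2.0, 1.0], [0.0, 1.0]])   # example lattice basis
print(cube_search_decode(B, np.array([1.7, 0.4])))

The exponential growth of such a cube with dimension is precisely the kind of complexity behavior the thesis bounds.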
|
Student:
Edwin Vai Hou Iun (1996), MASc |
Title: Combined
source and channel coding |
The PDF file of this
thesis Abstract:
This work considers a combined source-channel
coding scheme for image transmission over the
uplink of a wireless IS-95 CDMA channel. By
adjusting the dimension of the orthogonal
signaling scheme, we trade the system
error-correction capability for a faster bit
rate. The resulting increase in channel errors is mitigated by employing a set of quantizers which are
designed using a joint source-channel
optimization algorithm. The bit allocation problem of the quantizer array is solved using a new approach based on an integer programming technique. Without bandwidth expansion, our
proposed scheme results in a substantial
improvement in the reconstructed image quality, especially under good channel conditions.
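A common baseline for the quantizer-array bit allocation problem, solved here by integer programming, is greedy marginal-returns allocation, sketched below in Python with hypothetical distortion tables.

import numpy as np

def greedy_allocate(distortion, total_bits):
    """distortion[q][b] = distortion of quantizer q when given b bits.
    Repeatedly give one bit to the quantizer that gains the most."""
    alloc = np.zeros(len(distortion), dtype=int)
    for _ in range(total_bits):
        gains = [distortion[q][alloc[q]] - distortion[q][alloc[q] + 1]
                 for q in range(len(distortion))]
        alloc[np.argmax(gains)] += 1
    return alloc

# Two hypothetical quantizers with diminishing returns per extra bit
distortion = [[1.0, 0.40, 0.20, 0.12, 0.08],
              [0.8, 0.50, 0.35, 0.28, 0.24]]
print(greedy_allocate(distortion, 4))    # [2, 2] for these tables

The greedy rule is optimal only when each distortion curve has diminishing returns; an integer programming formulation, as used in the thesis, handles the general case.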
|
|