Migration to new template (skin)

Hi, those visiting the blog might have noticed a fresh look to the dspLog. This new feel is thanks to the Thesis Magazine Skin provided by FourBlogger Skins. Click here to view more details. There is some more tinkering required in some places, but in general most of the settings are taken care of. Hope you like…

Read More

Viterbi decoder

Coding is a technique where redundancy is added to the original bit sequence to increase the reliability of the communication. Let's discuss a simple binary convolutional coding scheme at the transmitter and the associated Viterbi (maximum likelihood) decoding scheme at the receiver. Update: For some reason, the blog is unable to display the article which discusses…

Read More
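As a quick illustration of the encoder half of that scheme, here is a minimal Octave/Matlab sketch of a rate-1/2, constraint length 3 convolutional encoder. The generator polynomials (7 and 5 in octal) and the input length are assumptions for the example, not taken from the post.

% rate-1/2, K = 3 convolutional encoder (generators 7 and 5 in octal, assumed)
ip   = round(rand(1,10));            % random input bit sequence (assumed length)
cip1 = mod(conv(ip, [1 1 1]), 2);    % output stream from generator 111
cip2 = mod(conv(ip, [1 0 1]), 2);    % output stream from generator 101
cip  = [cip1; cip2];
cip  = cip(:).';                     % interleave the two streams into one coded sequence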

Scaling factor in QAM

When QAM (Quadrature Amplitude Modulation) is used, typically one may find a scaling factor associated with the constellation mapping operation. It may be reasonably obvious that this scaling factor is for normalizing the average energy to one. This post attempts to compute the average energy of the 16-QAM, 64-QAM and M-QAM constellations (where M is a…

Read More
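A small Octave/Matlab sketch of that computation, assuming a square M-QAM constellation with odd-integer coordinates, is given below; for 16-QAM the average energy works out to 10 and for 64-QAM to 42, giving normalization factors 1/sqrt(10) and 1/sqrt(42).

% average energy of a square M-QAM constellation with odd-integer coordinates
for M = [16 64]
  k = sqrt(M);
  re = -(k-1):2:(k-1);                              % per-axis levels: -(k-1), ..., -1, 1, ..., (k-1)
  const = repmat(re, k, 1) + 1j*repmat(re.', 1, k); % full constellation grid
  Eavg = mean(abs(const(:)).^2);                    % average symbol energy
  fprintf('M = %d: E_avg = %g, scaling = 1/sqrt(%g)\n', M, Eavg, Eavg);
end

In general, the average energy of a square M-QAM constellation with these coordinates is 2(M-1)/3, which matches both numbers above.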

MIMO with ML equalization

We have discussed quite a few receiver structures for a 2×2 MIMO channel, namely: (a) Zero Forcing (ZF) equalization, (b) Minimum Mean Square Error (MMSE) equalization, (c) Zero Forcing equalization with Successive Interference Cancellation (ZF-SIC), (d) ZF-SIC with optimal ordering, and (e) MIMO with MMSE SIC and optimal ordering. From the above receiver structures, we…

Read More
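To make the ML idea concrete, here is a hedged Octave/Matlab sketch of brute-force ML detection for a 2×2 channel with BPSK: enumerate all four candidate transmit pairs and pick the one minimizing ||y - Hs||^2. The channel model, noise level, and BPSK alphabet are assumptions for illustration.

% brute-force ML detection for a 2x2 MIMO channel with BPSK (illustrative)
H = (randn(2,2) + 1j*randn(2,2))/sqrt(2);              % Rayleigh fading channel (assumed)
s = 2*round(rand(2,1)) - 1;                            % transmitted BPSK pair, +/-1
n = 10^(-10/20)*(randn(2,1) + 1j*randn(2,1))/sqrt(2);  % AWGN at an assumed 10 dB SNR
y = H*s + n;                                           % received vector
cand = [1 1 -1 -1; 1 -1 1 -1];                         % all four candidate BPSK pairs
metric = sum(abs(y*ones(1,4) - H*cand).^2, 1);         % ||y - H*s_k||^2 for each candidate
[~, idx] = min(metric);                                % pick the minimum-distance candidate
sHat = cand(:, idx);                                   % ML estimate of the transmitted pair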

Matlab or C for Viterbi Decoder?

Are you bothered by the speed of the simulations which you develop in Matlab/Octave? I was not bothered much, till I ran into the Viterbi decoder. If you recall, the Matlab/Octave simulation script for BER computation with hard/soft decision Viterbi algorithm provided in the post Viterbi with finite survivor state memory took around…

Read More
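A minimal way to quantify that speed concern in Octave/Matlab is to time the hot loop with tic/toc; the decoding step below is a hypothetical placeholder, not the actual script from the post.

% timing sketch: measure how long the decoding loop takes (illustrative)
nBits = 1e4;                      % assumed block size
ip = round(rand(1, nBits));       % random payload bits
tic;
% ... convolutional encoding, channel, and Viterbi decoding would go here ...
op = ip;                          % placeholder for the decoded bits (hypothetical)
tElapsed = toc;
fprintf('decoded %d bits in %.3f s\n', nBits, tElapsed);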

Stochastic Gradient Descent

For curve fitting using linear regression, there exists a minor variant of the Batch Gradient Descent algorithm, called Stochastic Gradient Descent. In Batch Gradient Descent, the parameter vector $\theta$ is updated as $\theta_j := \theta_j + \alpha\sum_{i=1}^{m}\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}$ (loop over all elements of the training set in one iteration). For Stochastic Gradient Descent, the vector gets updated as, at each iteration the…

Read More
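A minimal Octave/Matlab sketch of the stochastic update for linear regression follows; the synthetic data, learning rate, and epoch count are assumptions for illustration.

% stochastic gradient descent for linear-regression curve fitting (illustrative)
m = 100;                              % number of training examples (assumed)
x = [ones(m,1), linspace(0,1,m).'];   % design matrix with an intercept column
y = 2 + 3*x(:,2) + 0.1*randn(m,1);    % synthetic targets, true theta = [2; 3]
theta = zeros(2,1);                   % initial parameter vector
alpha = 0.1;                          % learning rate (assumed)
for epoch = 1:50
  for i = 1:m
    err = y(i) - x(i,:)*theta;          % residual for a single training example
    theta = theta + alpha*err*x(i,:).'; % update using just this one example
  end
end

Note the contrast with the batch rule quoted above: batch sums the residual term over all m examples before each update, while the stochastic variant updates after every single example.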