Closed form solution for linear regression

In the previous post on Batch Gradient Descent and Stochastic Gradient Descent, we looked at two iterative methods for finding the parameter vector that minimizes the squared error between the predicted value and the actual output over the training set. A closed form solution for finding the parameter vector is possible, and in this post…
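
As a rough sketch of the idea the post goes on to derive: the closed form (often called the normal equation) solves for the parameter vector directly as (XᵀX)⁻¹Xᵀy instead of iterating. The snippet below illustrates this with NumPy on made-up data; the variable names and the intercept-plus-slope model are illustrative assumptions, not taken from the post.

```python
import numpy as np

# Hypothetical data: fit y = theta0 + theta1 * x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Design matrix with a column of ones for the intercept term.
X = np.column_stack([np.ones_like(x), x])

# Closed form (normal equation): theta = (X^T X)^{-1} X^T y.
# np.linalg.lstsq solves the same system in a numerically safer way.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # approximately [1.10, 1.96]
```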

Read More

Comparing BPSK, QPSK, 4PAM, 16QAM, 16PSK, 64QAM and 32PSK

I have written another article on DSPDesignLine.com. This article can be treated as the third post in the series aimed at understanding Shannon’s capacity equation. The first two posts in the series are: 1. Understanding Shannon’s capacity equation 2. Bounds on Communication based on Shannon’s capacity. The article summarizes the symbol error rate derivations…
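
For reference, the symbol error rates of these constellations in AWGN can be evaluated numerically from the textbook expressions. The sketch below uses the exact BPSK formula plus the standard high-SNR approximations for M-PSK and square M-QAM; these may differ in form from the exact derivations summarized in the article.

```python
import numpy as np
from scipy.special import erfc

def ser_bpsk(es_n0):
    """Exact symbol error rate for BPSK in AWGN."""
    return 0.5 * erfc(np.sqrt(es_n0))

def ser_mpsk(es_n0, m):
    """High-SNR approximation for M-PSK symbol error rate in AWGN."""
    return erfc(np.sqrt(es_n0) * np.sin(np.pi / m))

def ser_mqam(es_n0, m):
    """Nearest-neighbour approximation for square M-QAM in AWGN."""
    return 2 * (1 - 1 / np.sqrt(m)) * erfc(np.sqrt(1.5 * es_n0 / (m - 1)))

es_n0 = 10 ** (np.arange(0, 21, 5) / 10)  # Es/N0 from 0 to 20 dB
print(ser_bpsk(es_n0))
print(ser_mpsk(es_n0, 16))  # 16-PSK
print(ser_mqam(es_n0, 16))  # 16-QAM
```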

Read More

Convolutional code

Coding is a technique where redundancy is added to the original bit sequence to increase the reliability of the communication. In this article, let’s discuss a simple binary convolutional coding scheme at the transmitter and the associated Viterbi (maximum likelihood) decoding scheme at the receiver. Update: For some reason, the blog is unable to display the…
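
As a minimal sketch of the kind of encoder the article discusses, here is a rate-1/2, constraint length 3 binary convolutional encoder. The generator polynomials (7, 5 in octal) are a common choice assumed here, not necessarily the ones used in the article, and Viterbi decoding is omitted for brevity.

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder with g0 = 7 (octal), g1 = 5 (octal).
    Returns two coded bits per input bit."""
    s1 = s2 = 0  # shift-register state (two memory elements)
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # g0 = 1 + D + D^2
        out.append(b ^ s2)       # g1 = 1 + D^2
        s1, s2 = b, s1           # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```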

Read More

OT: Happy Schools Blog

Mr. Raghuram contacted me and told me about the Happy Schools Blog. He writes about graduate school admissions in the U.S., job opportunities for students, and university selection, based on his personal experience. He recently published a few articles which might be of interest to some of our readers. Here are the URLs for a few of them:

Read More

Scaling factor in QAM

When QAM (Quadrature Amplitude Modulation) is used, one typically finds a scaling factor associated with the constellation mapping operation. It may be reasonably obvious that this scaling factor is for normalizing the average energy to one. This post attempts to compute the average energy of the 16-QAM, 64-QAM and M-QAM constellations (where M is a…
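
As a quick numerical check of the idea: for a square M-QAM constellation with odd-integer levels ±1, ±3, …, ±(√M − 1) on each axis, the average symbol energy works out to 2(M − 1)/3, giving the familiar 1/√10 and 1/√42 normalization factors for 16-QAM and 64-QAM. The sketch below verifies this; the level convention is the standard one and is assumed here.

```python
import numpy as np

def qam_average_energy(m):
    """Average symbol energy of square M-QAM with levels ±1, ±3, ..., ±(√M − 1)."""
    levels = np.arange(-(np.sqrt(m) - 1), np.sqrt(m), 2)  # e.g. [-3,-1,1,3] for 16-QAM
    const = levels[:, None] + 1j * levels[None, :]        # full constellation grid
    return np.mean(np.abs(const) ** 2)

for m in (16, 64, 256):
    print(m, qam_average_energy(m), 2 * (m - 1) / 3)  # matches the closed form

# Normalizing scale factor: 16-QAM -> 1/sqrt(10), 64-QAM -> 1/sqrt(42)
```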

Read More