Convolutional code

Coding is a technique where redundancy is added to the original bit sequence to increase the reliability of the communication. In this article, let’s discuss a simple binary convolutional coding scheme at the transmitter and the associated Viterbi (maximum likelihood) decoding scheme at the receiver. Update: For some reason, the blog is unable to display the…
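
As a quick illustration of the encoding step, here is a minimal Python sketch of a rate-1/2 binary convolutional encoder. The constraint length of 3 and the generator polynomials 7 and 5 (octal) are a common textbook choice assumed here; the post's actual parameters are not visible in this excerpt.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 binary convolutional encoder with constraint length k.

    Generators 7 and 5 (octal) are an assumed, common textbook choice.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift the new bit into the register
        out.append(bin(state & g1).count("1") % 2)   # parity bit from generator g1
        out.append(bin(state & g2).count("1") % 2)   # parity bit from generator g2
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```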

Batch Gradient Descent

I happened to stumble on Prof. Andrew Ng’s Machine Learning classes, which are available online as part of the Stanford Center for Professional Development. The first lecture in the series discusses the topic of fitting parameters for a given data set using linear regression. For understanding this concept, I chose to take data from the top…
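
For readers who want to experiment, here is a minimal sketch of batch gradient descent applied to linear regression. The toy dataset, learning rate alpha, and iteration count are hypothetical placeholders, not values from the lecture or the post.

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.5, iters=2000):
    """Fit theta by minimizing (1/2m)||X @ theta - y||^2 with full-batch updates."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / m  # gradient of the squared-error cost
        theta -= alpha * grad             # step against the gradient
    return theta

# Toy data y = 1 + 2x plus noise (a hypothetical stand-in for the post's dataset).
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])  # design matrix with an intercept column
y = 1 + 2 * x + 0.05 * np.random.randn(50)
print(batch_gradient_descent(X, y))        # close to [1, 2]
```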

Happy Birthday – dspLog

An important milestone for the dspLog happened on Oct 21st, 2008. On this day last year, the blog migrated from the Blogger platform to the independently hosted platform at www.dsplog.com! Belated birthday wishes for the blog!!! 🙂 Looking back, the first year was satisfying, both in terms of content and traffic. We started…

Transmit beamforming

In this post, let’s discuss a closed-loop transmit diversity scheme, where the transmitter has knowledge of the channel. Since a feedback path is required from the receiver to communicate the channel seen by the receiver back to the transmitter, the scheme is called a closed-loop transmit diversity scheme. Recall that the transmit diversity using Space…
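
As a rough illustration of the idea, the sketch below shows maximum ratio transmission for two transmit antennas and one receive antenna: with channel knowledge, the transmitter weights each antenna by the conjugate of its channel coefficient so the paths add coherently. This is a generic sketch, not necessarily the exact scheme discussed in the post.

```python
import numpy as np

rng = np.random.default_rng(0)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)  # 2x1 flat Rayleigh channel
x = 2.0 * rng.integers(0, 2, 10) - 1     # BPSK symbols, +1/-1
w = np.conj(h) / np.linalg.norm(h)       # per-antenna weights, unit total transmit power
y = (h @ w) * x                          # noiseless received signal at the single antenna

# The weighted paths add coherently: the effective channel gain is ||h||, a real number.
print(np.allclose(h @ w, np.linalg.norm(h)))  # True
```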

ICCBN 2008, July 17-20 2008, IISc, Bangalore

The Advanced Computing and Communication Society (ACS) of India is organizing the ICCBN 2008 conference (International Conference on Communication, Convergence, and Broadband Networking) from July 17th to 20th, 2008 at the National Science Seminar Complex at the Indian Institute of Science (IISc), Bangalore. The ICCBN conference aims to provide a premier forum for researchers, industry practitioners, and educators to present…

Closed form solution for linear regression

In the previous post on Batch Gradient Descent and Stochastic Gradient Descent, we looked at two iterative methods for finding the parameter vector θ which minimizes the square of the error between the predicted value h_θ(x) and the actual output y over all m values in the training set. A closed form solution for finding the parameter vector θ is possible, and in this post…
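
As a preview, here is a minimal sketch of that closed-form solution, the normal equation theta = (X^T X)^{-1} X^T y. The toy dataset is a hypothetical stand-in, and np.linalg.solve is used rather than forming an explicit matrix inverse.

```python
import numpy as np

x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])  # design matrix with an intercept column
y = 1 + 2 * x                              # hypothetical toy data

# Normal equation theta = (X^T X)^{-1} X^T y, solved without computing the inverse.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # [1. 2.]
```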
