Equal Gain Combining (EGC)

This is the second post in the series discussing receiver diversity in a wireless link. Receiver diversity is a form of space diversity, where there are multiple antennas at the receiver. The presence of receiver diversity poses an interesting problem – how do we ‘effectively’ use the information from all the antennas to demodulate the…
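A minimal NumPy sketch of the equal gain combining step, assuming a flat-fading channel with per-antenna gains h and received samples y; the BPSK setup, variable names, and noise level below are illustrative assumptions, not the post's code.

```python
import numpy as np

# Illustrative sketch of equal gain combining (EGC) over N receive antennas.
# Assumed setup (not from the post): BPSK symbol s, flat Rayleigh fading h,
# additive white Gaussian noise n, received vector y = h*s + n.
rng = np.random.default_rng(0)
N = 4                                    # number of receive antennas
s = 1.0                                  # transmitted BPSK symbol (+1)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = h * s + n

# EGC: co-phase each branch with exp(-j*angle(h_i)) and add with equal (unit) weights,
# so only the channel phase is used, not its magnitude.
y_egc = np.sum(y * np.exp(-1j * np.angle(h)))

# Hard decision for BPSK: sign of the real part of the combined sample.
s_hat = 1.0 if np.real(y_egc) > 0 else -1.0
print(s_hat)
```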


GATE-2012 ECE Q46 (math)

Question 46 on math from GATE (Graduate Aptitude Test in Engineering) 2012 Electronics and Communication Engineering paper. Q46. The maximum value of f(x) = x^3 - 9x^2 + 24x + 5 in the interval [1, 6] is (A) 21 (B) 25 (C) 41 (D) 46. Solution: Let us start by finding the critical points of the function f(x). The first derivative is f'(x) = 3x^2 - 18x + 24. Solving by…
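A sketch of how the calculation started in the excerpt finishes, taking the cubic as written above: find the critical points from the first derivative, then compare the function values at the critical points and at the interval endpoints.

```latex
\begin{align*}
f'(x) &= 3x^2 - 18x + 24 = 3(x-2)(x-4) = 0 \;\Rightarrow\; x = 2,\ x = 4,\\
f(1) &= 21, \quad f(2) = 25, \quad f(4) = 21, \quad f(6) = 41,\\
\max_{x \in [1,6]} f(x) &= f(6) = 41 \quad \text{(option C)}.
\end{align*}
```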


GATE-2012 ECE Q39 (communication)

Question 39 on communication from GATE (Graduate Aptitude Test in Engineering) 2012 Electronics and Communication Engineering paper. Q39. The signal m(t) as shown is applied both to a phase modulator (with k_p as the phase constant) and a frequency modulator (with k_f as the frequency constant) having the same carrier frequency. The ratio k_p/k_f for the same maximum phase deviation is,…
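The waveform m(t) ("as shown") is a figure in the original post and is not reproduced here, so the numerical ratio cannot be recovered from this excerpt. The general relation behind the question, as a sketch and assuming k_f is expressed in Hz per unit amplitude of m(t), is:

```latex
% Phase modulator:     \phi_{PM}(t) = k_p\, m(t)
% Frequency modulator: \phi_{FM}(t) = 2\pi k_f \int_0^t m(\tau)\, d\tau
\begin{align*}
\Delta\phi_{\max}^{PM} &= k_p \max_t |m(t)|, \\
\Delta\phi_{\max}^{FM} &= 2\pi k_f \max_t \left| \int_0^t m(\tau)\, d\tau \right|, \\
\frac{k_p}{k_f} &= \frac{2\pi \max_t \left| \int_0^t m(\tau)\, d\tau \right|}{\max_t |m(t)|}
\quad \text{(equating the two maximum phase deviations)}.
\end{align*}
```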


GATE-2012 ECE Q24 (math)

Question 24 on math from GATE (Graduate Aptitude Test in Engineering) 2012 Electronics and Communication Engineering paper. Q24. Two independent random variables X and Y are uniformly distributed in the interval [-1, 1]. The probability that max[X,Y] is less than 1/2 is (A) 3/4 (B) 9/16 (C) 1/4 (D) 2/3
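The excerpt ends before the solution; a short sketch of the standard argument, using the independence of X and Y and the uniform distribution on [-1, 1]:

```latex
\begin{align*}
P\big(\max(X, Y) < \tfrac{1}{2}\big)
&= P\big(X < \tfrac{1}{2}\big)\, P\big(Y < \tfrac{1}{2}\big)
&& \text{(independence)} \\
&= \frac{1/2 - (-1)}{1 - (-1)} \cdot \frac{1/2 - (-1)}{1 - (-1)}
= \frac{3}{4} \cdot \frac{3}{4} = \frac{9}{16}
&& \text{(option B)}.
\end{align*}
```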


Batch Gradient Descent

I happened to stumble on Prof. Andrew Ng’s Machine Learning classes, which are available online as part of the Stanford Center for Professional Development. The first lecture in the series discusses the topic of fitting parameters for a given data set using linear regression. For understanding this concept, I chose to take data from the top…
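A minimal NumPy sketch of batch gradient descent for linear regression, assuming the usual squared-error cost and a hand-picked learning rate; the synthetic data and variable names are illustrative, not the data used in the post.

```python
import numpy as np

# Illustrative batch gradient descent for linear regression (not the post's data).
# Model: y_hat = X @ theta, cost J(theta) = (1/(2m)) * sum((X @ theta - y)**2).
rng = np.random.default_rng(0)
m = 50                                        # number of training examples
x = rng.uniform(0, 10, size=m)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=m)  # synthetic line plus noise

X = np.column_stack([np.ones(m), x])          # prepend a column of ones for the intercept
theta = np.zeros(2)
alpha = 0.01                                  # learning rate (assumed, needs tuning)

for _ in range(5000):
    grad = X.T @ (X @ theta - y) / m          # gradient of the squared-error cost
    theta -= alpha * grad                     # one update uses the full batch of examples

print(theta)                                  # roughly [2, 3] for this synthetic data
```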


Closed form solution for linear regression

In the previous post on Batch Gradient Descent and Stochastic Gradient Descent, we looked at two iterative methods for finding the parameter vector θ which minimizes the square of the error between the predicted value hθ(x) and the actual output y over all the examples in the training set. A closed form solution for finding the parameter vector θ is possible, and in this post…
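The closed form being referred to is most likely the normal equation, θ = (XᵀX)⁻¹Xᵀy. A minimal NumPy sketch under that assumption, reusing the synthetic data idea from the gradient descent example above:

```python
import numpy as np

# Illustrative sketch of the normal-equation solution for linear regression:
# theta = (X^T X)^(-1) X^T y, assuming X^T X is invertible.
rng = np.random.default_rng(0)
m = 50
x = rng.uniform(0, 10, size=m)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=m)

X = np.column_stack([np.ones(m), x])          # design matrix with intercept column
theta = np.linalg.solve(X.T @ X, X.T @ y)     # closed form, no iterations needed
print(theta)                                  # roughly [2, 3], matching the iterative result

# np.linalg.lstsq(X, y, rcond=None) returns the same solution and is better conditioned.
```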
