Tuesday, July 28, 2015

What is your favorite Number? 1591521

While playing with the vowels of the English alphabet, I discovered the following facts about the mathematics of the vowels.

The positions occupied by the vowels in the alphabet are a (1), e (5), i (9), o (15) and u (21). These numbers show that all the vowel positions are odd; there are no "even" vowels. Three of the positions (9, 15 and 21) are divisible by 3, two (5 and 15) are divisible by 5, and one (21) is divisible by 7.

The other interesting play with the vowels is in terms of a scale. When all the vowel positions are written one after another, the number formed is 1591521. Of the 120549 primes less than 1591521, it is divisible only by 3 and by
$\frac{1591521}{3}=530507$, which is itself a prime number. This makes the number interesting to number theorists.

When some manipulation is done on the digits of this number, it can give a counting scale, we are not sure up to which value, but certainly to a considerably large one. Here, manipulation means arranging the digits while keeping their order, e.g. we can consider ($15$, $91$, $52$) or ($59$, $15$, $21$) and so on, but not a random reordering. We can construct a scale based on the digits available in $1591521$ as: $1=1$ (already there), $2=2$ (already there), $3=2+1$, $4=5-1$ or $2+1+1$, $5=5$ (already there), $6=5+1$; $7=5+2$; $8=5+2+1$; $9=5+5-1$; $10=5+5$; $11=9+2$; $12=9+2+1$ and so on.

By doing so, we can extend the scale continuously up to a large number using the available digits. Some examples for larger numbers are
$43=59-15-1; \quad 45=59-15+1; \\ 69=91-21-1; \quad 70=91-21; \\ 89=91-2; \quad 100=91+5+2+1+1;\\ 120=21\times 5 +15; \quad 200=21 \times 9 + 5+5+1$

and so on. This magical number seems interesting. Hence 1591521 is my favorite number for now.
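For anyone who wants to check these claims, here is a minimal Python sketch (the helper name `is_prime` is my own) that verifies the factorization and spot-checks a few of the scale identities:

```python
def is_prime(m):
    # simple trial division; fast enough for numbers of this size
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

n = 1591521
print(n % 3 == 0, n // 3, is_prime(n // 3))   # expect True 530507 True, per the factorization above
# spot-check some scale identities built from the digits of n
print(43 == 59 - 15 - 1, 45 == 59 - 15 + 1, 69 == 91 - 21 - 1,
      100 == 91 + 5 + 2 + 1 + 1, 120 == 21 * 5 + 15, 200 == 21 * 9 + 5 + 5 + 1)
```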

Tuesday, July 21, 2015

Riemann Zeta Function

The Riemann zeta function, denoted by $\zeta(s)$, is a function of a complex variable $s$ given by
$\zeta(s)=\displaystyle \sum_{n=1}^\infty \frac{1}{n^s}=1+\frac{1}{2^s}+\frac{1}{3^s}+\cdots$,
 where the series is absolutely convergent when the real part of $s$ is greater than 1.

Riemann zeta function can also be expressed in terms of integral as
$\zeta(s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}}{e^x-1}dx$

Both representations are valid for $\Re(s)>1$. Consider the case $s=2$; then
$\zeta(2)=\displaystyle \sum_{n=1}^\infty \frac{1}{n^2}=1+\frac{1}{4}+\frac{1}{9}+\cdots = \frac{\pi^2}{6}$.

For $s=1$ the series diverges; it is the harmonic series, given by
$\zeta(1)=\displaystyle \sum_{n=1}^\infty \frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\cdots=\infty$.

For even positive integers, the Riemann zeta function can be expressed in terms of the Bernoulli numbers ($B_0=1, B_1=\pm\frac{1}{2}$ depending on convention, $B_2=\frac{1}{6}, B_3=0, B_4=-\frac{1}{30}, B_5=0, B_6=\frac{1}{42}, B_7=0, B_8=-\frac{1}{30}, B_9=0,\\ B_{10}=\frac{5}{66}, \cdots$ and so on); this works because all odd-index Bernoulli numbers except $B_1$ are zero. More about Bernoulli numbers can be found at https://en.wikipedia.org/wiki/Bernoulli_number . The expression is given by
$\zeta(2n)= \frac{(-1)^{n+1}B_{2n}(2\pi)^{2n}}{2(2n)!}$ for any $n \ge 1$.
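As a sanity check, here is a minimal Python sketch that computes Bernoulli numbers from the standard recurrence $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$ (this assumes the $B_1=-\frac{1}{2}$ convention; function names are my own) and compares the formula against $\zeta(2)=\frac{\pi^2}{6}$ and $\zeta(4)=\frac{\pi^4}{90}$:

```python
from fractions import Fraction
from math import pi, factorial, comb

def bernoulli(m):
    # Bernoulli numbers via sum_{k=0}^{n} C(n+1, k) B_k = 0  (B_1 = -1/2 convention)
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1)
    return B[m]

def zeta_even(n):
    # zeta(2n) from the Bernoulli formula above
    return (-1) ** (n + 1) * float(bernoulli(2 * n)) * (2 * pi) ** (2 * n) / (2 * factorial(2 * n))

print(zeta_even(1), pi ** 2 / 6)    # both ~1.644934
print(zeta_even(2), pi ** 4 / 90)   # both ~1.082323
```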

For negative integers, the Riemann zeta function is likewise expressed in terms of Bernoulli numbers as
$\zeta(-n)=-\frac{B_{n+1}}{n+1}$ for $n \ge 1$.
Hence $\zeta(-1)$, which formally corresponds to $1+2+3+4+\cdots$, equals $-\frac{B_2}{2}=-\frac{1}{12}$ in the sense of analytic continuation. This also connects to Grandi's series, which we discussed earlier. The series itself is divergent, and this particular regularized sum of the natural numbers has been used in string theory. More about this is available at https://plus.maths.org/content/infinity-or-just-112.

Thursday, July 16, 2015

Gödel's Incompleteness Theorem

Gödel's incompleteness theorem is a theorem that proves the "unprovable": a mathematical proof that certain given mathematical statements can neither be proved nor disproved. Gödel's theorems underlie the modern understanding that any sufficiently strong axiomatic system contains statements it cannot settle, so some axioms must be accepted without formal proof. Kurt Gödel proposed two incompleteness theorems, and they are among the most important theorems in modern logic.

The first incompleteness theorem states that in any consistent formal system (capable of expressing elementary arithmetic), there are statements that can be neither proved nor disproved.

A formal system is a system of axioms from which new theorems can be derived. A number of mathematical theories today are built on axiomatic definitions, e.g. probability theory. The set of axioms should be finite, or there should be an effective way to determine whether a given statement is an axiom. An axiomatic (formal) system is consistent if it contains no contradiction, i.e. if it is not possible to prove that a statement and its negation are both true.

The second incompleteness theorem is an extension of the first. It states that any consistent axiomatic system in which certain elementary arithmetic can be carried out cannot demonstrate its own consistency. In other words, if a formal system proves its own consistency, then the system itself is inconsistent.

One of the interesting questions that has always arisen in my mind is: "Is the universe analog or digital?" Is it only the outcome of 0 or 1? In other words, is there any possibility that something exists and does not exist at the same time? Gödel's theorems could answer such questions to some extent.

Thursday, July 2, 2015

Grandi's Series

Named after the Italian monk, priest, philosopher, mathematician and engineer Luigi Guido Grandi (1671-1742), the infinite series $1 - 1 + 1 - 1 + \cdots$ is called Grandi's series. It is a divergent series, meaning that the limit of its partial sums does not exist. Let us write out the partial sums:
$S_1 =1 \\
 S_2= 1-1=0 \\
 S_3 = 1-1+1=1 \\
 S_4=1-1+1-1 =0\\
\vdots\\
S_n=\displaystyle\sum_{i=0}^{n-1} (-1)^i = 1 \;(\text {if $n$ is odd})\\
\quad\quad\quad \quad\quad\quad= 0 \;(\text {if $n$ is even})\\
\vdots\\
S_\infty=\displaystyle\sum_{i=0}^\infty (-1)^i$

Clearly, it is difficult to assign a value to this series. Grandi, however, concluded that the sum of the series is $\frac{1}{2}$.

Consider the series,
$ S= 1 -1 + 1 -1 + \cdots, $ then
$1-S=1-(1-1+1-1+\cdots) = 1-1+1-1+\cdots = S  \Rightarrow 2S=1 \Rightarrow S=\frac{1}{2}. $

If we take the partial sums of the series and average them, we also obtain $\frac{1}{2}$. Such a summation of an infinite series is called the Cesàro sum, after another Italian mathematician, Ernesto Cesàro (1859-1906).
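Numerically, the Cesàro mean can be seen converging in a few lines of Python (a minimal sketch; the variable names are mine):

```python
import numpy as np

N = 10_000
terms = np.array([(-1) ** i for i in range(N)])       # 1, -1, 1, -1, ...
partial = np.cumsum(terms)                            # 1, 0, 1, 0, ...
cesaro = np.cumsum(partial) / np.arange(1, N + 1)     # running mean of partial sums
print(partial[-2:], cesaro[-1])                       # partial sums oscillate; mean -> 0.5
```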

Take the series $1+2+4+8+\cdots$. What is the sum of this series?
Using a similar approach, say
$c_1 = 1+2+4+8+\cdots \\
\quad= 1+ 2(1+2+4+\cdots)\\
\quad= 1+ 2c_1\\
\Rightarrow c_1-2c_1=1 \Rightarrow c_1=-1.$

Such divergent geometric series are quite tricky and give values far beyond expectation. Another series, whose value we can obtain in a similar way using Grandi's series, is
$c_2= 1 -2 +3 -4 +5 -6 + \cdots\\
c_2=\quad \;\;1 -2 +3 -4 +5 - \cdots \quad\text{(the same series shifted one place)}\\
\underline {\hspace{8cm}} \\
2c_2=1-1+1-1+1-1+\cdots = S\\
\Rightarrow 2c_2=\frac{1}{2}\\
\Rightarrow c_2= \frac{1}{4}.$

One of the interesting results that the Indian mathematician Srinivasa Ramanujan (1887-1920) found in his early work is the (regularized) sum of the series
$c_3=1+2+3+4+\cdots =\frac{-1}{12}$

We can compute $c_3 - c_2$ by subtracting term by term:
$c_3=1+2+3+4+5+6+\cdots \\
c_2=1-2+3-4+5-6+\cdots\\
\underline {\hspace{8cm}}\\
c_3-c_2=(1-1)+(2+2)+(3-3)+(4+4)+(5-5)+(6+6)+\cdots\\
\Rightarrow c_3-c_2=4+8+12+\cdots\\
\Rightarrow c_3-c_2=4(1+2+3+\cdots)\\
\Rightarrow c_3-c_2=4c_3\\
\Rightarrow-3c_3=c_2\\
\Rightarrow c_3=\frac{-c_2}{3}=\frac{-1}{12},\quad \text{as}\; c_2=\frac{1}{4} $




Friday, June 19, 2015

2,3,4,82000 series

2, 3, 4, 82000 is an interesting sequence because of the properties its terms possess.
It is interesting because
2 is the smallest number (greater than 1) that can be written with only 1s and 0s in base 2.
3 is the smallest number that can be written with only 1s and 0s in base 2 and base 3.
4 is the smallest number that can be written with only 1s and 0s in bases 2, 3 and 4.
82000 is the smallest number that can be written with only 1s and 0s in bases 2, 3, 4 and 5.

2 in base 2 = 10

3 in base 2 = 11
3 in base 3 = 10

4 in base 2 = 100
4 in base 3 = 11
4 in base 4 = 10

82000 in base 2 = 10100000001010000 =$2^{16}+2^{14}+2^6+2^4$
82000 in base 3 = 11011111001 =  $3^{10}+3^9+3^7+3^6+3^5+3^4+3^3+3^0$
82000 in base 4 = 110001100 =  $4^8+4^7+4^3+4^2$
82000 in base 5 = 10111000 = $5^7+ 5^5+ 5^4+ 5^3$
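A brute-force Python sketch (function names are my own) reproduces the sequence; note that every number is automatically written with 0s and 1s in base 2, so only bases 3 and up need checking:

```python
def only_01(n, base):
    # True if n's representation in the given base uses only digits 0 and 1
    while n:
        if n % base > 1:
            return False
        n //= base
    return True

def smallest_01(max_base, limit=10**6):
    # smallest n > 1 written with only 0s and 1s in every base 2..max_base
    for n in range(2, limit):
        if all(only_01(n, b) for b in range(3, max_base + 1)):
            return n

print([smallest_01(b) for b in range(2, 6)])   # [2, 3, 4, 82000]
```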

The next number in the sequence, the smallest that can be written with only 1s and 0s in bases 2 through 6, is still unknown; it remains an open problem for researchers to find it or to show that it exists.


Friday, October 31, 2014

What is Interference Alignment?

In multi-user communication, interference is one of the main challenges to be mitigated. In channels such as the X channel (XC), the interference channel (IC) and the Z channel (ZC), multiple signals from multiple users are transmitted at the same time and on the same frequency, causing interference at unintended receivers. This interference limits the system capacity considerably. Modern research in wireless communication focuses on the management of such interference. Interference alignment (IA) is a cooperative interference-management technique in which all the interference observed at a particular receiver from the unintended transmitters is aligned in a lower-dimensional space, thus giving higher degrees of freedom to the desired streams.

The initial idea of interference alignment was discussed by Maddah-Ali and by Syed Jafar in 2007-08 for the multiple input multiple output (MIMO) X channel and single input single output (SISO) interference channels. The alignment of interference in the lower dimension is achieved by designing suitable precoders that satisfy the power constraints.

To understand the concept of alignment, suppose there are three rooms in a house where three very important persons (VIPs) are to be hosted. Suddenly, four more random persons appear at the house asking for rooms, each demanding a single room. They arrive in two groups of two, and members of the same group will not share a room at any cost. It is not easy to turn them away because they have nowhere else to go. In that case, the owner has to allocate at least two rooms to the four of them, putting one person from each group in each room. This leaves one more single room, which the owner can allocate to a VIP. Hence, by cooperatively allocating two rooms to four people, the owner manages to gain one extra room for a VIP, which is an achievement.

The VIP is the desired signal and the random people are the interference. By giving the interference signals from different groups a shared room, that is, by aligning those signals in a common direction, better degrees of freedom are achieved for the desired streams.

The figure below shows interference alignment for the three-user MIMO interference channel, where each transmitter-receiver pair has two antennas. The aim of alignment is to design precoders $\mathbf{V}_1, \mathbf{V}_2$ and $\mathbf{V}_3$ that align the interference space observed at each receiver. With $\mathbf{H}_{ij}$ denoting the channel from transmitter $j$ to receiver $i$, alignment is achieved by fulfilling the following mathematical conditions:
                   span($\mathbf{H}_{12}\mathbf{V}_2$)=span($\mathbf{H}_{13}\mathbf{V}_3$),
                   span($\mathbf{H}_{21}\mathbf{V}_1$)=span($\mathbf{H}_{23}\mathbf{V}_3$),
                   span($\mathbf{H}_{31}\mathbf{V}_1$)=span($\mathbf{H}_{32}\mathbf{V}_2$),
where $\mathbf{V}_i^H \mathbf{V}_i=\mathbf{I}_d$ guarantees the power constraint and $(\cdot)^H$ denotes the conjugate transpose of a matrix. This requires the channel to be globally known at all the transmitters, which is a challenge in wireless communications.
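As an illustration, here is a minimal numpy sketch of the well-known closed-form solution for this 3-user, 2-antenna case with $d=1$ stream per user: $\mathbf{V}_1$ is chosen as an eigenvector of a product of cross channels, and $\mathbf{V}_2, \mathbf{V}_3$ then follow from the alignment conditions (variable names are mine; random channels stand in for real ones):

```python
import numpy as np

rng = np.random.default_rng(0)
# H[i, j]: 2x2 complex channel from transmitter j to receiver i
H = {(i, j): rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
     for i in (1, 2, 3) for j in (1, 2, 3)}

inv = np.linalg.inv
# v1 = eigenvector of E; v2, v3 chosen to satisfy the span conditions above
E = inv(H[3, 1]) @ H[3, 2] @ inv(H[1, 2]) @ H[1, 3] @ inv(H[2, 3]) @ H[2, 1]
v1 = np.linalg.eig(E)[1][:, [0]]
v2 = inv(H[3, 2]) @ H[3, 1] @ v1
v3 = inv(H[2, 3]) @ H[2, 1] @ v1

def aligned(a, b):
    # rank 1 means the two interference vectors occupy a common subspace
    return np.linalg.matrix_rank(np.hstack([a, b]), tol=1e-9)

print(aligned(H[1, 2] @ v2, H[1, 3] @ v3),   # 1: aligned at receiver 1
      aligned(H[2, 1] @ v1, H[2, 3] @ v3),   # 1: aligned at receiver 2
      aligned(H[3, 1] @ v1, H[3, 2] @ v2))   # 1: aligned at receiver 3
```

With the interference at each receiver squeezed into one of its two dimensions, the remaining dimension is free for the desired stream, matching the room analogy above.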



Friday, September 12, 2014

Wireless Communication: Overview


The practice of wireless communication is not new. We have been using its principle for ages without understanding the basics. Wireless communication is the act of transmitting and receiving information through the air, in modern systems using electromagnetic waves as the means of transmission. When we talk to somebody, we are inherently communicating without wires.

Since we can talk to people nearby, why can't we talk to people far away? With enough power to speak very loudly, we can reach people at some distance, but not too far. After the development of radio and the understanding of electromagnetic waves, scientists knew that information could be transmitted in the form of electromagnetic waves, and the challenge was to determine the ways to transmit these signals. Later, scientists like Shannon formalized the idea that information can be measured, and this led to the development of a whole lot of aspects of information theory and wireless communications.

Deeper knowledge of electromagnetics and antenna theory, together with developments in digital electronics, made it possible to digitize the earlier analog form of communication, which was limited to voice transmission (1G). Since then, a whole lot of development has been observed in data transmission and in the quality of service for wireless users: 2G with SMS facilities, 2.5G with GPRS, and 3G with high-speed data connections, still improving.

The latest challenge in wireless communication is to meet the increasing demands of users in terms of data rate and quality of service with the limited spectrum and power available. To meet these demands, different multiplexing and modulation techniques have been proposed. Orthogonal Frequency Division Multiplexing (OFDM) is a recently adopted multi-carrier modulation technique, used in the Long Term Evolution (LTE) standard, in which the data is transmitted over multiple parallel orthogonal narrow-band carriers instead of a single wide-band carrier.

However, research is still ongoing to meet the increasing data requirements under the limited power and spectrum constraints. The concept of the interference channel (IC) is not new; it has been discussed since the time of Shannon. Yet it is still an open problem to characterize the capacity of even a two-user IC. A number of research works are available in this regard.

Since all the transmitters transmit at the same time using all the available bandwidth, each receiver receives interference from all the unintended transmitters. This interference is the limiting factor in the IC, and a major part of the latest research on wireless communication is focused on managing it so that better data rates and quality of service are achieved.

The capacity of a multi-user interference channel is expressed in terms of the signal to interference plus noise ratio (SINR) as given by Shannon's channel capacity formula:
                                               $C=d \log_2 (1+\text{SINR})$,
where $d$ is the prelog factor, also called the degrees of freedom (DoF). In other words, the DoF can be defined as the number of data streams that can be transmitted without interfering with the other receivers. In multiple input multiple output (MIMO) systems, the DoF is the number of interference-free parallel channels between a transmitter and a receiver.

Since the capacity increases linearly with the DoF ($d$) but only logarithmically with the SINR, it is a good idea to improve the DoF of the system rather than devote all the attention to improving the SINR. Interference alignment (IA), a recently developed interference-management technique, aims at improving the DoF of a multi-user interfering system by aligning all the interference in a common subspace, thus providing enough space for the desired data streams to be decoded. IA is believed to provide the outer bound on the achievable DoF, which is considered to be $\frac{K}{2}$ for a $K$-user single input single output (SISO) interference channel. A number of references are available in this regard.
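A quick numeric illustration of this linear-versus-logarithmic behavior (the values are purely illustrative):

```python
import math

# C = d * log2(1 + SINR): doubling d doubles C,
# while raising SINR from 20 dB to 40 dB adds only ~6.6 bit/s/Hz here
for d, sinr_db in [(1, 20), (2, 20), (1, 40)]:
    sinr = 10 ** (sinr_db / 10)
    print(f"d={d}, SINR={sinr_db} dB: C = {d * math.log2(1 + sinr):.2f} bit/s/Hz")
```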
                                         



Tuesday, September 9, 2014

Multiplication by 9, 99, 999 , 9999 and so on

Actually, multiplication by 9, 99, 999, 9999 and so on seems easy and simple. It can be achieved with just one subtraction and one addition, shifting positions to the right according to the number of 9s used to multiply.

The rule that works for all the cases is:

Subtract (multiplicand - 1) from the multiplier (the run of 9s), then add this result, shifted $n$ positions to the right, to (multiplicand - 1), where $n = \min(\text{number of multiplicand digits}, \text{number of multiplier digits})$.

Consider a simple example:

$8 \times 9=72$,

  1.  Multiplicand - 1 = 8 - 1 = 7,
  2.  Subtract 7 from 9: 9 - 7 = 2,
  3. Shift the 2 one position to the right (append a zero after the 7) and perform digit-wise addition as

                                             02
                                             70
                                                          
                                             72
$88 \times 9=792$
  1.  Multiplicand - 1 = 88 - 1 = 87,
  2.  Subtract 87 from 9: 9 - 87 = -82,
  3. Shift -82 one position to the right and add it to 87, i.e.
                                       (-08) 2
                                          87  0
                                                     
                                     (79)    2
$888 \times 9=7992$
  1.  Multiplicand - 1 = 888 - 1 = 887,
  2.  Subtract 887 from 9: 9 - 887 = -882,
  3. Shift -882 one position to the right and add it to 887, i.e.
                                     (-088) 2
                                        887  0
                                                    
                                    (799)   2
$8 \times 99=792$
       

  1. Multiplicand - 1 = 8 - 1 = 7,
  2. Subtract 7 from 99: 99 - 7 = 92,
  3. Shift 92 one position to the right and perform the addition as
                               700
                               092
                                             
                               792
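Underlying all of these is the identity $a \times \underbrace{9\cdots9}_{k \text{ nines}} = a \times 10^k - a$, one shift and one subtraction, which a tiny Python sketch can confirm against the examples above:

```python
def times_nines(a, k):
    # multiply a by a run of k nines using only a shift and a subtraction
    return a * 10 ** k - a

for a, k in [(8, 1), (88, 1), (888, 1), (8, 2)]:
    print(a, "x", "9" * k, "=", times_nines(a, k))   # 72, 792, 7992, 792
```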

Thursday, August 21, 2014

Interesting series 42, 4422, 444222....

On observing keenly, I found that the number series 42, 4422, 444222, $\cdots$ is interesting. I don't know whether any number theorists have noticed this fact. These are, I suppose, the only numbers that are the product of two consecutive numbers and for which the number formed by the first half of the digits is exactly double the number formed by the second half.
$ 6  \times 7            = 42 $ ;          $\Rightarrow\frac{4}{2}=2$ .
$ 66 \times 67        =4422 $;         $\Rightarrow\frac{44}{22}=2 $.
$ 666 \times 667     =444222$;     $\Rightarrow\frac{444}{222}=2$.
$ 6666 \times 6667 =44442222$; $\Rightarrow\frac{4444}{2222}=2$.

and this holds true for any number of digits. The number of digits in the product is equal to the sum of the numbers of digits of the multiplier and the multiplicand. This is a very interesting fact.

If there are other series that show similar characteristics, please do not hesitate to comment.
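For the record, a short Python sketch (the naming is mine) that verifies the product pattern for several lengths:

```python
# n sixes times (n-1 sixes followed by 7) should give n fours then n twos
for n in range(1, 8):
    a = int("6" * n)                 # 6, 66, 666, ...
    b = int("6" * (n - 1) + "7")     # 7, 67, 667, ...
    assert a * b == int("4" * n + "2" * n)
    print(a, "x", b, "=", a * b)
```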

Along with this, it is also observed that 67 has a very interesting property when it is multiplied by multiples of 3, as seen in the following sequence
$ 67 \times 3     =  0201 ;\\            
 67 \times 6     =  0402 ; \\
67 \times 9     =  0603 ; \\
 67 \times 12   =  0804 ; \\
 67 \times 15   =  1005; \\
 67 \times 18   =  1206; \\
 67 \times 21   =  1407; \\
 67 \times 24   =  1608; \\
 67 \times 27   =  1809; \\
 67 \times 30   =  2010; \\
 67 \times 33   =  2211; \\
 67 \times 36   =  2412; \\
 67 \times 39   =  2613; \\
 67 \times 42   =  2814; \\
\cdots \\
\cdots \\
\cdots \\
\mathbf{67} \times \mathbf{66}=\mathbf{4422}; \\
\cdots \\
 67 \times 99   =  6633; \\
 67 \times 102 =  6834 ; \\
\cdots \\
\cdots \\
\cdots \\
 67 \times 297 =19899 $.

In each of these cases the first digits are exactly twice the last two digits, and this continues for the multiples of 3. The reason is the number 201: since $67 \times 3 = 201$, we have $67 \times 3k = 201k = 200k + k$, so as long as $k \le 99$ the product reads as $2k$ followed by the two digits of $k$.

This pattern continues till $67 \times 297 = 19899$, i.e. till $201 \times 99 = 19899$; after this ($k \ge 100$), the carry between the two halves destroys it.
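The same reasoning can be checked in a couple of lines of Python (illustrative only):

```python
# 67 * 3k = 201k = 200k + k: the head digits are 2k, the last two digits are k
for k in (1, 2, 33, 99, 100):
    head, tail = divmod(67 * 3 * k, 100)
    print(k, head == 2 * k and tail == k)   # True until k = 100, where it breaks
```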

Similar other interesting pattern observed is:

$21         =      3\times  7 \\
2211     =      33 \times 67 \\
222111 =     333 \times 667 \\
22221111 = 3333 \times 6667 \\
\cdots\\
\cdots\\
\cdots\\ $

Also

$63         =      9 \times 7 \\
6633     =      99  \times 67 \\
666333 =     999 \times 667 \\
66663333 = 9999 \times 6667 \\
\cdots \\
\cdots \\
\cdots $

This series is also interesting

$84         =      12  \times 7 \\
8844     =     \underline{132} \times 67 \;\; (132=1(2+1)2, \; \text{i.e.} \; 1212 \; \text{with the middle two digits merged by a carry}) \\
888444 =     1332 \times 667 \\
88884444 =  13332  \times 6667 \\
\cdots
\cdots
\cdots $

and so on

Similar interesting pattern can be observed till

$189         =    27  \times 7  \\
19899    =     297  \times 67  \; (\text{198 is obtained from 1818, where the middle 1 is carried over to the 8,}\\ \hspace{3.2cm} \text{and similarly 297 is obtained from 2727, where the middle 2 is carried over to the 7}) \\
1998999 =    2997  \times 667 \\
199989999 =  29997 \times 6667 \\
\cdots\\
\cdots\\
\cdots $

Sunday, February 19, 2012

X Channel, Z Channel and Interference Channel

When we talk about multi-user communication, there are many users transmitting at the same time and on the same frequency. Based on how the signals are transmitted and received, different channels are defined in the literature.

Let us consider a simple case with two transmitters and two receivers. At any time instant, both transmitters transmit and both receivers receive. However, not all the transmitted data streams may be desired by both receivers, and the channels linking a particular transmitter and receiver may not all be of equal strength. The channels that carry unwanted data streams, i.e. those linking a receiver to an undesired transmitter, are called the undesired or interfering channels.

In the figure shown below, Transmitter T1 has message m1 intended for Receiver R1 and Transmitter T2 has message m2 intended for Receiver R2, and they transmit at the same time and on the same frequency. This causes interference at Receiver R1 from T2 and at Receiver R2 from T1; the interference is indicated by the dotted lines, while the solid lines show the channels between the desired transmitters and receivers. Such a channel is referred to as the interference channel.
[Figure: Interference channel]
In the Z channel, Transmitter T1 has a message intended for Receiver R1 only, thus interfering at Receiver R2, while Transmitter T2 has messages intended for both Receivers R1 and R2 (or the other way around). In the figure below, the interfering channel is shown by the dotted line; there is only one interfering channel, while the remaining three are desired channels, shown by solid lines. Here Receiver R1 must decode two messages, from T1 and T2, but Receiver R2 must decode only one message, from T2.
[Figure: Z channel]
The other channel is the X channel, in which both Transmitters T1 and T2 have messages intended for both Receivers R1 and R2. This means R1 has to decode messages from both T1 and T2, and so does R2. The X channel is appealing because we can send messages at the same time to different users and the receivers can receive messages at the same time from different users, unlike in the broadcast and multiple-access channels. The X channel has special significance because of its higher, non-integer degrees of freedom, and hence higher capacity. The X channel is shown in the diagram below, where $m_{ij}$ is the message from transmitter $j$ to receiver $i$.
[Figure: X channel]


Tuesday, February 14, 2012

Ergodicity

The term ergodicity is vague in itself and carries different meanings in different fields. The word is derived from the Greek words 'ergon' (work) and 'odos' (path), and was coined by the Austrian physicist Ludwig Boltzmann. Ergodic theory, built on measure theory and sigma-algebras, deals with ergodicity in detail. In plain words, ergodicity describes a system that shows the same behavior averaged over time as averaged over space.

The main purpose of this article is to acquaint readers with the terms ergodic and ergodicity, statistically and literally.

Let us consider an example to understand the concept of ergodicity. This example is given at http://news.softpedia.com/news/What-is-ergodicity-15686.shtml . I am mentioning it here because it explains the concept of ergodicity in simple words.

Suppose we are concerned with determining the most visited parks in a city. One way is to follow a single person for a month and see which park they visit most in that period of time. The other is to take a momentary snapshot of all the parks at once and see which parks have the most people at that particular instant. Here we have two analyses: the statistical data of an individual over a certain period of time, and the statistical data of an entire ensemble of people at a particular moment in time.

The first situation considers only a single person, so it may not be valid for a large number of people; the statistics depend entirely on the individual we chose. The second situation may not be valid over a longer period of time: we are considering only a short span, and in the long run the result may differ. So it is not certain that the statistical results we obtain from the two observations will be the same.

We say that an ensemble is ergodic if both situations give the same statistical results, i.e. when the temporal and spatial statistics of an event or a set of samples are the same. In simple words, ergodic refers to a dynamical system that has the same behavior averaged over time as averaged over space.

An early concept related to ergodicity is the Poincaré recurrence theorem, which states that certain systems, after a sufficiently long time, return to a state very close to the initial state. It implies that over a long time the time average exists almost everywhere in the space and is almost the same as the space average.

In signal processing, a stochastic process is said to be ergodic if its statistical properties can be deduced from a single, sufficiently long sample (realization) of the process (source: Wikipedia). If we would like to determine the mean of a stationary stochastic process, we observe a large number of samples and use their ensemble average. However, if we have access to only a single sample over a certain period of time, can this time average be used as an estimate of the ensemble mean? For an ergodic process, it can.

 A random process $X(t)$ is said to be ergodic in the mean, i.e., first-order ergodic, if the time average $\mu_X(T)=\frac{1}{2T}\int_{-T}^{T}X(t)\,dt$ asymptotically approaches the ensemble mean $\mu_X$:

$\displaystyle\lim_{T \to \infty} E[\mu_X(T)] = \mu_X$
and $\displaystyle\lim_{T \to \infty} \operatorname{var}[\mu_X(T)] = 0.$
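A minimal numpy sketch of a process that is ergodic in the mean (assuming, purely for illustration, white noise around a constant mean $\mu = 3$):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n_paths, n_steps = 3.0, 2000, 2000
X = mu + rng.standard_normal((n_paths, n_steps))   # rows: realizations of X(t)

ensemble_mean = X[:, 0].mean()   # average across realizations at a fixed time
time_mean = X[0].mean()          # average of a single realization over time
print(ensemble_mean, time_mean)  # both approach mu = 3.0: ergodic in the mean
```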

Conditional Probability

The concept of conditioning in statistics is of great importance in many fields of application, such as information theory, finance and random signal processing. In most cases, when events are random and dependent and we want to specify the probability of occurrence of one given the other, we use the concept of conditional probability.

Consider red and black balls in a box. We pick balls at random from the box and do not put them back in again; then the successive picks are dependent events. When we pick the first ball, the probability does not depend on any previous event, as it is the initial pick, and picking any one ball is equally likely; in other words, the picks are uniformly distributed.

But when we pick the second ball, there is one ball fewer in the box, and the probability of occurrence of the second event depends on the first event. In other words, we can condition the occurrence of event II on the known outcome of event I. We can then determine "what is the probability of occurrence of event II given that event I has already occurred?". This way of determining the probability of occurrence of an event is called conditional probability.

Truly speaking, the concept of conditioning is at the heart of probability theory; from it we obtain Bayes' rule and the law of total probability.

Let us consider a combined experiment in which two events $A$ and $B$ occur with joint probability $P(A,B)$, the probability that both events occur. The probability of occurrence of $B$ given $A$ is the conditional probability, represented as $P(B|A)$ and expressed in terms of the joint probability as
                               $P(B|A)=\frac{P(A,B)}{P(A)}$.
 Similarly, the probability of occurrence of A conditioned on B is mathematically expressed as
                              $P(A|B)=\frac{P(A,B)}{P(B)}$.             
In each case we require $P(A), P(B) > 0$ for the conditional probabilities to be defined (probabilities always lie between 0 and 1 by the axiomatic definition). Combining the two relations above, we can write
                             $P(B|A)P(A)=P(A|B)P(B)$
                            $\Rightarrow P(B|A)=\frac{P(A|B)P(B)}{P(A)}.$
This is the famous Bayes' rule of statistics.

Say $A=\{1,2,3\}$ and $B=\{3,4,5\}$ are two events when rolling a fair die. Then $P(A)=3/6=1/2$ and $P(B)=3/6=1/2$, and since $A \cap B=\{3\}$, the joint probability is $P(A,B)=1/6$,
                         so $P(B|A)=\frac{P(A,B)}{P(A)}=\frac{1/6}{1/2}=\frac{1}{3}$.
Note that $P(A,B) \ne P(A)P(B)=1/4$, so these events are not independent. For mutually exclusive events, $A \cap B = \emptyset$, so $P(A,B)=0$ and the conditional probability is zero.

If $P(A,B)=P(A)$, i.e. $A$ is a subset of $B$, then $P(B|A)=P(A)/P(A)=1$,

and if $P(A,B)=P(B)$, i.e. $B$ is a subset of $A$, then $P(B|A)=P(B)/P(A)$.

Conditional probability is also useful for checking statistical independence. If the occurrence of $A$ does not depend on the occurrence of $B$, then
              $P(A|B)=P(A)$
       and
           $P(A,B)= P(A)P(B)$.

The law of total probability states that if $E_1, E_2, \cdots, E_n$ are mutually exclusive events whose probabilities sum to 1, and $F$ is some other arbitrary event, then the probability of $F$ can be expressed in terms of conditional probabilities as
$P(F)=P(F|E_1)P(E_1)+ P(F|E_2)P(E_2) + \cdots + P(F|E_n)P(E_n)$.
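The die example above, and Bayes' rule, can be checked with a few lines of Python using exact fractions (the naming is mine):

```python
from fractions import Fraction

A, B = {1, 2, 3}, {3, 4, 5}              # events on a fair six-sided die
P = lambda E: Fraction(len(E), 6)        # uniform probability on {1,...,6}

print(P(A & B))                          # joint P(A,B) = 1/6
print(P(A & B) / P(A))                   # P(B|A) = 1/3
print((P(A & B) / P(B)) * P(B) / P(A))   # Bayes' rule gives the same 1/3
print(P(A) * P(B))                       # 1/4 != 1/6, so A and B are not independent
```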