The Theory of Probability
Probability theory is applied to situations where uncertainty exists. These situations include
1. The characterization of traffic at the intersection of US 460 and Peppers Ferry Road (i.e., the number of cars that cross the intersection as a function of time)
2. The prediction of the weather in Blacksburg
3. The number of students traversing the Drill Field between 8:50 and 9:00 on Mondays
4. The thermally induced (Brownian) motion of molecules in a
(a) copper wire
(b) JFET amplifier
Note that the latter two situations would result in a phenomenon known as noise, whose power would be a function of the temperature of the molecules.
In all of these situations, we could develop an excellent approximation or prediction for each of these experiments. However, we would never be able to characterize them with absolute certainty (i.e., deterministically). We could, however, characterize them in a probabilistic fashion (or probabilistically) via the elements of probability theory. This is what weather forecasters are doing when they say
"The chance of rain tomorrow is 40%." In such cases they are saying that the probability of a rain event is 0.4. Other probabilistic characterizations include
• On the average, 300 cars per minute cross US 460 and Peppers Ferry Road at noon on Saturdays. The chances of that many cars crossing that intersection at 8:00 AM on Sunday are very small
• The average noise power produced by this amplifier is 1μW
Scientists and Engineers apply the theories of Probability and Random Processes to those repeating situations in nature where
1. We can roughly predict what may happen.
2. We cannot exactly determine what may happen.
Whenever we cannot exactly predict an occurrence, we say that such an occurrence is random. Random occurrences arise for the following reasons
• All the causal forces at work are unknown.
• Sufficient data about the conditions of the problem do not exist
• The physical mechanisms driving the problem are so complicated that the direct calculation of the problem is not feasible
• There exists some basic indeterminacy in the physical world
The Notions of Probability
One can approach probability through an abstract mathematical concept called measure theory, which results in the axiomatic theory of probability, or through a heuristic approach called relative frequency. The latter is a less complete (and slightly flawed) definition of probability, but it suits our needs for this course. Students who continue on to graduate studies will be introduced to the more abstract but powerful axiomatic theory. Before continuing, it is necessary to define the following important terms
Definition 1 An experiment is a set of rules that governs a specific operation that is being performed
Definition 2 A trial is the performance or exercise of that experiment
Definition 3 An outcome is the result of a given trial
Definition 4 An event is an outcome or any combination of outcomes
Example 1 Consider the experiment of selecting at random one card from a deck of 52 playing cards and writing down the number and suit of that card. Notice that the rules are well defined:
1. We have a deck of cards
2. Somebody selects the card
3. The result is recorded
Suppose somebody decided to perform the experiment. We would then say that he/she conducted a trial. The result of that trial could have been the 3 of spades. Hence the outcome would be 3♠. Another outcome could have been 10♥ or J♦. Indeed, there are 52 possible outcomes. An event is a collection of possible outcomes. So {3♠} is an event. However, {10♥} is also an event, and {J♦} is an event as well. Further, {3♠, J♦}, {3♠, 10♥}, and {2♥, 5♣, K♥, A♦} are also events. Any combination of possible outcomes is an event. Note that for this experiment, there are 2^{52} ≈ 4.5 × 10^{15} different events.
The Axiomatic Theory of Probability
This is actually an application of a mathematical theory called Measure Theory. Both theories apply
basic concepts from set theory.
The axiomatic theory of probability is based on a triplet (Ω, Ι, P), where
• Ω is the sample space, which is the set of all possible outcomes
• Ι is the sigma algebra or sigma field, which is the set of all possible events (or combination of outcomes)
• P is the probability function, which can be any set function whose domain is Ι and whose range is the closed unit interval [0,1], i.e., any number between 0 and 1, including 0 and 1. The only requirement for this function is that it must obey the following three rules
(a) P [Ω] = 1
(b) Let A be any event in Ι, then P [A] ≥ 0
(c) Let A and B be two events in Ι such that A ∩ B = φ, then P [A ∪ B] = P [A] + P [B]
Relative Frequency Definition.
The relative frequency approach is based on the following definition: Suppose we conduct a large number of trials, n, of a given experiment. Then the probability of a given event, say A, is the following limit

$P[A]=\lim_{n\rightarrow\infty}\frac{n_A}{n}$ (1)

where n_{A} is the number of occurrences of A and n is the number of trials.
For example, suppose we conduct the above experiment 10,000 times (n = 10000). Further suppose the event A = {3♠} occurred 188 times (n_{A} = 188). Then

$\frac{n_A}{n}=\frac{188}{10000}=0.0188$

As n increases to infinity (and assuming that the cards are fair), this ratio would approach the probability of A, namely P [A] = 1/52 ≈ 0.0192.
Suppose the event B was the set of all spades. Then
B = {A♠, 2♠, 3♠, 4♠, 5♠, 6♠, 7♠, 8♠, 9♠, 10♠, J♠, Q♠, K♠}
Now we say that the event B occurs whenever the outcome of a trial is contained in B. In this case, whenever the outcome of a trial is a spade, the outcome belongs to B and we say that B occurred. So, suppose we conducted the above experiment 10,000 times, and the event B occurred 3000 times. Then
$\frac{n_B}{n}=\frac{3000}{10000}=0.3$
As n increases to infinity (and assuming that the cards are fair), the ratio would approach the probability of B.
Note that this definition makes sense. Since P [B] = ¼, it follows that the chance of B occurring on any given trial is 25%. Similarly, the chance of a 4♦ occurring on a particular trial is (1/52) × 100% ≈ 1.9%. However, note that the probability of any event will always be a number between 0 and 1. Now, suppose the event C consists of the set of all diamonds. Then
C = {A♦, 2♦,3♦,4♦,5♦, 6♦,7♦, 8♦,9♦,10♦, J♦, Q♦, K♦}
Let's look at both B and C. Note that none of the members in B belong to C and none of the members in C belong to B. We say that B and C are disjoint and that their intersection is the empty set
B ∩ C = φ
In this case the probability of either B or C occurring is equal to
$P[B\text{ or }C]=\lim_{n\rightarrow\infty}\frac{n_B+n_C}{n}=P[B\cup C]=P[B]+P[C]$ (2)
where the U symbol stands for the union of two sets. Note that (2) describes the additive concept of probability: If two events are disjoint, then the probability of their sum is the sum of their probabilities.
Finally, let the event Ω equal the set of all possible outcomes. We call Ω the certain event or the sample space. Since every possible outcome is contained in this event, n_{Ω} = n. Therefore
$P[\Omega]=\lim_{n\rightarrow\infty}\frac{n_\Omega}{n}=\lim_{n\rightarrow\infty}\frac{n}{n}=1$ (3)
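The limit in the relative-frequency definition can be explored numerically. The following Python sketch (an illustration added here, not part of the original notes) estimates the probability that a randomly drawn card is a spade by counting occurrences over many trials:

```python
import random

# Build a 52-card deck as (rank, suit) pairs.
suits = ["spade", "club", "diamond", "heart"]
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
deck = [(r, s) for s in suits for r in ranks]

random.seed(0)           # fixed seed so the run is repeatable
n = 100_000              # number of trials
n_B = sum(1 for _ in range(n) if random.choice(deck)[1] == "spade")

estimate = n_B / n       # relative frequency n_B / n
# As n grows, the estimate approaches P[B] = 13/52 = 0.25
```

With the seed fixed, the ratio lands within a percent or so of 0.25; increasing n tightens the estimate, exactly as the limit suggests.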
Example 2 Consider the experiment of flipping a fair coin twice and observing the output. What are the possible outcomes? List all possible events. Assuming that all outcomes are equally likely, assign probabilities to each event. The set of all possible outcomes is
Ω = {HH, HT, TH, TT}
There are 2^{4} = 16 events. They are
{HH},{HT},{TH},{TT}
{HH, HT} , {HH, TH} , {HH, TT} , {HT, TH} , {HT, TT} , {TH, TT}
{HH, HT, TH} , {HH, HT, TT} , {HH, TH, TT} , {HT, TH, TT}
{HH, HT, TH, TT} , φ
Note that {HH, HT, TH, TT} is the certain event. Therefore
P [{HH, HT, TH, TT}] = 1 (4)
However, from the additive property, it follows that
P [{HH, HT, TH, TT}] = P [{HH}] + P [{HT}] + P [{TH}] + P [{TT}] (5)
but since each outcome is equally likely we have that
P [{HH}] = P [{HT}] = P [{TH}] = P [{TT}] (6)
Therefore from (5) and (6), it follows that
P [{HH}] = P [{HT}] = P [{TH}] = P [{TT}] = ¼ (7)
Using the additive concept of probability we can see that
P [{HH, HT}] = P [{HH, TH}] = P [{HH, TT}] =
= P [{HT, TH}] = P [{HT, TT}] = P [{TH, TT}] = ½ (8)
P [{HH, HT, TH}] = P [{HH, HT, TT}] = P [{HH, TH, TT}] = P [{HT, TH, TT}] = 3/4 (9)
P[φ] = 0 (10)
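The probabilities assigned in Example 2 can be checked by simulation. This Python sketch (added for illustration) flips a fair coin twice per trial and tallies relative frequencies:

```python
import random

random.seed(1)
n = 100_000
# Each trial is two flips, recorded as a two-character string like "HT".
flips = ["".join(random.choice("HT") for _ in range(2)) for _ in range(n)]

p_HH = flips.count("HH") / n                        # should approach 1/4
p_three = sum(1 for f in flips if f != "TT") / n    # P[{HH, HT, TH}] -> 3/4
```

Both frequencies settle near the values derived from the additive property, ¼ and ¾.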
Joint Probability
Let the events D and E be defined as follows
D = {A♠, 2♣, 8♦, 4♠}
E = {4♥, K♠, 4♠, 2♣}
Note that
P[D] = 4/52 = 1/13
P[E] = 4/52 = 1/13
Then the intersection of D and E is another event that contains the elements common to both D and E
D ∩ E = {4♠, 2♣}
and
P[D ∩ E] = P[D, E] = P[D and E] = 2/52 = 1/26 (11)
We call P [D ∩ E] = P [D, E] = P[D and E] the joint probability of D and E. It describes the probability of both events occurring
Conditional Probability
In a number of cases, knowledge about one event provides us additional information about the occurrence of another event. Suppose that we conduct the experiment and we find out that D has occurred. Thus we know that the outcome was either A♠, 2♣, 8♦, or 4♠. Does this tell us anything about the occurrence of E? The answer is yes.
Given that D has occurred, we know that the outcome was either A♠, 2♣, 8♦, or 4♠. Since each outcome is equally likely, and these are now the only four possibilities, the probability of each of these outcomes given that D has occurred is ¼. In other words
P [A♠|D] = P [2♣|D] = P [8♦|D] = P [4♠|D] = ¼ (12)
Note that P [A|B] means the probability of A given that B has occurred
Now look at the event E = {4♥, K♠, 4♠, 2♣}. Given that D has occurred, and that the possible outcomes were only A♠, 2♣, 8♦, or 4♠, we can conclude that the outcomes 4♥ and K♠ could not have happened, because 4♥ and K♠ are not in D. Therefore
P[4♥|D] = P[K♠|D] = 0 (13)
Therefore, given that we know D has occurred, we can write the probability of E as follows
P[E|D] = P[4♥|D] + P[K♠|D] + P[4♠|D] + P[2♣|D]
= 0 + 0 + ¼ + ¼ = ½ (14)
We can also compute the probability of E given that D has occurred using the following definition.
Definition 5 Let A and B be two events from the same experiment. Then the conditional probability of A given B, P[A|B] is defined as follows
$P[A|B]\overset{\triangle}{=}\frac{P[A,B]}{P[B]}$ (15)
If P [B] = 0, then P[A|B] is undefined. So, for the case of E given D, we have that
$P[E|D]=\frac{P[E,D]}{P[D]}=\frac{1/26}{1/13}=\frac{13}{26}=\frac{1}{2}$
Similarly, for the disjoint events B and C defined earlier, we have that
$P[B|C]=\frac{P[B,C]}{P[C]}=\frac{0}{0.25}=0$
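Since every outcome of the card experiment is equally likely, a conditional probability such as P[E|D] = ½ can also be verified by direct counting over the 52 outcomes. A small Python sketch (added for illustration; card names are spelled out as strings):

```python
# Events D and E from the joint-probability example, as sets of outcome labels.
D = {"A-spade", "2-club", "8-diamond", "4-spade"}
E = {"4-heart", "K-spade", "4-spade", "2-club"}

deck_size = 52
P_D = len(D) / deck_size              # 4/52 = 1/13
P_E_and_D = len(D & E) / deck_size    # |{4-spade, 2-club}| / 52 = 1/26

# Definition 5: P[E|D] = P[E, D] / P[D]
P_E_given_D = P_E_and_D / P_D         # (1/26) / (1/13) = 1/2
```

Counting set intersections directly like this is exactly the relative-frequency argument restricted to the four outcomes of D.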
Another important concept in probability is independence.
Definition 6 Let A and B be two events from the same experiment. Then A and B are said to be independent if the joint probability of A and B is equal to the product of their two individual probabilities.
P [A, B] = P [A] P [B] (16)
From this it follows that
$P[A|B]=\frac{P[A,B]}{P[B]}=\frac{P[A]P[B]}{P[B]}=P[A]$ (17)
This implies that if two events are independent, then knowledge about one event provides absolutely no information about the other event.
Random Variables
You may have noticed that thus far we have been working with sets and events that contain odd symbols such as 3♠, 4♠, 5♦, 6♥, .... Other examples, both in this note set and the book, have worked with events written with other symbols, such as {HH} for the coin problem and {...} for the dice problem. We would like to perform some mathematical analysis and manipulation on these events. However, such a task would be difficult and would not provide much insight.
It would be better if we could assign (or map) each outcome to a number. Then we could work with these numbers using the standard mathematical techniques that we have learned over the years.
Example 3 Consider our card experiment. Suppose we assign each card an integer as follows
A♠->1
2♠->2
...
K♠->13
A♣->14
2♣->15
...
K♣->26
A♦->27
2♦->28
...
K♦->39
A♥->40
2♥->41
...
K♥->52
Then each card is assigned an integer on the real line. Suppose we denote this mapping by the function X (ζ), where ζ is any outcome.
Such a mapping is called a random variable.
Definition 7 A random variable, X (ζ), is a deterministic function that assigns each outcome, ζ ∈ Ω, of an experiment a number
X : Ω → R (18)
such that P [X (ζ) = ∞] = P [X (ζ) = -∞] = 0
The first thing you can notice about a random variable is that
1. There is nothing random about it
2. It is not a variable. Rather it is a set function.
However, in performing analysis, it is convenient to treat X (ζ) as a variable. Furthermore, since the random variable is a function of an outcome, which has an associated probability, the random variable also has an associated probability:
P[X(ζ) = x] = P[ζ = X^{-1}(x)]
Example 4 Consider the same card experiment. Suppose we define a random variable S (ζ) such that
S(ζ) = 1 if ζ contains a ♠
S(ζ) = 2 if ζ contains a ♣
S(ζ) = 3 if ζ contains a ♦
S(ζ) = 4 if ζ contains a ♥
Note that multiple outcomes can be assigned the same number. The function defining the random variable can be any function on the outcomes, provided that the probability assigned to ±∞ is zero.
What about the probabilities?
A random variable is a function of the outcomes of an experiment. Therefore, since each outcome has a probability, the number assigned to that outcome by a random variable also has a probability. The probability of a given random variable is determined as follows.
P[X(ζ) = x] = P[ζ = X^{-1}(x)]
where X ^{-1}(x) is a mapping from the real line, R, to the set of all outcomes, Ω.
Example 5 Consider the random variable X (ζ) defined in Example 3
P[X(ζ) = 1] = P[ζ = X^{-1}(1)] = P[ζ = A♠] = 1/52
P[X(ζ) = 26] = P[ζ = X^{-1}(26)] = P[ζ = K♣] = 1/52
P[X(ζ) = 28.9] = P[ζ = X^{-1}(28.9)] = P[φ] = 0
P[X(ζ) = 0] = P[ζ = X^{-1}(0)] = P[φ] = 0
P[X(ζ) ≤ 2] = P[ζ = X^{-1}((-∞, 2])] = P[ζ = {A♠, 2♠}] = 2/52
Example 6 Consider the random variable S (ζ) defined in Example 4
P[S (ζ) = 1] = P[ζ = S^{-1}(1)]
= P[ζ = {A♠, 2♠, 3♠, 4♠, 5♠, 6♠, 7♠, 8♠, 9♠, 10♠, J♠, Q♠, K♠}] = ¼
P[S (ζ) = 2] = P[ζ = S^{-1}(2)] = ¼
P[S (ζ) = 2.5] = P[ζ = S^{-1}(2.5)] = P[φ] = 0
P[2 ≤ S (ζ) ≤ 3] = P[ζ = S^{-1}([2, 3])] = P[ζ = {all ♣ and all ♦}] = ½
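The random variable S of Example 4 is easy to write down explicitly as a function from outcomes to numbers, and its probabilities then follow by counting. A Python sketch (added for illustration):

```python
# Map each suit to the number S assigns it (Example 4).
suit_value = {"spade": 1, "club": 2, "diamond": 3, "heart": 4}
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
omega = [(r, s) for s in suit_value for r in ranks]   # the 52 outcomes

def S(outcome):
    """The random variable: a deterministic function of the outcome."""
    return suit_value[outcome[1]]

# P[S = 2] is the fraction of outcomes mapped to 2 (the 13 clubs).
p_S_eq_2 = sum(1 for o in omega if S(o) == 2) / len(omega)         # 1/4
# P[2 <= S <= 3] covers the clubs and the diamonds together.
p_S_2_to_3 = sum(1 for o in omega if 2 <= S(o) <= 3) / len(omega)  # 1/2
```

The counts reproduce the probabilities computed above because all 52 outcomes are equally likely.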
Cumulative Distribution Functions and Probability Density Functions
Two ways to analyze experiments via random variables are through the cumulative distribution function (CDF) and the probability density function (pdf).
Definition 8 Let X (ζ) be a random variable defined on a sample space Ω. Then the cumulative distribution function (CDF) of X is denoted F_X (x_0) and is defined as follows
F_X (x_0) = P[X ≤ x_0]
where the set {X ≤ x_0} defines an event, or a collection of outcomes.
Example 7 Consider the random variable in Example 3. The CDF of X is
$F_X(x_0)=P[X\leq x_0]=\frac{1}{52}\sum\limits_{k=1}^{52}u(x_0-k)$
Example 8 Consider the random variable in Example 4. The CDF of S is
F_S (s_0) = ¼u (s_0 - 1) + ¼u (s_0 - 2) + ¼u (s_0 - 3) + ¼u (s_0 - 4)
Definition 9 Let X (ζ) be a random variable defined on a sample space Ω. Then the probability density function (pdf) of X is denoted f_X (x) and is defined as the derivative of its CDF
$f_X(x)=\frac{d}{dx}F_X(x)=\frac{d}{dx}P[X(\zeta)\leq x]$ (24)
Note that we can recover the CDF from the pdf through integration
$F_X(x)=\int\limits_{-\infty}^{x} f_X(a)\ da$ (25)
Therefore f_X (x) will always integrate to 1, since F_X (∞) = 1. Furthermore, the probability that X (ζ) lies between x_1 and x_2 (x_1 < x_2), inclusive, can be computed as follows
$P[x_1\leq X\leq x_2]=F_X(x_2)-F_X(x_1)=\int\limits_{x_1}^{x_2} f_X(a)\ da$ (26)
Note that the argument, ζ, has been dropped from the random variable, X. For the remainder of this discussion, we will assume that the dependence of a random variable X on ζ is implicit and omit the argument for convenience
Moments of Random Variables
Moments of random variables include a number of quantities such as averages, variances, standard deviations, etc. They are particularly useful in communications because they provide valuable information about a random variable without having to know its full statistics (i.e., the CDF or pdf).
Definition 10 Let X be a random variable with the pdf f_X (x). Then the expected value or average of X is
$X_{avg}=E[X]\overset{\triangle}{=}\int\limits_{-\infty}^\infty x f_X(x)\ dx$ (28)
Definition 11 Let X be a random variable with pdf f_X (x). Then, the variance of X is
$\sigma_X^2=E[(X-X_{avg})^2]=\int\limits_{-\infty}^\infty (x-X_{avg})^2 f_X(x)\ dx$ (29)
and the standard deviation of X is
$\sigma_X=\sqrt{\sigma_X^2}$ (30)
Definition 12 Let X be a random variable with pdf fx (x). Then, the n^{th} moment of X is
$m^n_X=\int\limits_{-\infty}^\infty x^n f_X(x)\ dx$ (31)
and the n^{th} central moment of X is
$\sigma^n_X=\int\limits_{-\infty}^{\infty} (x-X_{avg})^n f_X(x) dx$ (32)
Finally, let X be a random variable and let g (X) be some function of that random variable. In most cases this g (X) will transform X into a new random variable, which we will call Y
Y = g(X) (33)
which has its own CDF and pdf, which must be found through the use of random variable transformation techniques. However, if we are interested in the moments of Y, such as its average, then knowledge about Y's CDF or pdf are not required. Rather we can just use the already known CDF and pdf of X.
Theorem 1 Let X be a random variable with pdf fx (x). Let Y = g (X) be some transformation of X. Then the expected value or average of Y is
$Y_{avg}=E[Y]=E[g(X)]=\int\limits_{-\infty}^{\infty} g(x) f_X(x) dx$
Example 9 Let X be a random variable with the following pdf
$f_X(x)=\begin{cases} x+1 & -1\leq x\leq 0 \\ 1-x & 0<x\leq 1 \\ 0 & \text{otherwise} \end{cases}$
$X_{avg}=E[X]=\int\limits_{-\infty}^\infty x f_X(x) dx= \int\limits_{-1}^0 x(x+1) dx+\int\limits_0^1 x(1-x) dx = 0$
$\sigma_X^2=E[(X-X_{avg})^2]=E[X^2]=\int\limits_{-\infty}^\infty x^2 f_X(x)\ dx$
$=\int\limits_{-1}^0 x^2(x+1)\ dx+\int\limits_0^1 x^2(1-x)\ dx=\frac{1}{6}$
$E[X+\pi]=\int\limits_{-\infty}^\infty (x+\pi)f_X(x)\ dx$
$=\int\limits_{-1}^0 (x+\pi)(x+1)\ dx+\int\limits_0^1 (x+\pi)(1-x)\ dx=\pi$
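The three moments in Example 9 can be cross-checked numerically. The sketch below (added for illustration) approximates each integral with a midpoint Riemann sum over the pdf's support:

```python
import math

# Triangular pdf of Example 9: x+1 on [-1,0], 1-x on (0,1], 0 elsewhere.
def f_X(x):
    if -1 <= x <= 0:
        return x + 1
    if 0 < x <= 1:
        return 1 - x
    return 0.0

# Midpoint Riemann sum of g over [a, b].
def integrate(g, a=-1.0, b=1.0, steps=100_000):
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

mean = integrate(lambda x: x * f_X(x))                       # E[X] -> 0
variance = integrate(lambda x: x * x * f_X(x)) - mean ** 2   # -> 1/6
shifted = integrate(lambda x: (x + math.pi) * f_X(x))        # E[X + pi] -> pi
```

The numerical values agree with the closed-form answers 0, 1/6, and π to within the discretization error of the sum.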
Functions of a Random Variable
Definition 13 Consider a function of a random variable, g (X). The result is another random variable, which we call Y, Y = g (X). However, the CDF and pdf of Y will be different from the CDF and pdf of X. In fact, we will use information about F_X (x), f_X (x), and g (X) to determine F_Y (y) and f_Y (y).
The distribution of Y = g(X)
The CDF of the random variable Y is nothing more than the probability of the event {Y ≤ y}. This event consists of all outcomes ζ ∈ Ω such that Y (ζ) ≤ y. This is equivalent to the set of all outcomes ζ ∈ Ω such that g [X (ζ)] ≤ y, which, for a monotonically increasing g, is equivalent to the set of all outcomes ζ ∈ Ω such that X (ζ) ≤ g^{-1}(y). So
F_Y(y) = P({Y ≤ y}) = P({g(X) ≤ y}) = P({X ≤ g^{-1}(y)}) (36)
The Probability Density Function (pdf) of Y = g (X)
Once the CDF of Y is computed, the computation of the pdf of Y is straightforward
$f_Y(y)=\frac{d}{dy}F_Y(y)$
Examples
Example 10 Let X be a random variable. Let
Y = g (X) = 2X + 1
Find F_Y (y) and f_Y (y). To find F_Y (y), we must evaluate the event {Y ≤ y}, where Y is the random variable and y is simply a number.
On a graph of Y = 2X + 1, the event {ζ : Y (ζ) ≤ y} contains exactly the same outcomes as the event {ζ : X (ζ) ≤ ½(y - 1)}. Therefore
{Y ≤ y} = {2X + 1 ≤ y} = {X ≤ ½(y - 1)} = {X ≤ g^{-1}(y)}
Note that the inverse function g^{-1}(y) is
g^{-1}(y) = ½(y - 1)
So
F_Y(y) = P[Y ≤ y] = P[2X + 1 ≤ y] = P[X ≤ ½(y - 1)] = F_X(½(y - 1))
Now suppose X was uniformly distributed between 0 and 1. Then
$f_X(x)=u(x)-u(x-1)=\begin{cases} 1 & 0\leq x \leq 1\\ 0 & \text{otherwise} \end{cases}$ (38)
$F_X(x)=x[u(x)-u(x-1)]=\begin{cases} 0 & x < 0 \\ x & 0\leq x \leq 1\\ 1 &x>1 \end{cases}$
We can now plot Fy (y). Let's plug in some values
y = 0: F_Y(0) = F_X(½(0 - 1)) = F_X(-½) = 0
y = 1: F_Y(1) = F_X(½(1 - 1)) = F_X(0) = 0
y = 2: F_Y(2) = F_X(½(2 - 1)) = F_X(½) = ½
y = 3: F_Y(3) = F_X(½(3 - 1)) = F_X(1) = 1
So
$F_Y(y)=\begin{cases} 0 & y < 1 \\ \frac{1}{2}(y-1) & 1\leq y \leq 3\\ 1 & y>3 \end{cases}$ (41)
The density of Y, f_Y (y), is simply
$f_Y(y)=\frac{d}{dy} F_Y(y)$
and from (41) we have that
$f_Y(y)=\begin{cases} \frac{1}{2} & 1\leq y \leq 3\\ 0 & \text{otherwise} \end{cases}$ (43)
We can also compute f_Y (y) as follows
$f_Y(y)=\frac{d}{dy} F_Y(y)=\frac{d}{dy} F_X\left(\frac{1}{2}(y-1)\right) = \frac{1}{2} f_X\left(\frac{1}{2}(y-1)\right)$
Substituting (38) into the above equation, we get the same result as (43).
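Example 10's result is easy to sanity-check by simulation: draw X uniform on [0,1], form Y = 2X + 1, and compare the empirical CDF with F_Y(y) = ½(y - 1) at a test point. A Python sketch (added for illustration):

```python
import random

random.seed(2)
n = 100_000
y0 = 2.0   # test point; the derived CDF gives F_Y(2) = (2 - 1)/2 = 1/2

# Empirical CDF of Y = 2X + 1 at y0, with X uniform on [0, 1].
count = sum(1 for _ in range(n) if 2 * random.random() + 1 <= y0)
F_Y_empirical = count / n
```

The empirical fraction converges to ½ as n grows, matching the derived F_Y.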
Example 11 Let X be a random variable. Let
Y = g(X) = X^2
Then for y ≥ 0, we have that Y ≤ y when X^2 ≤ y, i.e., when -√y ≤ X ≤ √y.
Note that in this case there are two inverse functions for g (X):
g_1^{-1}(y) = √y
g_2^{-1}(y) = -√y
So
F_Y (y) = P(Y ≤ y) = P(X^2 ≤ y) = P(-√y ≤ X ≤ √y) = F_X (√y) - F_X (-√y)
and
$f_Y(y)=\frac{d}{dy}F_Y(y)=\frac{1}{2\sqrt{y}}\left[f_X(\sqrt{y})+f_X(-\sqrt{y})\right],\qquad y>0$
Now for y < 0, there are no values of x such that x^2 ≤ y. Therefore
F_Y(y) = 0, y < 0
Again, suppose X is uniform on the interval [0,1]. Then we have that
$F_Y(y)=F_X(\sqrt{y})-F_X(-\sqrt{y})=\begin{cases} 0 & y<0 \\ \sqrt{y} & 0\leq y\leq 1 \\ 1 & y>1 \end{cases}$
and
$f_Y(y)=\begin{cases} \frac{1}{2\sqrt{y}} & 0<y\leq 1 \\ 0 & \text{otherwise} \end{cases}$
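The square-law result can be checked the same way: for X uniform on [0,1] and Y = X², the derived CDF gives F_Y(y) = √y on [0,1]. A Monte Carlo sketch (added for illustration), tested at y = 0.25:

```python
import random

random.seed(3)
n = 100_000
y0 = 0.25   # F_Y(0.25) should be sqrt(0.25) = 0.5

count = sum(1 for _ in range(n) if random.random() ** 2 <= y0)
F_Y_empirical = count / n
```

The empirical fraction approaches √0.25 = 0.5, in agreement with the transformation result.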
Figure 1: pdf of a Gaussian Random Variable
Common Random Variables
The Gaussian Random Variable
The Gaussian random variable is the most common of all random variables. It is used to characterize a number of random phenomena, from the noise in most communication systems to the random fluctuations of the desired received voltage in non-line-of-sight wireless systems such as cellular and personal communication systems (PCS). The Gaussian random variable has a pdf that is completely defined by its mean m and variance σ^2. Let X be a Gaussian random variable with a mean of m_X and a variance of σ^2_X. Then f_X (x) is
$f_X(x)=\frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-\frac{(x-m_X)^2}{2\sigma_X^2}}$
and is plotted in Figure 1. The CDF of X is found in the usual way
$F_X(x)=P[X\leq x]=\int\limits_{-\infty}^{x}\frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-\frac{(a-m_X)^2}{2\sigma_X^2}}\,da$
This integral cannot be evaluated in closed form. However, the integral is well tabulated. A tabulation common to communication engineers is the Q-function, where
$Q(x)=\frac{1}{\sqrt{2\pi}}\int\limits_{x}^{\infty}e^{-\frac{a^2}{2}}\,da$
Therefore,
$F_X(x)=1-Q\left(\frac{x-m_X}{\sigma_X}\right)$
Other well-tabulated integrals include the error function and the complementary error function. The error function is defined as follows
$\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int\limits_{0}^{x}e^{-t^2}\,dt$
The complementary error function is
$\mathrm{erfc}(x)=1-\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int\limits_{x}^{\infty}e^{-t^2}\,dt$
Note that the complementary error function and the Q-function are related as follows
$Q(x)=\frac{1}{2}\,\mathrm{erfc}\left(\frac{x}{\sqrt{2}}\right)$
Note that P [x_1 < X < x_2] is computed as follows
$P[x_1<X<x_2]=F_X(x_2)-F_X(x_1)=Q\left(\frac{x_1-m_X}{\sigma_X}\right)-Q\left(\frac{x_2-m_X}{\sigma_X}\right)$
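The relation Q(x) = ½ erfc(x/√2) makes the Q-function trivial to evaluate with the standard library, since Python's `math` module provides `erfc`. A sketch (added for illustration):

```python
import math

def Q(x):
    """Q-function via the complementary error function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def prob_between(x1, x2, m, sigma):
    """P[x1 < X < x2] for a Gaussian with mean m and standard deviation sigma."""
    return Q((x1 - m) / sigma) - Q((x2 - m) / sigma)

half = Q(0)                            # 0.5: half the mass lies above the mean
one_sigma = prob_between(-1, 1, 0, 1)  # ~0.6827, the familiar one-sigma mass
```

Q(0) = 0.5 and the one-sigma probability ≈ 0.6827 serve as quick sanity checks against any Q-function table.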
The Rayleigh Random Variable
Let X and Y be two independent Gaussian random variables with m_X = m_Y = 0 and σ^2_X = σ^2_Y = σ^2. Suppose we define a new random variable Z from the transformation
$Z=\sqrt{X^2+Y^2}$
Now if X and Y represent the received voltages from two signal components in a wireless channel, then Z represents the received envelope, and Z is called a Rayleigh random variable. The pdf of Z is
$f_Z(z)=\frac{z}{\sigma^2}\,e^{-\frac{z^2}{2\sigma^2}}\,u(z)$
and is plotted below for various values of σ^{2}_{x} = σ_{y}^{2} = σ^{2}
The CDF of Z is found by integrating the pdf. Therefore
$F_Z(z)=\left(1-e^{-\frac{z^2}{2\sigma^2}}\right)u(z)$
The received signal envelope in a cellular or PCS system, where line-of-sight is not established, is typically modeled as a Rayleigh random variable
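The envelope construction can be simulated directly: draw two independent zero-mean Gaussians, form Z = √(X² + Y²), and compare the empirical CDF with 1 − e^{−z²/(2σ²)}. A Python sketch (added for illustration):

```python
import math
import random

random.seed(4)
sigma = 1.0
n = 100_000
z0 = 1.0   # test point for the CDF

count = sum(
    1 for _ in range(n)
    if math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) <= z0
)
F_Z_empirical = count / n
F_Z_theory = 1 - math.exp(-z0 ** 2 / (2 * sigma ** 2))   # ~0.3935
```

The empirical and theoretical CDF values agree to within Monte Carlo error, confirming the Rayleigh form of the envelope.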
The Exponential Random Variable
Let Z be a Rayleigh random variable. Suppose we define a new random variable, W, such that
$W=Z^2$
If Z represents the voltage envelope of a signal, then W represents the power in that signal, to within a constant. Then W has the following pdf
$f_W(w)=\frac{1}{2\sigma^2}\,e^{-\frac{w}{2\sigma^2}}\,u(w)$
W is called an exponential random variable, and its pdf is plotted below
The CDF of W is
$F_W(w)=\left(1-e^{-\frac{w}{2\sigma^2}}\right)u(w)$
Exponential random variables are used to model the received power in cellular and PCS systems where line-of-sight propagation is not established
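The power transformation W = Z² can be simulated the same way: summing the squares of two independent zero-mean Gaussians gives W directly, and the empirical CDF should match 1 − e^{−w/(2σ²)}. A Python sketch (added for illustration):

```python
import math
import random

random.seed(5)
sigma = 1.0
n = 100_000
w0 = 2.0   # test point for the CDF

count = sum(
    1 for _ in range(n)
    if random.gauss(0, sigma) ** 2 + random.gauss(0, sigma) ** 2 <= w0
)
F_W_empirical = count / n
F_W_theory = 1 - math.exp(-w0 / (2 * sigma ** 2))   # ~0.6321
```

Again the empirical fraction tracks the exponential CDF to within Monte Carlo error.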
The Uniform Random Variable
In systems or signals where the phase is not known, it is generally modeled as a random variable Θ, which is uniformly distributed between 0 and 2π. The pdf of such a random variable is
$f_\Theta(\theta)=\begin{cases} \frac{1}{2\pi} & 0\leq\theta\leq 2\pi \\ 0 & \text{otherwise} \end{cases}$
The CDF is
$F_\Theta(\theta)=\begin{cases} 0 & \theta<0 \\ \frac{\theta}{2\pi} & 0\leq\theta\leq 2\pi \\ 1 & \theta>2\pi \end{cases}$
The Central Limit Theorem
Suppose we take a large number of random variables and sum them together. The central limit theorem states that the resulting random variable will tend toward a Gaussian distribution. This is one of the reasons why Gaussian random variables are so common. A more formal statement of the theorem follows
Theorem 2 Let X_i, i = 1, 2, ..., N be a collection of independent random variables, each with a mean of 0 and a variance of 1. Define a new random variable, Y, as follows
$Y=\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}X_i$
Then, as N → ∞, Y approaches a Gaussian random variable with a mean of 0 and a variance of 1, i.e., m_Y = 0, σ^2_Y = 1.
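The theorem can be illustrated numerically: a uniform on [0,1] has mean ½ and variance 1/12, so (U − ½)√12 has mean 0 and variance 1; sum N of them and scale by 1/√N. A Python sketch (added for illustration) checks the mean, the variance, and one point of the resulting distribution:

```python
import math
import random

random.seed(6)
N = 30          # summands per sample; larger N -> closer to Gaussian
trials = 50_000

def y_sample():
    # (1/sqrt(N)) * sum of N zero-mean, unit-variance summands
    total = sum((random.random() - 0.5) * math.sqrt(12) for _ in range(N))
    return total / math.sqrt(N)

samples = [y_sample() for _ in range(trials)]
mean = sum(samples) / trials
variance = sum((s - mean) ** 2 for s in samples) / trials
frac_below_zero = sum(1 for s in samples if s <= 0) / trials  # ~0.5 for zero-mean Gaussian
```

Even with N = 30 the sample mean, variance, and median behave like those of a standard Gaussian, which is why Gaussian models appear so often in practice.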
The Rician Random Variable
We just learned that when line-of-sight does not exist in a cellular system, the received envelope follows a Rayleigh distribution. However, if line-of-sight is established, then the envelope follows a Rician distribution, which has the following pdf
$f_Z(z)=\frac{z}{\sigma^2}\,e^{-\frac{z^2+v^2}{2\sigma^2}}\,I_0\left(\frac{zv}{\sigma^2}\right)u(z)$
where I_0 (·) is the modified Bessel function of the zeroth order and v is the amplitude of the line-of-sight component. Note that when v = 0, I_0 (0) = 1 and Z degenerates to a Rayleigh random variable.