ELEC321 Communication Systems: Detailed Solution of Assignment 2

STUDENT NAME
STUDENT ID NUMBER
DATE OF SUBMISSION

QUESTION 1
Section A
A matched filter is the optimal linear filter for maximizing the output SNR in the presence of additive zero-mean white Gaussian noise of a given power spectral density. A common application is radar, where a known pulse is transmitted and the reflected return is correlated against it; matched filters are also used in image processing, where the improved SNR enhances features in, for example, X-ray images.
The signal is a rectangular pulse given as
$$s(t) = \begin{cases} A, & 0 \le t \le T \\ 0, & \text{otherwise} \end{cases}$$
The noise power spectral density is
$$\frac{N_0}{2} \ \text{watts/Hz}$$
Part 1
The matched filter output is $y(t)$, and the filter input is
$$x(t) = s(t) + n(t), \qquad 0 \le t \le T$$
The output is expressed as
$$y(t) = s_0(t) + n_0(t)$$
where $s_0(t)$ and $n_0(t)$ are the filtered signal and noise components.
[Block diagram: the signal $s(t)$ and the noise $n(t)$ are summed and applied to the matched filter, producing $y(t)$.]

The output contains both the signal and the noise components of the input. The peak pulse SNR to be maximized is
$$\eta = \frac{\left| s_0(t_0) \right|^2}{E\left[ n_0^2(t) \right]}$$
where $t_0$ is the sampling instant.
Part 2
The transfer function of the filter that maximizes the peak pulse SNR in white Gaussian noise is
$$H(f) = \frac{2K}{N_0}\, S^*(f)\, e^{-j 2\pi f t_0}$$
When the input noise is zero-mean white Gaussian, the impulse response of the matched filter follows from the inverse Fourier transform:
$$h(t) = F^{-1}\left[ H(f) \right] = \frac{2K}{N_0} \int_{-\infty}^{\infty} S^*(f)\, e^{-j 2\pi f t_0}\, e^{j 2\pi f t}\, df$$
Part 3
Since the signal $s(t)$ is a real-valued signal, $S^*(f) = S(-f)$, and the integral reduces to
$$h(t) = \frac{2K}{N_0} \left[ \int_{-\infty}^{\infty} S(f)\, e^{j 2\pi f (t_0 - t)}\, df \right] = \frac{2K}{N_0}\, s(t_0 - t)$$
that is, $h(t) = C\, s(t_0 - t)$ with $C = 2K/N_0$. The impulse response is a time-reversed replica of the original (noise-free) input signal, delayed by $t_0$. With this filter the maximum peak pulse SNR becomes
$$\eta_{\max} = \frac{2E}{N_0}, \qquad E = \int_0^T s^2(t)\, dt = A^2 T$$
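To make this concrete, here is a minimal MATLAB sketch of the matched filter for the rectangular pulse. The values of A, T, N0 and the sampling rate are illustrative assumptions, not values given in the question.

% Matched-filter sketch for the rectangular pulse s(t) = A on [0, T].
% Illustrative assumptions: A = 1, T = 1e-3 s, N0 = 1e-4 W/Hz, fs = 1e6 Hz.
A = 1; T = 1e-3; N0 = 1e-4; fs = 1e6; dt = 1/fs;
t = 0:dt:T-dt;
s = A*ones(size(t));                      % rectangular pulse s(t)
n = sqrt(N0/2/dt)*randn(size(t));         % discrete-time noise approximating PSD N0/2
x = s + n;                                % filter input x(t) = s(t) + n(t)
h = fliplr(s);                            % matched filter h(t) = s(T - t), with C = 1
y = conv(x, h)*dt;                        % filter output y(t), peaks near t = T
s0_peak = max(conv(s, h)*dt);             % noiseless output peak, equals E = A^2*T
var_n0  = (N0/2)*sum(h.^2)*dt;            % output noise variance, equals (N0/2)*E
snr_peak   = s0_peak^2 / var_n0;          % peak pulse SNR of the matched filter
snr_theory = 2*A^2*T/N0;                  % theoretical maximum, 2E/N0
fprintf('peak SNR = %.2f, theory 2E/N0 = %.2f\n', snr_peak, snr_theory);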
Part 4
To show that
$$y(T) = \int_0^T x(\tau)\, h(T - \tau)\, d\tau$$
note that with $h(t) = s(T - t)$ (taking $t_0 = T$ and $C = 1$), the substitution $h(T - \tau) = s(\tau)$ gives
$$y(T) = \int_0^T x(\tau)\, s(\tau)\, d\tau = \left\langle x(t),\, s(t) \right\rangle_{L^2}$$
so sampling the matched filter output at $t = T$ is equivalent to correlating the received signal with $s(t)$.
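A small numerical check of this equivalence on a discrete grid (the values of A, T and the noise level are chosen only for the demonstration):

% Check that sampling the matched-filter output at t = T equals <x, s> in L2.
A = 1; T = 1; dt = 1e-4; t = 0:dt:T-dt;
s = A*ones(size(t));                  % rectangular pulse s(t)
x = s + 0.3*randn(size(t));           % arbitrary noisy input x(t) = s(t) + n(t)
h = fliplr(s);                        % h(t) = s(T - t)
y = conv(x, h)*dt;                    % y(t) for t in [0, 2T)
y_at_T     = y(length(t));            % convolution output sampled at t = T
inner_prod = sum(x.*s)*dt;            % <x, s> = integral of x(tau)*s(tau) dtau
fprintf('y(T) = %.4f, <x,s> = %.4f\n', y_at_T, inner_prod);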
Section B

Part 1
QPSK transmission must meet a specified probability of bit error $P_B$. The received signals are
$$s_m(t) = \sqrt{2 P_r}\, \cos\!\left[ 2\pi f_0 t + \phi(m) \right], \qquad 0 \le t < T$$
$$\phi(m) \in \left\{ \frac{\pi}{4},\ \frac{3\pi}{4},\ \frac{5\pi}{4},\ \frac{7\pi}{4} \right\}$$
where $P_r$ is the received average signal power. The channel output is
$$r(t) = s_m(t) + n(t)$$
We want the maximum bit rate, in bits per second, at which data can be transmitted while still meeting the $P_B$ requirement for a given $P_r / N_0$, where
$$P_r = \frac{1}{T} \int_0^T s_m^2(t)\, dt$$
The Q-function and its inverse are defined by
$$y = Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-\alpha^2 / 2}\, d\alpha, \qquad x = Q^{-1}(y)$$
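One way to combine these definitions into the maximum bit rate, assuming the standard Gray-coded QPSK result $P_b = Q\!\left(\sqrt{2E_b/N_0}\right)$ (consistent with the observation below that QPSK and BPSK share the same bit error probability), is
$$Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) \le P_B \;\Rightarrow\; E_b \ge \frac{N_0}{2}\left[ Q^{-1}(P_B) \right]^2, \qquad E_b = P_r T_b = \frac{P_r}{R_b} \;\Rightarrow\; R_{b,\max} = \frac{2\, P_r / N_0}{\left[ Q^{-1}(P_B) \right]^2} \ \text{bps}$$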
The QPSK constellation for the carrier phases $\phi(m)$ listed above is:
[Figure: QPSK constellation with points at phases $\pi/4,\ 3\pi/4,\ 5\pi/4,\ 7\pi/4$.]
The striking result is that the bit error probability of QPSK is identical to that of binary phase shift keying, even though QPSK carries twice the data in the same channel bandwidth.
QPSK therefore provides twice the spectral efficiency of BPSK at the same energy efficiency. On the orthonormal basis $\phi_1(t), \phi_2(t)$ the transmitted signal is
$$s_{QPSK}(t) = \sqrt{E_s}\, \cos\!\left[ (i-1)\frac{\pi}{2} \right] \phi_1(t) - \sqrt{E_s}\, \sin\!\left[ (i-1)\frac{\pi}{2} \right] \phi_2(t), \qquad i = 1, 2, 3, 4$$
QPSK can also be differentially encoded to allow non-coherent detection, and the bit rate in each of the I and Q channels is half the input data rate.
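Since the constellation figure is only referenced above, a short MATLAB sketch that plots the four constellation points, assuming unit symbol energy (Es = 1):

% Plot the QPSK constellation for phi(m) in {pi/4, 3pi/4, 5pi/4, 7pi/4}, Es = 1 assumed.
Es  = 1;
phi = [pi/4, 3*pi/4, 5*pi/4, 7*pi/4];
pts = sqrt(Es)*exp(1j*phi);               % constellation points
plot(real(pts), imag(pts), 'bo', 'MarkerFaceColor', 'b');
axis([-1.5 1.5 -1.5 1.5]); axis square; grid on
xlabel('\phi_1 (in-phase)'); ylabel('\phi_2 (quadrature)');
title('QPSK constellation');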
Part 2
MATLAB implementation of QPSK transmission, simulating the symbol error probability as a function of Es/N0.
clear
N = 10^5;                                % number of symbols
Es_N0_dB = -3:20;                        % range of Es/N0 values in dB
simSer_QPSK = zeros(1,length(Es_N0_dB)); % simulated symbol error rate
for ii = 1:length(Es_N0_dB)
    ip = (2*(rand(1,N)>0.5)-1) + 1j*(2*(rand(1,N)>0.5)-1); % QPSK (4-QAM) symbols
    s  = (1/sqrt(2))*ip;                                   % normalise symbol energy to 1
    n  = (1/sqrt(2))*(randn(1,N) + 1j*randn(1,N));         % white Gaussian noise, 0 dB variance
    y  = s + 10^(-Es_N0_dB(ii)/20)*n;                      % received signal with AWGN
    ipHat = sign(real(y)) + 1j*sign(imag(y));              % hard-decision demodulation
    simSer_QPSK(ii) = sum(ipHat ~= ip)/N;                  % simulated symbol error rate
end
theorySer_QPSK = erfc(sqrt(0.5*10.^(Es_N0_dB/10))) ...
    - (1/4)*erfc(sqrt(0.5*10.^(Es_N0_dB/10))).^2;          % theoretical QPSK SER
figure
semilogy(Es_N0_dB,theorySer_QPSK,'b.-');
hold on
semilogy(Es_N0_dB,simSer_QPSK,'mx-');
axis([-3 15 10^-5 1])
grid on
legend('theory-QPSK', 'simulation-QPSK');
xlabel('Es/No, dB')
ylabel('Symbol Error Rate')
title('Symbol error probability curve for QPSK(4-QAM)')
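With N = 10^5 symbols per Es/N0 point, the simulated curve should track the theoretical one closely, but it becomes unreliable below a symbol error rate of roughly 10^-4, where only a handful of errors are counted per point.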
[Figure: Symbol error probability curve for QPSK (4-QAM); symbol error rate (log scale, $10^{-5}$ to $10^{0}$) versus $E_s/N_0$ in dB, with the theory-QPSK and simulation-QPSK curves in close agreement.]
QUESTION 2
A binary communication system is used to transmit bits with a-priori probabilities satisfying
$$\pi_0 + \pi_1 = 1$$
The channel output is a continuous random variable with known conditional probability density functions. The a-priori probabilities are often modeled as equiprobable; for the general (multistage) detection analysis, however, let $P_0$ and $P_1 = 1 - P_0$ be arbitrary, and let $V$ be the channel-output random variable whose conditional densities
$$f_{V|U}(v \mid a_m), \qquad m \in \{0, 1\}$$
are finite and non-zero for all $v \in \mathbb{R}$.

Part A
Densities that are zero at some points, or that contain discrete components, would require modifications; here the conditional densities are assumed finite and non-zero. These conditional densities are the likelihoods, in the jargon of hypothesis testing. The marginal density of $V$ is
$$f_V(v) = P_0\, f_{V|U}(v \mid a_0) + P_1\, f_{V|U}(v \mid a_1)$$
The a-posteriori probability of $U$ for $m = 0$ or $1$ is given by
$$p_{U|V}(a_m \mid v) = \frac{P_m\, f_{V|U}(v \mid a_m)}{f_V(v)}$$
For the MAP decision, the likelihood ratio
$$\Lambda(v) = \frac{f_{V|U}(v \mid a_0)}{f_{V|U}(v \mid a_1)}$$
is compared with the threshold
$$\eta = \frac{p_1}{p_0}$$
The threshold depends on the a-priori probabilities; adjusting the priors adjusts the threshold, and in particular
$$p_0 = p_1 \;\Rightarrow\; \eta = 1$$
When the threshold is unity, the decision rule becomes
$$\hat{U}(v) = a_0 \quad \text{if} \quad f_{V|U}(v \mid a_0) \ge f_{V|U}(v \mid a_1), \qquad \hat{U}(v) = a_1 \quad \text{otherwise}$$
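Before specializing to equal priors, here is a minimal MATLAB sketch of this MAP rule for a hypothetical pair of Gaussian likelihoods; the signal levels, noise variance and priors below are illustrative assumptions, not values from the question.

% Minimal MAP-detection sketch (assumed: a0 = +1, a1 = -1, Gaussian likelihoods
% with variance sigma2, priors p0 and p1 = 1 - p0).
p0 = 0.7; p1 = 1 - p0;            % assumed a-priori probabilities
a0 = +1; a1 = -1; sigma2 = 0.5;   % assumed signal levels and noise variance
N  = 1e5;
U  = (rand(1,N) < p1);                                 % 0 -> a0 sent, 1 -> a1 sent
v  = a0*(U==0) + a1*(U==1) + sqrt(sigma2)*randn(1,N);  % channel output
f0 = exp(-(v - a0).^2/(2*sigma2));                     % likelihood under a0 (common factor dropped)
f1 = exp(-(v - a1).^2/(2*sigma2));                     % likelihood under a1
Lambda = f0./f1;                                       % likelihood ratio
eta    = p1/p0;                                        % MAP threshold
Uhat   = (Lambda < eta);                               % decide a1 when Lambda < eta
Pe_sim = mean(Uhat ~= U);                              % estimated overall error probability
fprintf('Estimated Pr{e} = %.4f\n', Pe_sim);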

This unity-threshold rule is the maximum likelihood (ML) rule or test; when the a-priori probabilities are equal, the MAP rule reduces to the ML rule. The probability of error for MAP detection is derived by conditioning on each hypothesis,
$$\Pr\{e \mid U = a_1\} \quad \text{and} \quad \Pr\{e \mid U = a_0\}$$
The overall probability of error is
$$\Pr\{e\} = p_0 \Pr\{e \mid U = a_0\} + p_1 \Pr\{e \mid U = a_1\}$$
Part B
For MAP detection with threshold $\eta$,
$$\Pr\{e \mid U = a_0\} = \Pr\{\hat{U} = a_1 \mid U = a_0\} = \Pr\{\Lambda(v) < \eta \mid U = a_0\}$$
$$\Pr\{e \mid U = a_1\} = \Pr\{\hat{U} = a_0 \mid U = a_1\} = \Pr\{\Lambda(v) \ge \eta \mid U = a_1\}$$
Using the values provided in the question, the conditional probability density functions are $f_R(r \mid 0)$ and $f_R(r \mid 1)$.
Therefore,
$$P(\text{error}) = P(1 \text{ decided},\ 0 \text{ transmitted}) + P(0 \text{ decided},\ 1 \text{ transmitted})$$
$$P(\text{error}) = P(1_D, 0_T) + P(0_D, 1_T)$$
$$P(\text{error}) = P(0_T) \int_0^{\infty} f(r \mid 0_T)\, dr + P(1_T) \int_{-\infty}^{0} f(r \mid 1_T)\, dr$$
Part C
Since the a-priori probabilities are $P(0_T) = P(1_T) = 0.5$, the expression is rewritten as
$$P(\text{error}) = \frac{1}{2} \int_0^{\infty} f(r \mid 0_T)\, dr + \frac{1}{2} \int_{-\infty}^{0} f(r \mid 1_T)\, dr$$
For the continuous density functions, with $s_0 = \sqrt{E_s}$,
$$P(1_D \mid 0_T) = Q\!\left( \sqrt{\frac{E_s}{N_0 / 2}} \right)$$
$$P(0_D \mid 1_T) = Q\!\left( \sqrt{\frac{E_s}{N_0 / 2}} \right)$$

Part D
Substituting into the previous equation,
$$P(\text{error}) = 0.5\, Q\!\left( \sqrt{\frac{E_s}{N_0 / 2}} \right) + 0.5\, Q\!\left( \sqrt{\frac{E_s}{N_0 / 2}} \right)$$
$$P(\text{error}) = Q\!\left( \sqrt{\frac{2 E_s}{N_0}} \right) = 0.5\, \mathrm{erfc}\!\left( \sqrt{\frac{E_s}{N_0}} \right)$$
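A quick numerical check of the identity used here, $Q(x) = \tfrac{1}{2}\mathrm{erfc}(x/\sqrt{2})$, over an assumed range of $E_s/N_0$ values:

% Numerical check that Q(sqrt(2*Es/N0)) equals 0.5*erfc(sqrt(Es/N0)).
EsN0_dB = 0:2:10;                       % assumed Es/N0 values in dB
EsN0    = 10.^(EsN0_dB/10);
Qfun    = @(x) 0.5*erfc(x/sqrt(2));     % Q-function expressed via erfc
Pe_Q    = Qfun(sqrt(2*EsN0));
Pe_erfc = 0.5*erfc(sqrt(EsN0));
disp([EsN0_dB.' Pe_Q.' Pe_erfc.'])      % the last two columns agree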
QUESTION 3
Binary transmission of information.
Part A
The signal set consists of two signals with a-priori probabilities
$$\pi_0 = \Pr\{\text{transmit } s_0(t)\} = \frac{1}{2}, \qquad \pi_1 = \Pr\{\text{transmit } s_1(t)\} = \frac{1}{2}$$
The received signal under each hypothesis is
$$H_0: \ r(t) = s_0(t) + n(t), \qquad 0 \le t < T$$
$$H_1: \ r(t) = s_1(t) + n(t), \qquad 0 \le t < T$$
where $n(t)$ is zero-mean white Gaussian noise with power spectral density
$$S_n(f) = \frac{N_0}{2}, \qquad -\infty < f < \infty$$
The two signals are
$$s_0(t) = \begin{cases} A, & 0 \le t \le \tfrac{3}{4}T \\ 0, & \text{otherwise} \end{cases} \qquad\qquad s_1(t) = \begin{cases} A, & \tfrac{1}{4}T \le t \le T \\ 0, & \text{otherwise} \end{cases}$$
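As a numerical illustration of this signal set (with assumed values A = 1 and T = 1), the following sketch computes the pulse energies, their correlation coefficient, and the squared distance between them:

% Energies and correlation of the two pulses (assumed A = 1, T = 1, fine grid).
A = 1; T = 1; dt = 1e-4; t = 0:dt:T;
s0 = A*(t <= 3*T/4);                 % s0(t): A on [0, 3T/4]
s1 = A*(t >= T/4);                   % s1(t): A on [T/4, T]
E0  = sum(s0.^2)*dt;                 % energy of s0, approx 3*A^2*T/4
E1  = sum(s1.^2)*dt;                 % energy of s1, approx 3*A^2*T/4
rho = sum(s0.*s1)*dt/sqrt(E0*E1);    % correlation coefficient, approx 2/3
d2  = sum((s0 - s1).^2)*dt;          % squared distance, approx A^2*T/2
fprintf('E0 = %.3f, E1 = %.3f, rho = %.3f, d^2 = %.3f\n', E0, E1, rho, d2);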

Part B
The average energy per bit is
$$E_b = \pi_0 \int_0^T s_0^2(t)\, dt + \pi_1 \int_0^T s_1^2(t)\, dt$$
Signals are measured across a 1 Ω load, and the optimal minimum-$P_B$ detector, based on Bayes' theorem of conditional probability, is used.
$$\omega_c = \frac{2\pi}{T_s} = N \left( \frac{2\pi}{T_b} \right)$$
$$\rho = \frac{A^2 T_b}{2\eta}$$
$$SNR_e = 10 \log_{10}\!\left[ \frac{A^2 T_b}{2\eta} \right]$$
$$P_{\exp} = \frac{\text{total number of erroneous bits}}{\text{total number of transmitted bits}}$$
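For the signal set defined in Part A, where each pulse has amplitude $A$ over an interval of length $3T/4$, the average bit energy evaluates to
$$E_b = \frac{1}{2} A^2 \cdot \frac{3T}{4} + \frac{1}{2} A^2 \cdot \frac{3T}{4} = \frac{3 A^2 T}{4}$$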
Part C
The probability of bit error as a function of $E_b/N_0$ is given as
$$P_b = \mathrm{erfc}\!\left( \sqrt{\frac{E_b}{N_0}} \right) \left[ 1 - \frac{1}{2}\, \mathrm{erfc}\!\left( \sqrt{\frac{E_b}{N_0}} \right) \right]$$
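A short sketch that plots this expression over an assumed range of Eb/N0 values:

% Plot the bit error probability expression from Part C.
EbN0_dB = 0:0.5:12;                    % assumed Eb/N0 range in dB
EbN0    = 10.^(EbN0_dB/10);
Pb = erfc(sqrt(EbN0)).*(1 - 0.5*erfc(sqrt(EbN0)));
semilogy(EbN0_dB, Pb, 'b-');
grid on
xlabel('E_b/N_0, dB'); ylabel('P_b');
title('Probability of bit error vs E_b/N_0');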
QUESTION 4
Part A