SOLUTIONS MANUAL
to accompany

Digital Signal Processing:
A Computer-Based Approach
Second Edition

Sanjit K. Mitra

Prepared by
Rajeev Gandhi, Serkan Hatipoglu, Zhihai He, Luca Lucchese,
Michael Moore, and Mylene Queiroz de Farias

Errata List
of
"Digital Signal Processing: A Computer-Based Approach", Second Edition
Chapter 2
1. Page 48, Eq. (2.17): Replace "y[n]" with "x_u[n]".
2. Page 51, Eq. (2.24a): Delete "½(x[n] + x*[N − n])". Eq. (2.24b): Delete "½(x[n] − x*[N − n])".
3. Page 59, Line 5 from top and line 2 from bottom: Replace "− cos((ω1 + ω 2 − π)n ) " with
"cos((2π – ω1 – ω 2 )n) ".
4. Page 61, Eq. (2.52): Replace "A cos((Ω o + k ΩT )t + φ ) " with "A cos( ±(Ωo t + φ) + k Ω Tt ) ".
5. Page 62, line 11 from bottom: Replace " Ω T > 2Ω o " with " Ω T > 2 Ω o ".
6. Page 62, line 8 from bottom: Replace "2πΩ o / ω T " with "2πΩ o / ΩT ".
7. Page 65, Program 2_4, line 7: Replace "x = s + d;" with "x = s + d';".
8. Page 79, line 5 below Eq. (2.76): Replace "∑_{n=0}^{∞} α^n" with "∑_{n=0}^{∞} |α|^n".

9. Page 81, Eq. (2.88): Replace "α_{L+1} λ_2^n + α_N λ_{N−L}^n" with "α_{L+1} λ_2^n + ⋯ + α_N λ_{N−L}^n".

10. Page 93, Eq. (2.116): Replace the lower limit "n=–M+1" on all summation signs with "n=0".
11. Page 100, line below Eq. (2.140) and caption of Figure 2.38: Replace "ω o = 0.03 " with
" ω o = 0.06π ".
12. Page 110, Problem 2.44: Replace "{y[n]} = {−1, −1, 11, −3, −10, 20, −16} " with
"{y[n]} = {−1, −1, 11, −3, 30, 28, 48} ", and
"{y[n]} = {−14 − j5, −3 − j17, −2 + j 5, −26 + j 22, 9 + j12} " with
"{y[n]} = {−14 − j5, −3 − j17, −2 + j 5, −9.73 + j12.5, 5.8 + j5.67} ".
13. Page 116, Exercise M2.15: Replace "randn" with "rand".
Chapter 3
1. Page 118, line 10 below Eq. (3.4): Replace "real" with "even".

2. Page 121, line 5 below Eq. (3.9): Replace "∑_{n=0}^{∞} α^n" with "∑_{n=0}^{∞} |α|^n".

3. Page 125, Eq. (3.16): Delete the stray α.
4. Page 138, line 2 below Eq. (3.48): Replace "frequency response" with "discrete-time Fourier
transform".
5. Page 139, Eq. (3.53): Replace "x(n + mN)" with "x[n + mN]".
6. Page 139, line 2, Example 3.14: Replace "x[n] = {0 1 2 3 4 5}" with "{x[n]} = {0 1 2 3 4 5}".

7. Page 139, line 3, Example 3.14: Replace "x[n]" with "{x[n]}", and "πk/4" with "2πk/4".
8. Page 139, line 6 from bottom: Replace "y[n] = {4 6 2 3 4 6}" with "{y[n]} = {4 6 2 3}".

9. Page 141, Table 3.5: Replace " N[g < −k >N ] " with " N g[< −k > N ] ".
10. Page 142, Table 3.7: Replace "argX[< −k >N ] " with "– arg X[< −k >N ] ".
11. Page 147, Eq. (3.86): Replace the matrix
[1  1  1  1
 1  j  −1  −j
 1  −1  1  −1
 1  −j  −1  j]
with
[1  1  1  1
 1  −j  −1  j
 1  −1  1  −1
 1  j  −1  −j].
12. Page 158, Eq. (3.112): Replace "∑_{n=−∞}^{−1} α^n z^{−n}" with "−∑_{n=−∞}^{−1} α^n z^{−n}".

13. Page 165, line 4 above Eq. (3.125); Replace "0.0667" with "0.6667".
14. Page 165, line 3 above Eq. (3.125): Replace "10.0000" with "1.000", and "20.0000" with
"2.0000".
15. Page 165, line above Eq. (3.125): Replace "0.0667" with "0.6667", "10.0" with "1.0", and
"20.0" with "2.0".
16. Page 165, Eq. (3.125): Replace "0.667" with "0.6667".
17. Page 168, line below Eq. (3.132): Replace " z > λ l " with " z > λ l ".
18. Page 176, line below Eq. (3.143): Replace " R h " with "1/ R h ".
19. Page 182, Problem 3.18: Replace " X(e − jω/2 ) " with " X( −e jω/2 ) ".

20. Page 186, Problem 3.42, Part (e): Replace "arg X[< −k >N ] " with "– arg X[< −k >N ] ".
21. Page 187, Problem 3.53: Replace "N-point DFT" with "MN-point DFT", replace
" 0 ≤ k ≤ N − 1 " with " 0 ≤ k ≤ MN − 1 ", and replace "x[< n > M ] " with " x[< n > N ]".
22. Page 191, Problem 3.83: Replace "lim_{n→∞}" with "lim_{z→∞}".
23. Page 193, Problem 3.100: Replace "P(z)/D′(z)" with "−λ_l P(z)/D′(z)".

24. Page 194, Problem 3.106, Parts (b) and (d): Replace " z < α " with " z > 1 / α ".
25. page 199, Problem 3.128: Replace " (0.6)µ [n]" with " (0.6)n µ[n] ", and replace "(0.8)µ [ n] "
with " (0.8)n µ[n]".
26. Page 199, Exercise M3.5: Delete "following".
Chapter 4
1. Page 217, first line: Replace " ξ N " with " ξ M ".
2. Page 230, line 2 below Eq. (4.88): Replace "θ g (ω) " with "θ(ω)".
3. Page 236, line 2 below Eq. (4.109): Replace "decreases" with "increases".
4. Page 246, line 4 below Eq. (4.132): Replace "θ c (e jω )" with "θ c (ω)".
5. Page 265, Eq. (4.202): Replace "1,2,K,3" with "1,2,3".
6. Page 279, Problem 4.18: Replace " H(e j0 ) " with " H(e jπ/ 4 ) ".
7. Page 286, Problem 4.71: Replace "z 3 = − j 0.3" with "z 3 = − 0.3 ".
8. Page 291, Problem 4.102: Replace
"H(z) = (0.4 + 0.5z^{−1} + 1.2z^{−2} + 1.2z^{−3} + 0.5z^{−4} + 0.4z^{−5}) / (1 + 0.9z^{−2} + 0.2z^{−4})" with
"H(z) = (0.1 + 0.5z^{−1} + 0.45z^{−2} + 0.45z^{−3} + 0.5z^{−4} + 0.1z^{−5}) / (1 + 0.9z^{−2} + 0.2z^{−4})".
9. Page 295, Problem 4.125: Insert a comma "," before "the autocorrelation".
Chapter 5
1. Page 302, line 7 below Eq. (5.9): Replace "response" with "spectrum".
2. Page 309, Example 5.2, line 4: Replace "10 Hz to 20 Hz" with "5 Hz to 10 Hz". Line 6:
Replace "5k + 15" with "5k + 5". Line 7: Replace "10k + 6" with "5k + 3", and replace
"10k – 6" with "5(k+1) – 3".
3. Page 311, Eq. (5.24): Replace " G a ( jΩ − 2k(∆Ω)) " with " G a ( j(Ω − 2k(∆Ω))) ".
4. Page 318, Eq. (5.40): Replace "H(s)" with "Ha(s)", and replace "" with "".
5. Page 321, Eq. (5.54): Replace "" with "".
6. Page 333, first line: Replace "Ω_{p1}" with "Ω̂_{p1}", and "Ω_{p2}" with "Ω̂_{p2}".
7. Page 349, line 9 from bottom: Replace "1/T" with "2π/T".
8. Page 354, Problem 5.8: Interchange "Ω1 " and "Ω 2 ".
9. Page 355, Problem 5.23: Replace "1 Hz" in the first line with "0.2 Hz".
10. Page 355, Problem 5.24: Replace "1 Hz" in the first line with "0.16 Hz".
Chapter 6
1. Page 394, line 4 from bottom: Replace "alpha1" with "fliplr(alpha1)".
2. Page 413, Problem 6.16: Replace
"H(z) = b_0 + b_1 z^{−1} + b_2 z^{−1}(z^{−1} + b_3 z^{−1}(1 + ⋯ + b_{N−1} z^{−1}(1 + b_N z^{−1})))" with
"H(z) = b_0 + b_1 z^{−1}(1 + b_2 z^{−1}(z^{−1} + b_3 z^{−1}(1 + ⋯ + b_{N−1} z^{−1}(1 + b_N z^{−1}))))".



3. Page 415, Problem 6.27: Replace "H(z) = (3z² + 18.5z + 17.5) / ((2z + 1)(z + 2))" with
"H(z) = (3z² + 18.5z + 17.5) / ((z + 0.5)(z + 2))".
4. Page 415, Problem 6.28: Replace the multiplier value "0.4" in Figure P6.12 with "–9.75".
5. Page 421, Exercise M6.1: Replace "−7.6185z −3 " with "−71.6185z −3 ".
6. Page 422, Exercise M6.4: Replace "Program 6_3" with "Program 6_4".
7. Page 422, Exercise M6.5: Replace "Program 6_3" with "Program 6_4".
8. Page 422, Exercise M6.6: Replace "Program 6_4" with "Program 6_6".

Chapter 7
1. Page 426, Eq. (7.11): Replace "h[n – N]" with "h[N – n]".
2. Page 436, line 14 from top: Replace "(5.32b)" with "(5.32a)".
3. Page 438, line 17 from bottom: Replace "(5.60)" with "(5.59)".
4. Page 439, line 7 from bottom: Replace "50" with "40".
5. Page 442, line below Eq. (7.42): Replace "F^{−1}(ẑ)" with "1/F(ẑ)".
6. Page 442, line above Eq. (7.43): Replace "F^{−1}(ẑ)" with "F(ẑ)".
7. Page 442, Eq. (7.43): Replace it with "F(ẑ) = ± ∏_{l=1}^{L} (ẑ − α_l)/(1 − α_l* ẑ)".
8. Page 442, line below Eq. (7.43): Replace "where α l " with "where α l ".
9. Page 446, Eq. (7.51): Replace " β(1 − α) " with " β(1 + α) ".
10. Page 448, Eq. (7.58): Replace " ω c < ω ≤ π " with " ω c < ω ≤ π ".
11. Page 453, line 6 from bottom: Replace "ω p − ωs " with "ω s − ω p ".
12. Page 457, line 8 from bottom: Replace "length" with "order".
13. Page 465, line 5 from top: Add "at ω = ω i " before "or in".
14. Page 500, Problem 7.15: Replace "2 kHz" in the second line with "0.5 kHz".
15. Page 502, Problem 7.22: Replace Eq. (7.158) with "H_a(s) = Bs / (s² + Bs + Ω_0²)".
16. Page 502, Problem 7.25: Replace "7.2" with "7.1".
17. Page 504, Problem 7.41: Replace "H_int(e^{jω}) = e^{−jω}" in Eq. (7.161) with "H_int(e^{jω}) = 1/(jω)".
18. Page 505, Problem 7.46: Replace "16" with "9" in the third line from bottom.
19. Page 505, Problem 7.49: Replace "16" with "9" in the second line.
20. Page 510, Exercise M7.3: Replace "Program 7_5" with "Program 7_3".
21. Page 510, Exercise M7.4: Replace "Program 7_7" with "M-file impinvar".
22. Page 510, Exercise M7.6: Replace "Program 7_4" with "Program 7_2".
23. Page 511, Exercise M7.16: Replace "length" with "order".
24. Page 512, Exercise M7.24: Replace "length" with "order".
Chapter 8
1. Page 518, line 4 below Eq. (6.7): Delete "set" before "digital".
2. Page 540, line 3 above Eq. (8.39): Replace "G[k]" with " X0 [k]" and " H[k]" with " X1[k]".
Chapter 9
1. Page 595, line 2 below Eq. (9.30c): Replace "this vector has" with "these vectors have".
2. Page 601, line 2 below Eq. (9.63): Replace "2^b" with "2^{−b}".
3. Page 651, Problem 9.10, line 2 from bottom: Replace "(a_k z + 1)/(1 + a_k z)" with "(a_k z + 1)/(z + a_k)".
4. Page 653, Problem 9.15, line 7: Replace "two cascade" with "four cascade".
5. Page 653, Problem 9.17: Replace "A_2(z) = (d_1 d_2 + d_1 z^{−1} + z^{−2}) / (1 + d_1 z^{−1} + d_1 d_2 z^{−2})" with
"A_2(z) = (d_2 + d_1 z^{−1} + z^{−2}) / (1 + d_1 z^{−1} + d_2 z^{−2})".

6. Page 654, Problem 9.27: Replace "structure" with "structures".
7. Page 658, Exercise M9.9: Replace "alpha" with "α ".
Chapter 10
1. Page 692, Eq. (10.57b): Replace "P0 (α 1) = 0.2469" with "P0 (α 1) = 0.7407".
2. Page 693, Eq. (10.58b): Replace "P0 (α 2 ) = –0.4321" with "P0 (α 2 ) = –1.2963 ".
3. Page 694, Figure 10.38(c): Replace "P−2 (α 0 )" with "P1 (α 0 ) ", "P−1(α 0 )" with "P0 (α 0 )",
"P0 (α 0 )" with "P−1(α 0 )", "P1 (α 0 ) " with "P−2 (α 0 )", "P−2 (α1 )" with "P1 (α1 )",
"P−1(α 1)" with "P0 (α 1) ", "P0 (α 1) " with "P−1(α 1)", "P1 (α1 )" with "P−2 (α1 )",
"P−2 (α 2 ) " with "P1 (α 2 ) ", "P−1(α 2 )" with "P0 (α 2 )", "P0 (α 2 )" with "P−1(α 2 )", and
"P−1(α 2 )" with "P−2 (α 2 ) ".
4. Page 741, Problem 10.13: Replace "2.5 kHz" with "1.25 kHz".
5. Page 741, Problem 10.20: Replace "∑_{i=0}^{N} z_i" with "∑_{i=0}^{N−1} z_i".

6. Page 743, Problem 10.28: Replace "half-band filter" with a "lowpass half-band filter with a
zero at z = –1".
7. Page 747, Problem 10.50: Interchange "Y k " and "the output sequence y[n]".
8. Page 747, Problem 10.51: Replace the unit delays " z −1 " on the right-hand side of the structure
of Figure P10.8 with unit advance operators "z".

9. Page 749, Eq. (10.215): Replace "3H²(z) − 2H²(z)" with "z^{−2}[3H²(z) − 2H²(z)]".
10. Page 751, Exercise M10.9: Replace "60" with "61".
11. Page 751, Exercise M10.10: Replace the problem statement with "Design a fifth-order IIR
half-band Butterworth lowpass filter and realize it with 2 multipliers".
12. Page 751, Exercise M10.11: Replace the problem statement with "Design a seventh-order
IIR half-band Butterworth lowpass filter and realize it with 3 multipliers".
Chapter 11
1. Page 758, line 4 below Figure 11.2 caption: Replace "grid" with "grid;".
2. Page 830, Problem 11.5: Insert "g_a(t) = cos(200πt)" after "signal" and delete
"= cos(200πn) ".
3. Page 831, Problem 11.11: Replace "has to be a power-of-2" with "= 2^l, where l is an integer".

Chapter 2 (2e)
2.1 (a) u[n] = x[n] + y[n] = {3 5 1 −2 8 14 0}
(b) v[n] = x[n]⋅ w[n] = {−15 −8 0 6 −20 0 2}
(c) s[n] = y[n] − w[n] = {5 3 −2 −9 9 9 −3}
(d) r[n] = 4.5y[n] ={0 31.5 4.5 −13.5 18 40.5 −9}
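These pointwise operations are easy to verify numerically; a minimal sketch in Python/NumPy, assuming the input sequences x[n], y[n], w[n] given in the problem statement (they are also listed in Problem 2.3):

```python
import numpy as np

# Input sequences for -3 <= n <= 3 (from the problem statement / Problem 2.3).
x = np.array([3, -2, 0, 1, 4, 5, 2])
y = np.array([0, 7, 1, -3, 4, 9, -2])
w = np.array([-5, 4, 3, 6, -5, 0, 1])

print(x + y)      # (a) u[n] -> [ 3  5  1 -2  8 14  0]
print(x * w)      # (b) v[n] -> [-15 -8  0  6 -20  0  2]
print(y - w)      # (c) s[n] -> [ 5  3 -2 -9  9  9 -3]
print(4.5 * y)    # (d) r[n] -> [ 0. 31.5 4.5 -13.5 18. 40.5 -9.]
```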
2.2 (a) From the figure shown below we obtain
[Figure: first-order structure with input x[n], unit delay z^{−1}, and multipliers α, β, γ]
v[n] = x[n] + α v[n−1] and y[n] = β v[n−1] + γ v[n−1] = (β + γ) v[n−1]. Hence,
v[n−1] = x[n−1] + α v[n−2] and y[n−1] = (β + γ) v[n−2]. Therefore,
y[n] = (β + γ) v[n−1] = (β + γ) x[n−1] + α(β + γ) v[n−2] = (β + γ) x[n−1] + α(β + γ)·y[n−1]/(β + γ)
= (β + γ) x[n−1] + α y[n−1].
(b) From the figure shown below we obtain
[Figure: tapped-delay-line structure with delays z^{−1} and multipliers α, β, γ]
y[n] = γ x[n − 2] + β( x[n −1] + x[n − 3]) + α( x[n] + x[n − 4]) .
(c) From the figure shown below we obtain
[Figure: first-order structure with multiplier d_1]
v[n] = x[n] − d_1 v[n−1] and y[n] = d_1 v[n] + v[n−1]. Therefore we can rewrite the second equation as
y[n] = d_1 (x[n] − d_1 v[n−1]) + v[n−1] = d_1 x[n] + (1 − d_1²) v[n−1]      (1)
= d_1 x[n] + (1 − d_1²)(x[n−1] − d_1 v[n−2]) = d_1 x[n] + (1 − d_1²) x[n−1] − d_1(1 − d_1²) v[n−2].
From Eq. (1), y[n−1] = d_1 x[n−1] + (1 − d_1²) v[n−2], or equivalently,
d_1 y[n−1] = d_1² x[n−1] + d_1(1 − d_1²) v[n−2]. Therefore,
y[n] + d_1 y[n−1] = d_1 x[n] + (1 − d_1²) x[n−1] − d_1(1 − d_1²) v[n−2] + d_1² x[n−1] + d_1(1 − d_1²) v[n−2]
= d_1 x[n] + x[n−1], or y[n] = d_1 x[n] + x[n−1] − d_1 y[n−1].
(d) From the figure shown below we obtain
[Figure: second-order structure with multipliers d_1 and d_2]
v[n] = x[n] − w[n], w[n] = d_1 v[n−1] + d_2 u[n], and u[n] = v[n−2] + x[n]. From these equations
we get w[n] = d_2 x[n] + d_1 x[n−1] + d_2 x[n−2] − d_1 w[n−1] − d_2 w[n−2]. From the figure we also
obtain y[n] = v[n−2] + w[n] = x[n−2] + w[n] − w[n−2], which yields
d_1 y[n−1] = d_1 x[n−3] + d_1 w[n−1] − d_1 w[n−3], and
d_2 y[n−2] = d_2 x[n−4] + d_2 w[n−2] − d_2 w[n−4]. Therefore,
y[n] + d_1 y[n−1] + d_2 y[n−2] = x[n−2] + d_1 x[n−3] + d_2 x[n−4]
+ (w[n] + d_1 w[n−1] + d_2 w[n−2]) − (w[n−2] + d_1 w[n−3] + d_2 w[n−4])
= x[n−2] + d_2 x[n] + d_1 x[n−1], or equivalently,
y[n] = d_2 x[n] + d_1 x[n−1] + x[n−2] − d_1 y[n−1] − d_2 y[n−2].
2.3 (a) x[n] = {3 −2 0 1 4 5 2}. Hence, x[−n] = {2 5 4 1 0 −2 3}, −3 ≤ n ≤ 3.
Thus, x_ev[n] = ½(x[n] + x[−n]) = {5/2 3/2 2 1 2 3/2 5/2}, −3 ≤ n ≤ 3, and
x_od[n] = ½(x[n] − x[−n]) = {1/2 −7/2 −2 0 2 7/2 −1/2}, −3 ≤ n ≤ 3.
(b) y[n] = {0 7 1 −3 4 9 −2}. Hence, y[−n] = {−2 9 4 −3 1 7 0}, −3 ≤ n ≤ 3.
Thus, y_ev[n] = ½(y[n] + y[−n]) = {−1 8 5/2 −3 5/2 8 −1}, −3 ≤ n ≤ 3, and
y_od[n] = ½(y[n] − y[−n]) = {1 −1 −3/2 0 3/2 1 −1}, −3 ≤ n ≤ 3.
(c) w[n] = {−5 4 3 6 −5 0 1}. Hence, w[−n] = {1 0 −5 6 3 4 −5}, −3 ≤ n ≤ 3.
Thus, w_ev[n] = ½(w[n] + w[−n]) = {−2 2 −1 6 −1 2 −2}, −3 ≤ n ≤ 3, and
w_od[n] = ½(w[n] − w[−n]) = {−3 2 4 0 −4 −2 3}, −3 ≤ n ≤ 3.
2.4

(a) x[n] = g[n]g[n] . Hence x[− n] = g[−n]g[−n] . Since g[n] is even, hence g[–n] = g[n].
Therefore x[–n] = g[–n]g[–n] = g[n]g[n] = x[n]. Hence x[n] is even.
(b) u[n] = g[n]h[n]. Hence, u[–n] = g[–n]h[–n] = g[n](–h[n]) = –g[n]h[n] = –u[n]. Hence
u[n] is odd.
(c) v[n] = h[n]h[n]. Hence, v[–n] = h[n]h[n] = (–h[n])(–h[n]) = h[n]h[n] = v[n]. Hence
v[n] is even.

2.5

Yes, a linear combination of any set of periodic sequences is also a periodic sequence, and the
period of the new sequence is given by the least common multiple (lcm) of all the periods.
For our example, the period = lcm(N_1, N_2, N_3). For example, if N_1 = 3, N_2 = 5, and N_3 = 12,
then N = lcm(3, 5, 12) = 60.

2.6 (a) x_pcs[n] = ½{x[n] + x*[−n]} = ½{A α^n + A*(α*)^{−n}}, and
x_pca[n] = ½{x[n] − x*[−n]} = ½{A α^n − A*(α*)^{−n}}, −N ≤ n ≤ N.
(b) h[n] = {−2 + j5, 4 − j3, 5 + j6, 3 + j, −7 + j2}, −2 ≤ n ≤ 2, and hence,
h*[−n] = {−7 − j2, 3 − j, 5 − j6, 4 + j3, −2 − j5}, −2 ≤ n ≤ 2. Therefore,
h_pcs[n] = ½{h[n] + h*[−n]} = {−4.5 + j1.5, 3.5 − j2, 5, 3.5 + j2, −4.5 − j1.5} and
h_pca[n] = ½{h[n] − h*[−n]} = {2.5 + j3.5, 0.5 − j, j6, −0.5 − j, −2.5 + j3.5}, −2 ≤ n ≤ 2.
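These decompositions are easy to check numerically; a minimal NumPy sketch for Part (b):

```python
import numpy as np

# h[n] for -2 <= n <= 2, from Part (b).
h = np.array([-2+5j, 4-3j, 5+6j, 3+1j, -7+2j])
h_rev_conj = np.conj(h[::-1])        # h*[-n] on the same index range

h_pcs = 0.5 * (h + h_rev_conj)       # conjugate-symmetric part
h_pca = 0.5 * (h - h_rev_conj)       # conjugate-antisymmetric part
print(h_pcs)   # [-4.5+1.5j  3.5-2.j  5.+0.j  3.5+2.j -4.5-1.5j]
print(h_pca)   # [ 2.5+3.5j  0.5-1.j  0.+6.j -0.5-1.j -2.5+3.5j]
```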

2.7 (a) {x[n]} = {A α^n} where A and α are complex numbers, with |α| < 1. Since for n < 0, |α|^n can
become arbitrarily large, {x[n]} is not a bounded sequence.
(b) {y[n]} = A α^n µ[n] where A and α are complex numbers, with |α| < 1. In this case |y[n]| ≤ |A| for
all n, hence {y[n]} is a bounded sequence.
(c) {h[n]} = C β^n µ[n] where C and β are complex numbers, with |β| > 1. Since |β|^n becomes
arbitrarily large as n increases, {h[n]} is not a bounded sequence.
(d) {g[n]} = 4 sin(ω_a n). Since −4 ≤ g[n] ≤ 4 for all values of n, {g[n]} is a bounded sequence.
(e) {v[n]} = 3 cos²(ω_b n²). Since −3 ≤ v[n] ≤ 3 for all values of n, {v[n]} is a bounded sequence.
2.8 (a) Recall, x_ev[n] = ½(x[n] + x[−n]). Since x[n] is a causal sequence, x[−n] = 0 ∀ n > 0. Hence,
x[n] = x_ev[n] + x_ev[−n] = 2 x_ev[n], ∀ n > 0. For n = 0, x[0] = x_ev[0].
Thus x[n] can be completely recovered from its even part.
Likewise, x_od[n] = ½(x[n] − x[−n]) = { ½ x[n], n > 0; 0, n = 0 }.
Thus x[n] can be recovered from its odd part ∀ n except n = 0.
(b) 2 y_ca[n] = y[n] − y*[−n]. Since y[n] is a causal sequence, y[n] = 2 y_ca[n] ∀ n > 0.
For n = 0, Im{y[0]} = y_ca[0]. Hence the real part of y[0] cannot be recovered from y_ca[n].
Therefore y[n] cannot be fully recovered from y_ca[n].
2 y_cs[n] = y[n] + y*[−n]. Hence, y[n] = 2 y_cs[n] ∀ n > 0.
For n = 0, Re{y[0]} = y_cs[0]. Hence the imaginary part of y[0] cannot be recovered from y_cs[n].
Therefore y[n] cannot be fully recovered from y_cs[n].
2.9 x_ev[n] = ½(x[n] + x[−n]). This implies x_ev[−n] = ½(x[−n] + x[n]) = x_ev[n].
Hence the even part of a real sequence is even.
x_od[n] = ½(x[n] − x[−n]). This implies x_od[−n] = ½(x[−n] − x[n]) = −x_od[n].
Hence the odd part of a real sequence is odd.
2.10 The RHS of Eq. (2.176a) is x_cs[n] + x_cs[n − N] = ½(x[n] + x*[−n]) + ½(x[n − N] + x*[N − n]).
Since x[n] = 0 ∀ n < 0, we have
x_cs[n] + x_cs[n − N] = ½(x[n] + x*[N − n]) = x_pcs[n], 0 ≤ n ≤ N − 1.
The RHS of Eq. (2.176b) is
x_ca[n] + x_ca[n − N] = ½(x[n] − x*[−n]) + ½(x[n − N] − x*[N − n])
= ½(x[n] − x*[N − n]) = x_pca[n], 0 ≤ n ≤ N − 1.
2.11 x_pcs[n] = ½(x[n] + x*[<−n>_N]) for 0 ≤ n ≤ N − 1. Since x[<−n>_N] = x[N − n], it follows that
x_pcs[n] = ½(x[n] + x*[N − n]), 1 ≤ n ≤ N − 1.
For n = 0, x_pcs[0] = ½(x[0] + x*[0]) = Re{x[0]}.
Similarly, x_pca[n] = ½(x[n] − x*[<−n>_N]) = ½(x[n] − x*[N − n]), 1 ≤ n ≤ N − 1. Hence,
for n = 0, x_pca[0] = ½(x[0] − x*[0]) = j Im{x[0]}.
2.12 (a) Given ∑_{n=−∞}^{∞} |x[n]| < ∞. Therefore, by the Schwartz inequality,
∑_{n=−∞}^{∞} |x[n]|² ≤ (∑_{n=−∞}^{∞} |x[n]|)(∑_{n=−∞}^{∞} |x[n]|) < ∞.
(b) Consider x[n] = { 1/n, n ≥ 1; 0, otherwise }. The convergence of an infinite series can be shown
via the integral test. Let a_n = f(x), where f(x) is a continuous, positive and decreasing function for
all x ≥ 1. Then the series ∑_{n=1}^{∞} a_n and the integral ∫_1^∞ f(x) dx both converge or both
diverge. For a_n = 1/n, f(x) = 1/x. But ∫_1^∞ (1/x) dx = (ln x)|_1^∞ = ∞ − 0 = ∞. Hence,
∑_{n=−∞}^{∞} |x[n]| = ∑_{n=1}^{∞} (1/n) does not converge, and as a result, x[n] is not absolutely
summable. To show {x[n]} is square-summable, we observe here a_n = 1/n², and thus f(x) = 1/x².
Now, ∫_1^∞ (1/x²) dx = (−1/x)|_1^∞ = −1/∞ + 1 = 1. Hence, ∑_{n=1}^{∞} (1/n²) converges, or in
other words, x[n] = 1/n is square-summable.
2.13

See Problem 2.12, Part (b) solution.

2.14 x_2[n] = cos(ω_c n)/(πn), 1 ≤ n ≤ ∞. Now,
∑_{n=1}^{∞} (cos(ω_c n)/(πn))² ≤ ∑_{n=1}^{∞} 1/(π²n²). Since ∑_{n=1}^{∞} 1/n² = π²/6, we have
∑_{n=1}^{∞} (cos(ω_c n)/(πn))² ≤ 1/6. Therefore x_2[n] is square-summable.
Using the integral test we now show that x_2[n] is not absolutely summable.
∫_1^∞ cos(ω_c x)/(πx) dx = (1/π)·cosint(ω_c x)|_1^∞, where cosint is the cosine integral function.
Since ∫_1^∞ cos(ω_c x)/(πx) dx diverges, ∑_{n=1}^{∞} |cos(ω_c n)/(πn)| also diverges.
2.15 ∑_{n=−∞}^{∞} x²[n] = ∑_{n=−∞}^{∞} (x_ev[n] + x_od[n])²
= ∑_{n=−∞}^{∞} x_ev²[n] + ∑_{n=−∞}^{∞} x_od²[n] + 2 ∑_{n=−∞}^{∞} x_ev[n] x_od[n]
= ∑_{n=−∞}^{∞} x_ev²[n] + ∑_{n=−∞}^{∞} x_od²[n],
as ∑_{n=−∞}^{∞} x_ev[n] x_od[n] = 0 since x_ev[n] x_od[n] is an odd sequence.
2.16 x[n] = cos(2πkn/N), 0 ≤ n ≤ N − 1. Hence,
E_x = ∑_{n=0}^{N−1} cos²(2πkn/N) = ½ ∑_{n=0}^{N−1} (1 + cos(4πkn/N)) = N/2 + ½ ∑_{n=0}^{N−1} cos(4πkn/N).
Let C = ∑_{n=0}^{N−1} cos(4πkn/N), and S = ∑_{n=0}^{N−1} sin(4πkn/N).
Therefore C + jS = ∑_{n=0}^{N−1} e^{j4πkn/N} = (e^{j4πk} − 1)/(e^{j4πk/N} − 1) = 0. Thus, C = Re{C + jS} = 0.
As C = Re{C + jS} = 0, it follows that E_x = N/2.

2.17 (a) x_1[n] = µ[n]. Energy = ∑_{n=−∞}^{∞} µ²[n] = ∑_{n=0}^{∞} 1² = ∞.
Average power = lim_{K→∞} 1/(2K+1) ∑_{n=−K}^{K} (µ[n])² = lim_{K→∞} 1/(2K+1) ∑_{n=0}^{K} 1²
= lim_{K→∞} (K+1)/(2K+1) = 1/2.
(b) x_2[n] = n µ[n]. Energy = ∑_{n=−∞}^{∞} (n µ[n])² = ∑_{n=0}^{∞} n² = ∞.
Average power = lim_{K→∞} 1/(2K+1) ∑_{n=−K}^{K} (n µ[n])² = lim_{K→∞} 1/(2K+1) ∑_{n=0}^{K} n² = ∞.
(c) x_3[n] = A_o e^{jω_o n}. Energy = ∑_{n=−∞}^{∞} |A_o e^{jω_o n}|² = ∑_{n=−∞}^{∞} |A_o|² = ∞.
Average power = lim_{K→∞} 1/(2K+1) ∑_{n=−K}^{K} |A_o e^{jω_o n}|² = lim_{K→∞} (2K+1)|A_o|²/(2K+1) = |A_o|².
(d) x[n] = A sin(2πn/M + φ) = A_o e^{jω_o n} + A_1 e^{−jω_o n}, where ω_o = 2π/M, A_o = (A/2j) e^{jφ} and
A_1 = −(A/2j) e^{−jφ}. From Part (c), energy = ∞ and average power = |A_o|² + |A_1|² = A²/4 + A²/4 = A²/2.
2.18 Now, µ[n] = { 1, n ≥ 0; 0, n < 0 }. Hence, µ[−n − 1] = { 1, n < 0; 0, n ≥ 0 }. Thus, x[n] = µ[n] + µ[−n − 1].
2.19 (a) Consider a sequence defined by x[n] = ∑_{k=−∞}^{n} δ[k].
If n < 0 then k = 0 is not included in the sum and hence x[n] = 0, whereas for n ≥ 0, k = 0 is
included in the sum, hence x[n] = 1 ∀ n ≥ 0. Thus x[n] = ∑_{k=−∞}^{n} δ[k] = { 1, n ≥ 0; 0, n < 0 } = µ[n].
(b) Since µ[n] = { 1, n ≥ 0; 0, n < 0 }, it follows that µ[n − 1] = { 1, n ≥ 1; 0, n ≤ 0 }.
Hence, µ[n] − µ[n − 1] = { 1, n = 0; 0, n ≠ 0 } = δ[n].
2.20 Now x[n] = A sin(ω_0 n + φ).
(a) Given x[n] = {0 −√2 −2 −√2 0 √2 2 √2}. The fundamental period is N = 8,
hence ω_o = 2π/8 = π/4. Next from x[0] = A sin(φ) = 0 we get φ = 0, and solving
x[1] = A sin(π/4 + φ) = A sin(π/4) = −√2 we get A = −2.
(b) Given x[n] = {√2 √2 −√2 −√2}. The fundamental period is N = 4, hence
ω_0 = 2π/4 = π/2. Next from x[0] = A sin(φ) = √2 and x[1] = A sin(π/2 + φ) = A cos(φ) = √2 it
can be seen that A = 2 and φ = π/4 is one solution satisfying the above equations.
(c) x[n] = {3 −3}. Here the fundamental period is N = 2, hence ω_0 = π. Next from x[0]
= A sin(φ) = 3 and x[1] = A sin(φ + π) = −A sin(φ) = −3 we observe that A = 3 and φ = π/2
is one solution satisfying these two equations.
(d) Given x[n] = {0 1.5 0 −1.5}, it follows that the fundamental period of x[n] is N = 4.
Hence ω_0 = 2π/4 = π/2. Solving x[0] = A sin(φ) = 0 we get φ = 0, and solving
x[1] = A sin(π/2) = 1.5, we get A = 1.5.
2.21 (a) x̃_1[n] = e^{−j0.4πn}. Here, ω_o = 0.4π. From Eq. (2.44a), we thus get
N = 2πr/ω_o = 2πr/0.4π = 5r = 5 for r = 1.
(b) x̃_2[n] = sin(0.6πn + 0.6π). Here, ω_o = 0.6π. From Eq. (2.44a), we thus get
N = 2πr/ω_o = 2πr/0.6π = (10/3)r = 10 for r = 3.
(c) x̃_3[n] = 2 cos(1.1πn − 0.5π) + 2 sin(0.7πn). Here, ω_1 = 1.1π and ω_2 = 0.7π. From Eq. (2.44a),
we thus get N_1 = 2πr_1/ω_1 = 2πr_1/1.1π = (20/11)r_1 and N_2 = 2πr_2/ω_2 = 2πr_2/0.7π = (20/7)r_2.
To be periodic we must have N_1 = N_2. This implies (20/11)r_1 = (20/7)r_2. This equality holds for
r_1 = 11 and r_2 = 7, and hence N = N_1 = N_2 = 20.
(d) N_1 = 2πr_1/1.3π = (20/13)r_1 and N_2 = 2πr_2/0.3π = (20/3)r_2. It follows from the results of
Part (c) that N = 20 with r_1 = 13 and r_2 = 3.
(e) N_1 = 2πr_1/1.2π = (5/3)r_1, N_2 = 2πr_2/0.8π = (5/2)r_2, and N_3 = N_2. Therefore,
N = N_1 = N_2 = N_3 = 5 for r_1 = 3 and r_2 = 2.
(f) x̃_6[n] = n modulo 6. Since x̃_6[n + 6] = (n + 6) modulo 6 = n modulo 6 = x̃_6[n],
N = 6 is the fundamental period of the sequence x̃_6[n].
2.22 (a) ω_o = 0.14π. Therefore, N = 2πr/ω_o = 2πr/0.14π = (100/7)r = 100 for r = 7.
(b) ω_o = 0.24π. Therefore, N = 2πr/ω_o = 2πr/0.24π = (25/3)r = 25 for r = 3.
(c) ω_o = 0.34π. Therefore, N = 2πr/ω_o = 2πr/0.34π = (100/17)r = 100 for r = 17.
(d) ω_o = 0.75π. Therefore, N = 2πr/ω_o = 2πr/0.75π = (8/3)r = 8 for r = 3.
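The same arithmetic (the smallest integer N with ω_o N = 2πr) can be automated; a small sketch using Python's Fraction, with ω_o given as a decimal multiple of π:

```python
from fractions import Fraction

def fundamental_period(omega_over_pi):
    """Smallest N such that omega_o * N = 2*pi*r for some integer r,
    where omega_o = omega_over_pi * pi."""
    ratio = Fraction(2, 1) / Fraction(omega_over_pi)   # N/r = 2 / (omega_o/pi), reduced
    return ratio.numerator

for w in ["0.14", "0.24", "0.34", "0.75"]:
    print(w, fundamental_period(w))    # -> 100, 25, 100, 8
```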

2.23 x[n] = x a (nT) = cos(Ω o nT) is a periodic sequence for all values of T satisfying Ω o T ⋅ N = 2πr
for r and N taking integer values. Since, Ω o T = 2πr / N and r/N is a rational number, Ω o T
must also be a rational number. For Ω_o = 18 and T = π/6, we get N = 2r/3. Hence, the smallest
value of N is N = 2 for r = 3.
2.24

(a) x[n] = 3δ[n + 3] − 2 δ[n + 2] + δ[n] + 4 δ[n −1] + 5 δ[n − 2] + 2 δ[n −3]
(b) y[n] = 7 δ[n + 2] + δ[n +1] − 3δ[n] + 4 δ[n − 1] + 9 δ[n − 2] − 2 δ[n − 3]
(c) w[n] = −5δ[n + 3] + 4 δ[n + 2] + 3δ[n + 1] + 6δ[n] − 5δ[n − 1] + δ[n − 3]

2.25

(a) For an input x_i[n], i = 1, 2, the output is
y_i[n] = α_1 x_i[n] + α_2 x_i[n−1] + α_3 x_i[n−2] + α_4 x_i[n−4], for i = 1, 2. Then, for an input
x[n] = A x_1[n] + B x_2[n], the output is y[n] = α_1(A x_1[n] + B x_2[n]) + α_2(A x_1[n−1] + B x_2[n−1])
+ α_3(A x_1[n−2] + B x_2[n−2]) + α_4(A x_1[n−4] + B x_2[n−4])
= A(α_1 x_1[n] + α_2 x_1[n−1] + α_3 x_1[n−2] + α_4 x_1[n−4])
+ B(α_1 x_2[n] + α_2 x_2[n−1] + α_3 x_2[n−2] + α_4 x_2[n−4]) = A y_1[n] + B y_2[n].
Hence, the system is linear.
(b) For an input x_i[n], i = 1, 2, the output is
y_i[n] = b_0 x_i[n] + b_1 x_i[n−1] + b_2 x_i[n−2] + a_1 y_i[n−1] + a_2 y_i[n−2], i = 1, 2. Then, for an
input x[n] = A x_1[n] + B x_2[n], the output is
y[n] = A(b_0 x_1[n] + b_1 x_1[n−1] + b_2 x_1[n−2] + a_1 y_1[n−1] + a_2 y_1[n−2])
+ B(b_0 x_2[n] + b_1 x_2[n−1] + b_2 x_2[n−2] + a_1 y_2[n−1] + a_2 y_2[n−2]) = A y_1[n] + B y_2[n].
Hence, the system is linear.
(c) For an input x_i[n], i = 1, 2, the output is y_i[n] = { x_i[n/L], n = 0, ±L, ±2L, …; 0, otherwise }.
Consider the input x[n] = A x_1[n] + B x_2[n]. Then the output y[n] for n = 0, ±L, ±2L, … is given by
y[n] = x[n/L] = A x_1[n/L] + B x_2[n/L] = A y_1[n] + B y_2[n]. For all other values of n,
y[n] = A·0 + B·0 = 0. Hence, the system is linear.
(d) For an input x_i[n], i = 1, 2, the output is y_i[n] = x_i[n/M]. Consider the input
x[n] = A x_1[n] + B x_2[n]. Then the output y[n] = A x_1[n/M] + B x_2[n/M] = A y_1[n] + B y_2[n].
Hence, the system is linear.
(e) For an input x_i[n], i = 1, 2, the output is y_i[n] = (1/M) ∑_{k=0}^{M−1} x_i[n−k]. Then, for an
input x[n] = A x_1[n] + B x_2[n], the output is y[n] = (1/M) ∑_{k=0}^{M−1} (A x_1[n−k] + B x_2[n−k])
= A((1/M) ∑_{k=0}^{M−1} x_1[n−k]) + B((1/M) ∑_{k=0}^{M−1} x_2[n−k]) = A y_1[n] + B y_2[n]. Hence, the
system is linear.
(f) The first term on the RHS of Eq. (2.58) is the output of an up-sampler. The second term on the
RHS of Eq. (2.58) is simply the output of a unit delay followed by an up-sampler, whereas the third
term is the output of a unit advance operator followed by an up-sampler. We have shown in Part (c)
that the up-sampler is a linear system. Moreover, the delay and the advance operators are linear
systems. A cascade of two linear systems is linear and a linear combination of linear systems is
also linear. Hence, the factor-of-2 interpolator is a linear system.
(g) Following the arguments given in Part (f), it can be shown that the factor-of-3 interpolator is a
linear system.
2.26

(a) y[n] = n2 x[n].
For an input xi[n] the output is yi[n] = n2 x i[n], i = 1, 2. Thus, for an input x3 [n] = Ax1 [n]
+ Bx2 [n], the output y3 [n] is given by y 3 [n] = n 2 ( A x1[n] + Bx 2 [n]) = A y1[ n] + By 2 [n] .

Hence the system is linear.
Since there is no output before the input hence the system is causal. However, y[n] being
proportional to n, it is possible that a bounded input can result in an unbounded output. Let
x[n] = 1 ∀ n , then y[n] = n2 . Hence y[n] → ∞ as n → ∞ , hence not BIBO stable.
Let y[n] be the output for an input x[n], and let y1 [n] be the output for an input x1 [n]. If
x 1[n] = x[n − n 0 ] then y 1[n] = n 2 x1 [n] = n 2 x[n − n 0 ]. However, y[n − n 0 ] = (n − n 0 ) 2 x[n − n 0 ] .
Since y 1[n] ≠ y[n − n 0 ], the system is not time-invariant.

(b) y[n] = x⁴[n].
For an input x_i[n] the output is y_i[n] = x_i⁴[n], i = 1, 2. Thus, for an input x_3[n] = A x_1[n] +
B x_2[n], the output y_3[n] is given by y_3[n] = (A x_1[n] + B x_2[n])⁴ ≠ A⁴ x_1⁴[n] + B⁴ x_2⁴[n].
Hence the system is not linear.
Since there is no output before the input, the system is causal.
Here, a bounded input produces a bounded output, hence the system is BIBO stable too.
Let y[n] be the output for an input x[n], and let y_1[n] be the output for an input x_1[n]. If
x_1[n] = x[n − n_0] then y_1[n] = x_1⁴[n] = x⁴[n − n_0] = y[n − n_0]. Hence, the system is time-
invariant.
(c) y[n] = β + ∑_{l=0}^{3} x[n−l].
For an input x_i[n] the output is y_i[n] = β + ∑_{l=0}^{3} x_i[n−l], i = 1, 2. Thus, for an input
x_3[n] = A x_1[n] + B x_2[n], the output y_3[n] is given by
y_3[n] = β + ∑_{l=0}^{3} (A x_1[n−l] + B x_2[n−l]) = β + ∑_{l=0}^{3} A x_1[n−l] + ∑_{l=0}^{3} B x_2[n−l]
≠ A y_1[n] + B y_2[n]. Since β ≠ 0, the system is not linear.
Since there is no output before the input, the system is causal.
Here, a bounded input produces a bounded output, hence the system is BIBO stable too.
Also, following an analysis similar to that in Part (a), it is easy to show that the system is
time-invariant.
(d) y[n] = β + ∑_{l=−3}^{3} x[n−l].
For an input x_i[n] the output is y_i[n] = β + ∑_{l=−3}^{3} x_i[n−l], i = 1, 2. Thus, for an input
x_3[n] = A x_1[n] + B x_2[n], the output y_3[n] is given by
y_3[n] = β + ∑_{l=−3}^{3} (A x_1[n−l] + B x_2[n−l]) = β + ∑_{l=−3}^{3} A x_1[n−l] + ∑_{l=−3}^{3} B x_2[n−l]
≠ A y_1[n] + B y_2[n]. Since β ≠ 0, the system is not linear.
Since there is output before the input, the system is non-causal.
Here, a bounded input produces a bounded output, hence the system is BIBO stable.
Let y[n] be the output for an input x[n], and let y_1[n] be the output for an input x_1[n]. If
x_1[n] = x[n − n_0] then y_1[n] = β + ∑_{l=−3}^{3} x_1[n−l] = β + ∑_{l=−3}^{3} x[n − n_0 − l] = y[n − n_0].
Hence the system is time-invariant.
(e) y[n] = αx[−n]
The system is linear, stable, non causal. Let y[n] be the output for an input x[n] and y 1[n]
be the output for an input x 1[n]. Then y[n] = αx[−n] and y 1[n] = α x1 [−n] .
Let x 1[n] = x[n − n 0 ], then y 1[n] = α x1 [−n] = α x[−n − n 0 ], whereas y[n − n 0 ] = αx[n 0 − n].
Hence the system is time-varying.
(f) y[n] = x[n – 5]
The given system is linear, causal, stable and time-invariant.
2.27 y[n] = x[n + 1] – 2x[n] + x[n – 1].
Let y 1[n] be the output for an input x 1[n] and y 2 [n] be the output for an input x 2 [n] . Then
for an input x 3 [n] = αx 1[n] +βx 2 [n] the output y 3 [n] is given by
y 3 [n] = x 3[n +1] − 2x 3[n] + x 3 [n −1]
= αx1[n +1] − 2αx 1[n] + αx1[n −1] +βx 2 [n +1] − 2βx 2 [n] + βx 2 [n −1]
= αy1[n] + βy 2 [n] .
Hence the system is linear. If x_1[n] = x[n − n_0] then y_1[n] = y[n − n_0]. Hence the system is
time-invariant. Also the system's impulse response is given by
h[n] = { −2, n = 0;  1, n = ±1;  0, elsewhere }.
Since h[n] ≠ 0 for n < 0, the system is non-causal.
2.28 Median filtering is a nonlinear operation. Consider the following sequences as the input to
a median filter: x 1[n] = {3, 4, 5} and x 2 [n] ={2, − 2, − 2}. The corresponding outputs of the
median filter are y 1[n] = 4 and y2 [n] = −2 . Now consider another input sequence x3 [n] =
x 1 [n] + x2 [n]. Then the corresponding output is y 3 [n] = 3, On the other hand,

y_1[n] + y_2[n] = 2. Hence median filtering is not a linear operation. To test the time-invariance property, let x[n] and x_1[n] be the two inputs to the system with corresponding
outputs y[n] and y1 [n]. If x 1[n] = x[n − n 0 ] then
y 1[n] = median{x1[n − k],......., x1[n],.......x1 [n + k]}
= median{x[n − k − n 0 ],......., x[n − n 0 ], .......x[n + k − n 0 ]}= y[n − n 0 ].
Hence the system is time invariant.

2.29 y[n] = ½( y[n−1] + x[n]/y[n−1] ).
Now for an input x[n] = α µ[n], the output y[n] converges to some constant K as n → ∞. The above
difference equation as n → ∞ reduces to K = ½(K + α/K), which is equivalent to K² = α, or in other
words, K = √α.
It is easy to show that the system is nonlinear. Now let y_1[n] be the output for an input x_1[n].
Then y_1[n] = ½( y_1[n−1] + x_1[n]/y_1[n−1] ).
If x_1[n] = x[n − n_0], then y_1[n] = ½( y_1[n−1] + x[n − n_0]/y_1[n−1] ).
Thus y_1[n] = y[n − n_0]. Hence the above system is time-invariant.
2.30 For an input x_i[n], i = 1, 2, the input-output relation is given by
y_i[n] = x_i[n] − y_i²[n−1] + y_i[n−1]. Thus, for an input A x_1[n] + B x_2[n], if the output is
A y_1[n] + B y_2[n], then the input-output relation is given by
A y_1[n] + B y_2[n] = A x_1[n] + B x_2[n] − (A y_1[n−1] + B y_2[n−1])² + A y_1[n−1] + B y_2[n−1]
= A x_1[n] + B x_2[n] − A² y_1²[n−1] − B² y_2²[n−1] − 2AB y_1[n−1] y_2[n−1] + A y_1[n−1] + B y_2[n−1]
≠ A(x_1[n] − y_1²[n−1] + y_1[n−1]) + B(x_2[n] − y_2²[n−1] + y_2[n−1]). Hence, the system is nonlinear.
Let y[n] be the output for an input x[n], related through y[n] = x[n] − y²[n−1] + y[n−1]. Then the
input-output relation for an input x[n − n_o] is given by
y[n − n_o] = x[n − n_o] − y²[n − n_o − 1] + y[n − n_o − 1]; in other words, the system is time-invariant.
Now for an input x[n] = α µ[n], the output y[n] converges to some constant K as n → ∞. The above
difference equation as n → ∞ reduces to K = α − K² + K, or K² = α, i.e. K = √α.
2.31 As δ[n] = µ[n] − µ[n−1], Τ{δ[n]} = Τ{µ[n]} − Τ{µ[n−1]} ⇒ h[n] = s[n] − s[n−1].
For a discrete-time LTI system,
y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k] = ∑_{k=−∞}^{∞} (s[k] − s[k−1]) x[n−k]
= ∑_{k=−∞}^{∞} s[k] x[n−k] − ∑_{k=−∞}^{∞} s[k−1] x[n−k].
2.32 y[n] = ∑_{m=−∞}^{∞} h[m] x̃[n−m]. Hence, y[n + kN] = ∑_{m=−∞}^{∞} h[m] x̃[n + kN − m]
= ∑_{m=−∞}^{∞} h[m] x̃[n−m] = y[n]. Thus, y[n] is also a periodic sequence with period N.
2.33 Now δ[n−r] * δ[n−s] = ∑_{m=−∞}^{∞} δ[m−r] δ[n−s−m] = δ[n−r−s].

(a) y_1[n] = x_1[n] * h_1[n] = (2δ[n−1] − 0.5δ[n−3]) * (2δ[n] + δ[n−1] − 3δ[n−3])
= 4δ[n−1]*δ[n] − δ[n−3]*δ[n] + 2δ[n−1]*δ[n−1] − 0.5δ[n−3]*δ[n−1] − 6δ[n−1]*δ[n−3] + 1.5δ[n−3]*δ[n−3]
= 4δ[n−1] − δ[n−3] + 2δ[n−2] − 0.5δ[n−4] − 6δ[n−4] + 1.5δ[n−6]
= 4δ[n−1] + 2δ[n−2] − δ[n−3] − 6.5δ[n−4] + 1.5δ[n−6]
(b) y 2 [n] = x 2 [n] * h 2 [n] = ( −3δ[n −1] + δ[n + 2]) * ( −δ[n − 2] − 0.5δ[n −1] + 3δ[n − 3])
= − 0.5 δ[n + 1] − δ[n] + 3δ[n − 1] + 1.5δ[ n − 2] + 3δ[n − 3] − 9δ[n − 4]
(c) y_3[n] = x_1[n] * h_2[n] = (2δ[n−1] − 0.5δ[n−3]) * (−δ[n−2] − 0.5δ[n−1] + 3δ[n−3])
= −δ[n−2] − 2δ[n−3] + 6.25δ[n−4] + 0.5δ[n−5] − 1.5δ[n−6]
(d) y_4[n] = x_2[n] * h_1[n] = (−3δ[n−1] + δ[n+2]) * (2δ[n] + δ[n−1] − 3δ[n−3])
= 2δ[n+2] + δ[n+1] − 9δ[n−1] − 3δ[n−2] + 9δ[n−4]
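The four convolutions are easy to confirm with np.convolve; a sketch (each pair below is the nonzero samples plus the time index of the first sample):

```python
import numpy as np

x1, n_x1 = np.array([2, 0, -0.5]),   1    # x1[n] = 2d[n-1] - 0.5d[n-3]
h1, n_h1 = np.array([2, 1, 0, -3]),  0    # h1[n] = 2d[n] + d[n-1] - 3d[n-3]
x2, n_x2 = np.array([1, 0, 0, -3]), -2    # x2[n] = d[n+2] - 3d[n-1]
h2, n_h2 = np.array([-0.5, -1, 3]),  1    # h2[n] = -0.5d[n-1] - d[n-2] + 3d[n-3]

for a, na, b, nb in [(x1, n_x1, h1, n_h1), (x2, n_x2, h2, n_h2),
                     (x1, n_x1, h2, n_h2), (x2, n_x2, h1, n_h1)]:
    y = np.convolve(a, b)                 # y1, y2, y3, y4 in turn
    print("first sample at n =", na + nb, ":", y)
```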
2.34 y[n] = ∑_{m=N_1}^{N_2} g[m] h[n−m]. Now, h[n−m] is defined for M_1 ≤ n−m ≤ M_2. Thus, for

m = N1 , h[n–m] is defined for M1 ≤ n − N1 ≤ M 2 , or equivalently, for
M1 + N1 ≤ n ≤ M2 + N1 . Likewise, for m = N 2 , h[n–m] is defined for M1 ≤ n − N2 ≤ M2 , or
equivalently, for M1 + N 2 ≤ n ≤ M2 + N 2 .
(a) The length of y[n] is M 2 + N2 − M1 – N1 + 1.
(b) The range of n for y[n] ≠ 0 is min ( M1 + N1 , M1 + N2 ) ≤ n ≤ max( M 2 + N1 , M2 + N 2 ) , i.e.,
M 1 + N1 ≤ n ≤ M 2 + N2 .

2.35 y[n] = x_1[n] * x_2[n] = ∑_{k=−∞}^{∞} x_1[n−k] x_2[k]. Now,
v[n] = x_1[n−N_1] * x_2[n−N_2] = ∑_{k=−∞}^{∞} x_1[n−N_1−k] x_2[k−N_2]. Let k − N_2 = m. Then
v[n] = ∑_{m=−∞}^{∞} x_1[n−N_1−N_2−m] x_2[m] = y[n−N_1−N_2].

2.36 g[n] = x1 [n] * x 2 [n] * x 3 [n] = y[n] * x 3 [n] where y[n] = x1 [n] * x 2 [n]. Define v[n] =
x 1 [n – N1 ] * x 2 [n – N2 ]. Then, h[n] = v[n] * x 3 [n – N3 ] . From the results of Problem
2.32, v[n] = y[n − N1 − N2 ] . Hence, h[n] = y[n − N1 − N2 ] * x 3 [n – N3 ] . Therefore,
using the results of Problem 2.32 again we get h[n] = g[n– N1 – N 2 – N 3 ] .

2.37 y[n] = x[n] * h[n] = ∑_{k=−∞}^{∞} x[n−k] h[k]. Substituting k by n−m in the above expression,
we get y[n] = ∑_{m=−∞}^{∞} x[m] h[n−m] = h[n] * x[n]. Hence the convolution operation is commutative.
Let y[n] = x[n] * (h_1[n] + h_2[n]) = ∑_{k=−∞}^{∞} x[n−k](h_1[k] + h_2[k])
= ∑_{k=−∞}^{∞} x[n−k] h_1[k] + ∑_{k=−∞}^{∞} x[n−k] h_2[k] = x[n] * h_1[n] + x[n] * h_2[n].
Hence convolution is distributive.
2.38

x 3 [n] * x 2 [n] * x 1 [n] = x3 [n] * (x 2 [n] * x 1 [n])
As x 2 [n] * x 1 [n] is an unbounded sequence hence the result of this convolution cannot be
determined. But x2 [n] * x 3 [n] * x 1 [n] = x2 [n] * (x 3 [n] * x 1 [n]) . Now x3 [n] * x 1 [n] =
0 for all values of n hence the overall result is zero. Hence for the given sequences
x 3 [n] * x 2 [n] * x 1 [n] ≠ x 2 [n] * x 3 [n] * x 1 [n] .

2.39 w[n] = x[n] * h[n] * g[n]. Define y[n] = x[n] * h[n] = ∑_k x[k] h[n−k] and
f[n] = h[n] * g[n] = ∑_k g[k] h[n−k].
Consider w_1[n] = (x[n] * h[n]) * g[n] = y[n] * g[n] = ∑_m g[m] ∑_k x[k] h[n−m−k].
Next consider w_2[n] = x[n] * (h[n] * g[n]) = x[n] * f[n] = ∑_k x[k] ∑_m g[m] h[n−k−m].
The difference between the expressions for w_1[n] and w_2[n] is that the order of the summations
is changed.
A) Assumptions: h[n] and g[n] are causal filters, and x[n] = 0 for n < 0. This implies
y[m] = { 0, for m < 0;  ∑_{k=0}^{m} x[k] h[m−k], for m ≥ 0 }.
Thus, w[n] = ∑_{m=0}^{n} g[m] y[n−m] = ∑_{m=0}^{n} g[m] ∑_{k=0}^{n−m} x[k] h[n−m−k].
All sums have only a finite number of terms. Hence, interchange of the order of summations is
justified and will give correct results.
B) Assumptions: h[n] and g[n] are stable filters, and x[n] is a bounded sequence with |x[n]| ≤ X.
Here, y[m] = ∑_{k=−∞}^{∞} h[k] x[m−k] = ∑_{k=k_1}^{k_2} h[k] x[m−k] + ε_{k_1,k_2}[m], with
|ε_{k_1,k_2}[m]| ≤ ε_n X.
In this case all sums have effectively only a finite number of terms and the error can be reduced by
choosing k_1 and k_2 sufficiently large. Hence in this case the problem is again effectively reduced
to that of the one-sided sequences. Here, again, an interchange of summations is justified and will
give correct results.
Hence, for the convolution to be associative, it is sufficient that the sequences be stable and
single-sided.
2.40 y[n] = ∑_{k=−∞}^{∞} x[n−k] h[k]. Since h[k] is of length M and defined for 0 ≤ k ≤ M−1, the
convolution sum reduces to y[n] = ∑_{k=0}^{M−1} x[n−k] h[k]. y[n] will be nonzero for all those
values of n and k for which n−k satisfies 0 ≤ n−k ≤ N−1.
The minimum value of n−k is 0 and occurs for the lowest n at n = 0 and k = 0. The maximum value of
n−k is N−1 and occurs for the maximum value of k at M−1. Thus n−k = N−1 ⇒ n = N + M − 2.
Hence the total number of nonzero samples = N + M − 1.
2.41 y[n] = x[n] * x[n] = ∑_{k=−∞}^{∞} x[n−k] x[k].
Since x[n−k] = 0 if n−k < 0 and x[k] = 0 if k < 0, the above summation reduces to
y[n] = { n + 1, 0 ≤ n ≤ N−1;  2N − 1 − n, N ≤ n ≤ 2N−2 }.
Hence the output is a triangular sequence with a maximum value of N at n = N−1. Locations of the
output samples with value N/2 are n = N/2 − 1 and n = 3N/2 − 1. Locations of the output samples
with value N/4 are n = N/4 − 1 and n = 7N/4 − 1. Note: it is tacitly assumed that N is divisible by 4;
otherwise these locations are not integers.
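A quick numerical check of this triangular result (a sketch, using an arbitrary N divisible by 4):

```python
import numpy as np

N = 8
x = np.ones(N)
y = np.convolve(x, x)          # length 2N-1 triangular sequence: 1, 2, ..., N, ..., 2, 1
print(y)
print(y.max(), y.argmax())     # maximum value N, located at n = N-1
```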

2.42 y[n] = ∑_{k=0}^{N−1} h[k] x[n−k]. The maximum value of y[n] occurs at n = N−1 when all terms
in the convolution sum are nonzero and is given by
y[N−1] = ∑_{k=0}^{N−1} h[k] = ∑_{k=1}^{N} k = N(N+1)/2.
2.43 (a) y[n] = g_ev[n] * h_ev[n] = ∑_{k=−∞}^{∞} h_ev[n−k] g_ev[k]. Hence,
y[−n] = ∑_{k=−∞}^{∞} h_ev[−n−k] g_ev[k]. Replace k by −k. Then the above summation becomes
y[−n] = ∑_{k=−∞}^{∞} h_ev[−n+k] g_ev[−k] = ∑_{k=−∞}^{∞} h_ev[−(n−k)] g_ev[−k]
= ∑_{k=−∞}^{∞} h_ev[n−k] g_ev[k] = y[n].
Hence g_ev[n] * h_ev[n] is even.
(b) y[n] = g_ev[n] * h_od[n] = ∑_{k=−∞}^{∞} h_od[n−k] g_ev[k]. As a result,
y[−n] = ∑_{k=−∞}^{∞} h_od[−n−k] g_ev[k] = ∑_{k=−∞}^{∞} h_od[−(n−k)] g_ev[−k]
= −∑_{k=−∞}^{∞} h_od[n−k] g_ev[k] = −y[n].
Hence g_ev[n] * h_od[n] is odd.
(c) y[n] = g_od[n] * h_od[n] = ∑_{k=−∞}^{∞} h_od[n−k] g_od[k]. As a result,
y[−n] = ∑_{k=−∞}^{∞} h_od[−n−k] g_od[k] = ∑_{k=−∞}^{∞} h_od[−(n−k)] g_od[−k]
= ∑_{k=−∞}^{∞} h_od[n−k] g_od[k] = y[n].
Hence g_od[n] * h_od[n] is even.
2.44 (a) The length of x[n] is 7 − 4 + 1 = 4. Using x[n] = (1/h[0])( y[n] − ∑_{k=1}^{7} h[k] x[n−k] ),
we arrive at x[n] = {1 3 −2 12}, 0 ≤ n ≤ 3.
(b) The length of x[n] is 9 − 5 + 1 = 5. Using x[n] = (1/h[0])( y[n] − ∑_{k=1}^{9} h[k] x[n−k] ), we
arrive at x[n] = {1 1 1 1 1}, 0 ≤ n ≤ 4.
(c) The length of x[n] is 5 − 3 + 1 = 3. Using x[n] = (1/h[0])( y[n] − ∑_{k=1}^{5} h[k] x[n−k] ), we get
x[n] = {−4 + j, −0.6923 + j0.4615, 3.4556 + j1.1065}, 0 ≤ n ≤ 2.
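The recursion used in all three parts is the same; a minimal sketch of it in Python (the h and x_true arrays below are hypothetical placeholders, not the sequences from Problem 2.44):

```python
import numpy as np

def deconvolve(y, h):
    """Recover x from y = x * h when h[0] != 0, using
    x[n] = (y[n] - sum_{k>=1} h[k] x[n-k]) / h[0]."""
    L = len(y) - len(h) + 1                 # length of x
    x = np.zeros(L, dtype=complex)          # complex dtype also covers cases like Part (c)
    for n in range(L):
        acc = sum(h[k] * x[n - k] for k in range(1, min(n, len(h) - 1) + 1))
        x[n] = (y[n] - acc) / h[0]
    return x

h = np.array([1.0, 2.0, 0.5])               # hypothetical impulse response
x_true = np.array([1.0, -1.0, 3.0, 2.0])    # hypothetical input
print(deconvolve(np.convolve(x_true, h), h))   # recovers 1, -1, 3, 2
```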

2.45 y[n] = a y[n−1] + b x[n]. Hence, y[0] = a y[−1] + b x[0]. Next,
y[1] = a y[0] + b x[1] = a² y[−1] + ab x[0] + b x[1]. Continuing further in a similar way we obtain
y[n] = a^{n+1} y[−1] + ∑_{k=0}^{n} a^{n−k} b x[k].
(a) Let y_1[n] be the output due to an input x_1[n]. Then y_1[n] = a^{n+1} y[−1] + ∑_{k=0}^{n} a^{n−k} b x_1[k].
If x_1[n] = x[n − n_0] then
y_1[n] = a^{n+1} y[−1] + ∑_{k=n_0}^{n} a^{n−k} b x[k − n_0] = a^{n+1} y[−1] + ∑_{r=0}^{n−n_0} a^{n−n_0−r} b x[r].
However, y[n − n_0] = a^{n−n_0+1} y[−1] + ∑_{r=0}^{n−n_0} a^{n−n_0−r} b x[r].
Hence y_1[n] ≠ y[n − n_0] unless y[−1] = 0. For example, if y[−1] = 1 then the system is time-variant.
However, if y[−1] = 0 then the system is time-invariant.
(b) Let y_1[n] and y_2[n] be the outputs due to the inputs x_1[n] and x_2[n], and let y[n] be the
output for an input α x_1[n] + β x_2[n]. Now,
α y_1[n] + β y_2[n] = α a^{n+1} y[−1] + β a^{n+1} y[−1] + α ∑_{k=0}^{n} a^{n−k} b x_1[k] + β ∑_{k=0}^{n} a^{n−k} b x_2[k],
whereas
y[n] = a^{n+1} y[−1] + α ∑_{k=0}^{n} a^{n−k} b x_1[k] + β ∑_{k=0}^{n} a^{n−k} b x_2[k].
Hence the system is not linear if y[−1] = 1 but is linear if y[−1] = 0.
(c) Generalizing the above result, it can be shown that for an N-th order causal discrete-time
system to be linear and time-invariant we require y[−N] = y[−N+1] = ⋯ = y[−1] = 0.

2.46 y_step[n] = ∑_{k=0}^{n} h[k] µ[n−k] = ∑_{k=0}^{n} h[k], n ≥ 0, and y_step[n] = 0, n < 0. Since h[k] is
nonnegative, y_step[n] is a monotonically increasing function of n for n ≥ 0, and is not oscillatory.
Hence, no overshoot.
2.47

(a) f[n] = f[n−1] + f[n−2]. Let f[n] = α r^n; then the difference equation reduces to
α r^n − α r^{n−1} − α r^{n−2} = 0, which reduces to r² − r − 1 = 0, resulting in r = (1 ± √5)/2.
Thus, f[n] = α_1 ((1+√5)/2)^n + α_2 ((1−√5)/2)^n.
Since f[0] = 0, we have α_1 + α_2 = 0. Also f[1] = 1, hence (α_1 + α_2)/2 + √5(α_1 − α_2)/2 = 1.
Solving for α_1 and α_2, we get α_1 = −α_2 = 1/√5. Hence,
f[n] = (1/√5)((1+√5)/2)^n − (1/√5)((1−√5)/2)^n.
(b) y[n] = y[n−1] + y[n−2] + x[n−1]. As the system is LTI, the initial conditions are equal to zero.
Let x[n] = δ[n]. Then y[n] = y[n−1] + y[n−2] + δ[n−1]. Hence,
y[0] = y[−1] + y[−2] = 0 and y[1] = 1. For n > 1 the corresponding difference equation is
y[n] = y[n−1] + y[n−2] with initial conditions y[0] = 0 and y[1] = 1, which are the same as those for
the solution of the Fibonacci sequence. Hence the solution for n > 1 is given by
y[n] = (1/√5)((1+√5)/2)^n − (1/√5)((1−√5)/2)^n.
Thus f[n] denotes the impulse response of a causal LTI system described by the difference
equation y[n] = y[n−1] + y[n−2] + x[n−1].

2.48 y[n] = α y[n−1] + x[n]. Denoting y[n] = y_re[n] + j y_im[n] and α = a + jb, we get
y_re[n] + j y_im[n] = (a + jb)(y_re[n−1] + j y_im[n−1]) + x[n].
Equating the real and the imaginary parts, and noting that x[n] is real, we get
y_re[n] = a y_re[n−1] − b y_im[n−1] + x[n],      (1)
y_im[n] = b y_re[n−1] + a y_im[n−1].
Therefore y_im[n−1] = (1/a) y_im[n] − (b/a) y_re[n−1].
Hence a single-input, two-output difference equation is
y_re[n] = a y_re[n−1] − (b/a) y_im[n] + (b²/a) y_re[n−1] + x[n];
thus b y_im[n−1] = −a y_re[n−1] + (a² + b²) y_re[n−2] + a x[n−1].
Substituting the above in Eq. (1) we get
y_re[n] = 2a y_re[n−1] − (a² + b²) y_re[n−2] + x[n] − a x[n−1],
which is a second-order difference equation representing y_re[n] in terms of x[n].
2.49 From Eq. (2.59), the output y[n] of the factor-of-3 interpolator is given by
y[n] = x_u[n] + (1/3)(x_u[n−1] + x_u[n+2]) + (2/3)(x_u[n−2] + x_u[n+1]), where x_u[n] is the output of
the factor-of-3 up-sampler for an input x[n]. To compute the impulse response we set x[n] = δ[n],
in which case x_u[n] = δ[3n]. As a result, the impulse response is given by
h[n] = δ[3n] + (1/3)(δ[3n−3] + δ[3n+6]) + (2/3)(δ[3n−6] + δ[3n+3]), or
h[n] = (1/3)δ[n+2] + (2/3)δ[n+1] + δ[n] + (1/3)δ[n−1] + (2/3)δ[n−2].
2.50 The output y[n] of the factor-of-L interpolator is given by
y[n] = x_u[n] + (1/L)(x_u[n−1] + x_u[n+L−1]) + (2/L)(x_u[n−2] + x_u[n+L−2])
+ ⋯ + ((L−1)/L)(x_u[n−L+1] + x_u[n+1]), where x_u[n] is the output of the factor-of-L up-sampler for
an input x[n]. To compute the impulse response we set x[n] = δ[n], in which case x_u[n] = δ[Ln].
As a result, the impulse response is given by
h[n] = δ[Ln] + (1/L)(δ[Ln−L] + δ[Ln+L(L−1)]) + (2/L)(δ[Ln−2L] + δ[Ln+L(L−2)])
+ ⋯ + ((L−1)/L)(δ[Ln−L(L−1)] + δ[Ln+L])
= (1/L)δ[n+(L−1)] + (2/L)δ[n+(L−2)] + ⋯ + ((L−1)/L)δ[n+1] + δ[n]
+ (1/L)δ[n−1] + (2/L)δ[n−2] + ⋯ + ((L−1)/L)δ[n−(L−1)].
2.51 The first-order causal LTI system is characterized by the difference equation
y[n] = p_0 x[n] + p_1 x[n−1] − d_1 y[n−1]. Letting x[n] = δ[n] we obtain the expression for its
impulse response h[n] = p_0 δ[n] + p_1 δ[n−1] − d_1 h[n−1]. Solving it for n = 0, 1, and 2, we get
h[0] = p_0, h[1] = p_1 − d_1 h[0] = p_1 − d_1 p_0, and h[2] = −d_1 h[1] = −d_1(p_1 − d_1 p_0). Solving these
equations we get p_0 = h[0], d_1 = −h[2]/h[1], and p_1 = h[1] − h[2]h[0]/h[1].

2.52 ∑_{k=0}^{M} p_k x[n−k] = ∑_{k=0}^{N} d_k y[n−k]. Let the input to the system be x[n] = δ[n]. Then
∑_{k=0}^{M} p_k δ[n−k] = ∑_{k=0}^{N} d_k h[n−k]. Thus,
p_r = ∑_{k=0}^{N} d_k h[r−k]. Since the system is assumed to be causal, h[r−k] = 0 ∀ k > r. Hence,
p_r = ∑_{k=0}^{r} d_k h[r−k] = ∑_{k=0}^{r} h[k] d_{r−k}.

2.53 The impulse response of the cascade is given by h[n] = h_1[n] * h_2[n] where
h_1[n] = α^n µ[n] and h_2[n] = β^n µ[n]. Hence, h[n] = ( ∑_{k=0}^{n} α^k β^{n−k} ) µ[n].

2.54 Now, h[n] = α^n µ[n]. Therefore y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k] = ∑_{k=0}^{∞} α^k x[n−k]
= x[n] + ∑_{k=1}^{∞} α^k x[n−k] = x[n] + α ∑_{k=0}^{∞} α^k x[n−1−k] = x[n] + α y[n−1].
Hence, x[n] = y[n] − α y[n−1]. Thus the inverse system is given by y[n] = x[n] − α x[n−1].
The impulse response of the inverse system is given by g[n] = {1, −α}, 0 ≤ n ≤ 1.

2.55 y[n] = y[n−1] + y[n−2] + x[n−1]. Hence, x[n−1] = y[n] − y[n−1] − y[n−2], i.e.
x[n] = y[n+1] − y[n] − y[n−1]. Hence the inverse system is characterized by
y[n] = x[n+1] − x[n] − x[n−1], with an impulse response given by g[n] = {1, −1, −1}, −1 ≤ n ≤ 1.
2.56 y[n] = p_0 x[n] + p_1 x[n−1] − d_1 y[n−1], which leads to
x[n] = (1/p_0) y[n] + (d_1/p_0) y[n−1] − (p_1/p_0) x[n−1].
Hence the inverse system is characterized by the difference equation
y_1[n] = (1/p_0) x_1[n] + (d_1/p_0) x_1[n−1] − (p_1/p_0) y_1[n−1].
2.57 (a) From the figure shown below we observe
[Figure: interconnection of subsystems h_1[n] through h_5[n]]
v[n] = (h_1[n] + h_3[n] * h_5[n]) * x[n] and y[n] = h_2[n] * v[n] + h_3[n] * h_4[n] * x[n].
Thus, y[n] = (h_2[n] * h_1[n] + h_2[n] * h_3[n] * h_5[n] + h_3[n] * h_4[n]) * x[n].
Hence the impulse response is given by
h[n] = h_2[n] * h_1[n] + h_2[n] * h_3[n] * h_5[n] + h_3[n] * h_4[n].
(b) From the figure shown below we observe
[Figure: interconnection of subsystems h_1[n] through h_5[n]]
v[n] = h_4[n] * x[n] + h_1[n] * h_2[n] * x[n].
Thus, y[n] = h_3[n] * v[n] + h_1[n] * h_5[n] * x[n]
= h_3[n] * h_4[n] * x[n] + h_3[n] * h_1[n] * h_2[n] * x[n] + h_1[n] * h_5[n] * x[n].
Hence the impulse response is given by
h[n] = h_3[n] * h_4[n] + h_3[n] * h_1[n] * h_2[n] + h_1[n] * h_5[n].
2.58 h[n] = h1[n] * h 2 [n] + h 3 [n]
Now h 1[n] * h 2 [n] = ( 2 δ[n − 2] − 3 δ[n + 1]) * ( δ[n − 1] + 2 δ[n + 2])
= 2 δ[n − 2] * δ[n −1] – 3δ[n + 1] * δ[n −1] + 2 δ[n − 2] * 2 δ[n + 2]
– 3δ[n + 1] * 2 δ[n + 2] = 2 δ[n − 3] – 3δ[n] + 4 δ[n] – 6 δ[n + 3]
= 2 δ[n − 3] + δ[n] – 6 δ[n + 3] . Therefore,
h[n] = 2 δ[n − 3] + δ[n] – 6 δ[n + 3] + 5 δ[n − 5] + 7 δ[n − 3] + 2 δ[n −1] − δ[n] + 3 δ[n + 1]
= 5 δ[n − 5] + 9 δ[n − 3] + 2 δ[n −1] + 3δ[n + 1] − 6δ[n + 3]
2.59 For a filter with complex impulse response, the first part of the proof is the same as that for a
filter with real impulse response. Since y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k],
|y[n]| = |∑_{k=−∞}^{∞} h[k] x[n−k]| ≤ ∑_{k=−∞}^{∞} |h[k]| |x[n−k]|.
Since the input is bounded, |x[n]| ≤ B_x. Therefore |y[n]| ≤ B_x ∑_{k=−∞}^{∞} |h[k]|.
So if ∑_{k=−∞}^{∞} |h[k]| = S < ∞ then |y[n]| ≤ B_x S, indicating that y[n] is also bounded.
To prove the converse we need to show that if a bounded output is produced by a bounded input
then S < ∞. Consider the bounded input defined by x[n] = h*[−n]/|h[−n]|.
Then y[0] = ∑_{k=−∞}^{∞} h*[k] h[k]/|h[k]| = ∑_{k=−∞}^{∞} |h[k]| = S.
Now since the output is bounded, S < ∞.
Thus a filter with complex impulse response too is BIBO stable if and only if ∑_{k=−∞}^{∞} |h[k]| = S < ∞.

2.60 The impulse response of the cascade is g[n] = h_1[n] * h_2[n], or equivalently,
g[k] = ∑_{r=−∞}^{∞} h_1[k−r] h_2[r]. Thus,
∑_{k=−∞}^{∞} |g[k]| = ∑_{k=−∞}^{∞} |∑_{r=−∞}^{∞} h_1[k−r] h_2[r]| ≤ ( ∑_{k=−∞}^{∞} |h_1[k]| )( ∑_{r=−∞}^{∞} |h_2[r]| ).
Since h_1[n] and h_2[n] are stable, ∑_k |h_1[k]| < ∞ and ∑_k |h_2[k]| < ∞. Hence ∑_k |g[k]| < ∞.
Hence the cascade of two stable LTI systems is also stable.
2.61 The impulse response of the parallel structure is g[n] = h_1[n] + h_2[n]. Now,
∑_k |g[k]| = ∑_k |h_1[k] + h_2[k]| ≤ ∑_k |h_1[k]| + ∑_k |h_2[k]|.
Since h_1[n] and h_2[n] are stable, ∑_k |h_1[k]| < ∞ and ∑_k |h_2[k]| < ∞. Hence ∑_k |g[k]| < ∞.
Hence the parallel connection of two stable LTI systems is also stable.
2.62 Consider a cascade of two passive systems. Let y_1[n] be the output of the first system, which
is the input to the second system in the cascade, and let y[n] be the overall output of the cascade.
The first system being passive, we have ∑_{n=−∞}^{∞} |y_1[n]|² ≤ ∑_{n=−∞}^{∞} |x[n]|².
Likewise, the second system being also passive, we have
∑_{n=−∞}^{∞} |y[n]|² ≤ ∑_{n=−∞}^{∞} |y_1[n]|² ≤ ∑_{n=−∞}^{∞} |x[n]|²,
indicating that the cascade of two passive systems is also a passive system. Similarly one can
prove that the cascade of two lossless systems is also a lossless system.
2.63 Consider a parallel connection of two passive systems with an input x[n] and output y[n].
The outputs of the two systems are y_1[n] and y_2[n], respectively. Now,
∑_{n=−∞}^{∞} |y_1[n]|² ≤ ∑_{n=−∞}^{∞} |x[n]|², and ∑_{n=−∞}^{∞} |y_2[n]|² ≤ ∑_{n=−∞}^{∞} |x[n]|².
Let y_1[n] = y_2[n] = x[n], satisfying the above inequalities. Then y[n] = y_1[n] + y_2[n] = 2x[n],
and as a result, ∑_{n=−∞}^{∞} |y[n]|² = 4 ∑_{n=−∞}^{∞} |x[n]|² > ∑_{n=−∞}^{∞} |x[n]|².
Hence, the parallel connection of two passive systems may not be passive.
2.64 Let ∑_{k=0}^{M} p_k x[n−k] = y[n] + ∑_{k=1}^{N} d_k y[n−k] be the difference equation representing
the causal IIR digital filter. For an input x[n] = δ[n], the corresponding output is then y[n] = h[n],
the impulse response of the filter. As there are M+1 {p_k} coefficients and N {d_k} coefficients,
there are a total of N+M+1 unknowns. To determine these coefficients from the impulse response
samples, we use only the first N+M+1 samples. To illustrate the method, without any loss of
generality, we assume N = M = 3. Then, from the difference equation representation we arrive at
the following N+M+1 = 7 equations:
h[0] = p_0,
h[1] + h[0]d_1 = p_1,
h[2] + h[1]d_1 + h[0]d_2 = p_2,
h[3] + h[2]d_1 + h[1]d_2 + h[0]d_3 = p_3,
h[4] + h[3]d_1 + h[2]d_2 + h[1]d_3 = 0,
h[5] + h[4]d_1 + h[3]d_2 + h[2]d_3 = 0,
h[6] + h[5]d_1 + h[4]d_2 + h[3]d_3 = 0.
Writing the last three equations in matrix form we arrive at
[h[4]; h[5]; h[6]] + [h[3] h[2] h[1]; h[4] h[3] h[2]; h[5] h[4] h[3]] [d_1; d_2; d_3] = [0; 0; 0],
and hence,
[d_1; d_2; d_3] = −[h[3] h[2] h[1]; h[4] h[3] h[2]; h[5] h[4] h[3]]^{−1} [h[4]; h[5]; h[6]].
Substituting these values of {d_i} in the first four equations written in matrix form we get
[p_0; p_1; p_2; p_3] = [h[0] 0 0 0; h[1] h[0] 0 0; h[2] h[1] h[0] 0; h[3] h[2] h[1] h[0]] [1; d_1; d_2; d_3].
2.65 y[n] = y[−1] + ∑_{l=0}^{n} x[l] = y[−1] + ∑_{l=0}^{n} l µ[l] = y[−1] + ∑_{l=0}^{n} l = y[−1] + n(n+1)/2.
(a) For y[−1] = 0, y[n] = n(n+1)/2.
(b) For y[−1] = −2, y[n] = −2 + n(n+1)/2 = (n² + n − 4)/2.
2.66 y(nT) = y((n−1)T) + ∫_{(n−1)T}^{nT} x(τ) dτ = y((n−1)T) + T·x((n−1)T). Therefore, the difference
equation representation is given by y[n] = y[n−1] + T·x[n−1], where y[n] = y(nT) and
x[n] = x(nT).

2.67 y[n] = (1/n) ∑_{l=1}^{n} x[l] = (1/n) ∑_{l=1}^{n−1} x[l] + (1/n) x[n], n ≥ 1. Also
y[n−1] = (1/(n−1)) ∑_{l=1}^{n−1} x[l], n ≥ 1. Hence, ∑_{l=1}^{n−1} x[l] = (n−1) y[n−1]. Thus, the
difference equation representation is given by
y[n] = ((n−1)/n) y[n−1] + (1/n) x[n], n ≥ 1.
2.68 y[n] + 0.5 y[n−1] = 2 µ[n], n ≥ 0, with y[−1] = 2. The total solution is given by
y[n] = y_c[n] + y_p[n], where y_c[n] is the complementary solution and y_p[n] is the particular
solution.
y_c[n] is obtained by solving y_c[n] + 0.5 y_c[n−1] = 0. To this end we set y_c[n] = λ^n, which yields
λ^n + 0.5 λ^{n−1} = 0, whose solution gives λ = −0.5. Thus, the complementary solution is of the
form y_c[n] = α(−0.5)^n.
For the particular solution we choose y_p[n] = β. Substituting this solution in the difference
equation representation of the system we get β + 0.5β = 2 µ[n]. For n = 0 we get
β(1 + 0.5) = 2, or β = 4/3.
The total solution is therefore given by y[n] = y_c[n] + y_p[n] = α(−0.5)^n + 4/3, n ≥ 0.
Therefore y[−1] = α(−0.5)^{−1} + 4/3 = 2, or α = −1/3. Hence, the total solution is given by
y[n] = −(1/3)(−0.5)^n + 4/3, n ≥ 0.
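The closed-form answer can be checked against a direct recursion (a sketch):

```python
import numpy as np

N = 10
y = np.zeros(N)
y_prev = 2.0                       # y[-1] = 2
for n in range(N):
    y[n] = 2.0 - 0.5 * y_prev      # y[n] = -0.5*y[n-1] + 2*mu[n]
    y_prev = y[n]

n = np.arange(N)
closed_form = -(1/3) * (-0.5) ** n + 4/3
print(np.allclose(y, closed_form))   # True
```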

2.69 y[n] + 0.1 y[n−1] − 0.06 y[n−2] = 2^n µ[n], with y[−1] = 1 and y[−2] = 0. The complementary
solution y_c[n] is obtained by solving y_c[n] + 0.1 y_c[n−1] − 0.06 y_c[n−2] = 0. To this end we set
y_c[n] = λ^n, which yields λ^n + 0.1 λ^{n−1} − 0.06 λ^{n−2} = λ^{n−2}(λ² + 0.1λ − 0.06) = 0, whose
solution gives λ_1 = −0.3 and λ_2 = 0.2. Thus, the complementary solution is of the form
y_c[n] = α_1(−0.3)^n + α_2(0.2)^n.
For the particular solution we choose y_p[n] = β(2)^n. Substituting this solution in the difference
equation representation of the system we get β 2^n + β(0.1) 2^{n−1} − β(0.06) 2^{n−2} = 2^n µ[n]. For
n = 0 we get β + β(0.1)2^{−1} − β(0.06)2^{−2} = 1, or β = 200/207 = 0.9662.
The total solution is therefore given by y[n] = y_c[n] + y_p[n] = α_1(−0.3)^n + α_2(0.2)^n + (200/207)2^n.
From the above, y[−1] = α_1(−0.3)^{−1} + α_2(0.2)^{−1} + (200/207)2^{−1} = 1 and
y[−2] = α_1(−0.3)^{−2} + α_2(0.2)^{−2} + (200/207)2^{−2} = 0, or equivalently,
−(10/3)α_1 + 5α_2 = 107/207 and (100/9)α_1 + 25α_2 = −50/207, whose solution yields
α_1 = −0.1017 and α_2 = 0.0356. Hence, the total
solution is given by y[n] = −0.1017(−0.3)^n + 0.0356(0.2)^n + 0.9662(2)^n, for n ≥ 0.

2.70 y[n] + 0.1 y[n−1] − 0.06 y[n−2] = x[n] − 2 x[n−1], with x[n] = 2^n µ[n], and y[−1] = 1 and
y[−2] = 0. For the given input, the difference equation reduces to
y[n] + 0.1 y[n−1] − 0.06 y[n−2] = 2^n µ[n] − 2(2^{n−1}) µ[n−1] = δ[n]. The solution of this
equation is thus the complementary solution, with the constants determined from the given
initial conditions y[−1] = 1 and y[−2] = 0.
From the solution of the previous problem we observe that the complementary solution is of the
form y_c[n] = α_1(−0.3)^n + α_2(0.2)^n.
For the given initial conditions we thus have
y[−1] = α_1(−0.3)^{−1} + α_2(0.2)^{−1} = 1 and y[−2] = α_1(−0.3)^{−2} + α_2(0.2)^{−2} = 0. Combining
these two equations we get
[−1/0.3  1/0.2; 1/0.09  1/0.04] [α_1; α_2] = [1; 0], which yields α_1 = −0.18 and α_2 = 0.08.
Therefore, y[n] = −0.18(−0.3)^n + 0.08(0.2)^n.
2.71

The impulse response is given by the solution of the difference equation
y[n] + 0.5 y[n − 1] = δ[n]. From the solution of Problem 2.68, the complementary solution is
given by y c [n] = α(−0.5) n . To determine the constant we note y[0] = 1 as y[–1] = 0. From
the complementary solution y[0] = α(–0.5) 0 = α, hence α = 1. Therefore, the impulse
response is given by h[n] = (−0.5) n .

2.72 The impulse response is given by the solution of the difference equation
y[n] + 0.1y[n −1] − 0.06y[n − 2] = δ[n] . From the solution of Problem 2.69, the complementary
solution is given by y c [n] = α1 (−0.3) n + α 2 (0.2) n . To determine the constants α1 and α 2 , we
observe y[0] = 1 and y[1] + 0.1y[0] = 0 as y[–1] = y[–2] = 0. From the complementary
solution y[0] = α1 (−0.3) 0 + α 2 (0.2) 0 = α1 + α2 = 1, and
y[1] = α1 (−0.3)1 + α2 (0.2)1 = –0.3 α1 + 0.2α 2 = −0.1. Solution of these equations yield α1 = 0.6
and α 2 = 0.4. Therefore, the impulse response is given by h[n] = 0.6(−0.3) n + 0.4(0.2) n .
2.73 Let A_n = n^K |λ_i|^n. Then A_{n+1}/A_n = ((n+1)/n)^K |λ_i|. Now lim_{n→∞} ((n+1)/n)^K = 1.
Since |λ_i| < 1, there exists a positive integer N_o such that for all n > N_o,
0 < A_{n+1}/A_n < (1 + |λ_i|)/2 < 1. Hence ∑_{n=0}^{∞} A_n converges.

2.74 (a) x[n] = {3 −2 0 1 4 5 2}, −3 ≤ n ≤ 3. r_xx[l] = Σ_{n=−3}^{3} x[n] x[n − l]. Note,
r_xx[−6] = x[3]x[−3] = 2 × 3 = 6, r_xx[−5] = x[3]x[−2] + x[2]x[−3] = 2 × (−2) + 5 × 3 = 11,
r_xx[−4] = x[3]x[−1] + x[2]x[−2] + x[1]x[−3] = 2 × 0 + 5 × (−2) + 4 × 3 = 2,
r_xx[−3] = x[3]x[0] + x[2]x[−1] + x[1]x[−2] + x[0]x[−3] = 2 × 1 + 5 × 0 + 4 × (−2) + 1 × 3 = −3,
r_xx[−2] = x[3]x[1] + x[2]x[0] + x[1]x[−1] + x[0]x[−2] + x[−1]x[−3] = 11,
r_xx[−1] = x[3]x[2] + x[2]x[1] + x[1]x[0] + x[0]x[−1] + x[−1]x[−2] + x[−2]x[−3] = 28,
r_xx[0] = x[3]x[3] + x[2]x[2] + x[1]x[1] + x[0]x[0] + x[−1]x[−1] + x[−2]x[−2] + x[−3]x[−3] = 59.
The samples of r_xx[l] for 1 ≤ l ≤ 6 are determined using the property r_xx[l] = r_xx[−l]. Thus,
r_xx[l] = {6 11 2 −3 11 28 59 28 11 −3 2 11 6}, −6 ≤ l ≤ 6.

y[n] = {0 7 1 −3 4 9 −2}, −3 ≤ n ≤ 3. Following the procedure outlined above we get
r_yy[l] = {0 −14 61 43 −52 10 160 10 −52 43 61 −14 0}, −6 ≤ l ≤ 6.
w[n] = {−5 4 3 6 −5 0 1}, −3 ≤ n ≤ 3. Thus,
r_ww[l] = {−5 4 28 −44 −11 −20 112 −20 −11 −44 28 4 −5}, −6 ≤ l ≤ 6.

(b) r_xy[l] = Σ_{n=−3}^{3} x[n] y[n − l]. Note, r_xy[−6] = x[3]y[−3] = 0,
r_xy[−5] = x[3]y[−2] + x[2]y[−3] = 14, r_xy[−4] = x[3]y[−1] + x[2]y[−2] + x[1]y[−3] = 37,
r_xy[−3] = x[3]y[0] + x[2]y[−1] + x[1]y[−2] + x[0]y[−3] = 27,
r_xy[−2] = x[3]y[1] + x[2]y[0] + x[1]y[−1] + x[0]y[−2] + x[−1]y[−3] = 4,
r_xy[−1] = x[3]y[2] + x[2]y[1] + x[1]y[0] + x[0]y[−1] + x[−1]y[−2] + x[−2]y[−3] = 27,
r_xy[0] = x[3]y[3] + x[2]y[2] + x[1]y[1] + x[0]y[0] + x[−1]y[−1] + x[−2]y[−2] + x[−3]y[−3] = 40,
r_xy[1] = x[2]y[3] + x[1]y[2] + x[0]y[1] + x[−1]y[0] + x[−2]y[−1] + x[−3]y[−2] = 49,
r_xy[2] = x[1]y[3] + x[0]y[2] + x[−1]y[1] + x[−2]y[0] + x[−3]y[−1] = 10,
r_xy[3] = x[0]y[3] + x[−1]y[2] + x[−2]y[1] + x[−3]y[0] = −19,
r_xy[4] = x[−1]y[3] + x[−2]y[2] + x[−3]y[1] = −6, r_xy[5] = x[−2]y[3] + x[−3]y[2] = 31,
r_xy[6] = x[−3]y[3] = −6. Hence,
r_xy[l] = {0 14 37 27 4 27 40 49 10 −19 −6 31 −6}, −6 ≤ l ≤ 6.
r_xw[l] = Σ_{n=−3}^{3} x[n] w[n − l] = {−10 −17 6 38 36 12 −35 6 1 29 −15 −2 3}, −6 ≤ l ≤ 6.
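These finite-length correlation sequences can also be obtained numerically by convolving one sequence with a time-reversed copy of the other, as is done in Problem M2.13 later in this chapter. The short sketch below is illustrative (xcorr from the Signal Processing Toolbox returns the same values).
% Sketch: correlation via convolution with a time-reversed copy; lags run from -6 to 6
x = [3 -2 0 1 4 5 2];  y = [0 7 1 -3 4 9 -2];  w = [-5 4 3 6 -5 0 1];
rxx = conv(x, fliplr(x));    % r_xx[l]
rxy = conv(x, fliplr(y));    % r_xy[l]
rxw = conv(x, fliplr(w));    % r_xw[l]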

2.75 (a) x_1[n] = α^n µ[n]. r_xx[l] = Σ_{n=−∞}^{∞} x_1[n] x_1[n − l] = Σ_{n=−∞}^{∞} α^n µ[n] · α^{n−l} µ[n − l]
= Σ_{n=0}^{∞} α^{2n−l} µ[n − l]
= Σ_{n=0}^{∞} α^{2n−l} = α^{−l}/(1 − α^2), for l < 0,
= Σ_{n=l}^{∞} α^{2n−l} = α^{l}/(1 − α^2), for l ≥ 0.
Note for l ≥ 0, r_xx[l] = α^{l}/(1 − α^2), and for l < 0, r_xx[l] = α^{−l}/(1 − α^2). Replacing l with −l in the
second expression we get r_xx[−l] = α^{l}/(1 − α^2) = r_xx[l]. Hence, r_xx[l] is an even function of l.
The maximum value of r_xx[l] occurs at l = 0 since α^{|l|} is a decaying function for increasing |l|
when |α| < 1.
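A quick truncated-sequence check of this result (a sketch; α = 0.8 and the truncation length are illustrative choices):
% Sketch: r_xx[l] of x1[n] = alpha^n mu[n] approaches alpha^|l|/(1 - alpha^2)
alpha = 0.8;  L = 200;                 % truncation long enough that the neglected tail is negligible
x1 = alpha.^(0:L-1);
r = conv(x1, fliplr(x1));              % lags -(L-1):(L-1), lag 0 at index L
l = -10:10;
max(abs(r(L + l) - alpha.^abs(l)/(1 - alpha^2)))   % small truncation error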
(b) x_2[n] = 1 for 0 ≤ n ≤ N − 1 and 0 otherwise. r_xx[l] = Σ_{n=0}^{N−1} x_2[n − l], where
x_2[n − l] = 1 for l ≤ n ≤ N − 1 + l and 0 otherwise. Therefore,
r_xx[l] = 0, for l < −(N − 1),
r_xx[l] = N + l, for −(N − 1) ≤ l < 0,
r_xx[l] = N, for l = 0,
r_xx[l] = N − l, for 0 < l ≤ N − 1,
r_xx[l] = 0, for l > N − 1.
It follows from the above that r_xx[l] is a triangular function of l, and hence is even with a
maximum value of N at l = 0.
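The triangular shape is easy to confirm numerically (a sketch; N = 8 is only an illustrative value):
% Sketch: autocorrelation of a length-N rectangular pulse is triangular with peak N at lag 0
N = 8;
x2 = ones(1, N);
rxx = conv(x2, fliplr(x2));            % lags -(N-1):(N-1)
l = -(N-1):(N-1);
stem(l, rxx);                          % expect N - |l|
xlabel('Lag index l'); ylabel('r_{xx}[l]');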
2.76 (a) x_1[n] = cos(πn/M) where M is a positive integer. The period of x_1[n] is 2M, hence
r_xx[l] = (1/2M) Σ_{n=0}^{2M−1} x_1[n] x_1[n + l] = (1/2M) Σ_{n=0}^{2M−1} cos(πn/M) cos(π(n + l)/M)
= (1/2M) Σ_{n=0}^{2M−1} cos(πn/M) [cos(πn/M) cos(πl/M) − sin(πn/M) sin(πl/M)]
= (1/2M) cos(πl/M) Σ_{n=0}^{2M−1} cos^2(πn/M).
From the solution of Problem 2.16, Σ_{n=0}^{2M−1} cos^2(πn/M) = 2M/2 = M. Therefore,
r_xx[l] = (1/2) cos(πl/M).
(b) x_2[n] = n modulo 6 = {0 1 2 3 4 5}, 0 ≤ n ≤ 5. It is a periodic sequence with period 6.
Thus, r_xx[l] = (1/6) Σ_{n=0}^{5} x_2[n] x_2[n + l], 0 ≤ l ≤ 5. It is also a periodic sequence with period 6.
r_xx[0] = (1/6)(x_2[0]x_2[0] + x_2[1]x_2[1] + x_2[2]x_2[2] + x_2[3]x_2[3] + x_2[4]x_2[4] + x_2[5]x_2[5]) = 55/6,
r_xx[1] = (1/6)(x_2[0]x_2[1] + x_2[1]x_2[2] + x_2[2]x_2[3] + x_2[3]x_2[4] + x_2[4]x_2[5] + x_2[5]x_2[0]) = 40/6,
r_xx[2] = (1/6)(x_2[0]x_2[2] + x_2[1]x_2[3] + x_2[2]x_2[4] + x_2[3]x_2[5] + x_2[4]x_2[0] + x_2[5]x_2[1]) = 31/6,
r_xx[3] = (1/6)(x_2[0]x_2[3] + x_2[1]x_2[4] + x_2[2]x_2[5] + x_2[3]x_2[0] + x_2[4]x_2[1] + x_2[5]x_2[2]) = 28/6,
r_xx[4] = (1/6)(x_2[0]x_2[4] + x_2[1]x_2[5] + x_2[2]x_2[0] + x_2[3]x_2[1] + x_2[4]x_2[2] + x_2[5]x_2[3]) = 31/6,
r_xx[5] = (1/6)(x_2[0]x_2[5] + x_2[1]x_2[0] + x_2[2]x_2[1] + x_2[3]x_2[2] + x_2[4]x_2[3] + x_2[5]x_2[4]) = 40/6.
(c) x_3[n] = (−1)^n. It is a periodic sequence with period 2. Hence,
r_xx[l] = (1/2) Σ_{n=0}^{1} x_3[n] x_3[n + l], 0 ≤ l ≤ 1. r_xx[0] = (1/2)(x_3[0]x_3[0] + x_3[1]x_3[1]) = 1, and
r_xx[1] = (1/2)(x_3[0]x_3[1] + x_3[1]x_3[0]) = −1. It is also a periodic sequence with period 2.
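The periodic autocorrelation in part (b) can be verified with a few lines of MATLAB; the sketch below uses circshift to realize the periodic lag and is not part of the original solution.
% Sketch: periodic autocorrelation of x2[n] = n modulo 6 over one period
x2 = 0:5;                                            % one period of the sequence
rxx = zeros(1, 6);
for l = 0:5
    rxx(l+1) = sum(x2 .* circshift(x2, [0 -l]))/6;   % (1/6) sum_n x2[n] x2[<n+l>_6]
end
disp(6*rxx)                                          % expect [55 40 31 28 31 40]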
2.77 E{X + Y} = ∫∫ (x + y) p_XY(x, y) dx dy = ∫∫ x p_XY(x, y) dx dy + ∫∫ y p_XY(x, y) dx dy
= ∫ x (∫ p_XY(x, y) dy) dx + ∫ y (∫ p_XY(x, y) dx) dy = ∫ x p_X(x) dx + ∫ y p_Y(y) dy
= E{X} + E{Y}.
From the above result, E{2X} = E{X + X} = 2E{X}. Similarly, E{cX} = E{(c−1)X + X}
= E{(c−1)X} + E{X}. Hence, by induction, E{cX} = cE{X}.

2.78 C = E{(X − κ)^2}. To find the value of κ that minimizes the mean square error C, we
differentiate C with respect to κ and set the result to zero. Thus,
dC/dκ = E{−2(X − κ)} = −2E{X} + 2κ = 0, which results in κ = E{X}, and
the minimum value of C is σ_x^2.
2.79 mean = m_x = E{x} = ∫_{−∞}^{∞} x p_X(x) dx,
variance = σ_x^2 = E{(x − m_x)^2} = ∫_{−∞}^{∞} (x − m_x)^2 p_X(x) dx.

(a) p_X(x) = (α/π) (1/(x^2 + α^2)). Now, m_x = (α/π) ∫_{−∞}^{∞} x dx/(x^2 + α^2). Since x/(x^2 + α^2) is an odd
function, the integral is zero. Thus m_x = 0.
σ_x^2 = (α/π) ∫_{−∞}^{∞} x^2 dx/(x^2 + α^2). Since this integral does not converge, the variance is not defined
for this density function.

(b) p_X(x) = (α/2) e^{−α|x|}. Now, m_x = (α/2) ∫_{−∞}^{∞} x e^{−α|x|} dx = 0. Next,
σ_x^2 = (α/2) ∫_{−∞}^{∞} x^2 e^{−α|x|} dx = α ∫_{0}^{∞} x^2 e^{−αx} dx
= α ( [−x^2 e^{−αx}/α]_{0}^{∞} + (2/α) ∫_{0}^{∞} x e^{−αx} dx )
= α ( 0 + (2/α)( [−x e^{−αx}/α]_{0}^{∞} + (1/α) ∫_{0}^{∞} e^{−αx} dx ) ) = 2/α^2.

(c) p_X(x) = Σ_{l=0}^{n} (n!/(l!(n−l)!)) p^l (1 − p)^{n−l} δ(x − l). Now,
m_x = ∫_{−∞}^{∞} x Σ_{l=0}^{n} (n!/(l!(n−l)!)) p^l (1 − p)^{n−l} δ(x − l) dx = Σ_{l=0}^{n} l (n!/(l!(n−l)!)) p^l (1 − p)^{n−l} = np.
σ_x^2 = E{x^2} − m_x^2 = ∫_{−∞}^{∞} x^2 Σ_{l=0}^{n} (n!/(l!(n−l)!)) p^l (1 − p)^{n−l} δ(x − l) dx − (np)^2
= Σ_{l=0}^{n} l^2 (n!/(l!(n−l)!)) p^l (1 − p)^{n−l} − n^2 p^2 = np(1 − p).

(d) p_X(x) = Σ_{l=0}^{∞} (e^{−α} α^l / l!) δ(x − l). Now,
m_x = ∫_{−∞}^{∞} x Σ_{l=0}^{∞} (e^{−α} α^l / l!) δ(x − l) dx = Σ_{l=0}^{∞} l (e^{−α} α^l / l!) = α.
σ_x^2 = E{x^2} − m_x^2 = ∫_{−∞}^{∞} x^2 Σ_{l=0}^{∞} (e^{−α} α^l / l!) δ(x − l) dx − α^2 = Σ_{l=0}^{∞} l^2 (e^{−α} α^l / l!) − α^2 = α.

(e) p_X(x) = (x/α^2) e^{−x^2/2α^2} µ(x). Now,
m_x = (1/α^2) ∫_{−∞}^{∞} x^2 e^{−x^2/2α^2} µ(x) dx = (1/α^2) ∫_{0}^{∞} x^2 e^{−x^2/2α^2} dx = α√(π/2).
σ_x^2 = E{x^2} − m_x^2 = (1/α^2) ∫_{−∞}^{∞} x^3 e^{−x^2/2α^2} µ(x) dx − (α√(π/2))^2 = 2α^2 − (π/2)α^2 = (2 − π/2)α^2.
2.80 Recall that random variables x and y are linearly independent if and only if
E{|a_1 x + a_2 y|^2} > 0 for all a_1, a_2 except when a_1 = a_2 = 0. Now, if x and y are statistically
independent,
E{|a_1 x|^2} + E{|a_2 y|^2} + E{(a_1)* a_2 x y*} + E{a_1 (a_2)* x* y} = |a_1|^2 E{|x|^2} + |a_2|^2 E{|y|^2} > 0
for all a_1 and a_2 except when a_1 = a_2 = 0.
Hence, if x and y are statistically independent they are also linearly independent.

2.81 σ_x^2 = E{(x − m_x)^2} = E{x^2 + m_x^2 − 2x m_x} = E{x^2} + E{m_x^2} − 2E{x m_x}.
Since m_x is a constant, E{m_x^2} = m_x^2 and E{x m_x} = m_x E{x} = m_x^2. Thus,
σ_x^2 = E{x^2} + m_x^2 − 2m_x^2 = E{x^2} − m_x^2.
2.82 V = aX + bY. Therefore, E{V} = aE{X} + bE{Y} = a m_x + b m_y, and
σ_v^2 = E{(V − m_v)^2} = E{(a(X − m_x) + b(Y − m_y))^2}.
Since X and Y are statistically independent, σ_v^2 = a^2 σ_x^2 + b^2 σ_y^2.
2.83 v[n] = a x[n] + b y[n]. Thus, φ_vv[n] = E{v[m + n] v[m]}
= E{a^2 x[m + n]x[m] + b^2 y[m + n]y[m] + ab x[m + n]y[m] + ab x[m]y[m + n]}. Since x[n] and
y[n] are independent,
φ_vv[n] = E{a^2 x[m + n]x[m]} + E{b^2 y[m + n]y[m]} = a^2 φ_xx[n] + b^2 φ_yy[n].
φ_vx[n] = E{v[m + n]x[m]} = E{a x[m + n]x[m] + b y[m + n]x[m]} = aE{x[m + n]x[m]} = a φ_xx[n].
Likewise, φ_vy[n] = b φ_yy[n].
2.84 φ_xy[l] = E{x[n + l] y*[n]}, φ_xy[–l] = E{x[n – l] y*[n]}, φ_yx[l] = E{y[n + l] x*[n]}.
Therefore, φ_yx*[l] = E{y*[n + l] x[n]} = E{x[n – l] y*[n]} = φ_xy[–l].
Hence, φ_xy[−l] = φ_yx*[l].
Since γ_xy[l] = φ_xy[l] − m_x (m_y)*, we have γ_xy*[l] = φ_xy*[l] − (m_x)* m_y. Hence,
γ_xy[−l] = γ_yx*[l].
The remaining two properties can be proved in a similar way by letting x = y.

2.85 E{|x[n] − x[n − l]|^2} ≥ 0, i.e.,
E{|x[n]|^2} + E{|x[n − l]|^2} − E{x[n]x*[n − l]} − E{x*[n]x[n − l]} ≥ 0, i.e.,
2φ_xx[0] − 2φ_xx[l] ≥ 0, so φ_xx[0] ≥ φ_xx[l].
Using Cauchy's inequality, E^2{xy} ≤ E{|x|^2} E{|y|^2}. Hence, |φ_xy[l]|^2 ≤ φ_xx[0] φ_yy[0].
One can show similarly |γ_xy[l]|^2 ≤ γ_xx[0] γ_yy[0].
2.86 Since there are no periodicities in {x[n]}, x[n] and x[n + l] become uncorrelated as l → ∞.
Thus, lim_{l→∞} γ_xx[l] = lim_{l→∞} (φ_xx[l] − m_x^2) → 0. Hence, lim_{l→∞} φ_xx[l] = m_x^2.
2.87 φ_XX[l] = (9 + 11l^2 + 14l^4)/(1 + 3l^2 + 2l^4). Now,
m_X^2 = lim_{l→∞} φ_XX[l] = lim_{l→∞} (9 + 11l^2 + 14l^4)/(1 + 3l^2 + 2l^4) = 7.
Hence, m_X = √7. E{|X[n]|^2} = φ_XX[0] = 9.
Therefore, σ_X^2 = φ_XX[0] − m_X^2 = 9 − 7 = 2.
M2.1

L = input('Desired length = '); n = 1:L;
FT = input('Sampling frequency = ');T = 1/FT;
imp = [1 zeros(1,L-1)];step = ones(1,L);
ramp = (n-1).*step;
subplot(3,1,1);
stem(n-1,imp);
xlabel(['Time in ',num2str(T), ' sec']);ylabel('Amplitude');
subplot(3,1,2);
stem(n-1,step);
xlabel(['Time in ',num2str(T), ' sec']);ylabel('Amplitude');
subplot(3,1,3);
stem(n-1,ramp);
xlabel(['Time in ',num2str(T), ' sec']);ylabel('Amplitude');

M2.2

% Get user inputs
A = input('The peak value =');
L = input('Length of sequence =');
N = input('The period of sequence =');
FT = input('The desired sampling frequency =');
DC = input('The square wave duty cycle = ');
% Create signals
T = 1/FT;

t = 0:L-1;
x = A*sawtooth(2*pi*t/N);
y = A*square(2*pi*(t/N),DC);
% Plot
subplot(211)
stem(t,x);
ylabel('Amplitude');
xlabel(['Time in ',num2str(T),'sec']);
subplot(212)
stem(t,y);
ylabel('Amplitude');
xlabel(['Time in ',num2str(T),'sec']);
[Stem plots of the generated sawtooth and square-wave sequences; amplitude axis from −10 to 10, time axis labeled "Time in 5e-05 sec".]

M2.3 (a) The input data entered during the execution of Program 2_1 are
Type in real exponent = -1/12
Type in imaginary exponent = pi/6
Type in the gain constant = 1
Type in length of sequence = 41

[Plots of the real and imaginary parts of the generated complex exponential sequence, 0 ≤ n ≤ 40.]

(b) The input data entered during the execution of Program 2_1 are
Type in real exponent = -0.4
Type in imaginary exponent = pi/5
Type in the gain constant = 2.5
Type in length of sequence = 101

[Plots of the real and imaginary parts of the generated sequence for Part (b), 0 ≤ n ≤ 40.]

M2.4 (a)
L = input('Desired length = ');
A = input('Amplitude = ');
omega = input('Angular frequency = ');
phi = input('Phase = ');
n = 0:L-1;
x = A*cos(omega*n + phi);
stem(n,x);
xlabel('Time index');ylabel('Amplitude');
title(['omega_{o} = ',num2str(omega)]);

(b) [Plots of the generated sinusoidal sequences for ω_o = 0.14π, 0.24π, 0.68π, and 0.75π.]
M2.5 (a) Using Program 2_1 we generate the sequence x̃_1[n] = e^{−j0.4πn} shown below.
[Plots of the real and imaginary parts of x̃_1[n], 0 ≤ n ≤ 40.]
(b) Code fragment used to generate x̃_2[n] = sin(0.6πn + 0.6π) is:
x = sin(0.6*pi*n + 0.6*pi);
[Plot of x̃_2[n], 0 ≤ n ≤ 40.]
(c) Code fragment used to generate x̃_3[n] = 2cos(1.1πn − 0.5π) + 2sin(0.7πn) is:
x = 2*cos(1.1*pi*n - 0.5*pi) + 2*sin(0.7*pi*n);
[Plot of x̃_3[n], 0 ≤ n ≤ 40.]
(d) Code fragment used to generate x̃_4[n] = 3sin(1.3πn) − 4cos(0.3πn + 0.45π) is:
x = 3*sin(1.3*pi*n) - 4*cos(0.3*pi*n+0.45*pi);
[Plot of x̃_4[n], 0 ≤ n ≤ 40.]

(e)
Code fragment used to generate
x̃_5[n] = 5sin(1.2πn + 0.65π) + 4sin(0.8πn) − cos(0.8πn) is:

x = 5*sin(1.2*pi*n+0.65*pi)+4*sin(0.8*pi*n)-cos(0.8*pi*n);

[Plot of x̃_5[n], 0 ≤ n ≤ 40.]

(f) Code fragment used to generate x̃_6[n] = n modulo 6 is: x = rem(n,6);
[Plot of x̃_6[n], 0 ≤ n ≤ 40.]

M2.6 t = 0:0.001:1;
fo = input('Frequency of sinusoid in Hz = ');
FT = input('Sampling frequency in Hz = ');
g1 = cos(2*pi*fo*t);
plot(t,g1,'-')
xlabel('time');ylabel('Amplitude')
hold
n = 0:1:FT;
gs = cos(2*pi*fo*n/FT);
plot(n/FT,gs,'o');hold off
M2.7 t = 0:0.001:0.85;
g1 = cos(6*pi*t);g2 = cos(14*pi*t);g3 = cos(26*pi*t);
plot(t/0.85,g1,'-',t/0.85,g2,'--',t/0.85,g3,':')
xlabel('time');ylabel('Amplitude')
hold
n = 0:1:8;
gs = cos(0.6*pi*n);
plot(n/8.5,gs,'o');hold off
M2.8

As the length of the moving-average filter is increased, the output of the filter becomes
smoother. However, the delay between the input and the output sequences also increases.
(This can be seen from the plots generated by Program 2_4 for various values of the filter
length.)
M2.9 alpha = input('Alpha = ');
yo = 1;y1 = 0.5*(yo + (alpha/yo));
while abs(y1 - yo) > 0.00001
y2 = 0.5*(y1 + (alpha/y1));
yo = y1; y1 = y2;
end
disp('Square root of alpha is'); disp(y1)
M2.10 alpha = input('Alpha = ');
yo = 0.3; y = zeros(1,61);
L = length(y)-1;
y(1) = alpha - yo*yo + yo;
n = 2;
while abs(y(n-1) - yo) > 0.00001
y2 = alpha - y(n-1)*y(n-1) + y(n-1);
yo = y(n-1); y(n) = y2;
n = n+1;
end
disp('Square root of alpha is'); disp(y(n-1))
m=0:n-2;
err = y(1:n-1) - sqrt(alpha);
stem(m,err);
axis([0 n-2 min(err) max(err)]);
xlabel('Time index n');
ylabel('Error');
title(['alpha = ',num2str(alpha)])
The displayed output is
Square root of alpha is
0.84000349056114
[Stem plot of the error sequence; title α = 0.7056, 0 ≤ n ≤ 25.]

M2.11 N = input('Desired impulse response length = ');
p = input('Type in the vector p = ');
d = input('Type in the vector d = ');
[h,t] = impz(p,d,N);
n = 0:N-1;
stem(n,h);
xlabel('Time index n');ylabel('Amplitude');

[Stem plot of the computed impulse response, 0 ≤ n ≤ 40.]

M2.12 x = [3 −2 0 1 4 5 2], y = [0 7 1 −3 4 9 −2], w = [−5 4 3 6 −5 0 1].
(a) r_xx[n] = [6 11 2 −3 11 28 59 28 11 −3 2 11 6],
r_yy[n] = [0 −14 61 43 −52 10 160 10 −52 43 61 −14 0],
r_ww[n] = [−5 4 28 −44 −11 −20 112 −20 −11 −44 28 4 −5].
[Stem plots of r_xx[n], r_yy[n], and r_ww[n] versus lag index, −6 ≤ n ≤ 6.]

(b) r_xy[n] = [−6 31 −6 −19 10 49 40 27 4 27 37 14 0],
r_xw[n] = [3 −2 −15 29 1 6 −35 12 36 38 6 −17 −10].
[Stem plots of r_xy[n] and r_xw[n] versus lag index, −6 ≤ n ≤ 6.]

M2.13
N = input('Length of sequence = ');
n = 0:N-1;
x = exp(-0.8*n);
y = randn(1,N)+x;
n1 = length(x)-1;
r = conv(y,fliplr(y));
k = (-n1):n1;
stem(k,r);
xlabel('Lag index');ylabel('Amplitude');
gtext('r_{yy}[n]');
[Stem plot of r_yy[n] versus lag index.]

M2.14 (a) n=0:10000;
phi = 2*pi*rand(1,length(n));
A = 4*rand(1,length(n));
x = A.*cos(0.06*pi*n + phi);
stem(n(1:100),x(1:100));%axis([0 50 -4 4]);
xlabel('Time index n');ylabel('Amplitude');
mean = sum(x)/length(x)
var = sum((x - mean).^2)/length(x)

[Stem plots of the generated random sinusoidal sequence x[n], 0 ≤ n ≤ 100.]

mean =
0.00491782302432
var =
2.68081196077671
From Example 2.44, we note that the mean = 0 and variance = 8/3 = 2.66667.
M2.15 n = 0:1000;
z = 2*rand(1,length(n));
y = ones(1,length(n)); x = z - y;
mean = mean(x)
var = sum((x - mean).^2)/length(x)

mean =
0.00102235365812
var =
0.34210530830959

Using Eqs. (2.129) and (2.130) we get m_X = (1 + (−1))/2 = 0 and σ_X^2 = (1 − (−1))^2/12 = 1/3. It should
be noted that the values of the mean and the variance computed using the above MATLAB
program get closer to the theoretical values if the length is increased. For example, for a length of
100001, the values are
mean =
9.122420593670135e-04
var =
0.33486888516819

Chapter 3 (2e)

3.1 X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}, where x[n] is a real sequence. Therefore,
X_re(e^{jω}) = Re{Σ_{n=−∞}^{∞} x[n] e^{−jωn}} = Σ_{n=−∞}^{∞} x[n] Re{e^{−jωn}} = Σ_{n=−∞}^{∞} x[n] cos(ωn), and
X_im(e^{jω}) = Im{Σ_{n=−∞}^{∞} x[n] e^{−jωn}} = Σ_{n=−∞}^{∞} x[n] Im{e^{−jωn}} = −Σ_{n=−∞}^{∞} x[n] sin(ωn). Since cos(ωn) and
sin(ωn) are, respectively, even and odd functions of ω, X_re(e^{jω}) is an even function of ω,
and X_im(e^{jω}) is an odd function of ω.
|X(e^{jω})| = √(X_re^2(e^{jω}) + X_im^2(e^{jω})). Now, X_re^2(e^{jω}) is the square of an even function and
X_im^2(e^{jω}) is the square of an odd function; they are both even functions of ω. Hence,
|X(e^{jω})| is an even function of ω.
arg{X(e^{jω})} = tan^{−1}(X_im(e^{jω})/X_re(e^{jω})). The argument is the quotient of an odd function and an even
function, and is therefore an odd function. Hence, arg{X(e^{jω})} is an odd function of ω.
3.2 X(e^{jω}) = 1/(1 − αe^{–jω}) = (1 − αe^{jω})/((1 − αe^{–jω})(1 − αe^{jω})) = (1 − αcosω − jαsinω)/(1 − 2αcosω + α^2).
Therefore, X_re(e^{jω}) = (1 − αcosω)/(1 − 2αcosω + α^2) and X_im(e^{jω}) = –αsinω/(1 − 2αcosω + α^2).
|X(e^{jω})|^2 = X(e^{jω}) · X*(e^{jω}) = (1/(1 − αe^{–jω})) · (1/(1 − αe^{jω})) = 1/(1 − 2αcosω + α^2).
Therefore, |X(e^{jω})| = 1/√(1 − 2αcosω + α^2).
tan θ(ω) = X_im(e^{jω})/X_re(e^{jω}) = –αsinω/(1 − αcosω). Therefore, θ(ω) = tan^{−1}(–αsinω/(1 − αcosω)).

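These expressions are easy to verify numerically with freqz; the following sketch (α = 0.6 is an illustrative value) is not part of the original solution.
% Sketch: numerical check of the real part, imaginary part, magnitude, and phase derived above
alpha = 0.6;
w = linspace(-pi, pi, 512);
X = freqz(1, [1 -alpha], w);                      % X(e^{jw}) = 1/(1 - alpha e^{-jw})
den = 1 - 2*alpha*cos(w) + alpha^2;
max(abs(real(X) - (1 - alpha*cos(w))./den))       % ~ roundoff
max(abs(imag(X) + alpha*sin(w)./den))             % ~ roundoff
max(abs(abs(X) - 1./sqrt(den)))                   % ~ roundoff
max(abs(angle(X) - atan2(-alpha*sin(w), 1 - alpha*cos(w))))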
3.3 (a) y[n] = µ[n] = y_ev[n] + y_od[n], where
y_ev[n] = (1/2)(y[n] + y[−n]) = (1/2)(µ[n] + µ[−n]) = 1/2 + (1/2)δ[n], and
y_od[n] = (1/2)(y[n] − y[−n]) = (1/2)(µ[n] − µ[−n]) = µ[n] − 1/2 − (1/2)δ[n].
Now, Y_ev(e^{jω}) = (1/2)(2π Σ_{k=−∞}^{∞} δ(ω + 2πk)) + 1/2 = π Σ_{k=−∞}^{∞} δ(ω + 2πk) + 1/2.
Since y_od[n] = µ[n] − 1/2 − (1/2)δ[n], we also have y_od[n − 1] = µ[n − 1] − 1/2 − (1/2)δ[n − 1]. As a result,
y_od[n] − y_od[n − 1] = µ[n] − µ[n − 1] + (1/2)δ[n − 1] − (1/2)δ[n] = (1/2)δ[n] + (1/2)δ[n − 1]. Taking the DTFT of
both sides we then get Y_od(e^{jω}) − e^{−jω} Y_od(e^{jω}) = (1/2)(1 + e^{–jω}), or
Y_od(e^{jω}) = (1/2)(1 + e^{−jω})/(1 − e^{−jω}) = 1/(1 − e^{−jω}) − 1/2. Hence,
Y(e^{jω}) = Y_ev(e^{jω}) + Y_od(e^{jω}) = 1/(1 − e^{−jω}) + π Σ_{k=−∞}^{∞} δ(ω + 2πk).
(b) Let x[n] be the sequence with the DTFT X(e^{jω}) = Σ_{k=−∞}^{∞} 2πδ(ω − ω_o + 2πk). Its inverse
DTFT is then given by x[n] = (1/2π) ∫_{−π}^{π} 2πδ(ω − ω_o) e^{jωn} dω = e^{jω_o n}.
3.4 Let X(e^{jω}) = Σ_{k=−∞}^{∞} 2πδ(ω + 2πk). Its inverse DTFT is then given by
x[n] = (1/2π) ∫_{–π}^{π} 2πδ(ω) e^{jωn} dω = 2π/2π = 1.
3.5 (a) Let y[n] = g[n − n_o]. Then Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = Σ_{n=−∞}^{∞} g[n − n_o] e^{−jωn} = e^{−jωn_o} G(e^{jω}).
(b) Let h[n] = e^{jω_o n} g[n]. Then H(e^{jω}) = Σ_{n=−∞}^{∞} h[n] e^{−jωn} = Σ_{n=−∞}^{∞} e^{jω_o n} g[n] e^{−jωn}
= Σ_{n=−∞}^{∞} g[n] e^{−j(ω−ω_o)n} = G(e^{j(ω−ω_o)}).
(c) G(e^{jω}) = Σ_{n=−∞}^{∞} g[n] e^{−jωn}. Hence d(G(e^{jω}))/dω = Σ_{n=−∞}^{∞} −jn g[n] e^{−jωn}.
Therefore, j d(G(e^{jω}))/dω = Σ_{n=−∞}^{∞} n g[n] e^{−jωn}. Thus the DTFT of n g[n] is j d(G(e^{jω}))/dω.
(d) y[n] = g[n] * h[n] = Σ_{k=−∞}^{∞} g[k] h[n − k]. Hence Y(e^{jω}) = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} g[k] h[n − k] e^{−jωn}
= Σ_{k=−∞}^{∞} g[k] H(e^{jω}) e^{−jωk} = H(e^{jω}) Σ_{k=−∞}^{∞} g[k] e^{−jωk} = H(e^{jω}) G(e^{jω}).
(e) y[n] = g[n] h[n]. Hence Y(e^{jω}) = Σ_{n=−∞}^{∞} g[n] h[n] e^{−jωn}.
Since g[n] = (1/2π) ∫_{−π}^{π} G(e^{jθ}) e^{jθn} dθ, we can rewrite the above DTFT as
Y(e^{jω}) = (1/2π) Σ_{n=−∞}^{∞} ∫_{−π}^{π} G(e^{jθ}) e^{jθn} h[n] e^{−jωn} dθ = (1/2π) ∫_{−π}^{π} G(e^{jθ}) Σ_{n=−∞}^{∞} h[n] e^{−j(ω−θ)n} dθ
= (1/2π) ∫_{−π}^{π} G(e^{jθ}) H(e^{j(ω−θ)}) dθ.
(f) Σ_{n=−∞}^{∞} g[n] h*[n] = Σ_{n=−∞}^{∞} g[n] ((1/2π) ∫_{−π}^{π} H*(e^{jω}) e^{−jωn} dω)
= (1/2π) ∫_{−π}^{π} H*(e^{jω}) (Σ_{n=−∞}^{∞} g[n] e^{−jωn}) dω = (1/2π) ∫_{−π}^{π} G(e^{jω}) H*(e^{jω}) dω.
3.6 DTFT{x[n]} = X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.
(a) DTFT{x[−n]} = Σ_{n=−∞}^{∞} x[−n] e^{−jωn} = Σ_{m=−∞}^{∞} x[m] e^{jωm} = X(e^{−jω}).
(b) DTFT{x*[−n]} = Σ_{n=−∞}^{∞} x*[−n] e^{−jωn} = (Σ_{n=−∞}^{∞} x[−n] e^{jωn})* = X*(e^{jω}), using the result of Part (a).
(c) DTFT{Re(x[n])} = DTFT{(1/2)(x[n] + x*[n])} = (1/2)[X(e^{jω}) + X*(e^{−jω})], using the result of Part (b).
(d) DTFT{j Im(x[n])} = DTFT{(1/2)(x[n] − x*[n])} = (1/2)[X(e^{jω}) − X*(e^{−jω})].
(e) DTFT{x_cs[n]} = DTFT{(1/2)(x[n] + x*[−n])} = (1/2)[X(e^{jω}) + X*(e^{jω})] = Re{X(e^{jω})} = X_re(e^{jω}).
(f) DTFT{x_ca[n]} = DTFT{(1/2)(x[n] − x*[−n])} = (1/2)[X(e^{jω}) − X*(e^{jω})] = jX_im(e^{jω}).
3.7 X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}, where x[n] is a real sequence. For a real sequence,
X(e^{jω}) = X*(e^{–jω}), and the inverse DTFT of X*(e^{–jω}) is x*[−n].
(a) X_re(e^{jω}) = (1/2)[X(e^{jω}) + X*(e^{–jω})]. Therefore, the inverse DTFT of X_re(e^{jω}) is
(1/2){x[n] + x*[−n]} = (1/2){x[n] + x[−n]} = x_ev[n].
(b) jX_im(e^{jω}) = (1/2)[X(e^{jω}) − X*(e^{–jω})]. Therefore, the inverse DTFT of jX_im(e^{jω}) is
(1/2){x[n] − x*[−n]} = (1/2){x[n] − x[−n]} = x_od[n].
3.8 (a) X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}. Therefore, X*(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{jωn} = X(e^{–jω}), and hence
X*(e^{–jω}) = Σ_{n=−∞}^{∞} x[n] e^{–jωn} = X(e^{jω}).
(b) From Part (a), X(e^{jω}) = X*(e^{–jω}). Therefore, X_re(e^{jω}) = X_re(e^{–jω}).
(c) From Part (a), X(e^{jω}) = X*(e^{–jω}). Therefore, X_im(e^{jω}) = −X_im(e^{–jω}).
(d) |X(e^{jω})| = √(X_re^2(e^{jω}) + X_im^2(e^{jω})) = √(X_re^2(e^{–jω}) + X_im^2(e^{–jω})) = |X(e^{−jω})|.
(e) arg X(e^{jω}) = tan^{−1}(X_im(e^{jω})/X_re(e^{jω})) = −tan^{−1}(X_im(e^{–jω})/X_re(e^{–jω})) = −arg X(e^{–jω}).

3.9 x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω. Hence, x*[n] = (1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{–jωn} dω.
(a) Since x[n] is real and even, X(e^{jω}) = X*(e^{jω}). Thus,
x[–n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{−jωn} dω.
Therefore, x[n] = (1/2)(x[n] + x[−n]) = (1/2π) ∫_{−π}^{π} X(e^{jω}) cos(ωn) dω.
Now, x[n] being even, X(e^{jω}) = X(e^{–jω}). As a result, the term inside the above integral is even,
and hence x[n] = (1/π) ∫_{0}^{π} X(e^{jω}) cos(ωn) dω.
(b) Since x[n] is odd, x[n] = –x[–n].
Thus x[n] = (1/2)(x[n] − x[−n]) = (j/2π) ∫_{−π}^{π} X(e^{jω}) sin(ωn) dω. Again, since x[n] = –x[–n],
X(e^{jω}) = −X(e^{–jω}). The term inside the integral is even, hence x[n] = (j/π) ∫_{0}^{π} X(e^{jω}) sin(ωn) dω.
 e jω 0n e jφ + e − jω 0 n e −jφ 
 µ[n]
3.10 x[n] = Aα^n cos(ω_o n + φ)µ[n] = Aα^n ((e^{jω_o n} e^{jφ} + e^{−jω_o n} e^{−jφ})/2) µ[n]
= (A/2) e^{jφ} (αe^{jω_o})^n µ[n] + (A/2) e^{−jφ} (αe^{−jω_o})^n µ[n]. Therefore,
X(e^{jω}) = (A/2) e^{jφ} · 1/(1 − αe^{−jω} e^{jω_o}) + (A/2) e^{−jφ} · 1/(1 − αe^{−jω} e^{−jω_o}).
3.11 Let x[n] = α^n µ[n], |α| < 1. From Table 3.1, DTFT{x[n]} = X(e^{jω}) = 1/(1 − αe^{–jω}).
(a) X_1(e^{jω}) = Σ_{n=−∞}^{∞} α^n µ[n + 1] e^{−jωn} = Σ_{n=−1}^{∞} α^n e^{−jωn} = α^{−1}e^{jω} + Σ_{n=0}^{∞} α^n e^{−jωn}
= α^{−1}e^{jω} + 1/(1 − αe^{–jω}) = e^{jω}/(α(1 − αe^{–jω})).
(b) x_2[n] = nα^n µ[n]. Note that x_2[n] = n x[n]. Hence, using the differentiation-in-frequency
property in Table 3.2, we get X_2(e^{jω}) = j dX(e^{jω})/dω = αe^{−jω}/(1 − αe^{−jω})^2.
(c) x_3[n] = α^{|n|} for |n| ≤ M and 0 otherwise. Then, X_3(e^{jω}) = Σ_{n=0}^{M} α^n e^{−jωn} + Σ_{n=−M}^{−1} α^{−n} e^{−jωn}
= (1 − α^{M+1} e^{−jω(M+1)})/(1 − αe^{−jω}) + α^M e^{jωM} (1 − α^{−M} e^{−jωM})/(1 − α^{−1} e^{−jω}).
(d) X_4(e^{jω}) = Σ_{n=3}^{∞} α^n e^{–jωn} = Σ_{n=0}^{∞} α^n e^{–jωn} − 1 − αe^{−jω} − α^2 e^{−j2ω}
= 1/(1 − αe^{−jω}) − 1 − αe^{−jω} − α^2 e^{−j2ω}.
(e) X_5(e^{jω}) = Σ_{n=−2}^{∞} nα^n e^{–jωn} = Σ_{n=0}^{∞} nα^n e^{–jωn} − 2α^{−2} e^{j2ω} − α^{−1} e^{jω}
= αe^{−jω}/(1 − αe^{−jω})^2 − 2α^{−2} e^{j2ω} − α^{−1} e^{jω}.
(f) X_6(e^{jω}) = Σ_{n=−∞}^{−1} α^n e^{–jωn} = Σ_{m=1}^{∞} α^{−m} e^{jωm} = Σ_{m=0}^{∞} α^{−m} e^{jωm} − 1
= 1/(1 − α^{−1} e^{jω}) − 1 = e^{jω}/(α − e^{jω}).
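A partial-sum check of the result in part (b) of Problem 3.11 (a sketch; α = 0.5 and the single test frequency are illustrative choices, not part of the original solution):
% Sketch: DTFT{ n alpha^n mu[n] } = alpha e^{-jw}/(1 - alpha e^{-jw})^2, checked by a truncated sum
alpha = 0.5;  w = 0.7;
n = 0:200;                              % 200 terms are ample for alpha = 0.5
X2_sum = sum(n.*alpha.^n.*exp(-1j*w*n));
X2_closed = alpha*exp(-1j*w)/(1 - alpha*exp(-1j*w))^2;
abs(X2_sum - X2_closed)                 % should be negligibly small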
3.12 (a) y_1[n] = 1 for −N ≤ n ≤ N and 0 otherwise. Then
Y_1(e^{jω}) = Σ_{n=−N}^{N} e^{−jωn} = e^{jωN} (1 − e^{−jω(2N+1)})/(1 − e^{−jω}) = sin(ω(N + 1/2))/sin(ω/2).
(b) y_2[n] = 1 − |n|/N for −N ≤ n ≤ N and 0 otherwise. Now y_2[n] = y_0[n] * y_0[n], where
y_0[n] = 1 for −N/2 ≤ n ≤ N/2 and 0 otherwise.
Thus Y_2(e^{jω}) = Y_0^2(e^{jω}) = sin^2(ω(N + 1)/2)/sin^2(ω/2).
(c) y_3[n] = cos(πn/2N) for −N ≤ n ≤ N and 0 otherwise. Then,
Y_3(e^{jω}) = (1/2) Σ_{n=−N}^{N} e^{j(πn/2N)} e^{−jωn} + (1/2) Σ_{n=−N}^{N} e^{−j(πn/2N)} e^{−jωn}
= (1/2) Σ_{n=−N}^{N} e^{−j(ω − π/2N)n} + (1/2) Σ_{n=−N}^{N} e^{−j(ω + π/2N)n}
= (1/2) sin((ω − π/2N)(N + 1/2))/sin((ω − π/2N)/2) + (1/2) sin((ω + π/2N)(N + 1/2))/sin((ω + π/2N)/2).
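The closed-form DTFT of the rectangular sequence in part (a) can be checked pointwise (a sketch; N = 6 is only an illustrative value):
% Sketch: Y1(e^{jw}) = sin(w(N + 1/2))/sin(w/2) for y1[n] = 1, -N <= n <= N
N = 6;
w = linspace(0.01, pi, 400);            % avoid w = 0, where the ratio has the limiting value 2N+1
Y1_sum = zeros(size(w));
for n = -N:N
    Y1_sum = Y1_sum + exp(-1j*w*n);     % direct evaluation of the DTFT sum
end
max(abs(Y1_sum - sin(w*(N + 0.5))./sin(w/2)))   % ~ roundoff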
3.13 Denote x_m[n] = ((n + m − 1)!/(n!(m − 1)!)) α^n µ[n], |α| < 1. We shall prove by induction that
DTFT{x_m[n]} = X_m(e^{jω}) = 1/(1 − αe^{–jω})^m. From Table 3.1, it follows that the result holds for m = 1.
Let m = 2. Then x_2[n] = ((n + 1)!/n!) α^n µ[n] = (n + 1) x_1[n] = n x_1[n] + x_1[n]. Therefore,
X_2(e^{jω}) = αe^{–jω}/(1 − αe^{–jω})^2 + 1/(1 − αe^{–jω}) = 1/(1 − αe^{–jω})^2, using the differentiation-in-frequency
property of Table 3.2.
Now assume that the result holds for m. Consider next x_{m+1}[n] = ((n + m)!/(n! m!)) α^n µ[n]
= ((n + m)/m) ((n + m − 1)!/(n!(m − 1)!)) α^n µ[n] = ((n + m)/m) x_m[n] = (1/m)·n·x_m[n] + x_m[n]. Hence,
X_{m+1}(e^{jω}) = (j/m) d/dω [1/(1 − αe^{–jω})^m] + 1/(1 − αe^{–jω})^m = αe^{–jω}/(1 − αe^{–jω})^{m+1} + 1/(1 − αe^{–jω})^m
= 1/(1 − αe^{–jω})^{m+1}.
3.14 (a) X_a(e^{jω}) = Σ_{k=−∞}^{∞} δ(ω + 2πk). Hence, x[n] = (1/2π) ∫_{−π}^{π} δ(ω) e^{jωn} dω = 1/2π.
(b) X_b(e^{jω}) = (1 − e^{−jω(N+1)})/(1 − e^{−jω}) = Σ_{n=0}^{N} e^{−jωn}. Hence, x[n] = 1 for 0 ≤ n ≤ N and 0 otherwise.
(c) X_c(e^{jω}) = 1 + 2 Σ_{l=1}^{N} cos(ωl) = Σ_{l=−N}^{N} e^{−jωl}. Hence x[n] = 1 for −N ≤ n ≤ N and 0 otherwise.
(d) X_d(e^{jω}) = −jαe^{−jω}/(1 − αe^{−jω})^2, |α| < 1. Now we can rewrite X_d(e^{jω}) as
X_d(e^{jω}) = d/dω [1/(1 − αe^{−jω})] = dX_o(e^{jω})/dω, where X_o(e^{jω}) = 1/(1 − αe^{−jω}).
Now x_o[n] = α^n µ[n]. Hence, from Table 3.2, x_d[n] = −jnα^n µ[n].
3.15 (a) H_1(e^{jω}) = 1 + 2((e^{jω} + e^{–jω})/2) + 3((e^{j2ω} + e^{–j2ω})/2) = 1 + e^{jω} + e^{–jω} + (3/2)e^{j2ω} + (3/2)e^{–j2ω}.
Hence, the inverse DTFT of H_1(e^{jω}) is a length-5 sequence given by
h_1[n] = [1.5 1 1 1 1.5], −2 ≤ n ≤ 2.
(b) H_2(e^{jω}) = (3 + 2((e^{jω} + e^{–jω})/2) + 4((e^{j2ω} + e^{–j2ω})/2)) · ((e^{jω/2} + e^{–jω/2})/2) · e^{–jω/2}
= (1/2)(2e^{j2ω} + 3e^{jω} + 4 + 4e^{–jω} + 3e^{–j2ω} + 2e^{–j3ω}). Hence, the inverse DTFT of H_2(e^{jω}) is a length-6
sequence given by h_2[n] = [1 1.5 2 2 1.5 1], −2 ≤ n ≤ 3.
(c) H_3(e^{jω}) = j(3 + 4((e^{jω} + e^{–jω})/2) + 2((e^{j2ω} + e^{–j2ω})/2)) · ((e^{jω} − e^{–jω})/(2j))
= (1/2)(e^{j3ω} + 2e^{j2ω} + 2e^{jω} + 0 − 2e^{–jω} − 2e^{–j2ω} − e^{–j3ω}). Hence, the inverse DTFT of H_3(e^{jω}) is a
length-7 sequence given by h_3[n] = [0.5 1 1 0 −1 −1 −0.5], −3 ≤ n ≤ 3.
(d) H_4(e^{jω}) = j(4 + 2((e^{jω} + e^{–jω})/2) + 3((e^{j2ω} + e^{–j2ω})/2)) · ((e^{jω/2} − e^{–jω/2})/(2j)) · e^{jω/2}
= (1/2)((3/2)e^{j3ω} − (1/2)e^{j2ω} + 3e^{jω} − 3 + (1/2)e^{–jω} − (3/2)e^{–j2ω}). Hence, the inverse DTFT of H_4(e^{jω})
is a length-6 sequence given by h_4[n] = [0.75 −0.25 1.5 −1.5 0.25 −0.75], −3 ≤ n ≤ 2.

3.16 (a) H_1(e^{jω}) = 1 + 2cosω + (3/2)(1 + cos2ω) = (3/4)e^{j2ω} + e^{jω} + 5/2 + e^{–jω} + (3/4)e^{–j2ω}.
Hence, the inverse DTFT of H_1(e^{jω}) is a length-5 sequence given by
h_1[n] = [0.75 1 2.5 1 0.75], −2 ≤ n ≤ 2.
(b) H_2(e^{jω}) = [2 + 3cosω + 2(1 + cos2ω)] cos(ω/2) e^{jω/2}
= (1/2)e^{j3ω} + (5/4)e^{j2ω} + (11/4)e^{jω} + 11/4 + (5/4)e^{–jω} + (1/2)e^{–j2ω}. Hence, the inverse DTFT of H_2(e^{jω})
is a length-6 sequence given by h_2[n] = [0.5 1.25 2.75 2.75 1.25 0.5], −3 ≤ n ≤ 2.
(c) H_3(e^{jω}) = j[3 + 4cosω + (1 + cos2ω)] sin(ω)
= (1/4)e^{j3ω} + e^{j2ω} + (7/4)e^{jω} + 0 − (7/4)e^{–jω} − e^{–j2ω} − (1/4)e^{–j3ω}. Hence, the inverse DTFT of H_3(e^{jω})
is a length-7 sequence given by h_3[n] = [0.25 1 1.75 0 −1.75 −1 −0.25], −3 ≤ n ≤ 3.
(d) H_4(e^{jω}) = j[4 + 2cosω + (3/2)(1 + cos2ω)] sin(ω/2) e^{–jω/2}
= (1/2)((3/4)e^{j2ω} + (1/4)e^{jω} + 9/2 − (9/2)e^{–jω} − (1/4)e^{–j2ω} − (3/4)e^{–j3ω}). Hence, the inverse DTFT of
H_4(e^{jω}) is a length-6 sequence given by h_4[n] = [3/8 1/8 9/4 −9/4 −1/8 −3/8], −2 ≤ n ≤ 3.

3.17 Y(e^{jω}) = X(e^{j3ω}) = X((e^{jω})^3). Now, X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}. Hence,
Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = X((e^{jω})^3) = Σ_{n=−∞}^{∞} x[n] (e^{−jωn})^3 = Σ_{m=−∞}^{∞} x[m/3] e^{−jωm}.
Therefore, y[n] = x[n/3] for n = 0, ±3, ±6, …, and y[n] = 0 otherwise.

3.18 X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}. Thus,
X(e^{jω/2}) = Σ_{n=−∞}^{∞} x[n] e^{−j(ω/2)n}, and X(−e^{jω/2}) = Σ_{n=−∞}^{∞} x[n] (−1)^n e^{−j(ω/2)n}. Thus,
Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = (1/2)[X(e^{jω/2}) + X(−e^{jω/2})] = (1/2) Σ_{n=−∞}^{∞} (x[n] + x[n](−1)^n) e^{−j(ω/2)n}.
Now, (1/2)(x[n] + x[n](−1)^n) = x[n] for n even and 0 for n odd, so only the even-indexed samples of
x[n] contribute, and Y(e^{jω}) = Σ_{m=−∞}^{∞} x[2m] e^{−jωm}, i.e., y[n] = x[2n].
3.19 From Table 3.3, we observe that even sequences have real-valued DTFTs and odd sequences
have imaginary-valued DTFTs.
(a) Since |−n| = |n|, x_1[n] is an even sequence with a real-valued DTFT.
(b) Since (−n)^3 = −n^3, x_2[n] is an odd sequence with an imaginary-valued DTFT.
Reference for z and inverse z transform
 
Signals & Systems PPT
Signals & Systems PPTSignals & Systems PPT
Signals & Systems PPT
 
MOSFET and Short channel effects
MOSFET and Short channel effectsMOSFET and Short channel effects
MOSFET and Short channel effects
 
Chapter 10
Chapter 10Chapter 10
Chapter 10
 
Sampling Theorem
Sampling TheoremSampling Theorem
Sampling Theorem
 
專題製作發想與報告撰寫技巧
專題製作發想與報告撰寫技巧專題製作發想與報告撰寫技巧
專題製作發想與報告撰寫技巧
 
3.Properties of signals
3.Properties of signals3.Properties of signals
3.Properties of signals
 

Similar to Digital signal processing (2nd ed) (mitra) solution manual

Errata Factory Physics Second Edition
Errata Factory Physics Second EditionErrata Factory Physics Second Edition
Errata Factory Physics Second EditionPriscila Pazmiño
 
The best sextic approximation of hyperbola with order twelve
The best sextic approximation of hyperbola with order twelveThe best sextic approximation of hyperbola with order twelve
The best sextic approximation of hyperbola with order twelveIJECEIAES
 
Additional mathematics
Additional mathematicsAdditional mathematics
Additional mathematicsgeraldsiew
 
INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)
INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)
INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)ENGR. KADIRI K. O. Ph.D
 
Theories and Engineering Technics of 2D-to-3D Back-Projection Problem
Theories and Engineering Technics of 2D-to-3D Back-Projection ProblemTheories and Engineering Technics of 2D-to-3D Back-Projection Problem
Theories and Engineering Technics of 2D-to-3D Back-Projection ProblemSeongcheol Baek
 
Application of recursive perturbation approach for multimodal optimization
Application of recursive perturbation approach for multimodal optimizationApplication of recursive perturbation approach for multimodal optimization
Application of recursive perturbation approach for multimodal optimizationPranamesh Chakraborty
 
Rabotna tetratka 5 odd
Rabotna tetratka 5 oddRabotna tetratka 5 odd
Rabotna tetratka 5 oddMira Trajkoska
 
Errata of Seismic analysis of structures by T.K. Datta
Errata of Seismic analysis of structures by T.K. DattaErrata of Seismic analysis of structures by T.K. Datta
Errata of Seismic analysis of structures by T.K. Dattatushardatta
 
Lecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdf
Lecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdfLecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdf
Lecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdfUmerKhan147799
 
Applied mathematics for complex engineering
Applied mathematics for complex engineeringApplied mathematics for complex engineering
Applied mathematics for complex engineeringSahl Buhary
 
Solutions completo elementos de maquinas de shigley 8th edition
Solutions completo elementos de maquinas de shigley 8th editionSolutions completo elementos de maquinas de shigley 8th edition
Solutions completo elementos de maquinas de shigley 8th editionfercrotti
 

Similar to Digital signal processing (2nd ed) (mitra) solution manual (20)

20110326202335912
2011032620233591220110326202335912
20110326202335912
 
Errata Factory Physics Second Edition
Errata Factory Physics Second EditionErrata Factory Physics Second Edition
Errata Factory Physics Second Edition
 
The best sextic approximation of hyperbola with order twelve
The best sextic approximation of hyperbola with order twelveThe best sextic approximation of hyperbola with order twelve
The best sextic approximation of hyperbola with order twelve
 
2013-June: 3rd Semester E & C Question Papers
2013-June: 3rd Semester E & C Question Papers2013-June: 3rd Semester E & C Question Papers
2013-June: 3rd Semester E & C Question Papers
 
3rd Semester Electronic and Communication Engineering (2013-June) Question P...
3rd  Semester Electronic and Communication Engineering (2013-June) Question P...3rd  Semester Electronic and Communication Engineering (2013-June) Question P...
3rd Semester Electronic and Communication Engineering (2013-June) Question P...
 
Steven Duplij, "Polyadic rings of p-adic integers"
Steven Duplij, "Polyadic rings of p-adic integers"Steven Duplij, "Polyadic rings of p-adic integers"
Steven Duplij, "Polyadic rings of p-adic integers"
 
Negative numbers
Negative numbersNegative numbers
Negative numbers
 
Indices
IndicesIndices
Indices
 
Additional mathematics
Additional mathematicsAdditional mathematics
Additional mathematics
 
INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)
INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)
INTRODUCTION TO CIRCUIT THEORY (A PRACTICAL APPROACH)
 
Theories and Engineering Technics of 2D-to-3D Back-Projection Problem
Theories and Engineering Technics of 2D-to-3D Back-Projection ProblemTheories and Engineering Technics of 2D-to-3D Back-Projection Problem
Theories and Engineering Technics of 2D-to-3D Back-Projection Problem
 
Application of recursive perturbation approach for multimodal optimization
Application of recursive perturbation approach for multimodal optimizationApplication of recursive perturbation approach for multimodal optimization
Application of recursive perturbation approach for multimodal optimization
 
Rabotna tetratka 5 odd
Rabotna tetratka 5 oddRabotna tetratka 5 odd
Rabotna tetratka 5 odd
 
Errata of Seismic analysis of structures by T.K. Datta
Errata of Seismic analysis of structures by T.K. DattaErrata of Seismic analysis of structures by T.K. Datta
Errata of Seismic analysis of structures by T.K. Datta
 
Lecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdf
Lecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdfLecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdf
Lecture6 Chapter3- Function Simplification using Quine-MacCluskey Method.pdf
 
Solid mechanics 1 BDA 10903
Solid mechanics 1 BDA 10903Solid mechanics 1 BDA 10903
Solid mechanics 1 BDA 10903
 
Applied mathematics for complex engineering
Applied mathematics for complex engineeringApplied mathematics for complex engineering
Applied mathematics for complex engineering
 
Sol cap 10 edicion 8
Sol cap 10   edicion 8Sol cap 10   edicion 8
Sol cap 10 edicion 8
 
Shi20396 ch18
Shi20396 ch18Shi20396 ch18
Shi20396 ch18
 
Solutions completo elementos de maquinas de shigley 8th edition
Solutions completo elementos de maquinas de shigley 8th editionSolutions completo elementos de maquinas de shigley 8th edition
Solutions completo elementos de maquinas de shigley 8th edition
 

Recently uploaded

USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...Postal Advocate Inc.
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Celine George
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Celine George
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxChelloAnnAsuncion2
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxAnupkumar Sharma
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parentsnavabharathschool99
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfphamnguyenenglishnb
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxAshokKarra1
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 

Recently uploaded (20)

USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
USPS® Forced Meter Migration - How to Know if Your Postage Meter Will Soon be...
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptxFINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptx
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 

Digital signal processing (2nd ed) (mitra) solution manual

21. Page 187, Problem 3.53: Replace "N-point DFT" with "MN-point DFT", replace
"0 ≤ k ≤ N − 1" with "0 ≤ k ≤ MN − 1", and replace "x[< n >_M]" with "x[< n >_N]".
22. Page 191, Problem 3.83: Replace "lim_{n→∞}" with "lim_{z→∞}".
23. Page 193, Problem 3.100: Replace "P(z)/D'(z)" with "−λ_l P(z)/D'(z)".
24. Page 194, Problem 3.106, Parts (b) and (d): Replace "|z| < |α|" with "|z| > 1/|α|".
25. Page 199, Problem 3.128: Replace "(0.6)µ[n]" with "(0.6)^n µ[n]", and replace
"(0.8)µ[n]" with "(0.8)^n µ[n]".
26. Page 199, Exercise M3.5: Delete "following".

Chapter 4

1. Page 217, first line: Replace "ξ_N" with "ξ_M".
2. Page 230, line 2 below Eq. (4.88): Replace "θ_g(ω)" with "θ(ω)".
3. Page 236, line 2 below Eq. (4.109): Replace "decreases" with "increases".
4. Page 246, line 4 below Eq. (4.132): Replace "θ_c(e^{jω})" with "θ_c(ω)".
5. Page 265, Eq. (4.202): Replace "1, 2, …, 3" with "1, 2, 3".
6. Page 279, Problem 4.18: Replace "H(e^{j0})" with "H(e^{jπ/4})".
7. Page 286, Problem 4.71: Replace "z_3 = −j0.3" with "z_3 = −0.3".
8. Page 291, Problem 4.102: Replace
"H(z) = (0.4 + 0.5z^{−1} + 1.2z^{−2} + 1.2z^{−3} + 0.5z^{−4} + 0.4z^{−5})/(1 + 0.9z^{−2} + 0.2z^{−4})" with
"H(z) = (0.1 + 0.5z^{−1} + 0.45z^{−2} + 0.45z^{−3} + 0.5z^{−4} + 0.1z^{−5})/(1 + 0.9z^{−2} + 0.2z^{−4})".
9. Page 295, Problem 4.125: Insert a comma "," before "the autocorrelation".

Chapter 5

1. Page 302, line 7 below Eq. (5.9): Replace "response" with "spectrum".
2. Page 309, Example 5.2, line 4: Replace "10 Hz to 20 Hz" with "5 Hz to 10 Hz". Line 6: Replace
"5k + 15" with "5k + 5". Line 7: Replace "10k + 6" with "5k + 3", and replace "10k − 6" with "5(k+1) − 3".
3. Page 311, Eq. (5.24): Replace "G_a(jΩ − 2k(∆Ω))" with "G_a(j(Ω − 2k(∆Ω)))".
4. Page 318, Eq. (5.40): Replace "H(s)" with "H_a(s)", and replace "" with "".
5. Page 321, Eq. (5.54): Replace "" with "".
6. Page 333, first line: Replace "Ω̂_p1" with "Ω_p1", and "Ω̂_p2" with "Ω_p2".
7. Page 349, line 9 from bottom: Replace "1/T" with "2π/T".
8. Page 354, Problem 5.8: Interchange "Ω_1" and "Ω_2".
9. Page 355, Problem 5.23: Replace "1 Hz" in the first line with "0.2 Hz".
10. Page 355, Problem 5.24: Replace "1 Hz" in the first line with "0.16 Hz".

Chapter 6

1. Page 394, line 4 from bottom: Replace "alpha1" with "fliplr(alpha1)".
2. Page 413, Problem 6.16: Replace
"H(z) = b_0 + b_1 z^{−1} + b_2 z^{−1}(z^{−1} + b_3 z^{−1}(1 + … + b_{N−1} z^{−1}(1 + b_N z^{−1})))" with
"H(z) = b_0 + b_1 z^{−1}(1 + b_2 z^{−1}(z^{−1} + b_3 z^{−1}(1 + … + b_{N−1} z^{−1}(1 + b_N z^{−1}))))".
3. Page 415, Problem 6.27: Replace "H(z) = (3z^2 + 18.5z + 17.5)/((2z + 1)(z + 2))" with
"H(z) = (3z^2 + 18.5z + 17.5)/((z + 0.5)(z + 2))".
4. Page 415, Problem 6.28: Replace the multiplier value "0.4" in Figure P6.12 with "−9.75".
5. Page 421, Exercise M6.1: Replace "−7.6185z^{−3}" with "−71.6185z^{−3}".
6. Page 422, Exercise M6.4: Replace "Program 6_3" with "Program 6_4".
7. Page 422, Exercise M6.5: Replace "Program 6_3" with "Program 6_4".
8. Page 422, Exercise M6.6: Replace "Program 6_4" with "Program 6_6".
Chapter 7

1. Page 426, Eq. (7.11): Replace "h[n − N]" with "h[N − n]".
2. Page 436, line 14 from top: Replace "(5.32b)" with "(5.32a)".
3. Page 438, line 17 from bottom: Replace "(5.60)" with "(5.59)".
4. Page 439, line 7 from bottom: Replace "50" with "40".
5. Page 442, line below Eq. (7.42): Replace "F^{−1}(ẑ)" with "1/F(ẑ)".
6. Page 442, line above Eq. (7.43): Replace "F^{−1}(ẑ)" with "F(ẑ)".
7. Page 442, Eq. (7.43): Replace it with "F(ẑ) = ± ∏_{l=1}^{L} (ẑ − α_l)/(1 − α_l* ẑ)".
8. Page 442, line below Eq. (7.43): Replace "where α_l" with "where α_l".
9. Page 446, Eq. (7.51): Replace "β(1 − α)" with "β(1 + α)".
10. Page 448, Eq. (7.58): Replace "ω_c < ω ≤ π" with "ω_c < |ω| ≤ π".
11. Page 453, line 6 from bottom: Replace "ω_p − ω_s" with "ω_s − ω_p".
12. Page 457, line 8 from bottom: Replace "length" with "order".
13. Page 465, line 5 from top: Add "at ω = ω_i" before "or in".
14. Page 500, Problem 7.15: Replace "2 kHz" in the second line with "0.5 kHz".
15. Page 502, Problem 7.22: Replace Eq. (7.158) with "H_a(s) = Bs/(s^2 + Bs + Ω_0^2)".
16. Page 502, Problem 7.25: Replace "7.2" with "7.1".
17. Page 504, Problem 7.41: Replace "H_int(e^{jω}) = e^{−jω}" in Eq. (7.161) with "H_int(e^{jω}) = 1/(jω)".
18. Page 505, Problem 7.46: Replace "16" with "9" in the third line from bottom.
19. Page 505, Problem 7.49: Replace "16" with "9" in the second line.
20. Page 510, Exercise M7.3: Replace "Program 7_5" with "Program 7_3".
21. Page 510, Exercise M7.4: Replace "Program 7_7" with "M-file impinvar".
22. Page 510, Exercise M7.6: Replace "Program 7_4" with "Program 7_2".
23. Page 511, Exercise M7.16: Replace "length" with "order".
24. Page 512, Exercise M7.24: Replace "length" with "order".

Chapter 8

1. Page 518, line 4 below Eq. (6.7): Delete "set" before "digital".
2. Page 540, line 3 above Eq. (8.39): Replace "G[k]" with "X_0[k]" and "H[k]" with "X_1[k]".

Chapter 9

1. Page 595, line 2 below Eq. (9.30c): Replace "this vector has" with "these vectors have".
2. Page 601, line 2 below Eq. (9.63): Replace "2^b" with "2^{−b}".
3. Page 651, Problem 9.10, line 2 from bottom: Replace "(a_k z + 1)/(1 + a_k z)" with "(a_k z + 1)/(z + a_k)".
4. Page 653, Problem 9.15, line 7: Replace "two cascade" with "four cascade".
5. Page 653, Problem 9.17: Replace "A_2(z) = (d_1 d_2 + d_1 z^{−1} + z^{−2})/(1 + d_1 z^{−1} + d_1 d_2 z^{−2})" with
"A_2(z) = (d_2 + d_1 z^{−1} + z^{−2})/(1 + d_1 z^{−1} + d_2 z^{−2})".
6. Page 654, Problem 9.27: Replace "structure" with "structures".
7. Page 658, Exercise M9.9: Replace "alpha" with "α".

Chapter 10

1. Page 692, Eq. (10.57b): Replace "P_0(α_1) = 0.2469" with "P_0(α_1) = 0.7407".
2. Page 693, Eq. (10.58b): Replace "P_0(α_2) = −0.4321" with "P_0(α_2) = −1.2963".
3. Page 694, Figure 10.38(c): Replace "P_{−2}(α_0)" with "P_1(α_0)", "P_{−1}(α_0)" with "P_0(α_0)",
"P_0(α_0)" with "P_{−1}(α_0)", "P_1(α_0)" with "P_{−2}(α_0)", "P_{−2}(α_1)" with "P_1(α_1)",
"P_{−1}(α_1)" with "P_0(α_1)", "P_0(α_1)" with "P_{−1}(α_1)", "P_1(α_1)" with "P_{−2}(α_1)",
"P_{−2}(α_2)" with "P_1(α_2)", "P_{−1}(α_2)" with "P_0(α_2)", "P_0(α_2)" with "P_{−1}(α_2)", and
"P_{−1}(α_2)" with "P_{−2}(α_2)".
4. Page 741, Problem 10.13: Replace "2.5 kHz" with "1.25 kHz".
5. Page 741, Problem 10.20: Replace "∑_{i=0}^{N} z_i" with "∑_{i=0}^{N−1} z_i".
6. Page 743, Problem 10.28: Replace "half-band filter" with "lowpass half-band filter with a zero at z = −1".
7. Page 747, Problem 10.50: Interchange "Y_k" and "the output sequence y[n]".
8. Page 747, Problem 10.51: Replace the unit delays "z^{−1}" on the right-hand side of the structure of
Figure P10.8 with unit advance operators "z".
9. Page 749, Eq. (10.215): Replace "3H^2(z) − 2H^2(z)" with "z^{−2}[3H^2(z) − 2H^2(z)]".
10. Page 751, Exercise M10.9: Replace "60" with "61".
11. Page 751, Exercise M10.10: Replace the problem statement with "Design a fifth-order IIR half-band
Butterworth lowpass filter and realize it with 2 multipliers".
12. Page 751, Exercise M10.11: Replace the problem statement with "Design a seventh-order IIR half-band
Butterworth lowpass filter and realize it with 3 multipliers".

Chapter 11

1. Page 758, line 4 below Figure 11.2 caption: Replace "grid" with "grid;".
2. Page 830, Problem 11.5: Insert "g_a(t) = cos(200πt)" after "signal" and delete "= cos(200πn)".
3. Page 831, Problem 11.11: Replace "has to be a power-of-2" with "= 2^l, where l is an integer".
Chapter 2 (2e)

2.1 (a) u[n] = x[n] + y[n] = {3 5 1 −2 8 14 0}
(b) v[n] = x[n]·w[n] = {−15 −8 0 6 −20 0 2}
(c) s[n] = y[n] − w[n] = {5 3 −2 −9 9 9 −3}
(d) r[n] = 4.5y[n] = {0 31.5 4.5 −13.5 18 40.5 −9}

2.2 (a) From the figure shown below we obtain
[Figure: block diagram with input x[n], an adder forming v[n], a unit delay z^{−1} producing v[n−1], and
multipliers α, β, γ forming the output y[n].]
v[n] = x[n] + α v[n−1] and y[n] = β v[n−1] + γ v[n−1] = (β + γ) v[n−1].
Hence, v[n−1] = x[n−1] + α v[n−2] and y[n−1] = (β + γ) v[n−2]. Therefore,
y[n] = (β + γ) v[n−1] = (β + γ) x[n−1] + α(β + γ) v[n−2] = (β + γ) x[n−1] + α y[n−1].
(b) From the figure shown below we obtain
[Figure: chain of four unit delays z^{−1} producing x[n−1], x[n−2], x[n−3], x[n−4], with multipliers
α, β, γ feeding an adder that forms y[n].]
y[n] = γ x[n−2] + β(x[n−1] + x[n−3]) + α(x[n] + x[n−4]).
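The sample-by-sample operations of Problem 2.1 are easy to check numerically. The MATLAB fragment below
is a minimal sketch, not part of the original solution; the sequences x[n], y[n], and w[n] are the
length-7 sequences of the problem (the same ones listed in the solution of Problem 2.3), stored on a
common index range.

    % numerical check of Problem 2.1
    x = [3 -2 0 1 4 5 2];
    y = [0 7 1 -3 4 9 -2];
    w = [-5 4 3 6 -5 0 1];
    u = x + y        % {3 5 1 -2 8 14 0}
    v = x .* w       % {-15 -8 0 6 -20 0 2}
    s = y - w        % {5 3 -2 -9 9 9 -3}
    r = 4.5 * y      % {0 31.5 4.5 -13.5 18 40.5 -9}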
(c) From the figure shown below we obtain
[Figure: first-order structure with input x[n], internal signal v[n], unit delays z^{−1}, multiplier d_1,
and output y[n].]
v[n] = x[n] − d_1 v[n−1] and y[n] = d_1 v[n] + v[n−1]. Therefore we can rewrite the second equation as
y[n] = d_1(x[n] − d_1 v[n−1]) + v[n−1] = d_1 x[n] + (1 − d_1^2) v[n−1]                    (1)
= d_1 x[n] + (1 − d_1^2)(x[n−1] − d_1 v[n−2]) = d_1 x[n] + (1 − d_1^2) x[n−1] − d_1(1 − d_1^2) v[n−2].
From Eq. (1), y[n−1] = d_1 x[n−1] + (1 − d_1^2) v[n−2], or equivalently,
d_1 y[n−1] = d_1^2 x[n−1] + d_1(1 − d_1^2) v[n−2]. Therefore,
y[n] + d_1 y[n−1] = d_1 x[n] + (1 − d_1^2) x[n−1] − d_1(1 − d_1^2) v[n−2] + d_1^2 x[n−1] + d_1(1 − d_1^2) v[n−2]
= d_1 x[n] + x[n−1], or y[n] = d_1 x[n] + x[n−1] − d_1 y[n−1].
(d) From the figure shown below we obtain
[Figure: structure with input x[n], internal signals v[n], v[n−1], v[n−2], w[n], u[n], two unit delays
z^{−1}, and multipliers d_1, d_2 forming the output y[n].]
v[n] = x[n] − w[n], w[n] = d_1 v[n−1] + d_2 u[n], and u[n] = v[n−2] + x[n]. From these equations we get
w[n] = d_2 x[n] + d_1 x[n−1] + d_2 x[n−2] − d_1 w[n−1] − d_2 w[n−2].
From the figure we also obtain y[n] = v[n−2] + w[n] = x[n−2] + w[n] − w[n−2], which yields
d_1 y[n−1] = d_1 x[n−3] + d_1 w[n−1] − d_1 w[n−3], and d_2 y[n−2] = d_2 x[n−4] + d_2 w[n−2] − d_2 w[n−4].
Therefore, y[n] + d_1 y[n−1] + d_2 y[n−2] = x[n−2] + d_1 x[n−3] + d_2 x[n−4]
+ (w[n] + d_1 w[n−1] + d_2 w[n−2]) − (w[n−2] + d_1 w[n−3] + d_2 w[n−4]) = d_2 x[n] + d_1 x[n−1] + x[n−2],
or equivalently, y[n] = d_2 x[n] + d_1 x[n−1] + x[n−2] − d_1 y[n−1] − d_2 y[n−2].

2.3 (a) x[n] = {3 −2 0 1 4 5 2}. Hence, x[−n] = {2 5 4 1 0 −2 3}, −3 ≤ n ≤ 3.
Thus, x_ev[n] = (1/2)(x[n] + x[−n]) = {5/2 3/2 2 1 2 3/2 5/2}, −3 ≤ n ≤ 3, and
x_od[n] = (1/2)(x[n] − x[−n]) = {1/2 −7/2 −2 0 2 7/2 −1/2}, −3 ≤ n ≤ 3.
(b) y[n] = {0 7 1 −3 4 9 −2}. Hence, y[−n] = {−2 9 4 −3 1 7 0}, −3 ≤ n ≤ 3.
Thus, y_ev[n] = (1/2)(y[n] + y[−n]) = {−1 8 5/2 −3 5/2 8 −1}, −3 ≤ n ≤ 3, and
y_od[n] = (1/2)(y[n] − y[−n]) = {1 −1 −3/2 0 3/2 1 −1}, −3 ≤ n ≤ 3.
(c) w[n] = {−5 4 3 6 −5 0 1}. Hence, w[−n] = {1 0 −5 6 3 4 −5}, −3 ≤ n ≤ 3.
Thus, w_ev[n] = (1/2)(w[n] + w[−n]) = {−2 2 −1 6 −1 2 −2}, −3 ≤ n ≤ 3, and
w_od[n] = (1/2)(w[n] − w[−n]) = {−3 2 4 0 −4 −2 3}, −3 ≤ n ≤ 3.
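The even-odd decompositions of Problem 2.3 can be verified with a few MATLAB statements. This is an
illustrative sketch only, not part of the original solution; fliplr reverses the stored samples, which
corresponds to the time reversal x[−n] when the sequence is stored for −3 ≤ n ≤ 3.

    x   = [3 -2 0 1 4 5 2];     % x[n], -3 <= n <= 3
    xr  = fliplr(x);            % x[-n]
    xev = 0.5*(x + xr)          % {5/2 3/2 2 1 2 3/2 5/2}
    xod = 0.5*(x - xr)          % {1/2 -7/2 -2 0 2 7/2 -1/2}
    max(abs(x - (xev + xod)))   % returns 0: the two parts add back to x[n]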
2.4 (a) x[n] = g[n]g[n]. Hence x[−n] = g[−n]g[−n]. Since g[n] is even, g[−n] = g[n]. Therefore
x[−n] = g[−n]g[−n] = g[n]g[n] = x[n], and hence x[n] is even.
(b) u[n] = g[n]h[n]. Hence u[−n] = g[−n]h[−n] = g[n](−h[n]) = −g[n]h[n] = −u[n]. Hence u[n] is odd.
(c) v[n] = h[n]h[n]. Hence v[−n] = h[−n]h[−n] = (−h[n])(−h[n]) = h[n]h[n] = v[n]. Hence v[n] is even.

2.5 Yes, a linear combination of any set of periodic sequences is also a periodic sequence, and the period
of the new sequence is given by the least common multiple (lcm) of all the periods. For our example, the
period = lcm(N_1, N_2, N_3). For example, if N_1 = 3, N_2 = 5, and N_3 = 12, then N = lcm(3, 5, 12) = 60.

2.6 (a) x_pcs[n] = (1/2){x[n] + x*[−n]} = (1/2){Aα^n + A*(α*)^{−n}}, and
x_pca[n] = (1/2){x[n] − x*[−n]} = (1/2){Aα^n − A*(α*)^{−n}}, −N ≤ n ≤ N.
(b) h[n] = {−2 + j5, 4 − j3, 5 + j6, 3 + j, −7 + j2}, −2 ≤ n ≤ 2, and hence
h*[−n] = {−7 − j2, 3 − j, 5 − j6, 4 + j3, −2 − j5}, −2 ≤ n ≤ 2. Therefore,
h_pcs[n] = (1/2){h[n] + h*[−n]} = {−4.5 + j1.5, 3.5 − j2, 5, 3.5 + j2, −4.5 − j1.5}, and
h_pca[n] = (1/2){h[n] − h*[−n]} = {2.5 + j3.5, 0.5 − j, j6, −0.5 − j, −2.5 + j3.5}, −2 ≤ n ≤ 2.

2.7 (a) {x[n]} = {Aα^n}, where A and α are complex numbers with |α| < 1. Since for n < 0, |α^n| can
become arbitrarily large, {x[n]} is not a bounded sequence.
(b) {y[n]} = Aα^n µ[n], where A and α are complex numbers with |α| < 1. In this case |y[n]| ≤ |A| for
all n, hence {y[n]} is a bounded sequence.
(c) {h[n]} = Cβ^n µ[n], where C and β are complex numbers with |β| > 1. Since |β^n| becomes arbitrarily
large as n increases, {h[n]} is not a bounded sequence.
(d) {g[n]} = 4 sin(ω_a n). Since −4 ≤ g[n] ≤ 4 for all values of n, {g[n]} is a bounded sequence.
(e) {v[n]} = 3 cos^2(ω_b n^2). Since −3 ≤ v[n] ≤ 3 for all values of n, {v[n]} is a bounded sequence.
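The conjugate-symmetric and conjugate-antisymmetric parts worked out in Problem 2.6(b) can be checked with
the short MATLAB sketch below (illustrative only, not part of the original solution).

    h   = [-2+5j, 4-3j, 5+6j, 3+1j, -7+2j];   % h[n], -2 <= n <= 2
    hr  = conj(fliplr(h));                    % h*[-n]
    hcs = 0.5*(h + hr)   % {-4.5+j1.5, 3.5-j2, 5, 3.5+j2, -4.5-j1.5}
    hca = 0.5*(h - hr)   % {2.5+j3.5, 0.5-j, j6, -0.5-j, -2.5+j3.5}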
2.8 (a) Recall, x_ev[n] = (1/2)(x[n] + x[−n]). Since x[n] is a causal sequence, x[−n] = 0 for all n > 0.
Hence, x[n] = x_ev[n] + x_ev[−n] = 2 x_ev[n] for all n > 0, while for n = 0, x[0] = x_ev[0]. Thus x[n]
can be completely recovered from its even part.
Likewise, x_od[n] = (1/2)(x[n] − x[−n]) = (1/2)x[n] for n > 0, and x_od[0] = 0. Thus x[n] can be
recovered from its odd part for all n except n = 0.
(b) 2y_ca[n] = y[n] − y*[−n]. Since y[n] is a causal sequence, y[n] = 2y_ca[n] for all n > 0. For n = 0,
Im{y[0]} = y_ca[0]. Hence the real part of y[0] cannot be recovered from y_ca[n], and therefore y[n]
cannot be fully recovered from y_ca[n].
2y_cs[n] = y[n] + y*[−n]. Hence y[n] = 2y_cs[n] for all n > 0. For n = 0, Re{y[0]} = y_cs[0]. Hence the
imaginary part of y[0] cannot be recovered from y_cs[n], and therefore y[n] cannot be fully recovered
from y_cs[n].

2.9 x_ev[n] = (1/2)(x[n] + x[−n]). This implies x_ev[−n] = (1/2)(x[−n] + x[n]) = x_ev[n]. Hence the even
part of a real sequence is even.
x_od[n] = (1/2)(x[n] − x[−n]). This implies x_od[−n] = (1/2)(x[−n] − x[n]) = −x_od[n]. Hence the odd part
of a real sequence is odd.

2.10 The RHS of Eq. (2.176a) is
x_cs[n] + x_cs[n − N] = (1/2)(x[n] + x*[−n]) + (1/2)(x[n − N] + x*[N − n]).
Since x[n] = 0 for all n < 0, x_cs[n] + x_cs[n − N] = (1/2)(x[n] + x*[N − n]) = x_pcs[n], 0 ≤ n ≤ N − 1.
The RHS of Eq. (2.176b) is
x_ca[n] + x_ca[n − N] = (1/2)(x[n] − x*[−n]) + (1/2)(x[n − N] − x*[N − n])
= (1/2)(x[n] − x*[N − n]) = x_pca[n], 0 ≤ n ≤ N − 1.
2.11 x_pcs[n] = (1/2)(x[n] + x*[< −n >_N]) for 0 ≤ n ≤ N − 1. Since x[< −n >_N] = x[N − n], it follows
that x_pcs[n] = (1/2)(x[n] + x*[N − n]), 1 ≤ n ≤ N − 1. For n = 0, x_pcs[0] = (1/2)(x[0] + x*[0]) = Re{x[0]}.
Similarly, x_pca[n] = (1/2)(x[n] − x*[< −n >_N]) = (1/2)(x[n] − x*[N − n]), 1 ≤ n ≤ N − 1. Hence, for
n = 0, x_pca[0] = (1/2)(x[0] − x*[0]) = j Im{x[0]}.

2.12 (a) Given ∑_{n=−∞}^{∞} |x[n]| < ∞. Therefore, by the Schwartz inequality,
∑_{n=−∞}^{∞} |x[n]|^2 ≤ (∑_{n=−∞}^{∞} |x[n]|)(∑_{n=−∞}^{∞} |x[n]|) < ∞.
(b) Consider x[n] = 1/n for n ≥ 1 and x[n] = 0 otherwise. The convergence of an infinite series can be
shown via the integral test. Let a_n = f(n), where f(x) is a continuous, positive and decreasing function
for all x ≥ 1. Then the series ∑_{n=1}^{∞} a_n and the integral ∫_1^∞ f(x) dx both converge or both
diverge. For a_n = 1/n, f(x) = 1/x. But ∫_1^∞ (1/x) dx = (ln x)|_1^∞ = ∞ − 0 = ∞. Hence
∑_{n=−∞}^{∞} |x[n]| = ∑_{n=1}^{∞} 1/n does not converge, and as a result x[n] is not absolutely summable.
To show {x[n]} is square-summable, we observe that here a_n = 1/n^2, and thus f(x) = 1/x^2. Now,
∫_1^∞ (1/x^2) dx = (−1/x)|_1^∞ = 0 + 1 = 1. Hence ∑_{n=1}^{∞} 1/n^2 converges, or in other words,
x[n] = 1/n is square-summable.

2.13 See Problem 2.12, Part (b) solution.

2.14 x_2[n] = cos(ω_c n)/(πn), 1 ≤ n ≤ ∞. Now, ∑_{n=1}^{∞} (cos(ω_c n)/(πn))^2 ≤ ∑_{n=1}^{∞} 1/(π^2 n^2).
Since ∑_{n=1}^{∞} 1/n^2 = π^2/6, we have ∑_{n=1}^{∞} (cos(ω_c n)/(πn))^2 ≤ 1/6. Therefore x_2[n] is
square-summable.
Using the integral test we now show that x_2[n] is not absolutely summable. We have
∫_1^∞ (cos(ω_c x)/(πx)) dx = (1/π) cosint(ω_c x)|_1^∞, where cosint is the cosine integral function.
Since ∫_1^∞ (cos(ω_c x)/(πx)) dx diverges, ∑_{n=1}^{∞} cos(ω_c n)/(πn) also diverges.
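The contrasting behavior of the two series in Problem 2.12(b) is also easy to see numerically: the
partial sums of 1/n keep growing (roughly like ln of the upper limit), while those of 1/n^2 settle near
π^2/6 ≈ 1.6449. The fragment below is an illustrative sketch, not part of the original solution.

    n = 1:1e6;
    sum(1./n)        % about 14.39, and keeps growing as the upper limit increases
    sum(1./n.^2)     % about 1.6449, close to the limit below
    pi^2/6           % 1.6449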
2.15 ∑_{n=−∞}^{∞} x^2[n] = ∑_{n=−∞}^{∞} (x_ev[n] + x_od[n])^2
= ∑_{n=−∞}^{∞} x_ev^2[n] + ∑_{n=−∞}^{∞} x_od^2[n] + 2 ∑_{n=−∞}^{∞} x_ev[n] x_od[n]
= ∑_{n=−∞}^{∞} x_ev^2[n] + ∑_{n=−∞}^{∞} x_od^2[n],
as ∑_{n=−∞}^{∞} x_ev[n] x_od[n] = 0 since x_ev[n] x_od[n] is an odd sequence.

2.16 x[n] = cos(2πkn/N), 0 ≤ n ≤ N − 1. Hence,
E_x = ∑_{n=0}^{N−1} cos^2(2πkn/N) = (1/2) ∑_{n=0}^{N−1} (1 + cos(4πkn/N)) = N/2 + (1/2) ∑_{n=0}^{N−1} cos(4πkn/N).
Let C = ∑_{n=0}^{N−1} cos(4πkn/N) and S = ∑_{n=0}^{N−1} sin(4πkn/N). Therefore
C + jS = ∑_{n=0}^{N−1} e^{j4πkn/N} = (e^{j4πk} − 1)/(e^{j4πk/N} − 1) = 0. Thus C = Re{C + jS} = 0.
As C = 0, it follows that E_x = N/2.

2.17 (a) x_1[n] = µ[n]. Energy = ∑_{n=−∞}^{∞} µ^2[n] = ∑_{n=0}^{∞} 1^2 = ∞.
Average power = lim_{K→∞} (1/(2K+1)) ∑_{n=−K}^{K} µ^2[n] = lim_{K→∞} (K+1)/(2K+1) = 1/2.
(b) x_2[n] = n µ[n]. Energy = ∑_{n=−∞}^{∞} (n µ[n])^2 = ∑_{n=0}^{∞} n^2 = ∞.
Average power = lim_{K→∞} (1/(2K+1)) ∑_{n=0}^{K} n^2 = ∞.
(c) x_3[n] = A_o e^{jω_o n}. Energy = ∑_{n=−∞}^{∞} |A_o e^{jω_o n}|^2 = ∑_{n=−∞}^{∞} |A_o|^2 = ∞.
Average power = lim_{K→∞} (1/(2K+1)) ∑_{n=−K}^{K} |A_o e^{jω_o n}|^2 = lim_{K→∞} (2K+1)|A_o|^2/(2K+1) = |A_o|^2.
(d) x[n] = A sin(2πn/M + φ) = A_o e^{jω_o n} + A_1 e^{−jω_o n}, where ω_o = 2π/M, A_o = −j(A/2) e^{jφ},
and A_1 = j(A/2) e^{−jφ}. From Part (c), energy = ∞ and
average power = |A_o|^2 + |A_1|^2 = A^2/4 + A^2/4 = A^2/2.

2.18 Now, µ[n] = 1 for n ≥ 0 and 0 for n < 0. Hence, µ[−n − 1] = 1 for n < 0 and 0 for n ≥ 0. Thus,
x[n] = µ[n] + µ[−n − 1].
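The closed-form result E_x = N/2 of Problem 2.16 can be confirmed numerically for any particular N and k.
The sketch below (not part of the original solution) uses N = 16 and k = 3 as an arbitrary example.

    N = 16; k = 3;
    n = 0:N-1;
    x = cos(2*pi*k*n/N);
    Ex = sum(x.^2)       % returns N/2 = 8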
2.19 (a) Consider a sequence defined by x[n] = ∑_{k=−∞}^{n} δ[k]. If n < 0 then k = 0 is not included in
the sum and hence x[n] = 0, whereas for n ≥ 0, k = 0 is included in the sum and hence x[n] = 1 for all
n ≥ 0. Thus x[n] = ∑_{k=−∞}^{n} δ[k] = µ[n].
(b) Since µ[n] = 1 for n ≥ 0 and 0 for n < 0, it follows that µ[n − 1] = 1 for n ≥ 1 and 0 for n ≤ 0.
Hence, µ[n] − µ[n − 1] = 1 for n = 0 and 0 for n ≠ 0, that is, µ[n] − µ[n − 1] = δ[n].

2.20 Now x[n] = A sin(ω_0 n + φ).
(a) Given x[n] = {0, −√2, −2, −√2, 0, √2, 2, √2}. The fundamental period is N = 8, hence
ω_0 = 2π/8 = π/4. Next, from x[0] = A sin(φ) = 0 we get φ = 0, and solving
x[1] = A sin(π/4 + φ) = A sin(π/4) = −√2 we get A = −2.
(b) Given x[n] = {√2, √2, −√2, −√2}. The fundamental period is N = 4, hence ω_0 = 2π/4 = π/2. Next, from
x[0] = A sin(φ) = √2 and x[1] = A sin(π/2 + φ) = A cos(φ) = √2 it can be seen that A = 2 and φ = π/4 is
one solution satisfying the above equations.
(c) x[n] = {3, −3}. Here the fundamental period is N = 2, hence ω_0 = π. Next, from x[0] = A sin(φ) = 3
and x[1] = A sin(φ + π) = −A sin(φ) = −3 we observe that A = 3 and φ = π/2 is one solution satisfying
these two equations.
(d) Given x[n] = {0, 1.5, 0, −1.5}, it follows that the fundamental period of x[n] is N = 4. Hence
ω_0 = 2π/4 = π/2. Solving x[0] = A sin(φ) = 0 we get φ = 0, and solving x[1] = A sin(π/2) = 1.5 we get
A = 1.5.

2.21 (a) x̃_1[n] = e^{−j0.4πn}. Here, ω_o = 0.4π. From Eq. (2.44a), we thus get
N = 2πr/ω_o = 2πr/(0.4π) = 5r = 5 for r = 1.
(b) x̃_2[n] = sin(0.6πn + 0.6π). Here, ω_o = 0.6π. From Eq. (2.44a), we thus get
N = 2πr/(0.6π) = (10/3)r = 10 for r = 3.
(c) x̃_3[n] = 2 cos(1.1πn − 0.5π) + 2 sin(0.7πn). Here, ω_1 = 1.1π and ω_2 = 0.7π. From Eq. (2.44a), we
thus get N_1 = 2πr_1/ω_1 = 2πr_1/(1.1π) = (20/11)r_1 and N_2 = 2πr_2/ω_2 = 2πr_2/(0.7π) = (20/7)r_2. To
be periodic we must have N_1 = N_2. This implies (20/11)r_1 = (20/7)r_2. This equality holds for
r_1 = 11 and r_2 = 7, and hence N = N_1 = N_2 = 20.
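The periods found in Problem 2.21 amount to writing ω_o/2π = r/N in lowest terms; MATLAB's rat function
performs exactly this reduction. The fragment below is an illustrative sketch, not part of the original
solution.

    omega = [0.4*pi 0.6*pi 1.1*pi 0.7*pi];
    [r, N] = rat(omega/(2*pi));   % omega/(2*pi) = r./N in lowest terms
    N                             % returns 5 10 20 20, matching Parts (a)-(c)
    % for the two-tone signal of Part (c) the overall period is lcm(20,20) = 20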
(d) N_1 = 2πr_1/(1.3π) = (20/13)r_1 and N_2 = 2πr_2/(0.3π) = (20/3)r_2. It follows from the results of
Part (c) that N = 20, with r_1 = 13 and r_2 = 3.
(e) N_1 = 2πr_1/(1.2π) = (5/3)r_1, N_2 = 2πr_2/(0.8π) = (5/2)r_2, and N_3 = N_2. Therefore,
N = N_1 = N_2 = N_3 = 5 for r_1 = 3 and r_2 = 2.
(f) x̃_6[n] = n modulo 6. Since x̃_6[n + 6] = (n + 6) modulo 6 = n modulo 6 = x̃_6[n], N = 6 is the
fundamental period of the sequence x̃_6[n].

2.22 (a) ω_o = 0.14π. Therefore, N = 2πr/ω_o = 2πr/(0.14π) = (100/7)r = 100 for r = 7.
(b) ω_o = 0.24π. Therefore, N = 2πr/(0.24π) = (25/3)r = 25 for r = 3.
(c) ω_o = 0.34π. Therefore, N = 2πr/(0.34π) = (100/17)r = 100 for r = 17.
(d) ω_o = 0.75π. Therefore, N = 2πr/(0.75π) = (8/3)r = 8 for r = 3.

2.23 x[n] = x_a(nT) = cos(Ω_o nT) is a periodic sequence for all values of T satisfying Ω_o T·N = 2πr for
r and N taking integer values. Since Ω_o T = 2πr/N and r/N is a rational number, Ω_o T must be a rational
multiple of 2π. For Ω_o = 18 and T = π/6, we get Ω_o T = 3π and hence N = 2r/3. The smallest integer
value is therefore N = 2, obtained for r = 3.

2.24 (a) x[n] = 3δ[n + 3] − 2δ[n + 2] + δ[n] + 4δ[n − 1] + 5δ[n − 2] + 2δ[n − 3]
(b) y[n] = 7δ[n + 2] + δ[n + 1] − 3δ[n] + 4δ[n − 1] + 9δ[n − 2] − 2δ[n − 3]
(c) w[n] = −5δ[n + 3] + 4δ[n + 2] + 3δ[n + 1] + 6δ[n] − 5δ[n − 1] + δ[n − 3]

2.25 (a) For an input x_i[n], i = 1, 2, the output is
y_i[n] = α_1 x_i[n] + α_2 x_i[n − 1] + α_3 x_i[n − 2] + α_4 x_i[n − 4], for i = 1, 2.
Then, for an input x[n] = A x_1[n] + B x_2[n], the output is
y[n] = α_1(A x_1[n] + B x_2[n]) + α_2(A x_1[n − 1] + B x_2[n − 1]) + α_3(A x_1[n − 2] + B x_2[n − 2])
+ α_4(A x_1[n − 4] + B x_2[n − 4])
= A(α_1 x_1[n] + α_2 x_1[n − 1] + α_3 x_1[n − 2] + α_4 x_1[n − 4])
+ B(α_1 x_2[n] + α_2 x_2[n − 1] + α_3 x_2[n − 2] + α_4 x_2[n − 4]) = A y_1[n] + B y_2[n].
Hence, the system is linear.
(b) For an input x_i[n], i = 1, 2, the output is
y_i[n] = b_0 x_i[n] + b_1 x_i[n − 1] + b_2 x_i[n − 2] + a_1 y_i[n − 1] + a_2 y_i[n − 2], i = 1, 2.
Then, for an input x[n] = A x_1[n] + B x_2[n], the output is
y[n] = A(b_0 x_1[n] + b_1 x_1[n − 1] + b_2 x_1[n − 2] + a_1 y_1[n − 1] + a_2 y_1[n − 2])
+ B(b_0 x_2[n] + b_1 x_2[n − 1] + b_2 x_2[n − 2] + a_1 y_2[n − 1] + a_2 y_2[n − 2]) = A y_1[n] + B y_2[n].
Hence, the system is linear.
(c) For an input x_i[n], i = 1, 2, the output is y_i[n] = x_i[n/L] for n = 0, ±L, ±2L, …, and y_i[n] = 0
otherwise. Consider the input x[n] = A x_1[n] + B x_2[n]. Then the output y[n] for n = 0, ±L, ±2L, … is
given by y[n] = x[n/L] = A x_1[n/L] + B x_2[n/L] = A y_1[n] + B y_2[n]. For all other values of n,
y[n] = A·0 + B·0 = 0. Hence, the system is linear.
(d) For an input x_i[n], i = 1, 2, the output is y_i[n] = x_i[n/M]. Consider the input
x[n] = A x_1[n] + B x_2[n]. Then the output y[n] = A x_1[n/M] + B x_2[n/M] = A y_1[n] + B y_2[n]. Hence,
the system is linear.
(e) For an input x_i[n], i = 1, 2, the output is y_i[n] = (1/M) ∑_{k=0}^{M−1} x_i[n − k]. Then, for an
input x[n] = A x_1[n] + B x_2[n], the output is
y[n] = (1/M) ∑_{k=0}^{M−1} (A x_1[n − k] + B x_2[n − k])
= A((1/M) ∑_{k=0}^{M−1} x_1[n − k]) + B((1/M) ∑_{k=0}^{M−1} x_2[n − k]) = A y_1[n] + B y_2[n].
Hence, the system is linear.
(f) The first term on the RHS of Eq. (2.58) is the output of an up-sampler. The second term on the RHS of
Eq. (2.58) is simply the output of a unit delay followed by an up-sampler, whereas the third term is the
output of a unit advance operator followed by an up-sampler. We have shown in Part (c) that the
up-sampler is a linear system. Moreover, the delay and the advance operators are linear systems. A
cascade of two linear systems is linear, and a linear combination of linear systems is also linear.
Hence, the factor-of-2 interpolator is a linear system.
(g) Following the arguments given in Part (f), it can be shown that the factor-of-3 interpolator is also
a linear system.

2.26 (a) y[n] = n^2 x[n]. For an input x_i[n] the output is y_i[n] = n^2 x_i[n], i = 1, 2. Thus, for an
input x_3[n] = A x_1[n] + B x_2[n], the output y_3[n] is given by
y_3[n] = n^2(A x_1[n] + B x_2[n]) = A y_1[n] + B y_2[n]. Hence the system is linear. Since there is no
output before the input is applied, the system is causal. However, since y[n] is scaled by n^2, a bounded
input can result in an unbounded output: let x[n] = 1 for all n, then y[n] = n^2 and y[n] → ∞ as n → ∞,
so the system is not BIBO stable. Let y[n] be the output for an input x[n], and let y_1[n] be the output
for an input x_1[n]. If x_1[n] = x[n − n_0] then y_1[n] = n^2 x_1[n] = n^2 x[n − n_0]. However,
y[n − n_0] = (n − n_0)^2 x[n − n_0]. Since y_1[n] ≠ y[n − n_0], the system is not time-invariant.
(b) y[n] = x^4[n]. For an input x_i[n] the output is y_i[n] = x_i^4[n], i = 1, 2. Thus, for an input
x_3[n] = A x_1[n] + B x_2[n], the output y_3[n] is given by
y_3[n] = (A x_1[n] + B x_2[n])^4 ≠ A^4 x_1^4[n] + B^4 x_2^4[n].
Hence the system is not linear. Since there is no output before the input is applied, the system is
causal. Here, a bounded input produces a bounded output, hence the system is BIBO stable too. Let y[n] be
the output for an input x[n], and let y_1[n] be the output for an input x_1[n]. If x_1[n] = x[n − n_0]
then y_1[n] = x_1^4[n] = x^4[n − n_0] = y[n − n_0]. Hence, the system is time-invariant.
(c) y[n] = β + ∑_{l=0}^{3} x[n − l]. For an input x_i[n] the output is y_i[n] = β + ∑_{l=0}^{3} x_i[n − l],
i = 1, 2. Thus, for an input x_3[n] = A x_1[n] + B x_2[n], the output y_3[n] is given by
y_3[n] = β + ∑_{l=0}^{3} (A x_1[n − l] + B x_2[n − l]) = β + A ∑_{l=0}^{3} x_1[n − l] + B ∑_{l=0}^{3} x_2[n − l]
≠ A y_1[n] + B y_2[n].
Since β ≠ 0, the system is not linear. Since there is no output before the input is applied, the system
is causal. Here, a bounded input produces a bounded output, hence the system is BIBO stable too. Also,
following an analysis similar to that in Part (a), it is easy to show that the system is time-invariant.
(d) y[n] = β + ∑_{l=−3}^{3} x[n − l]. For an input x_i[n] the output is
y_i[n] = β + ∑_{l=−3}^{3} x_i[n − l], i = 1, 2. Thus, for an input x_3[n] = A x_1[n] + B x_2[n], the
output y_3[n] is given by
y_3[n] = β + ∑_{l=−3}^{3} (A x_1[n − l] + B x_2[n − l]) = β + A ∑_{l=−3}^{3} x_1[n − l] + B ∑_{l=−3}^{3} x_2[n − l]
≠ A y_1[n] + B y_2[n].
Since β ≠ 0, the system is not linear. Since there is output before the input is applied, the system is
non-causal. Here, a bounded input produces a bounded output, hence the system is BIBO stable.
Let y[n] be the output for an input x[n], and let y_1[n] be the output for an input x_1[n]. If
x_1[n] = x[n − n_0] then y_1[n] = β + ∑_{l=−3}^{3} x_1[n − l] = β + ∑_{l=−3}^{3} x[n − n_0 − l] = y[n − n_0].
Hence the system is time-invariant.
(e) y[n] = αx[−n]. The system is linear, stable, and non-causal. Let y[n] be the output for an input x[n]
and y_1[n] be the output for an input x_1[n]. Then y[n] = αx[−n] and y_1[n] = αx_1[−n]. Let
x_1[n] = x[n − n_0]; then y_1[n] = αx_1[−n] = αx[−n − n_0], whereas y[n − n_0] = αx[n_0 − n]. Hence the
system is time-varying.
(f) y[n] = x[n − 5]. The given system is linear, causal, stable, and time-invariant.

2.27 y[n] = x[n + 1] − 2x[n] + x[n − 1]. Let y_1[n] be the output for an input x_1[n] and y_2[n] be the
output for an input x_2[n]. Then for an input x_3[n] = αx_1[n] + βx_2[n] the output y_3[n] is given by
y_3[n] = x_3[n + 1] − 2x_3[n] + x_3[n − 1]
= αx_1[n + 1] − 2αx_1[n] + αx_1[n − 1] + βx_2[n + 1] − 2βx_2[n] + βx_2[n − 1] = αy_1[n] + βy_2[n].
Hence the system is linear. If x_1[n] = x[n − n_0] then y_1[n] = y[n − n_0], hence the system is
time-invariant. The impulse response of the system is given by h[n] = 1 for n = −1 and n = 1, h[n] = −2
for n = 0, and h[n] = 0 elsewhere. Since h[n] ≠ 0 for some n < 0, the system is non-causal.

2.28 Median filtering is a nonlinear operation. Consider the following sequences as the input to a median
filter: x_1[n] = {3, 4, 5} and x_2[n] = {2, −2, −2}. The corresponding outputs of the median filter are
y_1[n] = 4 and y_2[n] = −2. Now consider another input sequence x_3[n] = x_1[n] + x_2[n] = {5, 2, 3}. The
corresponding output is y_3[n] = 3, whereas y_1[n] + y_2[n] = 2. Hence median filtering is not a linear
operation. To test the time-invariance property, let x[n] and x_1[n] be the two inputs to the system with
corresponding outputs y[n] and y_1[n]. If x_1[n] = x[n − n_0] then
y_1[n] = median{x_1[n − k], …, x_1[n], …, x_1[n + k]}
= median{x[n − k − n_0], …, x[n − n_0], …, x[n + k − n_0]} = y[n − n_0].
Hence the system is time-invariant.
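The counterexample used in Problem 2.28 can be reproduced directly by taking the median of each length-3
block; the response to x_1[n] + x_2[n] differs from the sum of the individual responses. The fragment
below is an illustrative MATLAB sketch, not part of the original solution.

    x1 = [3 4 5];  x2 = [2 -2 -2];
    y1 = median(x1)          % 4
    y2 = median(x2)          % -2
    y3 = median(x1 + x2)     % median([5 2 3]) = 3
    y1 + y2                  % 2, which differs from y3: not linear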
2.29 y[n] = (1/2)(y[n − 1] + x[n]/y[n − 1]). Now, for an input x[n] = α µ[n], the output y[n] converges to
some constant K as n → ∞. The above difference equation as n → ∞ reduces to K = (1/2)(K + α/K), which is
equivalent to K^2 = α, or in other words, K = √α. It is easy to show that the system is non-linear. Now
assume y_1[n] is the output for an input x_1[n]. Then y_1[n] = (1/2)(y_1[n − 1] + x_1[n]/y_1[n − 1]). If
x_1[n] = x[n − n_0], then y_1[n] = (1/2)(y_1[n − 1] + x[n − n_0]/y_1[n − 1]). Thus y_1[n] = y[n − n_0].
Hence the above system is time-invariant.

2.30 For an input x_i[n], i = 1, 2, the input-output relation is given by
y_i[n] = x_i[n] − y_i^2[n − 1] + y_i[n − 1]. Thus, for an input A x_1[n] + B x_2[n], if the output is
A y_1[n] + B y_2[n], then the input-output relation is given by
A y_1[n] + B y_2[n] = A x_1[n] + B x_2[n] − (A y_1[n − 1] + B y_2[n − 1])^2 + A y_1[n − 1] + B y_2[n − 1]
= A x_1[n] + B x_2[n] − A^2 y_1^2[n − 1] − B^2 y_2^2[n − 1] − 2AB y_1[n − 1] y_2[n − 1] + A y_1[n − 1] + B y_2[n − 1]
≠ A(x_1[n] − y_1^2[n − 1] + y_1[n − 1]) + B(x_2[n] − y_2^2[n − 1] + y_2[n − 1]).
Hence, the system is nonlinear.
Let y[n] be the output for an input x[n], related through y[n] = x[n] − y^2[n − 1] + y[n − 1]. Then the
input-output relation for an input x[n − n_o] is given by
y[n − n_o] = x[n − n_o] − y^2[n − n_o − 1] + y[n − n_o − 1]; in other words, the system is time-invariant.
Now for an input x[n] = α µ[n], the output y[n] converges to some constant K as n → ∞. The above
difference equation as n → ∞ reduces to K = α − K^2 + K, or K^2 = α, i.e., K = √α.

2.31 As δ[n] = µ[n] − µ[n − 1], T{δ[n]} = T{µ[n]} − T{µ[n − 1]}, so that h[n] = s[n] − s[n − 1]. For a
discrete-time LTI system,
y[n] = ∑_{k=−∞}^{∞} h[k] x[n − k] = ∑_{k=−∞}^{∞} (s[k] − s[k − 1]) x[n − k]
= ∑_{k=−∞}^{∞} s[k] x[n − k] − ∑_{k=−∞}^{∞} s[k − 1] x[n − k].

2.32 ỹ[n] = ∑_{m=−∞}^{∞} h[m] x̃[n − m]. Hence,
ỹ[n + kN] = ∑_{m=−∞}^{∞} h[m] x̃[n + kN − m] = ∑_{m=−∞}^{∞} h[m] x̃[n − m] = ỹ[n].
Thus, ỹ[n] is also a periodic sequence with a period N.
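The convergence K = √α claimed in Problem 2.29 is easy to observe by iterating the difference equation
for a constant input; the recursion is in fact the Newton iteration for the square root. A minimal MATLAB
sketch follows (assumptions: α = 9 and an arbitrary initial condition y[−1] = 1; neither value is
specified in the original problem).

    alpha = 9;                 % input x[n] = alpha*mu[n]
    y = 1;                     % assumed initial condition y[-1]
    for n = 1:10
        y = 0.5*(y + alpha/y); % y[n] = (y[n-1] + x[n]/y[n-1])/2
    end
    y                          % converges to sqrt(alpha) = 3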
2.33 Now, δ[n − r] * δ[n − s] = ∑_{m=−∞}^{∞} δ[m − r] δ[n − s − m] = δ[n − r − s].
(a) y_1[n] = x_1[n] * h_1[n] = (2δ[n − 1] − 0.5δ[n − 3]) * (2δ[n] + δ[n − 1] − 3δ[n − 3])
= 4 δ[n − 1]*δ[n] − δ[n − 3]*δ[n] + 2 δ[n − 1]*δ[n − 1] − 0.5 δ[n − 3]*δ[n − 1] − 6 δ[n − 1]*δ[n − 3]
+ 1.5 δ[n − 3]*δ[n − 3]
= 4δ[n − 1] − δ[n − 3] + 2δ[n − 2] − 0.5δ[n − 4] − 6δ[n − 4] + 1.5δ[n − 6]
= 4δ[n − 1] + 2δ[n − 2] − δ[n − 3] − 6.5δ[n − 4] + 1.5δ[n − 6].
(b) y_2[n] = x_2[n] * h_2[n] = (−3δ[n − 1] + δ[n + 2]) * (−δ[n − 2] − 0.5δ[n − 1] + 3δ[n − 3])
= −0.5δ[n + 1] − δ[n] + 3δ[n − 1] + 1.5δ[n − 2] + 3δ[n − 3] − 9δ[n − 4].
(c) y_3[n] = x_1[n] * h_2[n] = (2δ[n − 1] − 0.5δ[n − 3]) * (−δ[n − 2] − 0.5δ[n − 1] + 3δ[n − 3])
= −δ[n − 2] − 2δ[n − 3] + 6.25δ[n − 4] + 0.5δ[n − 5] − 1.5δ[n − 6].
(d) y_4[n] = x_2[n] * h_1[n] = (−3δ[n − 1] + δ[n + 2]) * (2δ[n] + δ[n − 1] − 3δ[n − 3])
= 2δ[n + 2] + δ[n + 1] − 9δ[n − 1] − 3δ[n − 2] + 9δ[n − 4].

2.34 y[n] = ∑_{m=N_1}^{N_2} g[m] h[n − m]. Now, h[n − m] is defined for M_1 ≤ n − m ≤ M_2. Thus, for
m = N_1, h[n − m] is defined for M_1 ≤ n − N_1 ≤ M_2, or equivalently, for M_1 + N_1 ≤ n ≤ M_2 + N_1.
Likewise, for m = N_2, h[n − m] is defined for M_1 ≤ n − N_2 ≤ M_2, or equivalently, for
M_1 + N_2 ≤ n ≤ M_2 + N_2.
(a) The length of y[n] is M_2 + N_2 − M_1 − N_1 + 1.
(b) The range of n for y[n] ≠ 0 is min(M_1 + N_1, M_1 + N_2) ≤ n ≤ max(M_2 + N_1, M_2 + N_2), i.e.,
M_1 + N_1 ≤ n ≤ M_2 + N_2.

2.35 y[n] = x_1[n] * x_2[n] = ∑_{k=−∞}^{∞} x_1[n − k] x_2[k]. Now,
v[n] = x_1[n − N_1] * x_2[n − N_2] = ∑_{k=−∞}^{∞} x_1[n − N_1 − k] x_2[k − N_2]. Let k − N_2 = m. Then
v[n] = ∑_{m=−∞}^{∞} x_1[n − N_1 − N_2 − m] x_2[m] = y[n − N_1 − N_2].

2.36 g[n] = x_1[n] * x_2[n] * x_3[n] = y[n] * x_3[n], where y[n] = x_1[n] * x_2[n]. Define
v[n] = x_1[n − N_1] * x_2[n − N_2]. Then h[n] = v[n] * x_3[n − N_3]. From the results of Problem 2.35,
v[n] = y[n − N_1 − N_2]. Hence, h[n] = y[n − N_1 − N_2] * x_3[n − N_3]. Therefore, using the results of
Problem 2.35 again, we get h[n] = g[n − N_1 − N_2 − N_3].
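The impulse-train convolutions of Problem 2.33 can be verified with conv by placing the impulse weights
on a common time grid. The sketch below (illustrative only, not part of the original solution) checks
Part (a), with the first stored sample of each vector taken at n = 0.

    % Problem 2.33(a): samples of x1[n] and h1[n] for n = 0,1,2,3
    x1 = [0 2 0 -0.5];     % 2*delta[n-1] - 0.5*delta[n-3]
    h1 = [2 1 0 -3];       % 2*delta[n] + delta[n-1] - 3*delta[n-3]
    y1 = conv(x1, h1)      % [0 4 2 -1 -6.5 0 1.5], i.e. y1[n] for n = 0..6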
2.37 y[n] = x[n] * h[n] = ∑_{k=−∞}^{∞} x[n − k] h[k]. Substituting k by n − m in the above expression, we
get y[n] = ∑_{m=−∞}^{∞} x[m] h[n − m] = h[n] * x[n]. Hence the convolution operation is commutative.
Let y[n] = x[n] * (h_1[n] + h_2[n]) = ∑_{k=−∞}^{∞} x[n − k](h_1[k] + h_2[k])
= ∑_{k=−∞}^{∞} x[n − k] h_1[k] + ∑_{k=−∞}^{∞} x[n − k] h_2[k] = x[n] * h_1[n] + x[n] * h_2[n].
Hence convolution is distributive.

2.38 x_3[n] * x_2[n] * x_1[n] = x_3[n] * (x_2[n] * x_1[n]). As x_2[n] * x_1[n] is an unbounded sequence,
the result of this convolution cannot be determined. But x_2[n] * x_3[n] * x_1[n] = x_2[n] * (x_3[n] * x_1[n]).
Now x_3[n] * x_1[n] = 0 for all values of n, hence the overall result is zero. Hence, for the given
sequences, x_3[n] * x_2[n] * x_1[n] ≠ x_2[n] * x_3[n] * x_1[n].

2.39 w[n] = x[n] * h[n] * g[n]. Define y[n] = x[n] * h[n] = ∑_k x[k] h[n − k] and
f[n] = h[n] * g[n] = ∑_k g[k] h[n − k].
Consider w_1[n] = (x[n] * h[n]) * g[n] = y[n] * g[n] = ∑_m g[m] ∑_k x[k] h[n − m − k].
Next consider w_2[n] = x[n] * (h[n] * g[n]) = x[n] * f[n] = ∑_k x[k] ∑_m g[m] h[n − k − m].
The difference between the expressions for w_1[n] and w_2[n] is that the order of the summations is
changed.
A) Assumptions: h[n] and g[n] are causal filters, and x[n] = 0 for n < 0. This implies
y[m] = 0 for m < 0, and y[m] = ∑_{k=0}^{m} x[k] h[m − k] for m ≥ 0.
Thus, w[n] = ∑_{m=0}^{n} g[m] y[n − m] = ∑_{m=0}^{n} g[m] ∑_{k=0}^{n−m} x[k] h[n − m − k].
All sums have only a finite number of terms. Hence, the interchange of the order of summations is
justified and will give correct results.
B) Assumptions: h[n] and g[n] are stable filters, and x[n] is a bounded sequence with |x[n]| ≤ X. Here,
y[m] = ∑_{k=−∞}^{∞} h[k] x[m − k] = ∑_{k=k_1}^{k_2} h[k] x[m − k] + ε_{k_1,k_2}[m], with
|ε_{k_1,k_2}[m]| ≤ ε_n X.
In this case all sums have effectively only a finite number of terms and the error can be reduced by
choosing k_1 and k_2 sufficiently large. Hence in this case the problem is again effectively reduced to
that of one-sided sequences. Here, again, an interchange of the summations is justified and will give
correct results.
Hence, for the convolution to be associative, it is sufficient that the sequences be stable and
single-sided.

2.40 y[n] = ∑_{k=−∞}^{∞} x[n − k] h[k]. Since h[k] is of length M and defined for 0 ≤ k ≤ M − 1, the
convolution sum reduces to y[n] = ∑_{k=0}^{M−1} x[n − k] h[k]. y[n] will be nonzero for all those values
of n and k for which n − k satisfies 0 ≤ n − k ≤ N − 1. The minimum value of n − k is 0 and occurs for the
lowest n at n = 0 and k = 0. The maximum value of n − k is N − 1 and occurs for the maximum value of k,
that is, k = M − 1; thus n − k = N − 1 at k = M − 1 gives n = N + M − 2. Hence the total number of
nonzero samples is N + M − 1.

2.41 y[n] = x[n] * x[n] = ∑_{k=−∞}^{∞} x[n − k] x[k]. Since x[n − k] = 0 if n − k < 0 and x[k] = 0 if
k < 0, the above summation reduces to
y[n] = n + 1 for 0 ≤ n ≤ N − 1, and y[n] = 2N − 1 − n for N ≤ n ≤ 2N − 2.
Hence the output is a triangular sequence with a maximum value of N. The locations of the output samples
with value N/2 are n = N/2 − 1 and n = 3N/2 − 1. The locations of the output samples with value N/4 are
n = N/4 − 1 and n = 7N/4 − 1. Note: it is tacitly assumed that N is divisible by 4; otherwise N/4 is not
an integer.

2.42 y[n] = ∑_{k=0}^{N−1} h[k] x[n − k]. The maximum value of y[n] occurs at n = N − 1, when all terms in
the convolution sum are nonzero, and is given by y[N − 1] = ∑_{k=0}^{N−1} h[k] = ∑_{k=1}^{N} k = N(N + 1)/2.
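The triangular shape and the output length N + M − 1 discussed in Problems 2.40 and 2.41 can be seen at
once by convolving a length-N sequence of ones with itself. The fragment below is an illustrative MATLAB
sketch (N = 8 is an arbitrary choice), not part of the original solution.

    N = 8;
    x = ones(1, N);
    y = conv(x, x)        % [1 2 3 4 5 6 7 8 7 6 5 4 3 2 1]
    length(y)             % 2*N - 1 = 15; the peak value is N
    y(N/2), y(3*N/2)      % both equal N/2 = 4, at n = N/2-1 and n = 3N/2-1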
  • 24. ∞ (b) ∑ hod [(n − k)]gev[k] . As a result, y[n] = gev[n] * h od [n] = ∞ ∞ k= −∞ y[–n] = k= −∞ ∞ k= −∞ k= −∞ ∑ hod [(−n − k)]gev[k] = ∑ hod [−(n − k)]gev[−k] = − ∑ hod [(n − k)]gev[k]. Hence gev[n] * h od [n] is odd. (c) ∞ y[n] = god [n] * h od [n] = ∞ y[–n] = ∑ h od [−n − k]god [k] = k= −∞ ∑ hod [n − k]god [k] . As a result, k= −∞ ∞ ∑ ∞ h od [−(n − k)]g od[−k] = k= −∞ ∑ hod [(n − k)]god [k]. k= −∞ Hence god [n] * h od [n] is even. 2.44 (a) The length of x[n] is 7 – 4 + 1 = 4. Using x[n] = { } { } { } 1 7 y[n] − ∑ k =1 h[ k]x[n − k] , h[0] we arrive at x[n] = {1 3 –2 12}, 0 ≤ n ≤ 3 (b) The length of x[n] is 9 – 5 + 1 = 5. Using x[n] = arrive at x[n] = {1 1 1 1 9 y[n] − ∑ k =1 h[ k]x[n − k] , we h[0] 1}, 0 ≤ n ≤ 4. 1 1 5 y[n] − ∑ k =1 h[ k]x[n − k] , we get h[0] x[n] = {−4 + j, −0.6923 + j0.4615, 3.4556 + j1.1065} , 0 ≤ n ≤ 2. (c) The length of x[n] is 5 – 3 + 1 = 3. Using x[n] = 2.45 y[n] = ay[n – 1] + bx[n]. Hence, y[0] = ay[–1] + bx[0]. Next, 2 y[1] = ay[0] + bx[1] = a y[−1] + a b x[0] + b x[1] . Continuing further in similar way we obtain y[n] = a n+1y[−1] + ∑ k =0 a n− kb x[k]. n (a) Let y 1[n] be the output due to an input x 1[n]. Then y 1[n] = a n +1 y[−1] + n ∑ a n− kb x1[k]. k= 0 If x 1[n] = x[n – n 0 ] then y 1[n] = a n +1 y[−1] + n ∑a n− k b x[k − n 0 ] = a n+1 y[−1] + However, y[n − n 0 ] = a y[−1] + ∑ a n −n 0 − rb x[r] . r=0 k= n 0 n− n 0 +1 n −n 0 n− n 0 ∑ a n− n0 − r b x[r]. r= 0 Hence y 1[n] ≠ y[n − n 0 ] unless y[–1] = 0. For example, if y[–1] = 1 then the system is time variant. However if y[–1] = 0 then the system is time -invariant. 21
  • 25. (b) Let y1 [n] and y2 [n] be the outputs due to the inputs x1 [n] and x2 [n]. Let y[n] be the output for an input α x1[n] + βx 2 [n]. However, αy1 [n] + βy 2 [n] = αa n+1 y[−1] + βa n n+1 y[−1] + α ∑a n −k n b x1[k] + β k= 0 whereas y[n] = a n+1 y[−1] + α n ∑a k =0 n −k ∑ an −k b x2[k] k =0 n b x1[k] + β ∑ an −k b x2[k]. k =0 Hence the system is not linear if y[–1] = 1 but is linear if y[–1] = 0. (c) Generalizing the above result it can be shown that for an N-th order causal discrete time system to be linear and time invariant we require y[–N] = y[–N+1] = L = y[–1] = 0. 2.46 y step [n] = ∑ k= 0 h[k] µ[n − k] = ∑ k =0 h[k], n ≥ 0, and y step [n] = 0, n < 0. Since h[k] is nonnegative, y step [n] is a monotonically increasing function of n for n ≥ 0, and is not oscillatory. Hence, no overshoot. n 2.47 n n (a) f[n] = f[n – 1] + f[n – 2]. Let f[n] = αr , then the difference equation reduces to 1± 5 n n−1 n −2 2 αr − αr − αr = 0 which reduces to r − r −1 = 0 resulting in r = . 2  1+ 5  n  1− 5  n Thus, f[n] = α1   2  + α 2 2  .        α + α2 α −α 2 Since f[0] = 0 hence α1 + α2 = 0 . Also f[1] = 1 hence 1 + 5 1 = 1. 2 2 n n 1 1  1+ 5 1  1 − 5 Solving for α1 and α 2 , we get α1 = – α 2 = . Hence, f[n] =   −   . 5 5 2  5 2      (b) y[n] = y[n – 1] + y[n – 2] + x[n – 1]. As system is LTI, the initial conditions are equal to zero. Let x[n] = δ[n]. Then, y[n] = y[n – 1] + y[n – 2] + δ[n − 1] . Hence, y[0] = y[– 1] + y[– 2] = 0 and y[1] = 1. For n > 1 the corresponding difference equation is y[n] = y[n – 1] + y[n – 2] with initial conditions y[0] = 0 and y[1] = 1, which are the same as those for the solution of Fibonacci's sequence. Hence the solution for n > 1 is given by n n 1  1+ 5 1  1− 5 y[n] =   −   5 2  5 2      Thus f[n] denotes the impulse response of a causal LTI system described by the difference equation y[n] = y[n – 1] + y[n – 2] + x[n – 1]. 2.48 y[n] = αy[n−1]+x[n] . Denoting, y[n] = yre [n] + j yim [n], and α = a + jb, we get, y re[n] + jy im [n] = (a + jb)(y re[n −1] + jy im [n −1]) + x[n] . 22
  • 26. Equating the real and the imaginary parts , and noting that x[n] is real, we get y re[n] = a y re[n −1] − b y im [n −1] + x[n], (1) y im [n] = b y re[n −1] + a y im [n −1] Therefore 1 b y im [n −1] = y im [n] − y re [n −1] a a Hence a single input, two output difference equation is b b2 y re[n] = ay re[n −1] − y im [n] + y [n −1] + x[n] a a re thus b y im [n −1] = −a y re[n −1] +(a 2 + b 2 )y re [n − 2] + a x[n −1] . Substituting the above in Eq. (1) we get y re[n] = 2a y re [n − 1] − (a2 + b 2 )y re[n − 2] + x[n] − a x[n −1] which is a second-order difference equation representing y re[n] in terms of x[n]. 2.49 From Eq. (2.59), the output y[n] of the factor-of-3 interpolator is given by 1 2 y[n] = x u [n] + ( x u [n − 1] + x u [n + 2]) + ( x u [n − 2] + x u [n +1]) where x u [n] is the output of 3 3 the factor-of-3 up-sampler for an input x[n]. To compute the impulse response we set x[n] = δ[n] , in which case, x u [n] = δ[3n]. As a result, the impulse response is given by 1 2 h[n] = δ[3n] + ( δ[3n − 3] + δ[3n + 6] ) + (δ[3n − 6] + δ[3n + 3]) or 3 3 1 2 1 2 = δ[n + 2] + δ[n +1] + δ[n] + δ[n − 1] + δ[n − 2]. 3 3 3 3 2.50 The output y[n] of the factor-of-L interpolator is given by 1 2 y[n] = x u [n] + ( x u [n − 1] + x u [n + L − 1]) + ( x u [n − 2] + x u [ n + L − 2] ) L L L−1 +K + (x u [n − L + 1] + x u [n + 1]) where x u [n] is the output of the factor-of-L upL sampler for an input x[n]. To compute the impulse response we set x[n] = δ[n] , in which case, x u [n] = δ[Ln]. As a result, the impulse response is given by 1 2 h[n] = δ[Ln] + ( δ[Ln − L] + δ[Ln + L(L − 1)]) + (δ[ Ln − 2L] + δ[Ln + L(L − 2)]) L L L−1 1 2 +K + (δ[Ln − L(L – 1)] + δ[Ln + L]) = δ[n + ( L − 1)] + δ[n + (L − 2)] +K L L L L −1 1 2 L −1 + δ[n +1] + δ[n] + δ[n − 1] + δ[n − 2] + δ[n − (L − 1)] L L L L 2.51 The first-order causal LTI system is characterized by the difference equation y[n] = p 0 x[n] + p1x[n −1] − d1 y[n −1] . Letting x[n] = δ[n] we obtain the expression for its impulse response h[n] = p 0 δ[n] + p1δ[n −1] − d 1h[n −1] . Solving it for n = 0, 1, and 2, we get h[0] = p 0 , h[1] = p1 − d 1h[0] = p1 − d1 p 0 , and h[2] = −d 1h[1] == −d1 ( p 1 − d1 p 0 ) . Solving these h[2] h[2]h[0] equations we get p0 = h[0], d 1 = − , and p1 = h[1] − . h[1] h[1] 23
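The relations just derived in Problem 2.51 are easy to check numerically. The following MATLAB fragment is not part of the original solution; the coefficient values p0 = 2, p1 = -3, d1 = 0.5 are arbitrary choices used only for illustration, and the first three impulse response samples are obtained with impz.

p0 = 2; p1 = -3; d1 = 0.5;                         % hypothetical coefficient values
h = impz([p0 p1], [1 d1], 3);                      % h[0], h[1], h[2] of the assumed filter
disp([h(1), h(2) - h(3)*h(1)/h(2), -h(3)/h(2)])    % reproduces [p0 p1 d1] = [2 -3 0.5]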
  • 27. M 2.52 ∑ N p k x[n − k] = k= 0 ∑ dk y[n − k]. k= 0 N k= 0 Let the input to the system be x[n] = δ[n] . Then, M k= 0 ∑ pkδ[n − k] = ∑ dk h[n − k]. Thus, N pr = ∑ dkh[r − k]. Since the system is assumed to be causal, h[r – k] = 0 ∀ k > r. k =0 r pr = r k =0 k =0 ∑ dkh[r − k] = ∑ h[k]dr − k . 2.53 The impulse response of the cascade is given by h[n] = h1 [n] * h 2 [n] where  n  n n k n− k  h 1[n] = α µ[n] and h 2 [n] = β µ[n] . Hence, h[n] =  α β   µ[n].  k= 0  ∑ ∞ n 2.54 Now, h[n] = α µ[n]. Therefore y[n] = ∑ ∞ h[k] x[n − k] = ∞ k =1 = x[n] + k = −∞ ∞ ∑ α k x[n − k] k =0 k =0 ∑ αk x[n − k] = x[n] + α ∑ α k x[n −1 − k] = x[n] + αy[n − 1] . Hence, x[n] = y[n] – αy[n −1] . Thus the inverse system is given by y[n]= x[n] – αx[n −1].   The impulse response of the inverse system is given by g[n] =  1 −α . ↑  2.55 y[n] = y[n − 1] + y[n − 2] + x[n − 1] . Hence, x[n – 1] = y[n] – y[n – 1] – y[n – 2], i.e. x[n] = y[n + 1] – y[n] – y[n – 1]. Hence the inverse system is characterised by   y[n] = x[n + 1] – x[n] – x[n – 1] with an impulse response given by g[n] = 1 –1 –1 .   ↑ d1 p1 1 2.56 y[n] = p 0 x[n] + p1x[n −1] − d1 y[n −1] which leads to x[n] = y[n] + y[n −1] − x[n − 1] p0 p0 p0 Hence the inverse system is characterised by the difference equation d p 1 y 1[n] = x1 [n] + 1 x1[n −1] − 1 y 1[n −1]. p0 p0 p0 2.57 (a) From the figure shown below we observe v[n] ↓ h [n] x[n] h1[n] 2 h 5[n] h 3 [n] h 4 [n] 24 y[n]
  • 28. v[n] = (h1 [n] + h3 [n] * h 5 [n]) * x[n] and y[n] = h2 [n] * v[n] + h3 [n] * h 4 [n] * x[n]. Thus, y[n] = (h2 [n] * h 1 [n] + h2 [n] * h 3 [n] * h 5 [n] + h3 [n] * h 4 [n]) * x[n]. Hence the impulse response is given by h[n] = h2 [n] * h 1 [n] + h2 [n] * h 3 [n] * h 5 [n] + h3 [n] * h 4 [n] (b) From the figure shown below we observe h 4 [n] v[n] ↓ x[n] h1 [n ] h 2 [n] h 3[n] y[n] h 5 [n] v[n] = h4 [n] * x[n] + h1 [n] * h 2 [n] * x[n]. Thus, y[n] = h3 [n] * v[n] + h1 [n] * h 5 [n] * x[n] = h3 [n] * h 4 [n] * x[n] + h3 [n] * h 1 [n] * h 2 [n] * x[n] + h1 [n] * h 5 [n] * x[n] Hence the impulse response is given by h[n] = h3 [n] * h 4 [n] + h3 [n] * h 1 [n] * h 2 [n] + h1 [n] * h 5 [n] 2.58 h[n] = h1[n] * h 2 [n] + h 3 [n] Now h 1[n] * h 2 [n] = ( 2 δ[n − 2] − 3 δ[n + 1]) * ( δ[n − 1] + 2 δ[n + 2]) = 2 δ[n − 2] * δ[n −1] – 3δ[n + 1] * δ[n −1] + 2 δ[n − 2] * 2 δ[n + 2] – 3δ[n + 1] * 2 δ[n + 2] = 2 δ[n − 3] – 3δ[n] + 4 δ[n] – 6 δ[n + 3] = 2 δ[n − 3] + δ[n] – 6 δ[n + 3] . Therefore, y[n] = 2 δ[n − 3] + δ[n] – 6 δ[n + 3] + 5 δ[n − 5] + 7 δ[n − 3] + 2 δ[n −1] −δ[n] + 3 δ[n + 1] = 5 δ[n − 5] + 9 δ[n − 3] + 2 δ[n −1] + 3δ[n + 1] − 6δ[n + 3] 2.59 For a filter with complex impulse response, the first part of the proof is same as that for a ∞ filter with real impulse response. Since, y[n] = ∑ h[k]x[n − k] , k = −∞ ∞ k= −∞ y[n] = ∞ k= −∞ ∑ h[k]x[n − k] ≤ ∑ h[k] x[n − k]. Since the input is bounded hence 0 ≤ x[n] ≤ Bx . Therefore, y[n] ≤ B x ∞ So if ∞ ∑ h[k]. k= −∞ ∑ h[k] = S < ∞ then y[n] ≤ B S indicating that y[n] is also bounded. x k= −∞ 25
  • 29. To proove the converse we need to show that if a bounded output is produced by a bounded h *[−n] input then S < ∞ . Consider the following bounded input defined by x[n] = . h[−n] ∞ ∑ Then y[0] = k =−∞ h * [k]h[k] = h[k] ∞ ∑ h[k] = S. Now since the output is bounded thus S < ∞ . k =−∞ ∞ Thus for a filter with complex response too is BIBO stable if and only if ∑ h[k] = S < ∞ . k= −∞ 2.60 The impulse response of the cascade is g[n] = h1 [n] * h 2 [n] or equivalently, ∞ g[k] = ∑ h1[k − r]h2[r]. r = –∞ ∞ ∑  ∞  ∞  h1[k – r] h 2 [r] ≤  h 1[k]   h 2 [r]  .     k= –∞   r= –∞  k= –∞ r = –∞ ∞ g[k] = k= –∞ Thus, ∞ ∑ ∑ ∑ Since h1 [n] and h2 [n] are stable, ∑ ∑ h1[k] < ∞ and ∑ h2[k] < ∞ . Hence, ∑ g[k] < ∞ . k k k Hence the cascade of two stable LTI systems is also stable. 2.61 The impulse response of the parallel structure g[n] = h1 [n] + h 2 [n] . Now, ∑ g[k] = ∑ h 1[k] + h 2 [k] ≤ ∑ h1 [k] + ∑ h 2 [ k]. k k k Since h1 [n] and h2 [n] are stable, k ∑ h1[k] < ∞ and ∑ h2[k] < ∞ . Hence, ∑ g[k] < ∞ . Hence the parallel connection of k k k two stable LTI systems is also stable. 2.62 Consider a cascade of two passive systems. Let y1 [n] be the output of the first system which is the input to the second system in the cascade. Let y[n] be the overall output of the cascade. ∞ The first system being passive we have ∑ ∞ 2 y 1[n] ≤ n =−∞ ∑ x[n] 2 . n= −∞ Likewise the second system being also passive we have ∞ ∑ y[n] n =−∞ 2 ∞ ∑ y1[n] ≤ n =−∞ 2 ∞ ≤ ∑ x[n] 2 , n= −∞ indicating that cascade of two passive systems is also a passive system. Similarly one can prove that cascade of two lossless systems is also a lossless system. 2.63 Consider a parallel connection of two passive systems with an input x[n] and output y[n]. The outputs of the two systems are y 1[n] and y 2 [n] , respectively. Now, ∞ ∑ n =−∞ y1 [n] 2 ∞ ≤ ∑ n= −∞ x[n] , and 2 ∞ ∑ n =−∞ y 2 [n] 2 ∞ ≤ ∑ n= −∞ x[n] . 2 Let y 1[n] = y 2 [n] = x[n] satisfying the above inequalities. Then y[n] = y1 [n] + y 2 [n] = 2x[n] ∞ ∞ ∞ and as a result, ∑ n =−∞ y[n] = 4∑ n =−∞ x[n] > ∑ n =−∞ x[n] . Hence, the parallel connection of two passive systems may not be passive. 2 2 26 2
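A small numerical illustration of the inequality used in Problem 2.60 may be helpful here (this sketch is not part of the original solution; the two impulse responses below are arbitrary finite-length examples): the sum of |g[k]| for the cascade never exceeds the product of the individual sums.

h1 = [1 -0.5 0.25 2]; h2 = [0.3 1 -0.7];        % arbitrary stable FIR impulse responses
g = conv(h1, h2);                                % impulse response of the cascade
disp([sum(abs(g)) sum(abs(h1))*sum(abs(h2))])    % first value never exceeds the second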
  • 30. M 2.64 Let ∑ N p k x[n − k] = y[n] + k= 0 ∑ d ky[n − k] be the difference equation representing the causal k=1 IIR digital filter. For an input x[n] = δ[n] , the corresponding output is then y[n] = h[n], the impulse response of the filter. As there are M+1 {pk } coefficients, and N {dk } coefficients, there are a total of N+M+1 unknowns. To determine these coefficients from the impulse response samples, we compute only the first N+M+1 samples. To illstrate the method, without any loss of generality, we assume N = M = 3. Then , from the difference equation reprsentation we arrive at the following N+M+1 = 7 equations: h[0] = p 0 , h[1] + h[0]d 1 = p1 , h[2] + h[1]d 1 + h[0] d 2 = p 2 , h[3] + h[2]d 1 + h[1] d 2 + h[0] d 3 = p 3 , h[4] + h[3]d 1 + h[2] d 2 + h[1] d 3 = 0, h[5] + h[4] d1 + h[3]d 2 + h[2]d 3 = 0, h[6] + h[5]d 1 + h[4] d 2 + h[3]d 3 = 0. Writing the last three equations in matrix form we arrive at  h[4]   h[3] h[2] h[1]   d1   0   h[5]  +  h[4] h[3] h[2]  d  =  0 ,  h[6]   h[5] h[4] h[3]  2   0       d3      d   h[3]  1 and hence,  d 2  = –  h[4]  h[5] d    3 h[1]  h[2]  h[3]   h[2] h[3] h[4] –1  h[4]  h[5] .  h[6]   Substituting these values of {di} in the first four equations written in matrix form we get  p 0   h[0] 0 0 0  1  p   0 0   d1  1  =  h[1] h[0]   0   d2 .  p 2   h[2] h[1] h[0]   p   h[3] h[2] h[1] h[0]  d   3  3 2.65 y[n] = y[−1] + ∑ l =0 x[l ] = y[−1] + ∑ l = 0 l µ[l ] = y[ −1] + ∑ l = 0 l = y[−1] + n n (a) For y[–1] = 0, y[n] = n(n + 1) . 2 n(n +1) 2 (b) For y[–1] = –2, y[n] = –2 + 2.66 y(nT) = y ( (n – 1)T) + ∫ n nT (n−1) T n(n +1) n 2 + n − 4 = . 2 2 x( τ) dτ = y (( n – 1)T) + T ⋅ x (( n – 1)T). Therefore, the difference equation representation is given by y[n] = y[n − 1] + T ⋅ x[n − 1], where y[n] = y( nT) and x[n] = x( nT). 27
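For N = M = 3, the two matrix equations of Problem 2.64 above can be solved directly in MATLAB. The sketch below is illustrative only: the test filter coefficients are arbitrary (with d0 = 1, as assumed in the solution) and are recovered from the first seven samples of its impulse response.

p = [1 0.5 -0.2 0.3];  d = [1 -0.6 0.11 -0.006];   % hypothetical test filter with d0 = 1
h = impz(p, d, 7);                                 % first N+M+1 = 7 impulse response samples
D = toeplitz(h(4:6), h(4:-1:2));                   % [h[3] h[2] h[1]; h[4] h[3] h[2]; h[5] h[4] h[3]]
dr = -D \ h(5:7);                                  % recovered d1, d2, d3
pr = toeplitz(h(1:4), [h(1) 0 0 0]) * [1; dr];     % recovered p0, p1, p2, p3
disp(dr.'); disp(pr.')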
  • 31. 2.67 y[n] = 1 n 1 n −1 1 1 n−1 ∑ l =1 x[l ] = n ∑ l =1 x[l ] + n x[n], n ≥ 1. y[n − 1] = n − 1 ∑ l =1 x[l ], n ≥ 1. Hence, n n −1 ∑ l =1 x[l ] = (n − 1)y[n − 1]. Thus, the difference equation representation is given by 1  n − 1 y[n] =   y[ n −1] + x[n]. n ≥ 1.  n  n 2.68 y[n] + 0.5 y[n − 1] = 2 µ[n], n ≥ 0 with y[−1] = 2. The total solution is given by y[n] = y c[ n] + y p [n] where y c [n] is the complementary solution and y p [n] is the particular solution. y c [n] is obtained ny solving y c [n] + 0.5 y c [n − 1] = 0 . To this end we set y c [n] = λn , which yields λn + 0.5 λn−1 = 0 whose solution gives λ = −0.5. Thus, the complementary solution is of the form y c [n] = α(−0.5) n . For the particular solution we choose y p [n] = β. Substituting this solution in the difference equation representation of the system we get β + 0.5β = 2 µ[n]. For n = 0 we get β(1 + 0.5) = 2 or β = 4 / 3. 4 The total solution is therefore given by y[n] = y c[ n] + y p [n] = α(−0.5) n + , n ≥ 0. 3 1 −1 4 Therefore y[–1] = α(−0.5) + = 2 or α = – . Hence, the total solution is given by 3 3 1 4 n y[n] = – (−0.5) + , n ≥ 0. 3 3 2.69 y[n] + 0.1y[n −1] − 0.06y[n − 2] = 2 n µ[n] with y[–1] = 1 and y[–2] = 0. The complementary solution y c [n] is obtained by solving y c [n] + 0.1 y c[n −1] − 0.06y c [n − 2] = 0 . To this end we set y c [n] = λn , which yields λn + 0.1λn−1 – 0.06λn− 2 = λn− 2 (λ2 + 0.1λ – 0.06) = 0 whose solution gives λ1 = –0.3 and λ 2 = 0.2 . Thus, the complementary solution is of the form y c [n] = α1 (−0.3) n + α 2 (0.2) n . For the particular solution we choose y p [n] = β(2) n . ubstituting this solution in the difference equation representation of the system we get β2 n + β(0.1)2 n−1 – β(0.06)2 n− 2 = 2 n µ[n]. For n = 0 we get β+ β(0.1)2 −1 – β(0.06)2 −2 =1 or β = 200 / 207 = 0.9662 . The total solution is therefore given by y[n] = y c[ n] + y p [n] = α1 (−0.3) n + α 2 (0.2) n + 200 n 2 . 207 200 −1 2 = 1 and 207 200 −2 10 107 y[−2] = α1 (−0.3) −2 + α 2 (0.2) −2 + 2 = 0 or equivalently, – α1 + 5α 2 = and 207 3 207 100 50 α1 + 25α2 = – whose solution yields α1 = –0.1017 and α 2 = 0.0356. Hence, the total 9 207 solution is given by y[n] = −0.1017(−0.3) n + 0.0356(0.2) n + 0.9662(2) n , for n ≥ 0. From the above y[−1] = α1 (−0.3) −1 + α2 (0.2) −1 + 2.70 y[n] + 0.1y[n −1] − 0.06y[n − 2] = x[n] − 2 x[n − 1] with x[n] = 2 n µ[n] , and y[–1] = 1 and y[–2] = 0. For the given input, the difference equation reduces to y[n] + 0.1y[n −1] − 0.06y[n − 2] = 2 n µ[n] − 2 (2 n −1 ) µ[n −1] = δ[ n]. The solution of this 28
  • 32. equation is thus the complementary solution with the constants determined from the given initial conditions y[–1] = 1 and y[–2] = 0. From the solution of the previous problem we observe that the complementary is of the form y c [n] = α1 (−0.3) n + α 2 (0.2) n . For the given initial conditions we thus have y[−1] = α1 (−0.3) −1 + α2 (0.2) −1 = 1 and y[−2] = α1 (−0.3) −2 + α 2 (0.2) −2 = 0 . Combining these  −1 / 0.3 1 / 0.2   α1   1  two equations we get     =   which yields α1 = − 0.18 and α 2 = 0.08.  1 / 0.09 1 / 0.04  α2   0  Therefore, y[n] = − 0.18(−0.3) n + 0.08(0.2) n . 2.71 The impulse response is given by the solution of the difference equation y[n] + 0.5 y[n − 1] = δ[n]. From the solution of Problem 2.68, the complementary solution is given by y c [n] = α(−0.5) n . To determine the constant we note y[0] = 1 as y[–1] = 0. From the complementary solution y[0] = α(–0.5) 0 = α, hence α = 1. Therefore, the impulse response is given by h[n] = (−0.5) n . 2.72 The impulse response is given by the solution of the difference equation y[n] + 0.1y[n −1] − 0.06y[n − 2] = δ[n] . From the solution of Problem 2.69, the complementary solution is given by y c [n] = α1 (−0.3) n + α 2 (0.2) n . To determine the constants α1 and α 2 , we observe y[0] = 1 and y[1] + 0.1y[0] = 0 as y[–1] = y[–2] = 0. From the complementary solution y[0] = α1 (−0.3) 0 + α 2 (0.2) 0 = α1 + α2 = 1, and y[1] = α1 (−0.3)1 + α2 (0.2)1 = –0.3 α1 + 0.2α 2 = −0.1. Solution of these equations yield α1 = 0.6 and α 2 = 0.4. Therefore, the impulse response is given by h[n] = 0.6(−0.3) n + 0.4(0.2) n . A n+1 n +1 K n+1K 2.73 Let A n = n (λ i ) . Then = λ i . Now lim = 1. Since λ i < 1, there exists An n n→ ∞ n A 1 + λi ∞ a positive integer N o such that for all n > N o , 0 < n +1 < < 1. Hence ∑ n =0 A n An 2 converges. K n 2.74 (a) x[n] = {3 −2 0 1 4 5 2 } , −3 ≤ n ≤ 3. r xx[l ] = ∑ n =−3 x[n] x[n − l ] . Note, r xx[−6] = x[3]x[−3] = 2 × 3 = 6, r xx[−5] = x[3] x[−2] + x[2]x[−3] = 2 × (−2) + 5 × 3 =11, r xx[−4] = x[3]x[−1] + x[2] x[−2] + x[1] x[−3] = 2 × 0 + 5 × (−2) + 4 × 3 = 2, r xx[−3] = x[3] x[0] + x[2]x[−1] + x[1]x[−2] + x[0] x[−3] = 2 ×1 + 5 × 0 + 4 × (−2) +1 × 3 = −3, r xx[−2] = x[3]x[1] + x[2] x[0] + x[1] x[−1] + x[0]x[−2] + x[−1]x[−3] =11, r xx[−1] = x[3]x[2] + x[2] x[1] + x[1] x[0] + x[0]x[−1] + x[−1]x[−2] + x[−2]x[−3] = 28, r xx[0] = x[3] x[3] + x[2] x[2] + x[1]x[1] + x[0]x[0] + x[ −1] x[−1] + x[−2] x[−2] + x[−3]x[−3] = 59. The samples of r xx[l ] for 1 ≤ l ≤ 6 are determined using the property r xx[l ] = rxx[ −l ] . Thus, 3 r xx[l ] = {6 11 2 −3 11 28 59 28 11 −3 2 11 6 }, −6 ≤ l ≤ 6. y[n] = {0 7 1 −3 4 9 −2} , −3 ≤ n ≤ 3. Following the procedure outlined above we get 29
  • 33. r yy[l ] = {0 −14 61 43 −52 10 160 10 −52 43 61 −14 w[n] = {−5 4 3 6 −5 0 1}, −3 ≤ n ≤ 3. Thus, r ww[l ] = {−5 4 28 −44 −11 −20 112 −20 −11 −44 0}, −6 ≤ l ≤ 6. 28 4 −5}, −6 ≤ l ≤ 6. (b) r xy[l ] = ∑ n =−3 x[n] y[n − l ] . Note, r xy[−6] = x[3]y[−3] = 0, r xy[−5] = x[3] y[−2] + x[2]y[−3] =14, r xy[−4] = x[3]y[−1] + x[2] y[−2] + x[1] y[−3] = 37, r xy[−3] = x[3] y[0] + x[2]y[−1] + x[1]y[−2] + x[0] y[−3] = 27, r xy[−2] = x[3]y[1] + x[2] y[0] + x[1]y[−1] + x[0]y[−2] + x[−1]y[−3] = 4, r xy[−1] = x[3]y[2] + x[2] y[1] + x[1]y[0] + x[0] y[−1] + x[−1]y[−2] + x[−2] y[−3] = 27, r xy[0] = x[3] y[3] + x[2] y[2] + x[1]y[1] + x[0] y[0] + x[−1] y[−1] + x[ −2] y[−2] + x[−3]y[−3] = 40, r xy[1] = x[2] y[3] + x[1] y[2] + x[0]y[1] + x[−1]y[0] + x[ −2] y[−1] + x[ −3]y[−2] = 49, r xy[2] = x[1] y[3] + x[0] y[2] + x[−1]y[1] + x[−2]y[0] + x[−3]y[−1] = 10, r xy[3] = x[0] y[3] + x[−1]y[2] + x[−2] y[1] + x[−3]y[0] = −19, r xy[4] = x[−1]y[3] + x[−2]y[2] + x[−3]y[1] = −6, r xy[5] = x[−2]y[3] + x[−3] y[2] = 31, r xy[6] = x[−3]y[3] = −6. Hence, 3 r xy[l ] = {0 14 37 27 4 27 40 49 10 −19 −6 31 −6,} , −6 ≤ l ≤ 6. r xw[l ] = ∑ n =−3 x[n] w[n − l ] = {−10 −17 6 38 36 12 −35 6 1 29 −15 3 −2 3}, −6 ≤ l ≤ 6. ∞ ∞ 2.75 (a) x 1[n] = α n µ[n]. r xx[l ] = ∑ n =−∞ x 1[n] x1[ n − l ] = ∑ n= −∞ α n µ[n] ⋅ αn −l µ[n − l ]  ∞ α −l α 2n− l = , for l < 0,  ∑ n= 0 ∞  2 n−l 1 − α2 = ∑ n= 0 α µ[n − l ] =  l  ∞ α2 n−l = α for l ≥ 0. ∑ n= l   1 − α2 αl α −l Note for l ≥ 0, r xx[l ] = , and for l < 0, r xx[l ] = . Replacing l with −l in the 1 − α2 1 − α2 α −(− l ) αl second expression we get r xx[−l ] = = = r xx[l ]. Hence, r xx[l ] is an even function 1 − α2 1 − α2 of l . Maximum value of r xx[l ] occurs at l = 0 since α l is a decaying function for increasing l when α < 1.  1, 0 ≤ n ≤ N −1,  1, l ≤ n ≤ N −1 + l , N −1 (b) x 2 [n] =  r xx[l ] = ∑ n =0 x 2 [n − l ], where x 2 [n − l] =  otherwise. otherwise.  0,  0, 30
  • 34. for l < −(N − 1),  0,  N + l , for – (N – 1) ≤ l < 0,   for l = 0, Therefore, r xx[l ] =  N, N − l , for 0 < l ≤ N − 1,   0, for l > N −1.  It follows from the above r xx[l ] is a trinagular function of l , and hence is even with a maximum value of N at l = 0  πn  2.76 (a) x 1[n] = cos   where M is a positive integer. Period of x 1[n] is 2M, hence  M 1 1 πn π(n + l ) 2M −1 2M −1 r xx[l ] = ∑ n= 0 x 1[n] x1[n + l ] = 2M ∑ n= 0 cos M  cos M  2M 1 πn  πn πl πn πl  1 πl πn 2M −1 2M−1 = ∑ n =0 cos M   cos M  cos M  − sin M  sin M   = 2M cos M  ∑ n= 0 cos2  M  . 2M   From the solution of Problem 2.16 2M −1 ∑ n =0 1  πn  r xx[l ] = cos   .  M 2  πn  2M cos 2   = = M. Therefore,  M 2 (b) x 2 [n] = n mod ulo 6 = {0 1 2 3 4 5}, 0 ≤ n ≤ 5. It is a peridic sequence with a period 1 5 6. Thus,, r xx[l ] = ∑ n =0 x 2 [ n] x 2 [n + l ], 0 ≤ l ≤ 5. It is also a peridic sequence with a period 6. 6 1 55 r xx[0] = (x 2 [0]x 2 [0] + x 2 [1]x 2 [1] + x 2 [2]x 2 [2] + x 2 [3]x 2 [3] + x 2 [4]x 2 [4] + x 2 [5]x 2 [5]) = , 6 6 1 40 r xx[1] = (x 2 [0]x 2 [1] + x 2 [1]x 2 [2] + x 2 [2]x 2 [3] + x 2 [3]x 2 [4] + x 2 [4]x 2 [5] + x 2 [5]x 2 [0]) = , 6 6 1 32 r xx[2] = (x 2 [0]x 2 [2] + x 2 [1]x 2 [3] + x 2 [2]x 2 [4] + x 2 [3]x 2 [5] + x 2 [4]x 2 [0] + x 2 [5]x 2 [1]) = , 6 6 1 28 r xx[3] = (x 2 [0]x 2 [3] + x 2 [1]x 2 [4] + x 2 [2]x 2 [5] + x 2 [3]x 2 [0] + x 2 [4]x 2 [1] + x 2 [5]x 2 [2]) = , 6 6 1 31 r xx[4] = (x 2 [0]x 2 [4] + x 2 [1]x 2 [5] + x 2 [2]x 2 [0] + x 2 [3]x 2 [1] + x 2 [4]x 2 [2] + x 2 [5]x 2 [3]) = , 6 6 1 40 r xx[5] = (x 2 [0]x 2 [5] + x 2 [1]x 2 [0] + x 2 [2]x 2 [1] + x 2 [3]x 2 [2] + x 2 [4]x 2 [3] + x 2 [5]x 2 [4]) = . 6 6 (c) x 3 [n] = (−1) n . It is a periodic squence with a period 2. Hence, 1 1 1 r xx[l ] = ∑ n =0 x 3 [n] x3 [n + l ], 0 ≤ l ≤ 1. r xx[0] = (x 2 [0]x 2 [0] + x 2 [1]x 2 [1]) = 1, and 2 2 1 r xx[1] = (x 2 [0]x 2 [1] + x 2 [1]x 2 [0]) = −1. It is also a periodic squence with a period 2. 2 2.77 E{X + Y} = ∫∫ (x + y)p XY (x, y)dxdy = ∫∫ x p XY (x, y)dxdy + ∫∫ y p XYy (x, y)dxdy ( ) = ∫ x ∫ p XY (x, y)dy dx + ∫ y (∫ pXY (x, y)dx)dy = ∫ x pX (x)dx + ∫ y pY(y)dy 31
  • 35. = E{X} + E{Y}. From the above result, E{2X} = E(X + X} = 2 E{X}. Similarly, E{cX} = E{(c–1)X + X} = E{(c–1)X} + E{X}. Hence, by induction, E{cX} = cE{x}. { } 2.78 C = E (X − κ)2 . To find the value of κ that minimize the mean square error C, we differentiate C with respect to κ and set it to zero. Thus dC = E{2(X − κ)} = E{2X} − E{2K} = 2 E{X} − 2K = 0. which results in κ = E{X}, and dκ the minimum value of C is σ 2 . x 2.79 mean = m x = E{x} = ∫ ∞ −∞ xp X (x)dx variance = σ 2 = E{(x − m x )2 } = x ∞ ∫−∞ (x − mx)2 pX (x)dx α 1  α ∞ xdx x (a) p X (x) =  2 2  . Now, m x = π −∞ 2 2 . Since 2 2 is odd hence the π x +α  x +α x +α integral is zero. Thus m x = 0 . ∫ 2 σx = 2 α ∞ x dx . Since this integral does not converge hence variance is not defined for π −∞ x 2 + α 2 ∫ this density function. α −α x α ∞ −α x (b) p x (x) = e . Now, m x = xe dx = 0 . Next, 2 2 −∞  2 −αx ∞ ∞  ∞ 2 −αx x e α ∞ 2 −α x 2x −αx  2 σx = x e dx = α x e dx = α + e dx 2 −∞ 0 α  −α   0  0 ∞   −αx ∞ 2 −αx   2x e 2 = α 0 + + dx = 2 . 2e α −α 0 α   α  0  ∫ ∫ ∫ ∫ ∫ n (c) p x (x) = ∑  n pl (1 − p)n− l δ(x − l ) . l l =0 n ∞  n ∫−∞ ∑ l =0 mx= x 2  l  p l (1 − p )   2 2 σ x = E{x } − m x = n = ∞ n− l Now, n δ(x−l )dx=   ∑  n pl (1− p)n −l =np l l =0 n  n l 2 n− l 2 ∫−∞x ∑  l  p (1− p) δ(x − l )dx − (np) l =0   ∑ l 2 n pl (1 − p)n− l −n2p2 =n p(1−p) . l l =0 (d) p x (x) = ∞ ∑ l =0 e −α l α δ(x − l) . Now, l! 32
  • 36. ∞ ∞ ∫−∞ ∑ l =0 mx = x 2 σx = E{x e −αα l δ(x−l )dx = l! 2 2 } − mx x (e) p x (x) = α 2 e ∞ α 2 = x 2 −∞ ∫ −x 2 / 2α 2 ∞ ∞ ∑ l e −ααl = α. l! l =0 −α l ∑ e l =0 α 2 δ(x − l) dx − α = l! ∞ ∑ l2 l =0 e−αα l − α2 = α . l! µ(x) . Now, 1 ∞ 2 − x 2 /2α2 1 ∞ 2 −x 2 / 2α2 mx = 2 x e µ(x)dx = 2 x e dx = α π / 2 . α −∞ α 0 2 1 ∞ 3 −x 2 / 2α2 α π  π 2 2 2 2 σ x = E{x } − m x = 2 x e µ(x)dx − = 2− α .  2 2 α −∞ ∫ ∫ ∫ 2.80 Recall that random variables x and y are linearly independent if and only if 2  E  a1x + a2 y  >0 ∀ a1 ,a2 except when a1 = a2 = 0, Now,   2 2  2 2  2 2 2 2 E  a1 x  +E a 2 y  +E (a1 ) * a2 x y * +E a1 (a2 ) * x * y = a1 E x + a 2 E y > 0     ∀ a 1 and a 2 except when a1 = a2 = 0. Hence if x,y are statistically independent they are also linearly independent. { { 2 2.81 σ x = E (x − m x ) 2 } { { } { } } } = E{x2 + m2 − 2xm x} = E{x2} + E{m 2} − 2E{xm x}. x x 2 2 2 Since m x is a constant, hence E{mx } = m x and E{xm x } = m x E{x} = m x . Thus, 2 2 2 2 2 2 σ x = E{x } + m x − 2m x = E{x } − m x . 2.82 V = aX + bY. Therefore, E{V} = a E{X} + b E{Y} = a m x + b m y . and 2 2 2 σ v = E{(V − m v ) } = E{(a(X − m x ) + b(Y − m y )) }. 2 2 2 2 2 Since X,Y are statistically independent hence σ v = a σ x + b σ y . 2.83 v[ n] = a x[n] + b y[n]. Thus φ vv[n] = E{v[m + n}v[m]} { } = E a2 x[m + n}x[m] + b 2 y[m + n}y[m] + ab x[m + n] y[m] + ab x[m] y[m + n] . Since x[n] and y[n] are independent, { } { } φ vv[n] = E a 2 x[m + n}x[m] + E b 2 y[m + n}y[m] = a2 φxx [n] + b2 φ yy[n] . φ vx[n] = E{v[m + n] x[m]} = E{a x[m + n]x[m] + b y[m + n]x[m]} = a E{ x[m + n]x[m]} = a φxx [n]. Likewise, φ vy[n] = b φyy [n]. 2.84 φxy [l]=E{x[n + l ]y * [n]}, φxy [–l]=E{x[n – l ]y *[n]}, φyx [l]=E{y[n + l ]x * [n]}. Therefore, φyx *[l ]=E{y *[n + l ]x[n]} =E{x[n – l ] y *[n]} =φxy [–l]. 33
Hence φ_xy[−l] = φ_yx*[l]. Since γ_xy[l] = φ_xy[l] − m_x (m_y)*, it follows that γ_xy*[l] = φ_xy*[l] − (m_x)* m_y, and hence γ_xy[−l] = γ_yx*[l]. The remaining two properties can be proved in a similar way by letting x = y.

2.85 E{|x[n] − x[n − l]|²} ≥ 0, i.e., E{|x[n]|²} + E{|x[n − l]|²} − E{x[n] x*[n − l]} − E{x*[n] x[n − l]} ≥ 0, so that 2φ_xx[0] − 2φ_xx[l] ≥ 0, i.e., φ_xx[0] ≥ φ_xx[l]. Using Cauchy's inequality, E²{xy} ≤ E{x²} E{y²}. Hence, |φ_xy[l]|² ≤ φ_xx[0] φ_yy[0]. One can show similarly |γ_xy[l]|² ≤ γ_xx[0] γ_yy[0].

2.86 Since there are no periodicities in {x[n]}, x[n] and x[n + l] become uncorrelated as l → ∞. Thus lim_{l→∞} γ_xx[l] = lim_{l→∞} (φ_xx[l] − m_x²) → 0. Hence lim_{l→∞} φ_xx[l] = m_x².

2.87 φ_XX[l] = (9 + 11l² + 14l⁴)/(1 + 3l² + 2l⁴). Now,
m_X² = lim_{l→∞} φ_XX[l] = lim_{l→∞} (9 + 11l² + 14l⁴)/(1 + 3l² + 2l⁴) = 7.
Hence, m_X = √7. E{|X[n]|²} = φ_XX[0] = 9. Therefore, σ_X² = φ_XX[0] − m_X² = 9 − 7 = 2.

M2.1 L = input('Desired length = '); n = 1:L;
FT = input('Sampling frequency = '); T = 1/FT;
imp = [1 zeros(1,L-1)]; step = ones(1,L);
ramp = (n-1).*step;
subplot(3,1,1); stem(n-1,imp);
xlabel(['Time in ',num2str(T),' sec']); ylabel('Amplitude');
subplot(3,1,2); stem(n-1,step);
xlabel(['Time in ',num2str(T),' sec']); ylabel('Amplitude');
subplot(3,1,3); stem(n-1,ramp);
xlabel(['Time in ',num2str(T),' sec']); ylabel('Amplitude');

M2.2 % Get user inputs
A = input('The peak value =');
L = input('Length of sequence =');
N = input('The period of sequence =');
FT = input('The desired sampling frequency =');
DC = input('The square wave duty cycle = ');
% Create signals
T = 1/FT;
t = 0:L-1;
x = A*sawtooth(2*pi*t/N);
y = A*square(2*pi*(t/N),DC);
% Plot
subplot(211)
stem(t,x); ylabel('Amplitude');
xlabel(['Time in ',num2str(T),'sec']);
subplot(212)
stem(t,y); ylabel('Amplitude');
xlabel(['Time in ',num2str(T),'sec']);

[Stem plots of the sawtooth and square-wave sequences generated by Program M2.2; amplitude versus time in 5e-05 sec.]

M2.3 (a) The input data entered during the execution of Program 2_1 are
Type in real exponent = -1/12
Type in imaginary exponent = pi/6
Type in the gain constant = 1
Type in length of sequence = 41
[Plots of the real and imaginary parts of the sequence generated in Part (a); amplitude versus time index n.]

(b) The input data entered during the execution of Program 2_1 are
Type in real exponent = -0.4
Type in imaginary exponent = pi/5
Type in the gain constant = 2.5
Type in length of sequence = 101

[Plots of the real and imaginary parts of the sequence generated in Part (b); amplitude versus time index n.]

M2.4 (a) L = input('Desired length = ');
A = input('Amplitude = ');
omega = input('Angular frequency = ');
phi = input('Phase = ');
n = 0:L-1;
x = A*cos(omega*n + phi);
stem(n,x);
xlabel('Time index'); ylabel('Amplitude');
title(['omega_{o} = ',num2str(omega)]);

(b)
[Stem plots of the sinusoidal sequences generated in Part (b) for ω_o = 0.14π, ω_o = 0.24π, and ω_o = 0.68π; amplitude versus time index n.]
[Stem plot of the sinusoidal sequence generated in Part (b) for ω_o = 0.75π; amplitude versus time index n.]

M2.5 (a) Using Program 2_1 we generate the sequence x̃1[n] = e^(−j0.4πn) shown below.

[Plots of the real and imaginary parts of x̃1[n] versus time index n.]

(b) Code fragment used to generate x̃2[n] = sin(0.6πn + 0.6π) is:
x = sin(0.6*pi*n + 0.6*pi);
[Stem plot of x̃2[n] versus time index n.]

(c) Code fragment used to generate x̃3[n] = 2cos(1.1πn − 0.5π) + 2sin(0.7πn) is:
x = 2*cos(1.1*pi*n - 0.5*pi) + 2*sin(0.7*pi*n);

[Stem plot of x̃3[n] versus time index n.]

(d) Code fragment used to generate x̃4[n] = 3sin(1.3πn) − 4cos(0.3πn + 0.45π) is:
x = 3*sin(1.3*pi*n) - 4*cos(0.3*pi*n+0.45*pi);

[Stem plot of x̃4[n] versus time index n.]

(e) Code fragment used to generate x̃5[n] = 5sin(1.2πn + 0.65π) + 4sin(0.8πn) − cos(0.8πn) is:
x = 5*sin(1.2*pi*n+0.65*pi)+4*sin(0.8*pi*n)-cos(0.8*pi*n);
[Stem plot of x̃5[n] versus time index n.]

(f) Code fragment used to generate x̃6[n] = n modulo 6 is:
x = rem(n,6);

[Stem plot of x̃6[n] versus time index n.]

M2.6 t = 0:0.001:1;
fo = input('Frequency of sinusoid in Hz = ');
FT = input('Sampling frequency in Hz = ');
g1 = cos(2*pi*fo*t);
plot(t,g1,'-')
xlabel('time'); ylabel('Amplitude')
hold
n = 0:1:FT;
gs = cos(2*pi*fo*n/FT);
plot(n/FT,gs,'o'); hold off

M2.7 t = 0:0.001:0.85;
g1 = cos(6*pi*t); g2 = cos(14*pi*t); g3 = cos(26*pi*t);
plot(t/0.85,g1,'-',t/0.85,g2,'--',t/0.85,g3,':')
xlabel('time'); ylabel('Amplitude')
hold
n = 0:1:8;
gs = cos(0.6*pi*n);
plot(n/8.5,gs,'o'); hold off

M2.8 As the length of the moving average filter is increased, the output of the filter becomes smoother. However, the delay between the input and the output sequences also increases. (This can be seen from the plots generated by Program 2_4 for various values of the filter length; a short illustrative sketch is also given below.)
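The effect described in M2.8 can also be seen with the following short sketch, which is not Program 2_4 itself; the sinusoid-plus-noise input and the filter lengths 3 and 15 are arbitrary illustrative choices.

n = 0:99;
s = 2*cos(2*pi*0.05*n);                  % assumed slowly varying signal component
x = s + 0.8*randn(size(n));              % noise-corrupted input (illustrative data)
y3 = filter(ones(1,3)/3, 1, x);          % length-3 moving average output
y15 = filter(ones(1,15)/15, 1, x);       % length-15 moving average: smoother but more delayed
plot(n, x, ':', n, y3, '--', n, y15, '-');
xlabel('Time index n'); ylabel('Amplitude');
legend('noisy input', 'M = 3', 'M = 15');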
M2.9 alpha = input('Alpha = ');
yo = 1; y1 = 0.5*(yo + (alpha/yo));
while abs(y1 - yo) > 0.00001
y2 = 0.5*(y1 + (alpha/y1));
yo = y1; y1 = y2;
end
disp('Square root of alpha is'); disp(y1)

M2.10 alpha = input('Alpha = ');
yo = 0.3; y = zeros(1,61);
L = length(y)-1;
y(1) = alpha - yo*yo + yo;
n = 2;
while abs(y(n-1) - yo) > 0.00001
y2 = alpha - y(n-1)*y(n-1) + y(n-1);
yo = y(n-1);
y(n) = y2;
n = n+1;
end
disp('Square root of alpha is'); disp(y(n-1))
m = 0:n-2;
err = y(1:n-1) - sqrt(alpha);
stem(m,err); axis([0 n-2 min(err) max(err)]);
xlabel('Time index n'); ylabel('Error');
title(['alpha = ',num2str(alpha)])

The displayed output is
Square root of alpha is
0.84000349056114

[Stem plot of the error sequence versus time index n for alpha = 0.7056.]

M2.11 N = input('Desired impulse response length = ');
p = input('Type in the vector p = ');
d = input('Type in the vector d = ');
[h,t] = impz(p,d,N);
n = 0:N-1;
stem(n,h);
xlabel('Time index n'); ylabel('Amplitude');
[Stem plot of the impulse response computed by Program M2.11 versus time index n.]

M2.12 x = [3 −2 0 1 4 5 2], y = [0 7 1 −3 4 9 −2], w = [−5 4 3 6 −5 0 1].

(a) r_xx[n] = [6 11 2 −3 11 28 59 28 11 −3 2 11 6],
r_yy[n] = [0 −14 61 43 −52 10 160 10 −52 43 61 −14 0],
r_ww[n] = [−5 4 28 −44 −11 −20 112 −20 −11 −44 28 4 −5].

[Stem plots of r_xx[n], r_yy[n], and r_ww[n] versus lag index.]

(b) r_xy[n] = [−6 31 −6 −19 10 49 40 27 4 27 37 14 0].
r_xw[n] = [3 −2 −15 29 1 6 −35 12 36 38 6 −17 −10].

[Stem plots of r_xy[n] and r_xw[n] versus lag index.]

M2.13 N = input('Length of sequence = ');
n = 0:N-1;
x = exp(-0.8*n);
y = randn(1,N) + x;
n1 = length(x)-1;
r = conv(y,fliplr(y));
k = (-n1):n1;
stem(k,r);
xlabel('Lag index'); ylabel('Amplitude');
gtext('r_{yy}[n]');

[Stem plot of r_yy[n] versus lag index.]

M2.14 (a) n = 0:10000;
phi = 2*pi*rand(1,length(n));
A = 4*rand(1,length(n));
x = A.*cos(0.06*pi*n + phi);
stem(n(1:100),x(1:100)); %axis([0 50 -4 4]);
xlabel('Time index n'); ylabel('Amplitude');
mean = sum(x)/length(x)
var = sum((x - mean).^2)/length(x)
[Stem plots of the first 100 samples of the generated random sinusoidal sequence versus time index n (several realizations).]

mean = 0.00491782302432
var = 2.68081196077671

From Example 2.44, we note that the mean = 0 and variance = 8/3 = 2.66667.

M2.15 n = 0:1000;
z = 2*rand(1,length(n));
y = ones(1,length(n)); x = z - y;
mean = mean(x)
var = sum((x - mean).^2)/length(x)

mean = 0.00102235365812
var = 0.34210530830959

Using Eqs. (2.129) and (2.130) we get m_X = (1 + (−1))/2 = 0 and σ_X² = (1 − (−1))²/12 = 1/3. It should be noted that the values of the mean and the variance computed using the above MATLAB program get closer to the theoretical values if the length is increased. For example, for a length 100001, the values are
mean = 9.122420593670135e-04
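The convergence remark above can also be illustrated with the short sketch below (not part of the original solutions); it prints the sample mean and variance for a few lengths, which approach the theoretical values 0 and 1/3. The lengths chosen are arbitrary.

for L = [1e3 1e5 1e7]
    x = 2*rand(1,L) - 1;                       % uniformly distributed in [-1,1]
    m = sum(x)/L;  v = sum((x - m).^2)/L;      % sample mean and sample variance
    fprintf('L = %8d  mean = %9.6f  variance = %8.6f\n', L, m, v);
end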
  • 49. 3.1 X( e jω Chapter 3 (2e) ∞ )= ∑ x[n]e− jωn where x[n] is a real sequence. Therefore, n =−∞ ∞ ∞  ∞  X re (e jω ) = Re  ∑ x[n]e− jωn  = ∑ x[n]Re e − jωn = ∑ x[n]cos(ωn), and  n =−∞  n =−∞ n =−∞ ∞ ∞  ∞  X im (e jω ) = Im ∑ x[n]e − jωn  = ∑ x[n] Im e− jωn = − ∑ x[n]sin (ωn) . Since cos (ωn) and  n= −∞  n= −∞ n= −∞ ( ) ( ) sin (ωn) are, respectively, even and odd functions of ω , X re (e jω ) is an even function of ω , and X im (e jω ) is an odd function of ω . X(e jω ) = X2 (e jω ) + X2 (e jω ) . Now, X 2 (e jω ) is the square of an even function and re im re X 2 (e jω ) is the square of an odd function, they are both even functions of ω . Hence, im X(e jω ) is an even function of ω . { arg X(e jω  Xim (e jω )  ) = tan  jω  . The argument is the quotient of an odd function and an even  Xre (e )  } −1 { } function, and is therefore an odd function. Hence, arg X(e jω ) is an odd function of ω . 3.2 1 1 − αe –ω 1 − α e –ω 1 − α cos ω − jα sin ω = ⋅ = – jω – jω –ω 2 = 1−αe 1− α e 1 − αe 1 − 2 α cosω + α 1 − 2 αcos ω + α 2 1 − α cos ω αsin ω jω Therefore, X re (e jω ) = . 2 and X im (e ) = – 1 − 2 αcos ω + α 1 − 2 αcos ω + α 2 X(e jω ) = 1 1 2 X(e jω ) = X(e jω ) ⋅ X * (e jω ) = Therefore, X(e jω ) = tan θ(ω) = 3.3 X im (e jω ) Xre (e jω ) 1 − αe 1 – jω 1 − 2 α cos ω + α 2 =– ⋅ 1 1−αe jω = 1 1 − 2α cos ω + α2 . . α sin ω  αsin ω  . Therefore, θ(ω ) == tan −1  – . 1 − α cosω  1 − α cosω  (a) y[n] = µ[n] = y ev [n] + y od[n], where y ev [n] = ( y[n] + y[−n]) = ( µ[n] + µ[−n]) = + δ[ n], 2 2 2 2 1 1 and y od [n] = (y[n] − y[−n]) = (µ[n] − µ[−n]) = µ[n] − – δ[n] . 2 2 2 2  ∞  ∞ 1 1 1 Now, Yev (e jω ) =  2π δ(ω + 2πk)  + = π δ(ω + 2πk) + .   2 2 2  k= –∞  k= –∞ 1 1 1 ∑ 1 1 ∑ 1 1 1 Since y od [n] = µ[n] − + δ[n], y od [n] = µ[n − 1] − + δ[n −1]. As a result, 2 2 2 2 45 1 1
  • 50. 1 2 − jω 1 2 1 2 1 2 y od [n] − y od [n −1] = µ[n] − µ[n − 1] + δ[n −1] − δ[n] = δ[n] + δ[n −1]. Taking the DTFT of both sides we then get Yod (e jω ) − e Yod (e jω ) = − jω 1 1+ e 2 1 − e− jω = Yod (e jω ) = 1 1 − jω − 2 . Hence, 1− e Y(e jω ) = Yev (e jω ) + Yod (e jω ) = 1 +π 1− e − jω (1 + e –jω ). or ∞ ∑ δ(ω + 2πk). k= −∞ (b) Let x[n] be the sequence with the DTFT X(e 1 DTFT is then given by x[n] = 2π 1 2 jω ∞ )= ∑ 2πδ(ω −ω o + 2πk) . Its inverse k =−∞ π ∫ 2πδ(ω − ωo )e jωndω = e jω n . o −π ∞ 3.4 Let X(e jω ) = ∑ k= −∞ 2π δ(ω + 2πk) . Its inverse DTFT is then given by 1 π 2π x[n] = ∫ –π 2π δ(ω) e jωn dω = 2π = 1. 2π ∞ 3.5 (a) Let y[n] = g[n – no ], Then Y(e jω )= ∑ g[n]e − jωn = e − jωn n= −∞ (b) Let h[n] = e jω o n g[n] , then H(e ∞ ∞ jω )= ∞ ∑ h[n] e ∑ g[n − no] e− jωn = n= −∞ ∞ = e − jωn o ∑ y[n] e − jωn n =−∞ o G(e jω ) . − jωn ∞ = n= −∞ ∑ e jω ng[n]e − jωn o n =−∞ ∑ g[n] e− j(ω−ω )n = G(e j(ω− ω ) ) . n =−∞ ∞ ∞ d (G(e jω ) ) (c) G(e jω ) = ∑ g[n]e − jωn . Hence = ∑ −jng[n] e− jωn . dω n= −∞ n =−∞ jω ∞ d (G(e ) ) d (G(e jω ) ) Therefore, j = ∑ ng[n] e− jωn . Thus the DTFT of ng[n] is j . dω dω = o o n =−∞ ∞ (d) y[n] = g[n] * h[n] = ∞ = ∑ g[k]h[n − k] . Hence Y(e k= −∞ ∞ ∞ jω )= ∞ ∑ ∑ g[k] h[n − k] e− jωn n= −∞ k= −∞ ∑ g[k]H(e jω ) e− jωk = H(e jω ) ∑ g[k]e − jωk = H(e jω )G(e jω ) . k =−∞ (e) y[n] = g[n]h[n]. Hence Y(e jω ) = k= −∞ ∞ ∑ g[n] h[n] e− jωn n= −∞ 46
  • 51. 1 Since g[n] = 2π Y(e jω 1 )= 2π = π ∫ G(e jθ ) e jθndθ −π ∞ π ∑ ∫ n =−∞ 1 2π π h[n]e − jωn we can rewrite the above DTFT as jθ G(e ) e jθn −π 1 dθ = 2π π ∫ ∞ G(e jθ) ∑ h[n]e − j(ω−θ)ndθ n= −∞ −π ∫ G(e jθ)H(e j(ω−θ) )dθ . −π  π   1  jω − jωn (f) y[n] = g[n] h * [n] = g[n]  H* (e ) e dω   2π  n =−∞ n= −∞   −π ∞ ∞ ∑ 1 = 2π ∑ π ∫ H* (e jω −π ∫  ∞  ) g[n] e− jωn  dω    n= −∞  ∑ 1 = 2π π ∫ H* (e jω ) G(e jω ) dω . −π ∞ 3.6 DTFT{x[n]} = X(e jω ) = (a) DTFT{x[–n]} = (b) DTFT{x*[-n]} = ∑ x[n]e− jωn . n= −∞ ∞ ∑ x[ −n]e − jωn = n =−∞ ∞ ∞ ∑ x[m]e jωm = X(e − jω ). m =−∞  ∞  x * [−n]e− jωn =  ∑ x[−n]e jωn  using the result of Part (a).  n =−∞  n =−∞ ∑ Therefore DTFT{x*[-n]} = X * (e jω ). { }  x[n] + x * [n]  1 jω − jω (c) DTFT{Re(x[n])} = DTFT  ) using the result of Part  = X(e ) + X * (e 2   2 (b). { }  x[n] − x * [n]  1 jω − jω (d) DTFT{j Im(x[n])} = DTFT  j ) .  = X(e ) − X * (e 2j 2   { } { { } }  x[n] + x * [−n]  1 jω jω jω jω (e) DTFT {x cs [n]} = DTFT   = X(e ) + X * (e ) = Re X(e ) = Xre (e ). 2   2  x[n] − x * [−n]  1 jω jω jω (f) DTFT {x ca [n]} = DTFT   = X(e ) − X * (e ) = jX im (e ). 2 2   ∞ 3.7 X(e jω X(e jω )= ∑ x[n]e− jωn where x[n] is a real sequence. For a real sequence, n= −∞ { } ) = X * (e – jω ), and IDFT X * (e – jω ), = x * [−n]. 47
  • 52. { } { } 1 X(e jω ) + X * (e – jω ) . Therefore, IDFT X re (e jω ) 2 1 1 1 = IDFT X(e jω ) + X * (e – jω ) = {x[n] + x * [−n]} = {x[n] + x[−n]} = x ev [n]. 2 2 2 (a) X re (e jω ) = { } { } { } 1 X(e jω ) − X * (e– jω ) . Therefore, IDFT jXim (e jω ) 2 1 1 1 = IDFT X(e jω ) − X * (e – jω ) = {x[n] − x * [−n]} = {x[n] − x[ −n]} = x od [n]. 2 2 2 (b) jX im (e jω ) = { ∞ 3.8 ∑ (a) X(e jω ) = } x[n]e− jωn . Therefore, X * (e jω ) = ∑ ∑ x[n]e jωn = X(e – jω ), and hence, n= −∞ n= −∞ ∞ X * (e– jω ) = ∞ x[n]e – jωn = X(e jω ). n =−∞ (b) From Part (a), X(e jω ) = X * (e – jω ). Therefore, X re (e jω ) = X re (e – jω ). (c) From Part (a), X(e jω ) = X * (e – jω ). Therefore, X im (e jω ) = −X im (e– jω ). (d) X(e jω ) = X2 (e jω ) + X2 (e jω ) = X 2 (e –jω ) + X 2 (e –jω ) = X(e − jω ) . re im re im (e) arg X(e jω ) = tan −1 1 3.9 x[n] = 2π X (e− jω ) = −tan −1 im − jω = − argX(e jω ) X re (e jω ) X re (e ) Xim (e jω ) π ∫ X(e jω )e jωn −π 1 dω . Hence, x * [n] = 2π π ∫ X *(e jω ) e –jωndω . −π (a) Since x[n] is real and even, hence X(e jω ) = X *(e jω ) . Thus, 1 x[– n] = 2π π ∫ X(e jω ) e − jωn dω , −π 1 1 Therefore, x[n] = ( x[n] + x[−n]) = 2 2π π ∫ X(e jω ) cos(ωn)dω. −π Now x[n] being even, X(e jω ) = X(e– jω ) . As a result, the term inside the above integral is even, π ∫ 1 and hence x[n] = X(e jω )cos(ωn)dω π 0 (b) Since x[n] is odd hence x[n] = – x[– n]. 1 j Thus x[n] = ( x[n] − x[−n]) = 2π 2 π ∫ X(e jω ) sin(ωn)dω . Again, since x[n] = – x[– n], −π 48
  • 53. π X(e jω ) = −X(e – jω ∫ j ) . The term inside the integral is even, hence x[n] = X(e jω )sin(ωn)dω π 0  e jω 0n e jφ + e − jω 0 n e −jφ   µ[n] 3.10 x[n] = α n cos(ω o n + φ)µ[n] = Aα n    2   n A jφ A − jφ = e αe jω o µ[n] + e αe − jω o µ[n] . Therefore, 2 2 A 1 A − jφ 1 X(e jω ) = e jφ − jω e jω o + 2 e −jω e− jω o . 2 1 − αe 1 − αe ( ) ( ) 3.11 Let x[n] = α n µ[n], α <1. From Table 3.1, DTFT{x[n]} = X(e jω ) = ∞ ∑ (a) X1 (e jω ) = ∞ α n µ[n + 1]e − jωn = n= −∞ ∑ α n e− jωn = α −1 e jω + n= −1 1 1 − α e– jω . ∞ ∑ α n e − jωn n =0 1 1  e jω − α  = α −1e jω + =  . 1 − αe – jω α  1 − αe – jω  (b) x 2 [n] = nα n µ[n]. Note that x 2 [n] = n x[n]. Hence, using the differentiation-in-frequency dX(e jω ) αe− jω property in Table 3.2, we get X 2 (e jω ) = j = . dω (1 − αe − jω ) 2 M −1  α n , n ≤ M,  jω n − jωn − n − jωn (c) x 3 [n] =  Then, X 3 (e ) = α e + α e  0, otherwise.  n=0 n= −M ∑ = 1 − α M+1 e− jω(M +1) 1 − α −M e− jωM + αM e jωM . 1 − αe − jω 1− α −1e − jω (d) X 4 (e jω ) = ∞ ∑ α n e – jωn = n=3 = 1 1 − αe − jω αe ∞ ∑ α n e – jωn −1 − α e− jω − α 2 e − j2ω n =0 −1 − αe − jω − α 2 e− j2ω . (e) X 5 (e jω ) = = ∑ − jω (1 − αe − jω ) 2 (f) X 6 (e jω ) = ∞ ∑ n α n e – jωn = n =−2 ∞ ∑ n α n e– jωn − 2α−2 e j2ω − α −1 e jω n= 0 − 2α −2 e j2ω − α −1 e jω . −1 ∑ n = −∞ α n e – jωn = ∞ ∑ α − me jωm = m=1 ∞ ∑ m= 0 49 α −m e jωm − 1 = e jω −1 = . 1 − α−1 e jω α − e jω 1
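A numerical cross-check of the result of Problem 3.11(b) is straightforward (this sketch is illustrative only; α = 0.6 and the 201-term truncation are arbitrary choices): the truncated DTFT sum of n α^n µ[n] agrees with the closed form α e^(−jω)/(1 − α e^(−jω))² to within the truncation error.

alpha = 0.6;  n = 0:200;  w = linspace(-pi, pi, 512);
x = n .* alpha.^n;                                   % n*alpha^n*mu[n], truncated at n = 200
Xs = x * exp(-1j * n.' * w);                         % truncated DTFT sum on a frequency grid
Xc = alpha*exp(-1j*w) ./ (1 - alpha*exp(-1j*w)).^2;  % closed-form result of Part (b)
disp(max(abs(Xs - Xc)))                              % negligibly small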
  • 54.  1,  3.12 (a) y 1[n] =   0,   1,  y 0 [n] =   0,  N Then Y1(e otherwise.  n  1− , y 2 [n] =  N  0,  (b) −N ≤ n ≤ N, jω )= ∑e − jωn =e n= − N jωN [ Now y 2 [n] = y0 [n] * y 0 [n] where otherwise.   N +1   sin2  ω     2  jω 2 jω Thus Y2 (e ) = Y0 (e ) = . sin 2 (ω / 2) −N / 2 ≤ n ≤ N / 2, jω 1 )= 2 = N ∑e −j(πn / 2N) − jωn e n= −N N 1 2 ∑ e − jω− n= − N ( π  n 2N  + π 1 1 + 2 1 2 ) N ∑ e j( πn / 2N) e− jωn n= −N N π   e – j ω+ 2N  n ∑ n =− N ( π 1 ) 1 sin (ω − 2N )(N + 2 ) 1 sin (ω + 2N )(N + 2 ) = + . 2 sin (ω − π ) / 2 2 sin (ω + π ) / 2 3.13 Denote x m[n] = ( ( 2N ) (n + m −1)! n α µ[n], α <1. We shall prove by induction that n!(m − 1)! DTFT {x m [n]} = X m (e jω ) = Let m = 2. Then x 2 [n] = X 2 (e jω ) = ) 2N α e – jω – jω 2 (1 − αe ) property of Table 3.2. 1 (1 − αe – jω ) m . From Table 3.1, it follows that it holds for m = 1. ( n + 1)! n α µ[n] = (n + 1)x1 [n] = n x1 [n] + x1 [n]. Therefore, n!( + 1 1 − αe – jω = 1 (1 − αe – jω ) 2 using the differentiation-in-frequency Now asuume, the it holds for m. Consider next x m+1 [n] = (n + m)! n α µ[n] n!(m)! 1  n + m  (n + m − 1)! n  n + m =  α µ[n], =   x m[n] = ⋅ n ⋅ x m[n] + x m [n]. Hence,  m  n!(m − 1)!  m  m   1 d  1 1 α e– jω 1  X m +1 (e jω ) = j =  + – jω m  – jω m – jω m +1 + m dω  (1 − α e )  (1 − α e ) (1 − αe ) (1 − α e– jω ) m  1 = . (1 − α e – jω ) m +1 50 1 ] (1− e− jω(2N+1) ) sin(ω N + 2 ) = (1 − e − jω ) sin(ω / 2) otherwise.  cos(πn / 2N), −N ≤ n ≤ N,  y 3 [n] =  Then,  0, otherwise.  (c) Y3 (e –N ≤ n ≤ N,
  • 55. ∞ 3.14 (a) X a (e jω )= ∑ 1 δ(ω + 2πk) . Hence, x[n] = 2π k= −∞ (b) X b (e jω ) = jω(N +1) 1− e 1− e − jω N = ∑ e− jωn . n= 0 N (c) X c (e jω ) =1 + 2 ∑ N cos(ωl ) = l =0 ∫ δ(ω) e jωndω = 1. −π  1, Hence, x[n] =   0,  ∑ e− jωl . l =−N π  1, Hence x[n] =    0, 0 ≤ n ≤ N, otherwise. −N ≤ n ≤ N, otherwise. − jαe − jω (d) X d (e jω ) = , α < 1. Now we can rewrite X d (e jω ) as (1− αe − jω ) 2 d 1 d 1 X d (e jω ) = = X (e jω ) where X o (e jω ) = . dω (1 − αe− jω ) dω o 1 − αe − jω ( ) Now x o [n] = α n µ[n] . Hence, from Table 3.2, x d [n] = −jnα n µ[n]. 3.15 (a) H1 (e jω  e jω + e – jω   e j2ω + e– j2ω  3 3 jω – jω ) = 1 + 2 + e j2ω + e– j2ω .  + 3  = 1+ e + e 2 2 2 2     Hence, the inverse of H1 (e jω ) is a length-5 sequence given by h 1[n] = [1.5 1 1 1 1.5], − 2 ≤ n ≤ 2.   e jω + e – jω   e j2ω + e– j2ω    e jω /2 + e – jω/ 2  – jω /2 (b) H 2 (e jω ) =  3 + 2   + 4   ⋅  ⋅e 2 2 2          1 = 2 e j2ω + 3 e jω + 4 + 4 e – jω + 3e – j2ω + 2e– j3ω . Hence, the inverse of H 2 (e jω ) is a length-6 2 sequence given by h 2 [n] = [1 1.5 2 2 1.5 1], − 2 ≤ n ≤ 3. ( (c) H 3 (e ( ) jω   e jω + e – jω   e j2ω + e – j2ω    e jω/2 − e – jω /2   3 + 4 )= j  + 2  ⋅  2 2 2j          ) 1 j3ω e + 2 e j2ω + 2 e jω + 0 − 2 e – jω − 2e– j2ω − e – j3ω . Hence, the inverse of H 3 (e jω ) is a 2 length-7 sequence given by h 3 [n] = [ 0.5 1 1 0 –1 –1 − 0.5], − 3 ≤ n ≤ 3. =   e jω + e – jω   e j2ω + e– j2ω    e jω /2 − e – jω/ 2  jω/2 (d) H 4 (e jω ) = j  4 + 2 + 3    ⋅  ⋅e 2 2 2j          13 1 1 3  =  e j3ω − e j2ω + 3 e jω − 3 + e – jω − e – j2ω  . Hence, the inverse of H 4 (e jω ) is a length-6  22 2 2 2 sequence given by h 4 [n] = [ 0.75 −0.25 1.5 −1.5 0.25 −0.75], − 3 ≤ n ≤ 2. 3.16 (a) H 2 (e jω ) =1 + 2 cos ω + 3 3 5 3 (1 + cos 2ω) = e j2ω + e jω + + e – jω + e – j2ω . 2 4 2 4 51
  • 56. Hence, the inverse of H1 (e jω ) is a length-5 sequence given by h 1[n] = [ 0.75 1 2.5 1 0.75], − 2 ≤ n ≤ 2. 4   (b) H 2 (e jω ) = 1 + 3cos ω + (1 + cos2ω)  cos(ω / 2) e jω/ 2 2   1 j3ω 5 j2ω 11 jω 11 5 – jω 1 – j2ω = e + e + e + + e + e . Hence, the inverse of H 2 (e jω ) is a length-6 2 4 4 4 4 2 sequence given by h 2 [n] = [ 0.5 1.25 2.75 2.75 1.25 0.5 ], − 3 ≤ n ≤ 2. (c) H 3 (e jω ) = j[ 3 + 4 cos ω + (1 + cos 2ω) ] sin(ω) 1 j3ω 7 7 1 e + e j2ω + e jω + 0 − e – jω − e – j2ω − e– j3ω . Hence, the inverse of H 3 (e jω ) is a 4 4 4 4 length-7 sequence given by h 3 [n] = [ 0.25 1 1.75 0 −1.75 −1 −0.25], − 3 ≤ n ≤ 3. = 3   (d) H 4 (e jω ) = j  4 + 2 cosω + (1 + cos 2ω) sin(ω / 2) e– jω/ 2 2   1  3 j2ω 1 jω 9 9 – jω 1 – j2ω 3 – j3ω  =  e + e + − e + e − e  . Hence, the inverse of H 4 (e jω ) is a  24 4 2 2 4 4 3 1 9 9 1 3  length-6 sequence given by h 4 [n] =  − − −  , − 2 ≤ n ≤ 3. 4 8 8 8 8 4 ( ) ∞ ∑ n= −∞ x[n] e− jωn. Hence, ∞ ∞ ∞ Y(e jω ) = ∑ y[n] e− jωn = X((e jω ) 3 ) = ∑ x[n](e −jωn ) 3 = ∑ x[m / 3]e − jωm . n= −∞ n =−∞ m =−∞ 3.17 Y(e jω ) = X(e j3ω ) = X (e jω )3 . Now, X(e jω ) = Therefore, y[n] = ∞ 3.18 X(e jω {x[n], 0, ∑ x[n] e− jωn. )= n= −∞ ∞ X(e jω/ 2 ) = ∑ x[n] e− j(ω / 2)n , and X(−e jω / 2 ) = n= −∞ ∞ Y(e jω y[n] = n = 0, ± 3, ± 6,K otherwise. )= ( ∑ y[n] e n= −∞ ∞ ∑ x[n](−1)n e − j(ω / 2) n. n =−∞ −ωn { } 1 1 = X(e jω / 2 ) + X(−e jω / 2 ) = 2 2 ) { Thus, ∞ ∑ (x[n] + x[n](−1)n )e − j(ω / 2) n. Thus, n= −∞ 1 x[n], for n even x[n] + x[n](−1) n = 0. for n odd . 2 3.19 From Table 3.3, we observe that even sequences have real-valued DTFTs and odd sequences have imaginary-valued DTFTs. (a) Since −n = n, , x 1[n] is an even sequence with a real-valued DTFT. (b) Since (−n) 3 = −n 3 , x 2 [n] is an odd sequence with an imaginary-valued DTFT. 52
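As a quick numerical confirmation of Problem 3.15(a) (not part of the original solution), the DTFT of the length-5 sequence h1[n] obtained above can be evaluated on a frequency grid and compared with 1 + 2 cos ω + 3 cos 2ω.

h1 = [1.5 1 1 1 1.5];  n = -2:2;  w = linspace(0, pi, 256);
H = h1 * exp(-1j * n.' * w);                            % DTFT of the noncausal sequence h1[n]
disp(max(abs(H - (1 + 2*cos(w) + 3*cos(2*w)))))         % essentially zero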