\documentclass{report} \input{preamble} \input{macros} \input{letterfonts} \title{\Huge{Digital Signal Processing}} \author{\huge{Aidan Sharpe}} \date{} \begin{document} \maketitle \newpage% or \cleardoublepage
% \pdfbookmark[]{}{<dest>}
\pdfbookmark[section]{\contentsname}{toc} \tableofcontents \pagebreak
A signal is anything that is used to convey information. Analog signals are continuously variable, but in this course the focus will be on digital signals, which are made up of discrete values. Some advantages of using digital signals are as follows:
\begin{enumerate}
\item Flexibility and programmability
\item Greater immunity to noise
\item Signal reproducibility
\item Ease of maintenance and troubleshooting
\item Ease of signal storage
\end{enumerate}
Furthermore, a signal can depend on any number of variables. A signal that depends on only one variable is considered \emph{one-dimensional}. This course will primarily be directed towards these one-dimensional signals. One example of the many use cases for digital signal processing is speech processing, which includes a range of topics such as speech recognition, speech enhancement, and speech encoding and compression.
\chapter{Discrete Time Signals}
\section{Uniform Sampling}
The first step to uniform sampling is to discretize the time axis. Uniform sampling converts a continuous-time signal, $x(t)$, into a discrete signal by considering the samples of $x(t)$ at uniform times, $t=nT_s$, where $n$ is an integer and $T_s$ is the sampling period.
\\ \\
The Dirac delta function, or impulse function, is defined as:
$$\delta(t) = \begin{cases} 0 & t < 0 \\ \infty & t = 0 \\ 0 & t > 0 \end{cases}$$
The strength of the impulse function is typically defined to be unity. This is more rigorously given by:
$$\int\limits_{-\infty}^\infty \delta(t) dt = 1$$
The sampling property of the impulse function is defined as:
$$f(t)\delta(t-t_0) = f(t_0)\delta(t-t_0)$$
The sifting property of the impulse function is given by:
$$\int\limits_{-\infty}^{\infty} f(t)\delta(t-t_0) dt = f(t_0)$$
To sample a signal, $x(t)$, uniformly through time, an impulse train is used. Each sample of the signal can be written in the form:
$$x(t)\delta(t - nT_s)$$
Therefore, the resulting sampled signal at all times, $t$, is given by:
$$x_s(t) = x(t)\sum_{n=-\infty}^\infty \delta(t - nT_s)$$
The impulse train, $x_\text{imp}(t)$, is a periodic function with period $T_s$. It is given by:
$$x_\text{imp}(t) = \sum_{n=-\infty}^\infty \delta(t - nT_s)$$
Since the Fourier series represents any periodic signal as a linear weighted combination of complex exponentials, we can rewrite $x_\text{imp}(t)$ in terms of a complex Fourier series:
$$x_\text{imp}(t) = \sum_{k = -\infty}^\infty X_k e^{jk\Omega_s t}$$
Where:
\begin{itemize}
\item[$X_k$] are the Fourier series coefficients
\item[$\Omega_s$] is the angular sampling frequency defined as $\Omega_s = {2\pi \over T_s}$
\end{itemize}
Since $\Omega_s$ is entirely determined by $T_s$, the only unknowns are the Fourier series coefficients, $X_k$. These coefficients are found by:
$$X_k = {1\over T_s} \int\limits_{-T_s \over 2}^{T_s \over 2} \delta(t) e^{-jk\Omega_s t} dt$$
By the sifting property, $X_k$ is always $1 \over T_s$. Therefore, the Fourier series representation of $x_\text{imp}(t)$ is:
$$x_\text{imp}(t) = \sum_{k=-\infty}^\infty {1 \over T_s} e^{jk\Omega_s t}$$
To sample $x(t)$ and get $x_s(t)$, simply take the product of $x(t)$ and $x_\text{imp}(t)$:
$$x_s(t) = \sum_{k = -\infty}^\infty {1 \over T_s} x(t) e^{jk\Omega_s t}$$
This sampled signal is called a discrete signal or a digital signal.
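\noindent In practice, uniform sampling amounts to evaluating $x(t)$ at the instants $t = nT_s$. The following is a minimal Python/NumPy sketch of this idea (the signal and sampling rate are illustrative choices, not part of the notes):
\begin{verbatim}
import numpy as np

# Uniformly sample x(t) = 2cos(2*pi*t + pi/4) at f_s = 8 Hz,
# comfortably above the Nyquist rate of 2 Hz for this signal.
f_s = 8.0                  # sampling frequency [Hz] (assumed value)
T_s = 1.0 / f_s            # sampling period [s]
n = np.arange(16)          # sample indices
t = n * T_s                # uniform sampling instants t = n*T_s
x = 2 * np.cos(2 * np.pi * t + np.pi / 4)   # x[n] = x(n*T_s)
print(np.round(x, 3))
\end{verbatim}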
While working with $x_s(t)$ in this form is not often practical, taking a Fourier transform gives more insight. The relevant transform pairs and properties are:
$$F[\delta(t - a)] = e^{-j\Omega a}$$
$$x(t) \xleftrightarrow{F} X(\Omega)$$
$$x(t - \alpha) \xleftrightarrow{F} X(\Omega)e^{-j\Omega\alpha}$$
$$x(t)e^{j\Omega_0 t} \xleftrightarrow{F} X(\Omega - \Omega_0)$$
Using these properties, the Fourier transform of $x_s(t)$ is:
$$F[x_s(t)] = F\lt[{1\over T_s} \sum_{n = -\infty}^{\infty} x(t)e^{jn\Omega_s t}\rt]$$
$$X_s(\Omega) = {1\over T_s} \sum_{n=-\infty}^\infty F\lt[x(t) e^{jn\Omega_s t}\rt]$$
$$X_s(\Omega) = {1\over T_s} \sum_{n=-\infty}^\infty X(\Omega - n\Omega_s)$$
The spectrum of the sampled signal is therefore a sum of shifted copies of $X(\Omega)$. If the copies do not overlap, then within the baseband, $|\Omega| < {\Omega_s \over 2}$:
$$X_s(\Omega) = {1\over T_s} X(\Omega)$$
\thm{The Sampling Theorem}
{
A band-limited signal, $x(t)$---its low-pass spectrum $X(\Omega)$ is such that $X(\Omega) = 0$ for $|\Omega| > \Omega_\text{max}$ where $\Omega_\text{max}$ is the maximum frequency in $x(t)$---can be sampled uniformly and without frequency aliasing (overlap between spectral copies) using a sampling frequency $\Omega_s = {2\pi \over T_s} \ge 2\Omega_\text{max}$. This inequality is called the Nyquist sampling rate condition.
}
\ex{Sampling Theorem}
{
A signal, $x(t)$, is given by:
$$x(t) = 2\cos(2\pi t + {\pi \over 4})$$
Since there exists a maximum frequency, in this case 1[Hz], $x(t)$ is a band-limited signal.
$$f_s \ge 2f_m$$
$$f_s \ge 2\text{[Hz]}$$
$$T_s \le {1\over 2}\lt[\text{s}\rt]$$
}
A good figure of merit is to let $\Omega_\text{max}$ be the frequency such that 99\% of the signal energy is in the interval $[-\Omega_\text{max}, \Omega_\text{max}]$. The signal energy in the time domain is given by:
$$E_x = \int\limits_{-\infty}^\infty |x(t)|^2 dt$$
Using Parseval's relation, the signal energy can be computed using only knowledge of the frequency domain:
$$E_x = {1\over 2\pi} \int\limits_{-\infty}^\infty |X(\Omega)|^2 d\Omega$$
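\noindent A discrete analogue of Parseval's relation can be checked numerically with the DFT, for which $\sum_n |x[n]|^2 = {1\over N}\sum_k |X[k]|^2$. A minimal Python/NumPy sketch (the signal values are arbitrary):
\begin{verbatim}
import numpy as np

# Parseval check for the DFT: time-domain energy equals
# frequency-domain energy scaled by 1/N.
x = np.array([3.0, -5.0, 2.0, -6.0, 9.0])   # arbitrary test signal
X = np.fft.fft(x)
E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
print(E_time, E_freq)   # both are 155.0 (up to rounding)
\end{verbatim}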
\section{Characterizing Discrete Signals in the Time Domain}
\dfn{Causal Signal}
{
$x[n]$ is said to be a \emph{causal signal} if $x[n]$ has zero value for $n < 0$.
}
\noindent When a signal, $x[n]$, is discretized following the sampling theorem, its domain is the integers, and it can take real and complex values.
\dfn{Anti-Causal Signal}
{
$x[n]$ is said to be an \emph{anti-causal signal} if $x[n]$ has zero value for $n \ge 0$.
}
\noindent At $n=0$, a causal signal can have a nonzero value, but an anti-causal signal cannot.
\dfn{Finite Support Signal}
{
$x[n]$ is said to have \emph{finite support} if there exist integers, $N_1$ and $N_2$, with $N_1 \le N_2$, such that $x[n]$ has zero value for $n < N_1$ and for $n > N_2$.
}
\noindent Finite support signals have only finitely many values of $n$ that correspond to a non-zero signal value.
\dfn{Infinite Support Signals}
{
$x[n]$ is said to have \emph{infinite support} or infinite duration if it does not have finite support.
}
\noindent For infinite support signals, there do not exist two integers, $N_1$ and $N_2$, such that $x[n] = 0$ for $n < N_1$ and $n > N_2$. Infinite support signals exist in the following forms:
\begin{enumerate}
\item Right-sided signal: $N < n < \infty$
\item Left-sided signal: $-\infty < n < N$
\item Two-sided signal: $-\infty < n < \infty$
\end{enumerate}
\dfn{Discrete Impulse}
{
The impulse signal can be written in the discrete domain as:
$$\delta[n] = \begin{cases} 1 & n=0 \\ 0 & \text{otherwise} \end{cases}$$
}
\noindent Since the discrete impulse signal has a non-zero value only at $n=0$, it is a causal signal with finite support.
\ex{}
{
Consider the signal, $x[n] = \delta[n - a]$:
$$\delta[n-a] = \begin{cases} 0 & n < a \\ 1 & n = a \\ 0 & n > a \\ \end{cases}$$
Since $x[n]$ has a non-zero value only at $n = a$, it is a finite support signal. If $a \ge 0$, the signal is causal; if $a < 0$, the signal is non-causal, and in fact anti-causal.
}
\noindent Any finite support signal can be represented in terms of discrete impulses.
$$x[n] = \sum_{a = N_1}^{N_2} k_a \delta[n - a]$$
\dfn{Discrete Unit Step}
{
The \emph{discrete unit step}, $u[n]$, is defined as:
$$u[n] = \begin{cases} 1 & n \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
}
\noindent The discrete unit step has right-sided infinite support and is a causal signal. The signal, $u[-n]$, is defined as:
$$u[-n] = \begin{cases} 1 & n \le 0 \\ 0 & \text{otherwise} \end{cases}$$
$u[-n]$ has left-sided infinite support, but it is neither causal nor anti-causal.
\section{The Norm of a Discrete Signal}
\dfn{Norm of a Discrete Signal}
{
A norm is a mapping of a signal to a value in the range $[0, \infty)$.
}
\dfn{The $L_p$ Norm}
{
Given a discrete time signal, $x[n]$, the $L_p$ norm of the signal is:
$$\lt[\sum_n |x[n]|^p \rt]^{1/p}$$
}
\ex{$L_p$ Norm}
{
$$x[n] = \begin{bmatrix}3 & -5 & 7 & -5 & -9\end{bmatrix}$$
Find the $L_1$, $L_2$, and $L_\infty$ norms:
\\ \\
$L_1$ norm:
$$\sum_n |x[n]| = 3 + 5 + 7 + 5 + 9 = 29$$
$L_2$ norm:
$$\lt[\sum_n |x[n]|^2 \rt]^{1/2} = \sqrt{9 + 25 + 49 + 25 + 81} = \sqrt{189} = 3\sqrt{21}$$
$L_\infty$ norm:
$$\max_n|x[n]| = 9$$
}
\ex{$L_p$ Norms with Complex Numbers}
{
$$x[n] = \begin{bmatrix}3+j & -5+j3 & -7-j & 9-j4 & 10\end{bmatrix}$$
\\ \\
$L_1$ norm:
$$\sum_n |x[n]| = \sqrt{10} + \sqrt{34} + \sqrt{50} + \sqrt{97} + 10$$
$L_2$ norm:
$$\lt[\sum_n |x[n]|^2 \rt]^{1/2} = \sqrt{10 + 34 + 50 + 97 + 100} = \sqrt{291}$$
$L_\infty$ norm:
$$\max_n|x[n]| = 10$$
}
\ex{}
{
$$x[n] = \begin{bmatrix}1 & j & 1 & j & 1 & j & 1 & j & \cdots\end{bmatrix}, n \in \mathbb{Z} \cap [0, 99]$$
\\ \\
$L_1$ norm:
$$\sum_n |x[n]| = 100$$
$L_2$ norm:
$$\lt[\sum_n |x[n]|^2 \rt]^{1/2} = \sqrt{100} = 10$$
$L_\infty$ norm:
$$\max_n|x[n]| = 1$$
}
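\noindent These norms are straightforward to compute numerically. A minimal Python/NumPy sketch (the helper name \texttt{lp\_norm} is illustrative), reproducing the first example above:
\begin{verbatim}
import numpy as np

def lp_norm(x, p):
    """L_p norm of a discrete signal; p = np.inf gives max|x[n]|."""
    x = np.asarray(x, dtype=complex)
    if np.isinf(p):
        return np.max(np.abs(x))
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = [3, -5, 7, -5, -9]       # signal from the first example
print(lp_norm(x, 1))         # 29.0
print(lp_norm(x, 2))         # 13.747... = sqrt(189)
print(lp_norm(x, np.inf))    # 9.0
\end{verbatim}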
\subsection{Geometric Series}
There are two types of geometric series: finite and infinite.
\dfn{Infinite Geometric Series}
{
Infinite geometric series are of the form:
$$\sum_{n=m}^{\infty} s^n = {s^m \over 1 - s}$$
If $|s| < 1$, the series converges to this closed form; if $|s| \ge 1$, the series diverges.
}
\dfn{Finite Geometric Series}
{
A finite geometric series is of the form:
$$\sum_{n=0}^{N-1} s^n$$
If $s = 1$, the series sums to $N$; otherwise the series can be evaluated by:
$$\sum_{n=0}^{N-1} s^n = {1 - s^N \over 1 - s}$$
}
\ex{Norms of Geometric Series}
{
$$x[n] = (-0.5)^n u[n]$$
\\ \\
$L_1$ norm:
$$\sum_n |x[n]| = \sum_{n=0}^\infty |(-0.5)^n| = \sum_{n=0}^\infty \lt({1\over2}\rt)^n = 2$$
$L_2$ norm:
$$\lt[ \sum_n |x[n]|^2 \rt]^{1/2} = \lt[ \sum_{n=0}^\infty |(-0.5)^n|^2 \rt]^{1/2} = \lt[ \sum_{n=0}^\infty \lt({1\over4}\rt)^n \rt]^{1/2} = \lt({1\over 1 - 0.25}\rt)^{1/2} = \sqrt{4\over3}$$
$L_\infty$ norm:
$$\max_n|x[n]| = \max_{n \in \mathbb{N}_0}(0.5^n) = 1$$
}
\noindent For a geometric signal with $|s| < 1$, the $L_\infty$ norm will occur at the smallest $n$ in the series. Conversely, for $|s| > 1$, the $L_\infty$ norm will occur at the largest $n$ in the series.
\section{Elementary Operations on Signals}
An elementary operation operates on each element of a signal. Consider two signals, $x[n]$ and $h[n]$, and assume that the two signals are sampled at the same frequency. Blank entries in the table below are zero.
\begin{center}
\begin{tabular}{c | c | c | c | c | c}
$n$ & $x[n]$ & $h[n]$ & $x[n] + h[n]$ & $x[n] - h[n]$ & $x[n]h[n]$\\
\hline
-2 & 3 & & 3 & 3 & 0\\
-1 & -5 & -7 & -12 & 2 & 35\\
0 & 2 & 3 & 5 & -1 & 6\\
1 & -6 & 10 & 4 & -16 & -60\\
2 & 9 & 21 & 30 & -12 & 189 \\
3 & & -16 & -16 & 16 & 0\\
4 & & 6 & 6 & -6 & 0\\
5 & & -3 & -3 & 3 & 0
\end{tabular}
\end{center}
\noindent In addition to these element-by-element arithmetic operations, elementary operations may also manipulate the input space.
\\ \\
\noindent A signal may be time delayed by $a$ in the form $h[n - a]$. Signals can be advanced by $a$ in the form $h[n + a]$. Time may also be reflected around $n=0$ in the form $h[-n]$. Perhaps the most interesting operation is the circular shift.
\dfn{Circular Shift}
{
The circular shift is a time shifting operation on a finite length sequence that results in another sequence of the same length and defined for the same range of values of $n$. The domain of the signal will be unchanged.
}
\noindent The circular shift is written using the modulus operation,
$$r = m\bmod N = \langle m \rangle_N,$$
so that the circular shift of a length-$N$ signal $x[n]$ by $a$ is $x[\langle n - a\rangle_N]$.
\\ \\
\noindent Another type of discrete elementary operation is the signal rate operation. This involves adding or removing samples.
\dfn{The Downsample Operation}
{
The downsample or sub-sample operation on a signal, $x[n]$, is given by:
$$y[n] = x[Mn], M \in \mathbb{Z}^+$$
This keeps only every $M^\text{th}$ sample of $x[n]$.
}
\noindent When downsampling, the sampling frequency $f_s$ becomes $f_s \over M$, and the sampling time, $T_s$, becomes $M T_s$.
\nt
{
To avoid aliasing, $f_s \overset{!}{\ge} 2f_m$. Beware that downsampling may cause aliasing by decreasing $f_s$ too much.
}
\noindent On the other hand, a signal may also be upsampled.
\dfn{The Upsample Operation}
{
Upsampling increases the sampling frequency by a factor of $M$:
$$y[n] = x\lt[{n \over M}\rt], M \in \mathbb{Z}^+$$
where $y[n]$ is nonzero only when $n$ is a multiple of $M$. This inserts $M-1$ zeros between successive samples of $x[n]$.
}
\ex{Upsampling}
{
$$x[n] = \begin{bmatrix} 1 & 2 & 3 & 4 \end{bmatrix}$$
$$M = 4$$
$$y[n] = \begin{bmatrix} 1 & 0 & 0 & 0 & 2 & \cdots & 4 \end{bmatrix}$$
}
\noindent When upsampling, the sampling frequency, $f_s$, becomes $Mf_s$, and the sampling time, $T_s$, becomes $T_s \over M$. Both rate operations are illustrated in the sketch below.
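\noindent The following minimal Python/NumPy sketch implements both sampling-rate operations (the function names are illustrative):
\begin{verbatim}
import numpy as np

def downsample(x, M):
    """Keep every M-th sample: y[n] = x[M*n]."""
    return x[::M]

def upsample(x, M):
    """Insert M-1 zeros between samples: y[M*n] = x[n]."""
    y = np.zeros(len(x) * M, dtype=x.dtype)
    y[::M] = x
    return y

x = np.array([1, 2, 3, 4])
print(downsample(x, 2))   # [1 3]
print(upsample(x, 4))     # [1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0]
\end{verbatim}
\noindent Note that this upsample pads $M-1$ trailing zeros after the final sample, while the example above stops at the last nonzero sample.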
\chapter{Discrete Systems}
A discrete system is any system that takes a discrete signal as input and produces a discrete signal as output.
$$x[n] \to \boxed{S} \to y[n]$$
\section{Properties of Systems}
\subsection{Linearity}
For a system to be considered linear, it must satisfy two conditions: homogeneity and superposition.
\subsubsection{Homogeneity}
For a system to be homogeneous, scaling the input by a constant, $a$, must produce an output scaled by that same constant, $a$.
$$y[n] = S[x[n]]$$
$$ay[n] = S[ax[n]]$$
\subsubsection{Superposition}
For a system to satisfy superposition, taking the sum of two inputs $x_1[n]$ and $x_2[n]$ as input must produce the sum of their individual outputs, $y_1[n] + y_2[n]$.
$$y_1[n] = S[x_1[n]]$$
$$y_2[n] = S[x_2[n]]$$
$$y_1[n] + y_2[n] = S[x_1[n] + x_2[n]]$$
\subsection{Time Invariance}
For a system to be considered time invariant, delaying or advancing the input must lead to the same delay or advance in the output.
$$y[n] = S[x[n]]$$
$$y[n-a] = S[x[n-a]]$$
\subsection{Causality}
For a system to be considered causal, the output must rely only on past and present inputs; it cannot depend on any future inputs.
\subsection{Bounded Input, Bounded Output (BIBO) Stability}
A system is BIBO stable if every bounded input produces a bounded output.
\\ \\
\noindent A bounded input is of the form:
$$\lt|x[n]\rt| \le B_x < \infty, \forall n$$
A bounded output is of the form:
$$\lt|y[n]\rt| \le B_y < \infty, \forall n$$
\nt
{
For unbounded inputs, outputs do not have to be bounded.
}
\ex{Properties of Downsampling}
{
The definition of the downsampling operation:
$$y[n] = S[x[n]] = x[Mn]$$
\subsubsection{Homogeneity}
$$S[a x[n]] = a x[M n]$$
$$a y[n] = a x[M n] \to \text{The system is homogeneous.}$$
\subsubsection{Superposition}
$$y_1[n] = S[x_1[n]] = x_1[M n]$$
$$y_2[n] = S[x_2[n]] = x_2[M n]$$
$$S[x_1[n] + x_2[n]] = x_1[M n] + x_2[M n] = y_1[n] + y_2[n] \to \text{Superposition holds.}$$
\subsubsection{Time Invariance}
$$S[x[n-a]] = x[Mn-a]$$
$$y[n - a] = x[M(n-a)] = x[Mn - Ma] \ne S[x[n-a]] \to \text{The system is not time invariant.}$$
\subsubsection{Causality}
For $M > 1$ and $n > 0$, $Mn$ is always greater than $n$, so the output relies on future inputs; the system is not causal.
\subsubsection{BIBO Stability}
If $|x[n]|$ is bounded by $B_x$ for all $n$, then the output is bounded by $B_y = B_x$ for all $n$, so the system is BIBO stable.
}
\section{LTI Systems}
Linear and time-invariant (LTI) systems are simple systems that are very useful for modelling.
\dfn{Impulse Response}
{
The output of a system when the impulse function is applied as input.
}
\noindent For discrete systems, the impulse response is written $h[n]$, as opposed to $h(t)$ for continuous systems.
\paragraph{Properties of Discrete LTI Systems}
\begin{enumerate}
\item The system is causal if the impulse response is a causal signal
\item The system is BIBO stable if the impulse response is absolutely summable:
$$\sum_n |h[n]| < \infty$$
\item $y[n]$ is computable for any $x[n]$
\end{enumerate}
\subsubsection{The Integral Test}
Given a continuous, positive, and decreasing function on the interval $[1, \infty)$, the improper integral and the corresponding sum either both converge or both diverge. This gives a quick way to check absolute summability, as in the sketch below.
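\noindent For example, the absolute summability of $h[n] = a^n u[n]$ can be checked numerically by comparing truncated partial sums against the geometric series limit. A minimal Python/NumPy sketch (the value of $a$ and the truncation length are arbitrary choices):
\begin{verbatim}
import numpy as np

# BIBO check for h[n] = a^n u[n] with |a| < 1: the partial sums of
# sum |h[n]| should approach the geometric limit 1 / (1 - |a|).
a = 0.9
n = np.arange(200)               # truncation of the infinite sum
partial = np.cumsum(np.abs(a) ** n)
print(partial[-1])               # ~ 10.0
print(1.0 / (1.0 - abs(a)))      # exact limit: 10.0
\end{verbatim}
\noindent Since the sum converges, an LTI system with this impulse response is BIBO stable.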
\subsubsection{Convolution}
If the input to an LTI system is $x[n]$ and the impulse response of the system is $h[n]$, the output is the convolution of $x[n]$ and $h[n]$:
$$y[n] = x[n] * h[n]$$
In the continuous case, the convolution of $x(t)$ and $h(t)$ is given by:
$$y(t) = x(t) * h(t) = \int\limits_{-\infty}^\infty x(\tau)h(t-\tau)d\tau$$
In the discrete case, the convolution of $x[n]$ and $h[n]$ is given by:
$$y[n] = x[n] * h[n] = \sum_{k=-\infty}^\infty x[k]h[n-k] = \sum_{k=-\infty}^\infty h[k]x[n-k]$$
\section{Periodicity}
A periodic continuous signal with period $T$ satisfies, for any integer $k$:
$$x(t) = x(t - kT)$$
A periodic discrete signal with integer period $N$ satisfies:
$$x[n] = x[n - kN]$$
The period of a discrete signal must be an integer because discrete signals are only defined at integer indices.
\\ \\
\noindent If two discrete signals, $x[n]$ and $y[n]$, are periodic with periods $N_1$ and $N_2$, the period of the sum of the signals is the least common multiple of $N_1$ and $N_2$. If even one signal in a sum of signals is aperiodic, then the entire sum becomes aperiodic.
\\ \\
\noindent In the continuous time domain, sinusoidal and complex exponential signals are always periodic. For discrete signals, however, it is not as simple. A cosine in the discrete domain takes the form:
$$x[n] = \cos(\omega_0 n + \varphi)$$
By the definition of periodicity, for $x[n]$ to be periodic with period $N$:
$$x[n] = x[n+N] = \cos(\omega_0(n+N) + \varphi)$$
By distributing $\omega_0$ and applying a trigonometric identity:
$$x[n+N] = \cos(\omega_0 n + \varphi)\cos(\omega_0 N) - \sin(\omega_0 n + \varphi)\sin(\omega_0 N)$$
Therefore, for $x[n]$ to equal $x[n+N]$, the sine term must go to 0 and $\cos(\omega_0 N)$ must be unity. This condition is satisfied when $\omega_0 N$ is an integer multiple of $2\pi$.
\ex{Periodic Sampling}
{
Given a signal $\cos(\pi n)$, what is its corresponding integer period?
$$\omega_0 = \pi$$
$$\omega_0 N = 2\pi r$$
$$\pi N = 2\pi r$$
$$N = 2r$$
The smallest integer $r$ resulting in an integer $N$ is $r=1$. For $r=1$, $N=2$, therefore the signal is periodic with $N=2$.
}
\ex{Aperiodic Sampling}
{
Given the signal $\cos(e\pi n + {7\pi / 9})$, find its corresponding integer period.
$$\omega_0 = e \pi$$
$$\omega_0 N = 2\pi r$$
$$e\pi N = 2 \pi r$$
$$N = {2 \over e} r$$
There is no integer $r$ such that $N$ is an integer. Therefore the signal is not periodic in discrete time.
}
\section{Energy and Power}
\subsection{Energy}
For a continuous signal, $x(t)$, the total energy is given by:
$$E_x = \int\limits_{-\infty}^{\infty} |x(t)|^2 dt$$
Similarly, for a discrete time signal, the total energy is given by:
$$E_x = \sum_n |x[n]|^2$$
which is the same as the square of the $L_2$ norm of $x[n]$.
\subsection{Power}
The power of a signal is calculated slightly differently for periodic and aperiodic signals. For discrete, periodic signals with period, $N$, the power is given by:
$$P_x = {1\over N} \sum_{n=0}^{N-1} |x[n]|^2$$
For aperiodic discrete signals, the power is given by:
$$P_x = \lim_{k\to\infty} {1 \over 2k + 1} \sum_{n=-k}^{k} |x[n]|^2$$
\nt
{
Periodic signals have infinite energy and finite power, while aperiodic signals with finite energy have zero power.
}
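\noindent These definitions are easy to approximate numerically by truncating the sums. The following Python/NumPy sketch uses the signal from the worked example below (the truncation limit $k$ is an arbitrary choice):
\begin{verbatim}
import numpy as np

# Truncated energy and power estimates for x[n] = 3(-1)^n u[n].
k = 10000
n = np.arange(-k, k + 1)
x = np.where(n >= 0, 3.0 * (-1.0) ** n, 0.0)

E = np.sum(np.abs(x) ** 2)                # grows without bound with k
P = np.sum(np.abs(x) ** 2) / (2 * k + 1)  # approaches 9/2 = 4.5
print(E, P)
\end{verbatim}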
\ex{}
{
Given $x[n] = 3(-1)^n u[n]$, compute the energy and power. Since $x[n]=0$ for $n<0$, the signal is aperiodic.
$$E_x = \sum_n |x[n]|^2$$
$$E_x = \sum_{n=0}^{\infty} |3(-1)^n|^2$$
$$E_x = \sum_{n=0}^{\infty} 9 \rightarrow \infty$$
Given that the signal is aperiodic, the power is given by:
$$P_x = \lim_{k\to\infty} {1 \over 2k + 1} \sum_{n=-k}^{k} |x[n]|^2$$
$$P_x = \lim_{k\to\infty} {1 \over 2k + 1} \sum_{n=0}^{k} 9$$
$$P_x = \lim_{k\to\infty} {9(k+1) \over 2k + 1}$$
By L'H\^opital's rule,
$$P_x = {9 \over 2}$$
}
\chapter{The Discrete Time Fourier Transform}
\section{The Discrete Time Fourier Transform}
\subsection{Derivation of the Discrete Time Fourier Transform}
Consider the signal $x_s(t)$ given by:
$$x_s(t) = \sum_{n = -\infty}^\infty x(t) \delta(t - nT_s)$$
Apply a Fourier transform:
$$F\lt[x_s(t)\rt] = F\lt[\sum_{n=-\infty}^\infty x(t) \delta(t-nT_s)\rt]$$
By the sampling property of the impulse, $x(t)\delta(t-nT_s)$ becomes $x(nT_s)\delta(t-nT_s)$.
$$F\lt[x_s(t)\rt] = F\lt[\sum_{n=-\infty}^\infty x(nT_s) \delta(t-nT_s)\rt]$$
The sum and the constants $x(nT_s)$ may be taken out of the Fourier transform:
$$F\lt[x_s(t)\rt] = \sum_{n=-\infty}^\infty x(nT_s) F\big[\delta(t-nT_s)\big]$$
The Fourier transform of the impulse is unity, and the delay by $nT_s$ becomes a complex exponential:
$$X_s(\Omega) = \sum_{n=-\infty}^\infty x(nT_s)e^{-j\Omega n T_s}$$
Defining $x[n] = x(nT_s)$ and $\omega = \Omega T_s$ yields the discrete time Fourier transform:
$$X(e^{j\omega}) = \sum_{n=-\infty}^\infty x[n]e^{-j\omega n}$$
\nt
{
$\Omega$ is the continuous angular frequency while $\omega$ is the discrete angular frequency.
$$\Omega T_s = \omega$$
}
\noindent This is a natural extension of the continuous Fourier transform:
$$X(\Omega) = \int\limits_{-\infty}^{\infty}x(t)e^{-j\Omega t}dt$$
$X(e^{j\omega})$ is periodic in $\omega$ with a period of $2\pi$.
\dfn{Discrete Time Fourier Transform}
{
$$X(e^{j\omega}) = \sum_{n=-\infty}^\infty x[n]e^{-j\omega n}$$
}
\ex{DTFT of the Impulse}
{
Consider $x[n] = \delta[n]$. Find the DTFT of $x[n]$.
$$X(e^{j\omega}) = \sum_{n=-\infty}^\infty x[n]e^{-j\omega n}$$
$$X(e^{j\omega}) = \sum_{n=-\infty}^\infty \delta[n]e^{-j\omega n}$$
The expression in the sum has a non-zero value only at $n=0$, therefore the sum only has one term.
$$X(e^{j\omega}) = e^{-j\omega (0)} = 1$$
}
\ex{More Complicated DTFT}
{
Consider $x[n] = a^n u[n]$. Find the DTFT of $x[n]$.
$$X(e^{j\omega}) = \sum_{n=-\infty}^\infty x[n]e^{-j\omega n}$$
Since $u[n]$ only has a non-zero value for $n\ge0$, the sum can be simplified to:
$$X(e^{j\omega}) = \sum_{n=0}^\infty a^n e^{-j\omega n}$$
Since both terms in the sum have $n$ in the exponent:
$$X(e^{j\omega}) = \sum_{n=0}^\infty (a e^{-j\omega})^n$$
The sum has the form of a geometric series, which can be evaluated by:
$$\sum_{n=m}^{\infty} s^n = {s^m \over 1-s}, |s| < 1$$
Since $|e^{-j\omega}| = 1$, $|s| = |a|$. Therefore:
$$F\lt[a^n u[n]\rt] = {1 \over 1 - ae^{-j\omega}}, |a| < 1$$
}
\subsection{Properties of the Discrete Time Fourier Transform}
\begin{enumerate}
\item The DTFT is a linear operation.
\item If a signal is delayed or advanced ($x[n-a]$), the DTFT is scaled by $e^{-j\omega a}$.
\item The DTFT of $nx[n]$ is $j{d\over d\omega}\lt(X(e^{j\omega})\rt)$
\item The DTFT of the convolution, $x[n] * h[n]$, is the product of the DTFTs of the two signals (checked numerically in the sketch below).
\end{enumerate}
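\noindent For finite-length signals, sampling the DTFT at $N$ evenly spaced frequencies gives the $N$-point DFT, so the convolution property can be checked with the FFT after zero-padding to the full convolution length. A minimal Python/NumPy sketch (the signal values are arbitrary):
\begin{verbatim}
import numpy as np

# Convolution property: DTFT{x * h} = DTFT{x} DTFT{h}.
x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0, 0.5, 2.0])
N = len(x) + len(h) - 1          # full convolution length

y = np.convolve(x, h)                       # time-domain convolution
Y = np.fft.fft(x, N) * np.fft.fft(h, N)     # product of transforms
print(np.allclose(np.fft.ifft(Y).real, y))  # True
\end{verbatim}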
\ex{Time Multiplication Property}
{
Consider $x[n] = na^nu[n]$. Find the DTFT of $x[n]$. Since the DTFT of $a^nu[n]$ is known to be ${1 \over 1 - ae^{-j\omega}}$, by the time multiplication property of the DTFT, the DTFT of $x[n]$ is:
$$X(e^{j\omega}) = j{d \over d\omega} \lt[{1 \over 1 - ae^{-j\omega}}\rt]$$
}
\nt
{
If $x[n]$ is absolutely summable ($\sum|x[n]| < \infty$) then the DTFT exists.
}
\subsection{Special Cases}
In some special cases, the DTFT exists, but the corresponding time domain signal is not absolutely summable.
\ex{Ideal Lowpass Filter}
{
Consider the transfer function of an ideal lowpass filter with cutoff frequency, $\omega_c$:
$$H(e^{j\omega}) = \begin{cases} 1 & |\omega| \le \omega_c \\ 0 & \omega_c < |\omega| \le \pi \end{cases}$$
The corresponding impulse response, $h[n] = {\sin(\omega_c n) \over \pi n}$, is not absolutely summable, yet its DTFT exists.
}
\chapter{The Z-Transform}
Recall the Laplace transform is defined as:
$$X(s) = \int_{-\infty}^\infty x(t) e^{-st}dt$$
where $s$ is a complex variable. The analogous transform in the discrete domain is called the Z-transform, which is given by:
$$X(z) = \sum_n x[n]z^{-n}$$
where $z$ is a complex variable. The region of convergence of a Z-transform is the set of all $z$ for which the sum converges.
\ex{Z-Transform of causal finite support signals}
{
Given a causal, finite support signal, $x[n] = \begin{bmatrix}a&b&c&d\end{bmatrix}, 0 \le n \le 3$, find its Z-transform.
$$X(z) = \sum_{n=0}^3 x[n] z^{-n}$$
$$X(z) = a + bz^{-1} + cz^{-2} + dz^{-3}$$
The region of convergence is all values of $z$ except $z=0$.
}
\ex{Z-Transform of anti-causal finite support signals}
{
Given the finite support, anti-causal signal, $x[n] = \begin{bmatrix}a&b&c&d\end{bmatrix}, -4\le n \le -1$:
$$X(z) = \sum_{n=-4}^{-1} x[n] z^{-n}$$
$$X(z) = az^4 + bz^3 + cz^2 + dz$$
The region of convergence is all values of $z$ except $z=\infty$.
}
\ex{Z-Transform of signals neither causal nor anti-causal}
{
Given the finite support signal, $x[n] = \begin{bmatrix}a&b&c&d\end{bmatrix}, -2\le n\le 1$:
$$X(z) = \sum_{n=-2}^1 x[n] z^{-n}$$
$$X(z) = az^2 + bz + c + dz^{-1}$$
The region of convergence is all $z$ except $z=0$ and $z=\infty$.
}
\noindent For finite support, discrete signals, the region of convergence is always the entire $z$-plane with the possible exception of $z=0$ and $z=\infty$.
\ex{Z-Transform of the discrete unit impulse}
{
Given $x[n] = \delta[n]$:
$$X(z) = \sum_n \delta[n]z^{-n}$$
$$X(z) = 1$$
The region of convergence is the entire $z$-plane.
}
\ex{Z-Transform of infinite support signals}
{
Given the infinite support signal, $x[n] = a^n u[n]$:
$$X(z) = \sum_{n=0}^\infty a^nz^{-n}$$
The common exponent, $n$, can be factored out to put the sum in the form:
$$\sum_{n=m}^\infty s^n = {s^m \over 1 - s}$$
where $s=a z^{-1}$ in this case. This sum converges as long as $|a z^{-1}| < 1$; in simpler terms, the sum converges when $|z| > |a|$, giving $X(z) = {z \over z - a}$.
}
\noindent Since $z$ is a complex variable, it can be described in a polar form:
$$z = re^{j\theta}$$
where $r$ is the magnitude and $\theta$ is the phase angle.
\\ \\
\noindent If a region of convergence (ROC) does not include the unit circle, then the signal's DTFT does not exist except for special case signals. If the ROC \emph{does} contain the unit circle, then the DTFT can be found by substituting $z = e^{j\omega}$.
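\noindent The closed form above is easy to sanity-check numerically by truncating the Z-transform sum at a test point inside the ROC. A minimal Python/NumPy sketch (the values of $a$ and $z$ are arbitrary choices):
\begin{verbatim}
import numpy as np

# Verify Z{a^n u[n]} = z/(z - a) at a point with |z| > |a|.
a = 0.5
z = 1.2 * np.exp(1j * 0.7)        # test point inside the ROC
n = np.arange(200)                # truncation; terms decay like |a/z|^n
X_sum = np.sum(a**n * z**(-n))
X_closed = z / (z - a)
print(np.allclose(X_sum, X_closed))   # True
\end{verbatim}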
\section{Z-Transform Transfer Functions}
Recall that the impulse response of an LTI system completely characterizes the system. It can be used to determine the response to other inputs via convolution, and it is a useful tool in determining the BIBO stability of the system.
\\ \\
Given the impulse response of a discrete LTI system, the Z-transform of the impulse response is called the \emph{transfer function}:
$$H(z) = Z[h[n]] = \sum_n h[n]z^{-n}$$
An LTI system is causal if $h[n]$ is a causal signal. Equivalently, the ROC of $H(z)$ must include $z=\infty$, because the ROC of a Z-transform contains $z=\infty$ only for causal signals.
\\ \\
Additionally, an LTI system is BIBO stable if $h[n]$ is absolutely summable:
$$\sum_n |h[n]| < \infty$$
The equivalent condition for $H(z)$ is that the ROC of $H(z)$ must include the unit circle. These conditions are equivalent because the DTFT exists when the unit circle is in the ROC, and the DTFT exists exactly when the signal is absolutely summable (aside from the special cases noted earlier).
\nt{
For causal systems only, BIBO stability is achieved if all the poles of $H(z)$ are inside the unit circle. Since non-causal systems do not include $z=\infty$ in the ROC, this test does not hold for non-causal systems.
}
\noindent If the impulse response $h[n]$ has finite support, then the output, $y[n]$, given an input, $x[n]$, can be found directly by convolution:
$$y[n] = h[n] * x[n] = \sum_{k=0}^{N-1} h[k] x[n-k]$$
However, if the impulse response $h[n]$ has infinite support, the output, $y[n]$, is instead given by a constant coefficient difference equation:
$$\sum_{j=0}^N a[j]y[n-j] = \sum_{k=0}^M b[k]x[n-k]$$
where $a[j]$ are the constant coefficients of the output, and $b[k]$ are the constant coefficients of the input.
\ex{}
{
Given the infinite impulse response:
$$h[n] = a^n u[n]$$
The transfer function of the system is:
$$H(z) = {z \over z - a}$$
with a ROC of $|z| > |a|$. Since this includes $z=\infty$, the system is causal. If $|a| < 1$, then the system is also BIBO stable.
$$H(z) = {z \over z-a} = {Y(z) \over X(z)}$$
$${Y(z) \over X(z)} = {1 \over 1-az^{-1}}$$
$$Y(z)-az^{-1}Y(z) = X(z)$$
Taking the inverse Z-transform:
$$y[n] - a y[n-1] = x[n]$$
$$y[n] = x[n] + ay[n-1]$$
Since the output relies on both the input and the previous output, a recursive implementation of the input-output relationship is used, as in the sketch below.
}
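\noindent The recursion can be simulated directly. The following Python/NumPy sketch drives the difference equation $y[n] = x[n] + ay[n-1]$ with an impulse and recovers $h[n] = a^n u[n]$ (the value of $a$ and the number of samples are arbitrary choices):
\begin{verbatim}
import numpy as np

# Recursive implementation of y[n] = x[n] + a*y[n-1], at rest.
a, N = 0.9, 10
x = np.zeros(N)
x[0] = 1.0                        # x[n] = delta[n]
y = np.zeros(N)
prev = 0.0                        # y[-1] = 0 (system initially at rest)
for i in range(N):
    y[i] = x[i] + a * prev
    prev = y[i]
print(np.allclose(y, a ** np.arange(N)))   # True: y[n] = a^n u[n]
\end{verbatim}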
\ex{}
{
Given the impulse response,
$$h[n] = na^n u[n]$$
the corresponding transfer function is
$$H(z) = {az \over (z-a)^2}$$
with ROC, $|z| > |a|$. Since the impulse response is causal, the system is causal. Equivalently, since the ROC includes $z=\infty$, the system is causal. Given this region of convergence, the system is BIBO stable if and only if $|a| < 1$.
$$H(z) = {az\over (z-a)^2} = {Y(z) \over X(z)}$$
$$H(z) = {az \over z^2 - 2az + a^2} = {az \over z^2(1 -2az^{-1} + a^2z^{-2})}$$
$$az^{-1}X(z) = Y(z) - 2az^{-1}Y(z) + a^2z^{-2}Y(z)$$
Taking the inverse Z-transform to find the input-output relationship:
$$x[n-1] = y[n] - 2ay[n-1] + a^2y[n-2]$$
$$y[n] = x[n-1] + 2ay[n-1] - a^2y[n-2]$$
}
\ex{}
{
$$h[n] = u[n] - u[n-4]$$
$$h[n] = \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}, 0 \le n \le 3$$
$$H(z) = \sum_n h[n] z^{-n} = 1 + z^{-1} + z^{-2} + z^{-3}$$
For finite support signals, the ROC is the entire $z$-plane with the possible exception of $z=0$ and $z=\infty$. Checking $z=0$ reveals a pole, but $H(z)$ converges at $z=\infty$, so the system is causal. All finite impulse response (FIR) systems are BIBO stable.
$$y[n] = \sum_{k=0}^3 h[k]x[n-k]$$
$$y[n] = h[0]x[n] + h[1]x[n-1] + h[2]x[n-2] + h[3]x[n-3]$$
}
\section{The Inverse Z-Transform}
Given the causal signal $x[n] = a^n u[n]$, its Z-transform is $X(z) = {z \over z - a}$ with region of convergence $|z| > |a|$. Given the anti-causal signal $x[n] = -a^n u[-n-1]$, its Z-transform is also $X(z) = {z \over z - a}$, but with region of convergence $|z| < |a|$. The algebraic form of $X(z)$ alone therefore does not determine $x[n]$; the ROC must be specified as well.
\\ \\
Consider the following second-order LTI system whose transfer function is given by:
$$H(z) = {z (z - 0.2) \over (z-0.8)(z-0.9)}$$
If the system is causal, then the ROC includes $z=\infty$, so the ROC must be $|z|>0.9$. In this case the system would be BIBO stable. However, if the system is anti-causal, then the ROC must include $z=0$ and therefore would be $|z|<0.8$. In this case, the system would not be BIBO stable. If the system is two-sided, then the ROC would be $0.8 < |z| < 0.9$. In this case the system would also not be BIBO stable.
\\ \\
Find the partial fraction expansion of $H(z) \over z$:
$${H(z) \over z} = {z - 0.2 \over (z-0.8)(z-0.9)} = {A \over z - 0.9} + {B \over z - 0.8}$$
$$z - 0.2 = (z-0.8)A + (z-0.9)B$$
$$A = 7, B = -6$$
$${H(z) \over z} = {7 \over z-0.9} - {6 \over z-0.8}$$
$$H(z) = {7z \over z-0.9} - {6z \over z-0.8}$$
Apply the inverse Z-transform on both sides. If the system is causal:
$$h[n] = 7(0.9)^n u[n] - 6(0.8)^n u[n]$$
If the system is anti-causal:
$$h[n] = -7(0.9)^n u[-n-1] + 6(0.8)^n u[-n-1]$$
If the system is two-sided:
$$h[n] = -7(0.9)^n u[-n-1] - 6(0.8)^n u[n]$$
\nt
{
For FIR systems, any failure of the Z-transform to converge at $z = 0$ or $z = \infty$ is usually not counted as a pole when assessing stability; poles at locations other than $z=0$ and $z=\infty$ arise only for IIR systems.
}
\end{document}