\documentclass{article}

\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{float}
\usepackage[margin=1in]{geometry}

\title{Uniform Convergence, Convolutions, and Correlation of Signals}
\author{Aidan Sharpe \& Elise Heim}

\DeclareMathOperator{\sinc}{sinc}

\begin{document}
\begin{titlepage}
\maketitle
\end{titlepage}

\section{Results \& Discussion}

\subsection{Simple Example of Uniform Convergence}
Consider the discrete-time signal $x[n] = a^n u[n]$, where $u[n]$ is the unit step and $a=0.9$. Let $X(e^{j\omega})$ be the discrete-time Fourier transform (DTFT) of $x[n]$. For the DTFT of a signal to exist, the signal must be absolutely summable over all $n \in \mathbb{Z}$. If a signal is absolutely summable, then it must also be bounded, meaning there exists some non-negative real number $B$ such that $|x[n]| \le B, \forall n \in \mathbb{Z}$. At a bare minimum, $x[n]$ must have a finite maximum value.
\\
\\
In this case, $x[n]$ is bounded because $a^n$ only grows as $n$ decreases, and $u[n]$ is zero when $n<0$. Therefore, the maximum value of $x[n] = a^n u[n]$ is $x[0] = a^0 = 1$.
\\
\\
Additionally, $x[n]$ takes the form of a geometric series, so its sum is given by
\begin{equation}
\sum_{n=m}^\infty s^n = {s^m \over 1-s}, \quad |s| < 1.
\end{equation}
In this case, $m=0$, because the signal is zero for $n<0$. Since $a=0.9$, the sum evaluates to ${1 \over 1-0.9} = 10$. Considering that $x[n]$ is always a positive real number, each term is its own absolute value, so the sum and the absolute sum are equivalent. By taking the absolute sum of the first 200 terms of $x[n]$, it becomes clear that the sum approaches 10 in the limit, as seen in figure \ref{fig:abs_sum_anun}.
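The running sum in figure \ref{fig:abs_sum_anun} can be reproduced with a short MATLAB sketch along the following lines (the plotting details of the original script may differ):
\begin{lstlisting}[language=Matlab]
% First 200 samples of x[n] = a^n u[n] and its running absolute sum
a = 0.9;
n = 0:199;
x = a.^n;                  % u[n] = 1 for n >= 0
absSum = cumsum(abs(x));   % partial absolute sums approach 1/(1-a) = 10

stem(n, x); hold on;
plot(n, absSum);
legend('x[n]', 'running absolute sum'); xlabel('n');
\end{lstlisting}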
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{abs_sum_anun.png}
\caption{The signal value of $x[n]=a^n u[n]$ and its absolute sum}
\label{fig:abs_sum_anun}
\end{figure}
\noindent
Since the signal is absolutely summable, the DTFT, $X(e^{j\omega})$, exists and can be found by evaluating the sum:
\begin{equation}
X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}.
\end{equation}
Because $x[n]$ is zero for $n<0$, this sum is a geometric series in $a e^{-j\omega}$, and evaluating it reveals that
$$X(e^{j\omega}) = {1\over 1 - a e^{-j\omega}},$$
where $a = 0.9$. The plot for $X(e^{j\omega})$ is seen in figure \ref{fig:dtft_anun}.
\begin{figure}[H]
\includegraphics[width=\textwidth]{dtft_anun.png}
\caption{The DTFT of $x[n] = a^n u[n]$}
\label{fig:dtft_anun}
\end{figure}
\noindent
In this case, the DTFT can be easily evaluated because the sum is a geometric series. However, for more complicated signals, the sum can only be approximated. These finite sums are called truncated DTFTs and take the form:
\begin{equation}
X_K(e^{j\omega}) = \sum_{n=-K}^K x[n] e^{-j\omega n}.
\end{equation}
As seen in figure \ref{fig:truncated_DTFTs_anun}, the approximation is quite poor for small values of $K$, but the accuracy improves quickly for slightly larger values of $K$. The actual performance, quantified by the maximum error for a given value of $K$, lines up with these observations, as seen in figure \ref{fig:max_error_anun}. It also becomes apparent through figure \ref{fig:max_error_anun} that large values of $K$ have diminishing returns.
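A minimal sketch of how the truncated DTFTs and their maximum errors can be computed in MATLAB, assuming a 1024-point frequency grid (the original script may differ):
\begin{lstlisting}[language=Matlab]
% Truncated DTFT of x[n] = a^n u[n] versus the closed form
a = 0.9;
w = linspace(-pi, pi, 1024);
Xexact = 1 ./ (1 - a*exp(-1j*w));

maxErr = zeros(1, 50);
for K = 1:50
    n = 0:K;                          % x[n] = 0 for n < 0
    XK = (a.^n) * exp(-1j*n.'*w);     % sum of a^n e^{-jwn}, n = 0..K
    maxErr(K) = max(abs(Xexact - XK));
end
plot(1:50, maxErr); xlabel('K'); ylabel('maximum error');
\end{lstlisting}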
\begin{figure}[H]
\includegraphics[width=\textwidth]{truncated_DTFTs_anun.png}
\caption{The truncated DTFT of $x[n] = a^n u[n]$ for various values of $K$}
\label{fig:truncated_DTFTs_anun}
\end{figure}

\begin{figure}[H]
\includegraphics[width=\textwidth]{max_error_anun.png}
\caption{The maximum error of the truncated DTFT of $x[n] = a^n u[n]$ for various values of $K$}
\label{fig:max_error_anun}
\end{figure}
\subsection{More Complex Example of Uniform Convergence}
Consider the signal $x[n] = n a^n u[n-1]$. Again, to determine if the DTFT of the signal exists, the signal must be absolutely summable. For $|a|<1$, its sum is given by:
\begin{equation}
\sum_{n=1}^\infty n a^n = {a \over (1-a)^2}.
\label{eqn:n_geometric_series}
\end{equation}
For $a=0.9$, the infinite series evaluates to ${0.9 \over (1-0.9)^2} = 90$, and this result is verified by the sum of the first 200 samples of $x[n]$, seen in figure \ref{fig:abs_sum_nanun-1}.
\begin{figure}[H]
\includegraphics[width=\textwidth]{abs_sum_nanun-1.png}
\caption{The signal value of $x[n] = n a^n u[n-1]$ and its absolute sum}
\label{fig:abs_sum_nanun-1}
\end{figure}
\noindent
The signal $x[n]$ has a maximum amplitude of 3.4868 at $n=9$ and $n=10$. Therefore, the signal is bounded by 3.4868. It is also clear that the signal must be bounded because it is absolutely summable: if the signal were unbounded, then its absolute sum would also be unbounded.
\\
\\
The DTFT of $x[n]$ can be found using equation \ref{eqn:n_geometric_series}. In this case, the value of $a$ in the equation is $a e^{-j\omega}$. Therefore, the DTFT of $x[n]$ is
\begin{equation}
X(e^{j\omega}) = \sum_{n=-\infty}^\infty n a^n u[n-1] e^{-j\omega n} = \sum_{n=1}^\infty n\left(a e^{-j\omega} \right)^n = {a e^{-j\omega} \over \left(1 - a e^{-j\omega} \right)^2}.
\end{equation}
\\
\\
The truncated DTFT in this case is a variation on the geometric series, with terms of the form $k r^k$. The sum of the first $n$ terms is given by:
\begin{equation}
\sum_{k=1}^n k r^k = {r - r^{n+2} \over (1-r)^2} - {(n+1) r^{n+1} \over 1 - r}.
\end{equation}
Applying this formula with $r = a e^{-j\omega}$ allows for varying degrees of approximation of the DTFT, as seen in figure \ref{fig:truncated_dtfts_nanun-1}.
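A minimal MATLAB sketch of this closed-form partial sum, assuming $r = a e^{-j\omega}$ and a single truncation limit $K$ (the original script presumably loops over several values):
\begin{lstlisting}[language=Matlab]
% Truncated DTFT of n a^n u[n-1] via the closed-form partial sum
a = 0.9;
w = linspace(-pi, pi, 1024);
r = a*exp(-1j*w);
K = 20;                                  % truncation limit
XK = (r - r.^(K+2))./(1-r).^2 - (K+1)*r.^(K+1)./(1-r);
Xexact = r ./ (1-r).^2;
plot(w, abs(Xexact), w, abs(XK), '--');
xlabel('\omega'); legend('exact', 'truncated');
\end{lstlisting}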
\begin{figure}[H]
\includegraphics[width=\textwidth]{truncated_DTFTs_nanun.png}
\caption{Varying levels of approximation of the DTFT of $x[n] = n a^n u[n-1]$}
\label{fig:truncated_dtfts_nanun-1}
\end{figure}
It can clearly be seen that as $K$ gets larger, the approximation of the DTFT steadily improves. By plotting the maximum error for increasing values of $K$, as seen in figure \ref{fig:max_error_nanun-1}, it becomes clear that, once again, there are diminishing returns for larger values.
\begin{figure}[H]
\includegraphics[width=\textwidth]{max_error_nanun-1.png}
\caption{The maximum error of the truncated DTFT of $x[n] = n a^n u[n-1]$ for increasing values of $K$}
\label{fig:max_error_nanun-1}
\end{figure}
\subsection{The Inverse DTFT}
An ideal lowpass filter, whose frequency response is seen in figure \ref{fig:ideal_lowpass_fresponse}, has unity gain for all frequencies below the cutoff frequency, $\omega_c$, and zero gain for all frequencies above the cutoff frequency. Therefore, over one period, the DTFT of such a filter's impulse response is a pulse of amplitude one from $-\omega_c$ to $\omega_c$. To recover the corresponding signal, the inverse DTFT is applied. This operation is defined as:
\begin{equation}
x[n] = {1\over 2\pi} \int\limits_{-\pi}^{\pi} X(e^{j\omega}) e^{j\omega n} \, d\omega,
\end{equation}
where $X(e^{j\omega})$ is the DTFT of the signal $x[n]$.
\begin{figure}[H]
\includegraphics[width=\textwidth]{ideal_lowpass_fresponse.png}
\caption{The frequency response of an ideal lowpass filter}
\label{fig:ideal_lowpass_fresponse}
\end{figure}
Considering that $X(e^{j\omega})$ has a value of one between $-\omega_c$ and $\omega_c$ and zero elsewhere, the integral can be rewritten as:
\begin{equation}
x[n] = {1\over 2\pi} \int\limits_{-\omega_c}^{\omega_c} e^{j\omega n} \, d\omega.
\end{equation}
Evaluating the integral gives $x[n] = {\sin(\omega_c n) \over \pi n}$, which can be written as:
\begin{equation}
x[n] = {\omega_c \over \pi} \sinc\left({\omega_c n \over \pi}\right),
\end{equation}
where $\sinc(x)$ is the continuous normalized $\sinc$ function defined as:
\begin{equation}
\sinc(x) = \begin{cases}
1 & x = 0 \\
{\sin(\pi x) \over \pi x} & x \ne 0
\end{cases}
\end{equation}
\noindent
The plot of $x[n]$, where $\omega_c = 0.4\pi$, is seen in figure \ref{fig:recovered_lowpass}.
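A sketch of this computation in MATLAB, assuming the Signal Processing Toolbox \texttt{sinc}, which matches the normalized definition above:
\begin{lstlisting}[language=Matlab]
% Inverse DTFT of the ideal lowpass filter, wc = 0.4*pi
wc = 0.4*pi;
n = -40:40;
x = (wc/pi) * sinc(wc*n/pi);   % sin(wc*n)/(pi*n), with x[0] = wc/pi
stem(n, x); xlabel('n'); ylabel('x[n]');
\end{lstlisting}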
\begin{figure}[H]
\includegraphics[width=\textwidth]{recovered_lowpass_signal.png}
\caption{The inverse DTFT of an ideal lowpass filter}
\label{fig:recovered_lowpass}
\end{figure}
\noindent
Given this signal, finite sum approximations of the DTFT yield oscillating frequency responses that resemble the ideal case, as seen in figure \ref{fig:truncated_DTFTs_lowpass}; the ripple that persists near the discontinuity is the Gibbs phenomenon. Similar to other approximations of DTFTs, the more terms in the summation, the better the accuracy of the approximation.
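These finite-sum approximations can be generated with a sketch like the following (the values of $K$ here are illustrative; the original script may use different ones):
\begin{lstlisting}[language=Matlab]
% Finite-sum approximations of the ideal lowpass response
wc = 0.4*pi;
w = linspace(-pi, pi, 2048);
for K = [5 20 100]
    n = -K:K;
    x = (wc/pi) * sinc(wc*n/pi);
    XK = x * exp(-1j*n.'*w);    % truncated DTFT, real up to roundoff
    plot(w, real(XK)); hold on;
end
xlabel('\omega'); legend('K = 5', 'K = 20', 'K = 100');
\end{lstlisting}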
\begin{figure}[H]
\includegraphics[width=\textwidth]{truncated_DTFTs_lowpass.png}
\caption{Finite sum approximations of an ideal lowpass filter with varying bounds of summation}
\label{fig:truncated_DTFTs_lowpass}
\end{figure}
\subsection{Convolutions: Pulse Response}
Consider a linear, time-invariant system with the impulse response $h[n] = [1, 2, 1]$. If the input of the system is given as $x[n] = u[n] - u[n-2]$, where $u[n]$ is the unit step, the output of the system can be calculated analytically.
\begin{equation}
y[n] = x[0]h[n] + x[1]h[n-1] = h[n] + h[n-1]
\end{equation}
$$y[0] = 1 + 0 = 1$$
$$y[1] = 2 + 1 = 3$$
$$y[2] = 1 + 2 = 3$$
$$y[3] = 0 + 1 = 1$$

\noindent
With this in mind, one can use MATLAB to check and confirm the answer, as shown below in figure \ref{fig:number4}.
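A minimal version of that check using \texttt{conv} (the plotting details of the original script may differ):
\begin{lstlisting}[language=Matlab]
% Check the hand computation with conv
h = [1 2 1];       % impulse response
x = [1 1];         % u[n] - u[n-2] is 1 at n = 0 and n = 1
y = conv(x, h)     % expected: [1 3 3 1]
stem(0:length(y)-1, y); xlabel('n'); ylabel('y[n]');
\end{lstlisting}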
\begin{figure}[H]
\includegraphics[width=\textwidth]{num4.png}
\caption{The response of $x[n] = u[n] - u[n-2]$ to an LTI system}
\label{fig:number4}
\end{figure}
\subsection{Convolutions: Exponential Decay Response}
Consider a linear, time-invariant system with the impulse response $h[n] = u[n]$, where $u[n]$ is the unit step. If the input to the system is $x[n] = 0.95^n u[n]$, truncated to its first 100 samples, an expression for $y[n]$ can be calculated, as shown below.
$$y[n] = \sum_{k=0}^{99} x[k] h[n-k]$$
The plot for $y[n]$ is seen in figure \ref{fig:number5a}.
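A sketch of this convolution in MATLAB, with both signals truncated to 100 samples:
\begin{lstlisting}[language=Matlab]
% Convolution of x[n] = 0.95^n u[n] with h[n] = u[n], truncated
n = 0:99;
x = 0.95.^n;
h = ones(1, 100);   % unit step, first 100 samples
y = conv(x, h);
stem(0:length(y)-1, y); xlabel('n'); ylabel('y[n]');
\end{lstlisting}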
\begin{figure}[H]
\includegraphics[width=\textwidth]{num5a.png}
\caption{Convolution of $x[n] = 0.95^n u[n]$ with $h[n]$}
\label{fig:number5a}
\end{figure}
The unit step response occurs when $x[n] = u[n]$, and it is seen in figure \ref{fig:number5b}.
\begin{figure}[H]
\includegraphics[width=\textwidth]{num5b.png}
\caption{Step response of an LTI system}
\label{fig:number5b}
\end{figure}
This system is not BIBO stable: BIBO stability requires an absolutely summable impulse response, and $h[n] = u[n]$ is not absolutely summable. Indeed, the bounded input $u[n]$ produces the unbounded ramp output seen in figure \ref{fig:number5b}.
\subsection{Correlation Video Summaries}
The first video discusses correlation. As mentioned in the video, ``correlation is a measure of how similar signals are.'' The video provides examples of three different signals: x, y, and z. It also displays a formula for calculating a correlation measurement. A greater correlation measurement between signals with similar energy means that they are more strongly correlated; however, the measurement conveys no information about how similar or different two signals are when the signals have significantly different energy. As demonstrated by altering the signals, the correlation measurement on its own does not provide an accurate representation of correlation. This is why it is necessary to use normalized correlation.
\\
\\
In the second video, a new formula is presented to determine the normalized correlation. This takes the original correlation and divides it by the square root of the product of the energies of the two signals being compared; the denominator is a scaling factor comprised of the energies of the signals. Instead of taking an arbitrarily scaled value, like the raw correlation, the normalized correlation is always a value between -1 and 1. The more similar the signals, the greater the value. With this in mind, it might seem as though calculating the correlation without normalizing would be pointless. However, through a demonstration with a MATLAB script, the usefulness of the non-normalized correlation becomes apparent. By calculating the correlation of multiple signals, it is easy to determine that some signals are twice as strongly present as others. However, the same information cannot be gleaned from the normalized correlation.
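For reference, the normalized correlation of two real signals $x[n]$ and $y[n]$ takes the standard form (the video's notation may differ):
\begin{equation}
\rho_{xy} = {\sum_n x[n] \, y[n] \over \sqrt{\left(\sum_n x^2[n]\right)\left(\sum_n y^2[n]\right)}},
\end{equation}
which always lies between -1 and 1.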
\\
\\
The third and final video explains crosscorrelation, which is a measure of the similarity between signals at different time lag positions. It introduces two signals, which are displayed such that there is zero lag, meaning the first, second, third, and subsequent samples are aligned vertically with each other. To calculate the correlation, the vertically aligned samples are multiplied and the products are summed. At this zero lag position, a correlation value is calculated. If sample number zero of the first signal is aligned vertically with sample number one of the second signal, a lag of one is introduced. In this way, the vertically aligned samples can again be multiplied and summed, and another correlation value can be determined. For each lag position, including negative lags, a new correlation value can be calculated to create a correlation sequence. Such a sequence can be generated and plotted in MATLAB.
\subsection{Crosscorrelation}
The crosscorrelation between two real signals $x[n]$ and $h[n]$ at different lags $n$ is defined as:
\begin{equation}
c_{xh}[n] = \sum_k x[k] h[k-n]
\end{equation}
If $x[n] = [1, 3, -2, 4]$ and $h[n] = [2, 3, -1, 3]$, the crosscorrelation can be computed for $c_{xh}[0]$, $c_{xh}[1]$, and $c_{xh}[-1]$, as shown below.
$$c_{xh}[0] = 25$$
$$c_{xh}[1] = -4$$
$$c_{xh}[-1] = -6$$
The relationship of crosscorrelation versus lag can be plotted, as demonstrated by figure \ref{fig:number7}.
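MATLAB's \texttt{xcorr} (Signal Processing Toolbox) uses the same lag convention for real signals, so a sketch along these lines should reproduce the plot:
\begin{lstlisting}[language=Matlab]
% Crosscorrelation of x and h over all lags
x = [1 3 -2 4];
h = [2 3 -1 3];
[c, lags] = xcorr(x, h);   % c(lags == 0) is 25, c(lags == 1) is -4
stem(lags, c); xlabel('lag n'); ylabel('c_{xh}[n]');
\end{lstlisting}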
\begin{figure}[H]
\includegraphics[width=\textwidth]{num7.png}
\caption{Cross correlation of $x[n]$ and $h[n]$}
\label{fig:number7}
\end{figure}
\subsection{Autocorrelation}
The autocorrelation of a real signal $x[n]$ is defined as:
\begin{equation}
c_{xx}[n] = \sum_k x[k] x[k-n]
\end{equation}
which is the crosscorrelation of a signal with itself. For example, the autocorrelation of $x[n] = [1, 3, -2, 4]$ is seen in figure \ref{fig:number8}.
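The same \texttt{xcorr} call with a single argument yields the autocorrelation:
\begin{lstlisting}[language=Matlab]
% Autocorrelation of x over all lags
x = [1 3 -2 4];
[c, lags] = xcorr(x);   % symmetric sequence, peak at zero lag
stem(lags, c); xlabel('lag n'); ylabel('c_{xx}[n]');
\end{lstlisting}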
\begin{figure}[H]
\includegraphics[width=\textwidth]{num8real.png}
\caption{Autocorrelation of $x[n] = [1,3,-2,4]$}
\label{fig:number8}
\end{figure}
\subsection{Crosscorrelation and Autocorrelation}
Given the signal $x[n] = \begin{bmatrix}0.5&0.5&0.5&0.5&0.5&0.5\end{bmatrix}$, its autocorrelation at zero lag is $c_{xx}[0] = 6(0.5)^2 = 1.5$.
\\
\\
Consider the signals $v[n] = 2x[n]$, $w[n] = -x[n]$, and $y[n] = \begin{bmatrix}1&1&-1&-1&1&1&-1&-1\end{bmatrix}$.
$$c_{xv}[0] = 3$$
$$c_{xw}[0] = -1.5$$
$$c_{xy}[0] = 1$$
Since $w[n] = -x[n]$, it makes sense that the crosscorrelation of $x[n]$ and $w[n]$ is the negation of the autocorrelation of $x[n]$.
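These zero-lag values follow directly from the definition and can be checked with dot products, assuming the length-6 signal above (only overlapping samples contribute to $c_{xy}[0]$):
\begin{lstlisting}[language=Matlab]
% Zero-lag correlations as dot products
x = 0.5*ones(1, 6);
v = 2*x;
w = -x;
y = [1 1 -1 -1 1 1 -1 -1];
cxx0 = x*x.'        % 1.5
cxv0 = x*v.'        % 3
cxw0 = x*w.'        % -1.5
cxy0 = x*y(1:6).'   % 1, using only the overlapping samples
\end{lstlisting}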
\section{Conclusions}
Multiple concepts were reinforced through this lab. The discrete-time Fourier transform (DTFT), convolutions, and correlation were explored through a variety of exercises. This lab showed how the inverse DTFT can be used to model a filter, and how readily the DTFT can be approximated with finite sums. Additionally, a variety of different signals were represented in MATLAB to gain a visual understanding of them. Through some helpful sources that were included within the lab, one can learn a great deal about correlation values and how they can be used to compare signals.
\end{document}