5th semester files

2024-02-22 14:23:12 -05:00
parent e39a9fec53
commit 5223b711a6
727 changed files with 1836099 additions and 0 deletions


@ -0,0 +1,166 @@
# Chapter 1 - Continuous Time Signals
#### Continuous Time Signal
A continuous time signal is a function $x(t)$ that maps values from $\Reals$ to $\Reals$ or $\Complex$.
#### Causal Signals
A signal, $x(t)$, is said to be causal if it has value $0$ for $t<0$.
#### Non-causal Signals
A signal, $x(t)$, is said to be *non-causal*, or *not causal*, if $x(t) \ne 0$ for some $t<0$.
#### Finite Support
A signal, $x(t)$, is said to have *finite support*, or *finite duration*, if there exist times $T_1$ and $T_2$, with $T_1 \le T_2$, such that $x(t) = 0$ for $t < T_1$ and for $t > T_2$.
#### Infinite Support
A signal, $x(t)$, is said to have *infinite support*, or *infinite duration*, if it does not have finite support.
There are three types of infinite support:
1. *Right-sided*: the signal's support is $(T_1, +\infty)$
1. *Left-sided*: the signal's support is $(-\infty, T_2)$
1. *Two-sided*: the signal's support is $(-\infty, +\infty)$
## 1.1 Basic Signal Operations
#### Signal Addition
Signal addition is of the form: $z(t) = x(t) + y(t)$, where the amplitude of $z(t)$ at each time $t$ is the sum of the amplitudes of $x(t)$ and $y(t)$.
#### Scalar Multiplication
Scalar multiplication is of the form: $z(t)=\alpha x(t)$. The amplitude of the output is proportional to $\alpha$.
#### Time Shift
A time shift is of the form: $z(t)=x(t-\tau)$. When $\tau > 0$, the time shift is said to be a *delay*. When $\tau < 0$, the time shift is said to be an *advance*.
#### Time Scale
A time scale is of the form: $z(t)=x(at)$. For $\lvert a \rvert > 1$, the scaling is said to be a compression. For $\lvert a \rvert < 1$, the scaling is said to be an expansion. For $a < 0$, a time reflection over $t=0$ occurs.
<br>
##### Figure 1.1.1 - Signal Transformations
<img src="Images/Figure1.1.1.png" width=400>
##### Figure 1.1.2 - Signal Combinations
<img src="Images/Figure1.1.2.png" width=400>
## 1.2 Combinations of Operations
### Review of Reflections
Every $t$ becomes a $-t$.
$x(t) = \begin{cases} t & 0 \leq t \leq 8 \\ 0 & \text{otherwise} \\ \end{cases}$
### Example 1
$x(t) = \begin{cases} t & 0 \leq t \leq 8 \\ 0 & \text{otherwise} \\ \end{cases}$
### How to find $y(t) = x(at - b)$
#### Method 1 (recommended)
1. Find $v(t) = x(t-b)$
1. Find $w(t) = v(\lvert a \rvert t) = x( \lvert a \rvert t - b)$
1. If $a \gt 0$, then $\lvert a \rvert = a$. $y(t) = w(t) = x(at - b)$
1. If $a \lt 0$, then $\lvert a \rvert = -a$. $y(t) = w(-t) = x(-\lvert a \rvert t - b) = x(at - b)$
#### Method 2
1. Find $v(t) = x(\lvert a \rvert t)$
1. Find $w(t) = v(t - \frac{b}{\lvert a \rvert}) = x\left(\lvert a \rvert \left(t - \frac{b}{\lvert a \rvert} \right)\right) = x(\lvert a \rvert t - b)$
1. If $a \gt 0$, then $y(t) = w(t)$; if $a \lt 0$, then $y(t) = w(-t)$
### Example
Find $x(3t-5)$
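One way to proceed, following Method 1 with $a = 3$ and $b = 5$:
1. $v(t) = x(t-5)$
1. $w(t) = v(3t) = x(3t - 5)$
1. Since $a = 3 \gt 0$, $y(t) = w(t) = x(3t-5)$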
## Lecture 5
### The Impulse Function
$\delta(t) = \begin{cases} 0 & t \ne 0 \\ \infty & t = 0 \\ \end{cases}$
$\int_{-\infty}^{\infty} \delta(t) dt = 1$
$\delta(t - \alpha) = \begin{cases} \infty & t=\alpha \\ 0 & t \ne \alpha \\ \end{cases}$
$\delta(2t - 3) = \begin{cases} \infty & 2t-3=0, t=3/2 \\ 0 & t \ne 3/2 \end{cases}$
#### Properties
$f(t) \delta(t- \alpha) = f(\alpha) \delta(t-\alpha)$
$\int_a^b f(t) \delta(t - \alpha)dt= \begin{cases} f(\alpha) & \alpha \in [a, b] \\ 0 & \text{otherwise} \end{cases}$
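For instance, applying the sifting property: $\int_0^5 e^{-t}\delta(t-2)dt = e^{-2}$, since $2 \in [0, 5]$, while $\int_0^5 e^{-t}\delta(t-7)dt = 0$.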
#### Unit step
$u(t) = \begin{cases} 1 & t \ge 0 \\ 0 & t \lt 0 \end{cases}$
$\delta(t) = \frac{du(t)}{dt}$
$u(t) = \int_{-\infty}^{t} \delta(\tau)d\tau = \begin{cases} 1 & t \ge 0 \\ 0 & t \lt 0 \end{cases}$
$u(t - \tau) = \begin{cases} 1 & t \ge \tau \\ 0 & t \lt \tau \end{cases}$
$u(-t + 5) = \begin{cases} 1 & t \le 5 \\ 0 & t \gt 5 \end{cases}$
The difference $u(t) - u(t-1)$ is a finite-support pulse of height 1 on the interval $[0, 1)$.
#### Ramp
$r(t) = t u(t) = \begin{cases} t & t \ge 0 \\ 0 & t \lt 0 \end{cases}$
$\frac{dr(t)}{dt} = t\frac{du(t)}{dt} + (1)u(t) = t\delta(t) + u(t) = u(t)$, since $t\delta(t) = 0$
#### Derivatives
1. $\cos(2 \pi t)\left[ u(t) - u(t-1) \right] = \begin{cases} 0 & t \lt 0 \\ \cos(2\pi t) & 0 \le t \lt 1 \\ 0 & t \ge 1 \end{cases}$
Using the product rule:
$\cos(2\pi t)\left[ \delta(t) - \delta(t-1) \right] - 2\pi\sin(2\pi t)\left[u(t)-u(t-1)\right]$
1. $u(t) - 2u(t-1) + u(t-2) = \begin{cases} 0 & t \lt 0 \\ 1 & 0 \le t \lt 1 \\ -1 & 1 \le t \lt 2 \\ 0 & t \ge 2 \end{cases}$
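Differentiating the second signal term by term gives a sum of impulses:
$$\frac{d}{dt}\left[u(t) - 2u(t-1) + u(t-2)\right] = \delta(t) - 2\delta(t-1) + \delta(t-2)$$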
## Energy and Power of Signals
### Energy
The energy of a signal $x(t)$ could be finite or infinite.
Given a real or complex signal, $x(t)$, the energy, $E_x$, is defined as:
$$E_x = \int_{-\infty}^{\infty} \lvert x(t) \rvert^2 dt$$
1. If $x(t)$ has finite support, its region of support is some $[a, b]$, and $E_x$ reduces to an integral with finite bounds $a$ and $b$.
1. If $x(t)$ has infinite support, $E_x$ is an improper integral and may or may not have finite value.
1. If $x(t)$ is periodic (and not identically zero), $E_x$ is infinite.
### Power
The power, $P_x$, of an aperiodic signal is defined as:
$$P_x = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \lvert x(t) \rvert^2 dt$$
The power, $P_x$, of a periodic signal with period, $T_0$, $x(t)$ is defined as:
$$P_x = \frac{1}{T_0}\int_{t_0}^{t_0 + T_0} \lvert x(t) \rvert^2 dt$$
The most convenient starting times, $t_0$ are $-\frac{T_0}{2}$ and $0$. The bounds of integration will be $\left[ -\frac{T_0}{2}, \frac{T_0}{2} \right]$ and $\left[ 0, T_0 \right]$ respectively.
For a periodic signal, the power is the energy of one period normalized by the length of the period.
**FACT:** A finite energy aperiodic signal has zero power.
$$P_x = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \lvert x(t) \rvert^2 dt$$
$$=\lim_{T \to \infty} \frac{1}{2T} [N]$$
Where $N$ is some finite number.
$$\lim_{T \to \infty} \frac{N}{2T} = 0$$
### Procedure
1. Determine if $x(t)$ is finite support or infinite support.
- If finite support: $E_x \lt \infty$, $P_x = 0$
1. If $x(t)$ is infinite support, determine periodicity of $x(t)$
- If aperiodic, calculate $E_x$, $P_x$
- If periodic: $E_x = \infty$, calculate $P_x$
### Facts
If:
$$x(t) = A\cos(\omega_0 t + \theta)$$
Then:
$$E_x = \infty$$
$$P_x = \frac{A^2}{2}$$
If:
$$x(t) = \sum_k A_k\cos(\omega_k t + \theta_k), \quad \omega_k \text{ distinct}$$
Then:
$$E_x = \infty$$
$$P_x = \sum_k \frac{A_k^2}{2}$$
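As a quick numerical sanity check (not from the lecture; the signal, amplitudes, and averaging window below are arbitrary choices), the measured power of a sum of sinusoids with distinct frequencies should approach $\sum_k \frac{A_k^2}{2}$:

```python
import numpy as np

# x(t) = 3cos(2*pi*t) + 2cos(5t + 0.4): distinct frequencies, so P_x = 9/2 + 4/2 = 6.5
T = 5000.0                                      # half-width of the averaging window
t = np.linspace(-T, T, 2_000_001)
dt = t[1] - t[0]
x = 3*np.cos(2*np.pi*t) + 2*np.cos(5*t + 0.4)

P_numeric = np.sum(np.abs(x)**2) * dt / (2*T)   # (1/2T) * integral of |x(t)|^2 dt
P_formula = 3**2/2 + 2**2/2
print(P_numeric, P_formula)                     # both close to 6.5
```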


@ -0,0 +1,330 @@
# Chapter 2 - Systems
A *system*, $S$, takes in an input signal, $x(t)$, and outputs a signal, $y(t)$.
$$x(t) \rightarrow \boxed{S} \rightarrow y(t)$$
$$y(t) = S[x(t)]$$
## Simple System Properties
#### Linear Systems
$S$ is said to be linear if it satisfies both homogeneity and superposition.
#### Homogeneity
Scaling an input by a factor, $a$, scales the output by the same factor.
$$ay(t) = S[ax(t)]$$
#### Superposition
Adding two inputs leads to the corresponding outputs being added.
If
$$y_1(t) = S[x_1(t)]$$
$$y_2(t) = S[x_2(t)]$$
Then
$$y_1(t) + y_2(t) = S[x_1(t) + x_2(t)]$$
#### Time Invariance
$S$ is said to be time-invariant if delaying or advancing the input gives the same delay or advance in the output.
$$y(t - \alpha) = S[x(t - \alpha)]$$
#### Causality
$S$ is said to be causal if the output does not depend on any future inputs. Noncausal systems cannot be realized in real time.
#### Signal Boundedness
$x(t)$ is said to be bounded if $\lvert x(t) \rvert \le B_x \lt \infty$
Bounded example:
$$x(t) = e^{-t} u(t)$$
Unbounded example:
$$x(t) = e^t u(t)$$
#### Bounded-Input, Bounded-Output Stability
$S$ is said to be bounded-input, bounded-output (BIBO) stable if every bounded input $x(t)$ gives rise to a bounded output $y(t)$.
Fact: $S$ is BIBO unstable if even one bounded input gives rise to an unbounded output.
### Example 2.1
$$y(t) = S[x(t)] = x^2(t)$$
##### Homogeneity:
$$S[ax(t)] = [ax(t)]^2 = a^2x^2(t)$$
$$ay(t) = ax^2(t)$$
Since $S[ax(t)] \ne ay(t)$, $S$ is not homogeneous, and therefore not linear.
##### Time invariance:
$$S[x(t-\alpha)] = [x(t-\alpha)]^2 = x^2(t-\alpha) =y(t-\alpha)$$
Since $S[x(t-\alpha)] = y(t-\alpha)$, $S$ is time invariant.
##### BIBO stability:
$$\lvert x(t) \rvert \le B_x \lt \infty$$
$$\lvert y(t) \rvert = \lvert x^2(t) \rvert \le B_x^2 \lt \infty$$
### Example 2.2
$$y(t) = S[x(t)] = \cos(x(t))$$
##### Homogeneity:
$$S[ax(t)] = \cos(ax(t))$$
$$ay(t) = a\cos(x(t))$$
Since $S[ax(t)] \ne ay(t)$, $S$ is not homogeneous and therefore not linear.
##### Time invariance:
$$S[x(t-\alpha)] = \cos(x(t-\alpha)) = y(t-\alpha)$$
##### BIBO stability:
$$\lvert x(t) \rvert \le B_x \lt \infty$$
$$\lvert y(t) \rvert \le 1$$
### Example 2.3
$$y(t) = S[x(t)] = \lvert x(t) \rvert$$
##### Homogeneity:
$$S[ax(t)] = \lvert ax(t) \rvert$$
$$ay(t) = a \lvert x(t) \rvert$$
Since $\lvert ax(t) \rvert = \lvert a \rvert \lvert x(t) \rvert \ne a\lvert x(t) \rvert$ for $a \lt 0$, $S$ is not homogeneous and therefore not linear.
##### Time invariance:
$$S[x(t-\alpha)] = \lvert x(t-\alpha)\rvert$$
$$y(t-\alpha) = \lvert x(t-\alpha)\rvert$$
Since $S[x(t-\alpha)] = y(t-\alpha)$, $S$ is time invariant.
##### BIBO stability
$$\lvert x(t) \rvert \le B_x \lt \infty$$
$$\lvert y(t) \rvert = \lvert x(t) \rvert \le B_x \lt \infty$$
### Example 2.4
$$y(t) = S[x(t)] = mx(t) + b$$
##### Homogeneity:
$$S[ax(t)] = max(t) + b$$
$$ay(t) = amx(t) + ab$$
##### Superposition:
$$y_1(t) = mx_1(t) + b$$
$$y_2(t) = mx_2(t) + b$$
$$y_1(t) + y_2(t) = m(x_1(t) + x_2(t)) + 2b$$
For $b=0$, $S$ is linear; otherwise, neither homogeneity nor superposition is satisfied.
##### Time invariance:
$$S[x(t-\alpha)] = mx(t-\alpha) + b$$
$$y(t-\alpha) = mx(t-\alpha) + b$$
Since $S[x(t-\alpha)] = y(t-\alpha)$, $S$ is time invariant.
##### BIBO stability
$$\lvert y(t) \rvert = \lvert mx(t) + b \rvert \le \lvert m \rvert \lvert x(t) \rvert + \lvert b \rvert \le \lvert m \rvert B_x + \lvert b \rvert \lt \infty$$
### Example 2.5
$$y(t) = S[x(t)] = tx(t+3)$$
##### Homogeneity:
$$S[ax(t)] = atx(t+3)$$
$$ay(t) = atx(t+3)$$
##### Superposition:
$$y_1(t) = tx_1(t+3)$$
$$y_2(t) = tx_2(t+3)$$
$$S[x_1(t) + x_2(t)] = t[x_1(t+3) + x_2(t+3)]$$
$$y_1(t) + y_2(t) = tx_1(t+3) + tx_2(t+3)$$
Since both homogeneity and superposition are satisfied, $S$ is a linear system.
##### Time invariance:
$$S[x(t-\alpha)] = tx(t+3-\alpha)$$
$$y(t-\alpha) = (t-\alpha)\cdot x (t+3-\alpha)$$
Not time invariant.
##### Causality:
The output $y(t)$ depends on the future input $x(t+3)$, so $S$ is not causal.
##### BIBO stability:
$$\lvert y(t) \rvert = \lvert tx(t+3) \rvert = \lvert t \rvert \lvert x(t+3)\rvert$$
Since $\lvert t \rvert$ is not bounded, there is no $B_y$ such that:
$$\lvert y(t) \rvert \le B_y \lt \infty$$
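For instance, the bounded input $x(t) = 1$ gives $y(t) = t$: a single bounded input that produces an unbounded output, so $S$ is not BIBO stable.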
### Example 2.6 - Expansion
$$y(t) = S[x(t)] = x({t \over 10})$$
##### Homogeneity
$$S[ax(t)] = ax({t \over 10}) = ay(t)$$
##### Superposition
$$y_1(t) = x_1({t \over 10})$$
$$y_2(t) = x_2({t \over 10})$$
$$S[x_1(t) + x_2(t)] = x_1({t \over 10}) + x_2({t \over 10})$$
##### Time Invariance
$$S[x(t - \alpha)] = x({t \over 10} - \alpha)$$
$$y(t - \alpha) = x({t - \alpha \over 10})$$
##### Causality
$$y(-10) = x(-1)$$
The output at $t = -10$ depends on the input at $t = -1$, a future time, so $S$ is not causal.
##### BIBO Stability
Time scaling only rearranges which input values appear at the output: if $\lvert x(t) \rvert \le B_x$, then $\lvert y(t) \rvert = \lvert x(t/10) \rvert \le B_x$, so $S$ is BIBO stable.
### Example 2.7
$$y(t) = S[x(t)] = {1 \over T} \int\limits_{t-T}^t x(\tau)d\tau + B$$
##### Homogeneity
$$S[ax(t)] = {1 \over T} \int\limits_{t-T}^t ax(\tau)d\tau + B$$
$$ay(t) = {a \over T} \int\limits_{t-T}^t x(\tau)d\tau + B$$
Not homogeneous unless $B = 0$.
##### Superposition
$$y_1(t) = {1 \over T} \int\limits_{t-T}^t x_1(\tau)d\tau + B$$
$$y_2(t) = {1 \over T} \int\limits_{t-T}^t x_2(\tau)d\tau + B$$
$$S[x_1(t) + x_2(t)] = {1 \over T} \int\limits_{t-T}^t [x_1(\tau) + x_2(\tau)]d\tau + B$$
$$y_1(t) + y_2(t) = {1 \over T} \int\limits_{t-T}^t x_1(\tau)d\tau + {1 \over T} \int\limits_{t-T}^t x_2(\tau)d\tau + 2B$$
Superposition is not satisfied unless $B = 0$.
##### Time Invariance
$$S[x(t - \alpha)] = {1 \over T} \int\limits_{t-T}^t x(\tau - \alpha)d\tau + B$$
Substitute $v = \tau - \alpha$:
$$S[x(t - \alpha)] = {1 \over T} \int\limits_{t-T-\alpha}^{t-\alpha} x(v)dv + B$$
$$y(t - \alpha) = {1 \over T} \int\limits_{t-T-\alpha}^{t-\alpha} x(\tau)d\tau + B$$
##### Causality
$y(t)$ depends on values of the input from $t-T$ to $t$, and on no future inputs. Therefore, the system is causal.
If instead, inputs ranged from $t-T$ to $t+T$, the system would rely on future inputs and would not be causal.
##### BIBO Stability
$$\lvert x(t) \rvert \le B_x < \infty$$
$$|y(t)| = \left| {1 \over T} \int\limits_{t-T}^t x(\tau)d\tau + B \right|$$
$$|y(t)| \le \left| {1 \over T} \int\limits_{t-T}^t x(\tau)d\tau \right| + |B|$$
$$|y(t)| \le {1 \over T} \int\limits_{t-T}^t |x(\tau)|d\tau + |B|$$
$$|y(t)| \le {1 \over T} \int\limits_{t-T}^t B_x d\tau + |B|$$
$$|y(t)| \le {1 \over T} \left[\tau B_x\Big|_{t-T}^{t}\right] + |B|$$
$$|y(t)| \le {1 \over T} T B_x + |B|$$
$$|y(t)| \le B_x + |B|$$
### Example 2.8 - AM
$$y(t) = S[m(t)] = m(t)\cos(\omega_c t)$$
##### Homogeneity
$$S[am(t)] = am(t)\cos(\omega_c t) = ay(t)$$
##### Superposition
$$y_1(t) = m_1(t)\cos(\omega_c t)$$
$$y_2(t) = m_2(t)\cos(\omega_c t)$$
$$S[m_1(t) + m_2(t)] = [m_1(t) + m_2(t)]\cos(\omega_c t)$$
$$y_1(t) + y_2(t) = [m_1(t) + m_2(t)]\cos(\omega_c t)$$
##### Time Invariance
$$S[m(t-\alpha)] = m(t-\alpha)\cos(\omega_c t)$$
$$y(t - \alpha) = m(t-\alpha)\cos(\omega_c (t -\alpha))$$
##### Causality
Yes
##### BIBO Stability
$$|m(t)| \le B_x$$
$$|y(t)| = |m(t)\cos(\omega_c t)| \le |m(t)| \le B_x$$
### Example 2.9 - FM
$$y(t) = S[m(t)] = \cos\left(\omega_c t + \int\limits_{-\infty}^{t} m(\tau) d\tau \right)$$
##### Homogeneity
$$S[am(t)] = \cos\left(\omega_c t + \int\limits_{-\infty}^{t} am(\tau) d\tau \right)$$
$$ay(t) = a\cos\left(\omega_c t + \int\limits_{-\infty}^{t} m(\tau) d\tau \right)$$
##### Time Invariance
$$S[m(t-\alpha)] = \cos\left(\omega_c t + \int\limits_{-\infty}^{t} m(\tau - \alpha) d\tau \right)$$
$$y(t-\alpha) = \cos\left(\omega_c (t - \alpha) + \int\limits_{-\infty}^{t} m(\tau) d\tau \right)$$
##### BIBO Stability
$$|y(t)| \le 1$$
## Linear, Time-Invariant Systems
The impulse response $h(t)$ of an LTI system is the output of the system when the input is an impulse:
$$\delta(t) \rightarrow \boxed{\text{LTI}} \rightarrow h(t)$$
Causal if $h(t) = 0$ for $t<0$.
BIBO stable if:
$$\int |h(t)|dt < \infty$$
If $h(t)$ has finite support, it is *always* BIBO stable. If $h(t)$ has infinite support, BIBO stability must be checked.
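For example, $h(t) = e^{-2t}u(t)$ has infinite support, but $\int_0^\infty e^{-2t}dt = \tfrac{1}{2} \lt \infty$, so an LTI system with this impulse response is BIBO stable.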
A linear, constant coefficient ODE with input $x(t)$ and output $y(t)$ is LTI under zero initial conditions and when $x(t)$ is a causal input.
### Circuit Application
An RLC circuit with no initial voltage across the capacitor and no initial current through the inductor is LTI.
#### Without Initial Condition
For a capacitor with no initial voltage:
$$V(t) = S[I(t)] = {1\over C} \int\limits_0^t I(\tau) d\tau$$
##### Homogeneity
$$S[aI(t)] = {1\over C}\int\limits_0^t aI(\tau) d\tau = {a\over C}\int\limits_0^t I(\tau) d\tau = aV(t)$$
##### Superposition
$$V_1(t) = {1\over C}\int\limits_0^t I_1(\tau) d\tau$$
$$V_2(t) = {1\over C}\int\limits_0^t I_2(\tau) d\tau$$
$$S[I_1(t) + I_2(t)] = {1\over C} \int\limits_0^t [I_1(\tau) + I_2(\tau)]d\tau = V_1(t) + V_2(t)$$
##### Time Invariance
$$S[I(t - \alpha)] = {1\over C}\int\limits_{0}^{t} I(\tau - \alpha) d\tau$$
Substitute $p = \tau - \alpha$:
$$S[I(t - \alpha)] = {1\over C}\int\limits_{-\alpha}^{t-\alpha} I(p) dp$$
If $I(t)$ is causal:
$$S[I(t - \alpha)] = {1\over C}\int\limits_{0}^{t-\alpha} I(p) dp$$
$$V(t - \alpha) = {1\over C}\int\limits_0^{t-\alpha} I(\tau) d\tau$$
#### With Initial Condition
For a capacitor with initial voltage:
$$V(t) = {1\over C} \int\limits_0^t I(\tau) d\tau + V_0$$
##### Homogeneity
$$S[aI(t)] = {1\over C}\int\limits_0^t aI(\tau) d\tau + V_0$$
$$aV(t) = {a\over C}\int\limits_0^t I(\tau) d\tau + aV_0$$
#### Impulse Response
$$I(t) = \delta (t), \quad h(t) = V(t) = \,?$$
$$h(t) = {1 \over C} \int\limits_0^t \delta (\tau) d\tau =
\begin{cases}
{1\over C} & t \ge 0 \\
0 & t < 0
\end{cases}$$
$$h(t) = {1 \over C} u(t)$$
### Averager
$$y(t) = S[x(t)] = {1 \over T} \int\limits_{t-T}^t x(\tau) d\tau$$
#### Impulse response
$$h(t) = {1 \over T} \int\limits_{t-T}^t \delta(\tau) d\tau$$
There are 3 possibilities for the impulse response:
1. $t \lt 0$: the impulse at $\tau = 0$ lies to the right of the limits of integration, so $h(t)$ is 0.
1. $t - T \le 0 \le t$: the impulse lies within the bounds of integration, and $h(t)$ takes on a non-zero value.
1. $t - T \gt 0$: the impulse lies to the left of the bounds of integration, so $h(t)$ is 0.
The second case, $t - T \le 0 \le t$ (equivalently $0 \le t \le T$), gives the response:
$$h(t) = \begin{cases}
{1 \over T} & 0 \le t \le T \\
0 & \text{elsewhere}
\end{cases} = {1\over T} \left[u(t) - u(t - T)\right]$$
Since $h(t)$ is 0 for $t < 0$, it is a causal signal.
## Convolution
$$y(t) = x(t)*h(t) = \int\limits_{-\infty}^{\infty} x(\tau) h(t - \tau)d\tau$$
##### Commutative:
$$x(t) * h(t) = \int\limits_{-\infty}^{\infty} h(\tau) x(t-\tau)d\tau = h(t) * x(t)$$
#### Case 1:
Two finite-support signals which have the same region of support (domain).
$$x(t) = h(t)$$
##### Step 1:
Decide to manipulate $x$ or $h$.
$$h(t - \tau) = h(-\tau + t)$$
Since we are looking at $\tau$ as the variable, there is a reflection and an advance by $t$.
As $h(t-\tau)$ slides along the real number line, first there will be no overlap between $x(\tau)$ and $h(t - \tau)$. They multiply to 0, so $y(t) = 0$. Then, there will be some overlap, so $y(t)$ becomes:
$$y(t) = \int\limits_0^t d\tau = t$$
$h(t-\tau)$ continues to slide and soon there is full overlap, so $y(t) = 1$.
![](Images/OverlapConvolution)
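A minimal numerical sketch of this case, assuming (as the ramp $y(t) = t$ above suggests) that $x(t) = h(t) = u(t) - u(t-1)$:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)            # support of each unit pulse
x = np.ones_like(t)                # x(t) = u(t) - u(t-1)
h = np.ones_like(t)                # h(t) = u(t) - u(t-1)

y = np.convolve(x, h) * dt         # Riemann-sum approximation of the convolution integral
ty = np.arange(len(y)) * dt        # y(t) is supported on [0, 2]

print(y[ty.searchsorted(0.5)])     # ~0.5: partial overlap, y(t) = t
print(y.max())                     # ~1.0: full overlap at t = 1
```

After full overlap at $t = 1$, the overlap shrinks again, so $y(t)$ falls linearly back to $0$ at $t = 2$.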


@ -0,0 +1,96 @@
# Chapter 3 - The Laplace Transform
$f(t)$ is causal, $f(t) = 0$ for $t < 0$.
$$F(s) = L[f(t)] = \int\limits_0^\infty f(t) e^{-st}dt$$
$s$ is a complex variable of the form:
$$s = \sigma + j\omega$$
### The Region of Convergence
The ROC is the set of all values of $s$ for which the one-sided Laplace transform exists.
If the causal $f(t)$ has finite support in the temporal region:
$$0 \le t_1 \le t_2 < \infty$$
$$F(s) = L[f(t)] = \int\limits_{t_1}^{t_2} f(t)e^{-st}dt$$
If $f(t)$ is causal, the ROC includes $s = \infty$.
If $f(t)$ has infinite support, $F(s)$ can be written as the ratio of two functions $N(s)$ and $D(s)$.
$$F(s) = {N(s) \over D(s)}$$
### Example (Finite Support)
$$F(s) = {s \over s^2 + 2s + 2}$$
Zero at $s=0$, poles at $s = -1 \pm j$.
ROC does not contain any poles, and is not influenced by zeros.
### Example (Infinite Support)
$$F(s) = {s+1 \over (s+2)(s+5)}$$
ROC is $\Re\{s\} > -2$
### Laplace Transform of Impulse
$$L[\delta(t)] = \int\limits_0^\infty \delta(t)e^{-st}dt = 1$$
$$L[\delta(t - \alpha)] = e^{-\alpha s}$$
### Example
Find the Laplace transform of a causal version of the complex exponential
$$L[e^{ja t}u(t)] = \int\limits_0^\infty e^{ja t}e^{-st}dt$$
$$F(s) = \lim_{v \to \infty} \int\limits_0^v e^{(ja-s)t}dt$$
$$F(s) = -{1\over s-ja} \lim_{v\to\infty}[e^{(ja-s)v}-1]={1\over s- ja}$$
In general, $a$ is a complex number; write $s = \sigma + j\omega$ and $a = \alpha + j\beta$.
## The Inverse Laplace Transform
$$F(s) = {3s + 5 \over (s+1)(s+2)} = {A \over s+1} + {B \over s+2}$$
Region of convergence:
$$\Re\{s\} > -1$$
Solve for $A$ and $B$:
$$3s + 5 = A(s+2) + B(s+1)$$
Eliminate $B$ by setting $s$ to -1:
$$3(-1) + 5 = A(-1 + 2) + B(-1 + 1)$$
$$2 = A$$
Eliminate $A$ by setting $s$ to -2:
$$3(-2) + 5 = A(-2 + 2) + B(-2 + 1)$$
$$-1 = -B \therefore B=1$$
Plug back in for $F(s)$:
$$F(s) = {2 \over s+1} + {1 \over s+2}$$
$$\therefore f(t)=\left[2e^{-t} + e^{-2t}\right]u(t)$$
**Note:** multiplying by $u(t)$ is required to make $f(t)$ causal.
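As a quick cross-check (not part of the notes), SciPy's `scipy.signal.residue` computes the same partial-fraction expansion from the polynomial coefficients of $F(s) = \frac{3s+5}{s^2+3s+2}$:

```python
from scipy.signal import residue

# F(s) = (3s + 5) / (s^2 + 3s + 2) = (3s + 5) / ((s + 1)(s + 2))
r, p, k = residue([3, 5], [1, 3, 2])
print(r)   # residues: 2 at the pole s = -1 and 1 at s = -2 (order may vary)
print(p)   # poles: -1 and -2
print(k)   # direct polynomial term: empty for this proper rational function
```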
### Example of the Delay Property
$$F(s) = {4e^{-10s} \over s(s+2)^2}$$
Take out the $e^{-10s}$ term and evaluate.
$$G(s) = {4 \over s(s+2)^2} = {A \over s} + {B \over s+2} + {C \over (s+2)^2}$$
$$4 = A(s+2)^2 + Bs(s+2) + Cs$$
Setting $s=0$ eliminates $B$ and $C$
$$4 = A(0+2)^2 + B(0)(0+2) + C(0)$$
$$4 = 4A \therefore A = 1$$
Setting $s=-2$ eliminates $A$ and $B$:
$$4 = A(-2 + 2)^2 + B(-2)(-2+2) + C(-2)$$
$$4 = -2C \therefore C=-2$$
Set an easy $s$ value to solve for $B$, $s=1$:
$$4 = 1(1+2)^2 + B(1)(1+2) + -2(1)$$
$$4 = 9 + 3B-2 \therefore B=-1$$
Plug back in:
$$G(s) = {4 \over s(s+2)^2} = {1 \over s} - {1 \over s+2} - {2 \over (s+2)^2}$$
Solve for $g(t)$
$$g(t) = \left[1 -e^{-2t} -2te^{-2t}\right]u(t)$$
By taking out the delay term, $e^{-10s}$, we offset $g(t)$ with respect to $f(t)$. To solve for $f(t)$, we must take this offset into account. The delay is by $10$, so for each $t$ in $g(t)$, there is a $(t-10)$ in $f(t)$. Plug in and solve:
$$f(t) = \left[1-e^{-2(t-10)}-2(t-10)e^{-2(t-10)}\right] u(t-10)$$
## Differential Equations using Laplace Transforms
$${df \over dt} + 6f(t) = u(t), f(0^-) = 1$$
$$s F(s) - f(0^-) + 6F(s) = {1 \over s}$$
$$(s + 6) F(s) = {1\over s} + 1$$
$$F(s) = {1 + s \over s(s + 6)}$$
$$F(s) = {A \over s} + {B \over s + 6}$$
$$1 + s = (s +6)A + sB$$
$$A = {1\over6}, B = {5\over6}$$
$$F(s) = {1 \over 6s} + {5 \over 6(s + 6)}$$
$$f(t) = \left[{1 \over 6} + {5 \over 6} e^{-6t}\right]u(t)$$
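As a quick check, for $t \gt 0$: ${df \over dt} = -5e^{-6t}$ and $6f(t) = 1 + 5e^{-6t}$, so ${df \over dt} + 6f(t) = 1 = u(t)$, and $f(0) = {1\over6} + {5\over6} = 1$ matches the initial condition.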


@ -0,0 +1,45 @@
# Complex Exponential Fourier Series
Inner product is just Ravi's fancy math way of saying dot product for functions, and orthogonal is his fancy math way of saying perpendicular.
#### Definition: Orthonormal
$V_1$ and $V_2$ are orthonormal if:
1. $V_1 \cdot V_2 = 0$
2. $\|V_1\| = \|V_2\| = 1$
If $\psi_l$ and $\psi_k$ are orthonormal on $[a,b]$:
$$\int\limits_a^b \psi_k(t) \psi_l^*(t)dt = \begin{cases} 1 & k=l \\ 0 & k \ne l \end{cases}$$
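For example, the complex exponentials used in the Fourier series below, $\psi_k(t) = e^{jk2\pi t}$, are orthonormal on $[0, 1]$:
$$\int\limits_0^1 e^{jk2\pi t} e^{-jl2\pi t} dt = \int\limits_0^1 e^{j(k-l)2\pi t} dt = \begin{cases} 1 & k = l \\ 0 & k \ne l \end{cases}$$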
### Example
Consider the interval $[0,1]$:
$$\text{Let } \psi_1(t) = 1; t \in [0,1]$$
$$\text{Let } \psi_2(t) = \begin{cases} \end{cases}$$
### Fourier Series in Exponential Form
$$x(t) = \sum_{k=-\infty}^\infty X_k e^{jk\omega_0 t}$$
$$T_0 = 1$$
$$\omega_0 = 2\pi$$
$$x(t) = 1 + \sum_{k=-\infty, k\ne0}^\infty {2 \sin\left({\pi k \over 2}\right) \over \pi k} e^{jk2\pi t}$$
$$x(t - 0.1) = 1 + \sum_{k=-\infty, k\ne0}^\infty {2 \sin\left({\pi k \over 2}\right) \over \pi k} e^{jk2\pi (t - 0.1)}$$
### Fourier Series in Trigonometric Form
$$x(t) = c_0 + 2\sum_{k=1}^\infty \left[c_k \cos(k \omega_0 t) + d_k \sin(k \omega_0 t)\right]$$
### Odd Signal Example
$A = 1$, $T = 2$, $T_0 = 2$, $\omega_0 = \pi$
$$x(t) = t, \quad -1 \le t \lt 1 \text{ (one period)}$$
$$x(t) = \sum_k X_k e^{j\pi k t}$$
$$X_k = {1\over2} \int\limits_{-1}^1 t e^{-j \pi k t} dt$$
This gives a purely imaginary $X_k$.
$$X_0 = {1\over2} \int\limits_{-1}^1 t dt = 0$$
$$x(t) = c_0 + 2\sum_{k=1}^\infty \left[c_k \cos(\pi k t) + d_k \sin(\pi k t)\right]$$
Since it's odd:
$$c_k = 0$$
$$d_k = {1\over2} \int\limits_{-1}^1 t \sin(\pi k t) dt$$
$$x(t) = 2 \sum_{k=1}^\infty {(-1)^{k+1} \over \pi k} \sin(\pi k t)$$
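A short synthesis check (an illustration; the number of harmonics kept is an arbitrary choice): the partial sums of this series should approach $x(t) = t$ on $-1 \lt t \lt 1$, converging slowly near the jump of the periodic extension at $t = \pm 1$:

```python
import numpy as np

t = np.linspace(-0.9, 0.9, 1001)
N = 200                                        # number of harmonics kept
x_N = np.zeros_like(t)
for k in range(1, N + 1):
    x_N += 2 * (-1)**(k + 1) / (np.pi * k) * np.sin(np.pi * k * t)

print(np.max(np.abs(x_N - t)))                 # worst-case error; shrinks as N grows
```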


@ -0,0 +1,47 @@
# Chapter 5 - The Fourier Transform
The Fourier Transform:
$$X(\omega) = F[x(t)] = \int\limits_{-\infty}^\infty x(t) e^{-j\omega t} dt$$
The Inverse Fourier Transform:
$$x(t) = F^{-1}[X(\omega)] = {1 \over 2\pi} \int\limits_{-\infty}^{\infty} X(\omega)e^{j\omega t} d\omega$$
The Fourier transform exists if:
1. $x(t)$ is absolutely integrable
2. $x(t)$ has only a finite number of discontinuities and a finite number of minima and maxima in any finite interval.
If $x(t)$ is real and even, $X(\omega)$ is real.
If $x(t)$ is real and odd, $X(\omega)$ is purely imaginary.
Otherwise, $X(\omega)$ has both a real and an imaginary part.
### Example
Consider $x(t) = \delta(t)$:
$$X(\omega) = \int\limits_{-\infty}^\infty \delta(t)e^{-j\omega t} dt = 1$$
### Example
Consider $x(t) = \delta(t - \alpha)$:
$$X(\omega) = \int\limits_{-\infty}^\infty \delta(t - \alpha)e^{-j\omega t}dt = e^{-j\omega \alpha}$$
### Example
Consider $x(t) = u(t+T) - u(t-T)$:
This is a pulse whose value is 1 on the interval $[-T,T]$.
$$X(\omega) = \int\limits_{-T}^T e^{-j\omega t}dt = -{1 \over j \omega} \left[e^{-j\omega T} - e^{-j\omega (-T)}\right] = {2 \over \omega}\left[{e^{j\omega T} - e^{-j\omega T} \over 2j}\right] = 2T\operatorname{sinc}(\omega T)$$
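A numerical spot-check of this result (an illustration; $T = 1$ and the test frequencies are arbitrary choices):

```python
import numpy as np

T = 1.0
t = np.linspace(-T, T, 200_001)
dt = t[1] - t[0]

for w in [0.5, 1.0, 3.0]:
    X_num = np.sum(np.exp(-1j * w * t)) * dt    # integral of e^{-jwt} over [-T, T]
    X_formula = 2 * np.sin(w * T) / w           # 2T sinc(wT), unnormalized sinc
    print(w, X_num.real, X_formula)             # imaginary part is ~0 by symmetry
```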
### Example
Consider $x(t) = e^{-|t|}$
$$X(\omega) = \int\limits_{-\infty}^\infty e^{-|t|}e^{-j\omega t}dt = \int\limits_{-\infty}^0 e^{t}e^{-j\omega t}dt + \int\limits_0^\infty e^{-t}e^{-j\omega t}dt = {1 \over 1 - j\omega} + {1 \over 1 + j\omega} = {2 \over 1 + \omega^2}$$
## Method 2 - Laplace Transform
$$x(t) \to X(s) \to X(\omega)$$
$$s = j\omega$$
Using a Laplace transform requires that $x(t)$ is a causal signal and the region of convergence of $X(s)$ includes the imaginary axis.
### Example - Finite Support Signals
The region of convergence is the entire $s$-plane (must check $s=0$). It definitely includes the $s = j\omega$ axis.
$$X(s) = {0.5 \over s^2} - {0.5e^{-2s} \over s^2} - {e^{-2s} \over s}$$
$$X(j\omega) = {0.5 \over (j\omega)^2} - {0.5e^{-2j\omega} \over (j\omega)^2} - {e^{-2j\omega} \over j\omega}$$
