Signal theory: introduction
Keywords: tutorial, signal theory, linear systems, time-invariant linear systems, Dirac, complex exponential, vector, signal, eigenvectors, causality, Fourier transform, vector space, convolution, localisation, delocalisation, intrinsic frequency resolution of the Fourier transform, NEXYAD, applied maths
Written by Gerard YAHIAOUI and Pierre DA SILVA DIAS, main founders of the applied maths research company NEXYAD.
© NEXYAD, all rights reserved: for any question, please CONTACT.
Reproduction of partial or complete content of this page is authorized ONLY if "source: NEXYAD http://www.nexyad.com" is clearly mentioned.
This tutorial was written for students and engineers who wish to understand the main hypotheses and ideas of linear signal processing.
Digital signal notation is intentionally used in order to match engineers' expectations (e.g. discrete sums instead of continuous sums).
Dirac distribution, Dirac vector base
Vector base of the signals vector space: the natural base
"Every signal can be considered as a succession of impulses".
Mathematically, this leads to say that for a given signal ,
it is always possible to write :
(1)
is
the
dirac impulse signal
NB: in most papers and books, the arrow is not drawn on the vectors (yes, signals are vectors! ...). In such a case, S(t) may be either the "signal" (= the vector = the complete entity that goes from $-\infty$ to $+\infty$), or the value that this entity S takes at time t. Only the context allows the reader to understand (it actually mostly brings misunderstanding).
In our presentation, we preferred to make the difference explicit, although it sometimes adds "many" arrows (that is the reason why most people do not put the arrows on vectors).
One can notice that the list of values of the signal {S(i)} is NOT the signal: it is the list of co-ordinates of the vector (signal entity) in a particular vector base, the Dirac vector base (one can demonstrate that the set of Diracs at every time location is a vector base of the signals vector space: this base is called the "natural" base).
A Dirac is supposed to be a distribution with an infinite height, a null width, and a surface equal to 1 ... In practice, you may think of a Dirac as a "very thin" impulse with a height equal to 1:
(the time location of a Dirac can obviously be changed, as shown above)
Then, (1) can be represented as follows:
NB: in continuous time, sums are continuous (the above figure is a discrete representation).
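To make this concrete, here is a minimal numpy sketch (the names and values are ours, purely illustrative): it rebuilds a discrete signal as the weighted sum of shifted unit impulses, as in (1).

import numpy as np

S = np.array([3.0, -1.0, 4.0, 1.0, -5.0, 9.0])   # list of values {S(i)} : the co-ordinates
N = len(S)

def delta(i, N):
    # discrete Dirac impulse located at time i (one vector of the natural base)
    d = np.zeros(N)
    d[i] = 1.0
    return d

# equation (1) : the signal is the weighted sum of the natural-base vectors
rebuilt = sum(S[i] * delta(i, N) for i in range(N))
assert np.allclose(rebuilt, S)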
Linear transformations
For linear systems, the superposition principle applies:

$L(a \cdot \vec{x} + b \cdot \vec{y}) = a \cdot L(\vec{x}) + b \cdot L(\vec{y})$

For a signal, it means that:

$L(\vec{S}) = L\left(\sum_i S(i) \cdot \vec{\delta_i}\right) = \sum_i S(i) \cdot L(\vec{\delta_i})$
One can notice that this is the "classical" way of writing a linear transformation of vectors in vector spaces: knowing the transformation of every vector of the base makes it possible to deduce the transformation of any vector.
Example in the plane vector space (2 dimensions ... like in the old school ...):
The vectors of the natural base (Dirac impulses) are "all the same vector" shifted in time. So, for a time-invariant system, their transformation through L is also the same response shifted in time.
Then knowing the transformation of the Dirac impulse is enough to deduce the transformation of any signal through the linear function L.
The transformation of the Dirac impulse through L is called the "impulse response" of L.
It means that if one puts a Dirac impulse as input to a linear system L and records the output ... then it is possible to predict the output of L for any input signal:
(2)   $\vec{o} = L(\vec{i})$ with $o(t) = \sum_k i(k) \cdot h(t-k)$
(2) is used to define a new mathematical internal operation between signals: the convolution.

(3)   $\vec{o} = \vec{i} * \vec{h}$, with $(\vec{i} * \vec{h})(t) = \sum_k i(k) \cdot h(t-k)$
One can demonstrate that the convolution * is commutative.
Then, replacing the input vector $\vec{i}$ by the Dirac impulse in (2) obviously shows that the Dirac impulse is the neutral element of the convolution *.
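As a quick check of (2) and (3), here is a small numpy sketch (the impulse response below is an arbitrary example, not from the text): knowing h alone is enough to predict the output for any input, the operation is commutative, and the Dirac is its neutral element.

import numpy as np

h = np.array([0.5, 0.3, 0.2])        # recorded impulse response of some linear system L
x = np.array([1.0, 2.0, 0.0, -1.0])  # any input signal

# equation (2) : the output is fully predicted by the impulse response
y = np.convolve(x, h)

# the convolution * is commutative
assert np.allclose(np.convolve(x, h), np.convolve(h, x))

# the Dirac impulse is the neutral element of *
dirac = np.array([1.0])
assert np.allclose(np.convolve(x, dirac), x)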
Matrix view of convolution
The convolution operation hides a matrix operation:

(4)   $o(t) = \sum_k h(t-k) \cdot i(k)$, i.e. $\vec{o} = H \cdot \vec{i}$ with $H(t,k) = h(t-k)$
Because most papers and books present signal theory in the case of continuous time, and also because, even in the case of discrete time, the matrix has an infinite height and width ... it is rare to see this representation.
This representation is of interest to us because it allows a very intuitive introduction of the Fourier Transform (see below).
One can notice that this matrix IS NOT a diagonal matrix. It means that Dirac impulses are not eigenvectors of linear transformations.
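Here is a sketch of that hidden matrix for short discrete signals (sizes are truncated to stay finite, which is our simplification): column k of H is the impulse response shifted by k, and H applied to the input reproduces the convolution. One can print H to see that it is not diagonal.

import numpy as np

h = np.array([0.5, 0.3, 0.2])
x = np.array([1.0, 2.0, 0.0, -1.0])

n_out = len(x) + len(h) - 1
H = np.zeros((n_out, len(x)))
for k in range(len(x)):
    H[k:k + len(h), k] = h          # column k : the impulse response shifted by k (eq. 4)

# the matrix operation equals the convolution ... and H is clearly not diagonal
assert np.allclose(H @ x, np.convolve(x, h))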
Reminder 1: the interest of working in an eigenvectors base is that the matrix of L would then be a diagonal matrix: such a matrix makes co-ordinate n°k of the output signal depend only on co-ordinate n°k of the input signal ...
Example:
O1 = a11 . i1
O2 = a22 . i2
The values that are not on the main diagonal are the "blending" coefficients: they make output n°k depend on inputs n°i with i ≠ k:
O1 = a11 . i1 + a12 . i2 : O1 depends on i1, but also on i2
O2 = a21 . i1 + a22 . i2 : O2 depends on i2, but also on i1
One can also notice that the matrix of a linear transformation L is made of zeros for all coefficients located above the main diagonal: these coefficients are the "future" of time t, and the output at time t cannot depend on values of the input at times (t+1), (t+2), ...
This property is called CAUSALITY: the output at time t depends only on inputs at times ..., (t-i), ..., (t-2), (t-1), t.
Let us consider the transformation L characterized by its impulse response h(t). Then $o(t_i)$ depends on the past of the input over the whole duration of h:

$o(t_i) = \sum_{k \geq 0} h(k) \cdot i(t_i - k)$

(NB: this duration may be infinite)
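The sketch below (again with a toy impulse response of our own) restricts the convolution matrix to a finite square block and checks that it is lower triangular: all coefficients above the main diagonal, the "future", are zero.

import numpy as np

h = np.array([0.5, 0.3, 0.2])       # causal impulse response : h(t) = 0 for t < 0
N = 5
H = np.zeros((N, N))
for k in range(N):
    H[k:k + len(h), k] = h[:N - k]  # shifted responses, truncated at time N-1

# causality : output at time t ignores inputs at times (t+1), (t+2), ...
assert np.allclose(H, np.tril(H))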
Reminder 2: the eigenvectors are those that do not change their direction through the transformation L:
L(vector U) = lambda . (vector U): the eigenvector U is only multiplied by a number.
For a signal, it means that eigenvectors are those that do not change their "shape" through the linear transformation.
Every student who has followed some electricity/electronics courses knows that sinusoidal signals have this property:
If the input signal is a sinusoid of frequency f and amplitude A, then the output of any linear system L is a sinusoid of frequency f (meaning exactly the same shape) and amplitude B ...
B/A is called the gain of L for the frequency f.
But the output sinusoid is also delayed by a phase angle Theta.
This delay makes it impossible to consider real sinusoids as eigenvectors ... (a delay is not a multiplication of the signal by a number).
So the idea is to use "complex sinusoids" instead of "real" ones, because in the complex space, a delay is written as a multiplication by exp(-j.Theta).
(the gain stays a multiplication, the delay becomes a multiplication, => the whole transformation is then only a multiplication of the vector by a number (a complex number) ... which is the definition of eigenvectors).
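One can verify this numerically. In the sketch below (toy filter and arbitrary frequency, chosen by us), once the transient has passed, the output complex sinusoid is the input multiplied by a single complex number.

import numpy as np

h = np.array([0.5, 0.3, 0.2])                 # some linear (time-invariant) system
f = 0.1                                        # normalized frequency
t = np.arange(200)
u = np.exp(2j * np.pi * f * t)                 # complex sinusoid

y = np.convolve(u, h)[:len(t)]                 # output of the system

# after the transient, y(t) = lambda . u(t) : u is an eigenvector
ratio = y[len(h):] / u[len(h):]
lam = ratio[0]
assert np.allclose(ratio, lam)

# lambda is a complex number : its modulus is the gain, its argument the delay
gain, delay_angle = np.abs(lam), np.angle(lam)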
Discrete Fourier Transform
Changing from the natural base (the Diracs) to this new base (the "complex sinusoids") leads to representing L with a diagonal matrix.
As usual with vectors, finding the components of a vector on a new base is done by writing the scalar product between the vector and every elementary vector of the new base.
For every new component n°f, it is then possible to get this co-ordinate by a simple scalar product with the complex sinusoid of frequency f:

(5)   $I(f) = \sum_t i(t) \cdot e^{-2\pi j f t}$
The signal vector, written {i(t)} in the natural base, is then written {I(f)} in the eigenvectors base, and every co-ordinate I(f) is computed with the scalar product above, which is called the DISCRETE FOURIER TRANSFORM of the signal.
The new base vectors (the "complex sinusoids" are not characterized by a time location - because they exist from minus infinity to plus infinity - but they are characterized by their frequencies) lead to a matrix of L that is diagonal: the output signal for frequency n°fi depends ONLY on the input signal for frequency n°fi (via multiplication by a complex number, whose modulus is the gain and whose argument is the time delay).
That is why one generally says that the FOURIER TRANSFORM transforms convolution into multiplication (in fact it is the multiplication of a diagonal matrix by a signal).
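This can be checked directly with the FFT (a fast implementation of the discrete Fourier transform). Note that the DFT actually diagonalises circular convolution, so we zero-pad both signals before comparing; that standard precaution is our addition, not in the original text.

import numpy as np

x = np.random.randn(64)
h = np.random.randn(16)

n = len(x) + len(h) - 1                        # pad so circular convolution = linear one
lhs = np.fft.fft(np.convolve(x, h), n)         # Fourier transform of the convolution
rhs = np.fft.fft(x, n) * np.fft.fft(h, n)      # product of the Fourier transforms
assert np.allclose(lhs, rhs)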
But one can define an operation that takes the diagonal of the matrix and creates a vector made of its values, and then define a term-by-term multiplication between vectors ... building a new internal operation x that allows one to build a new vector space ...
Doing that makes it possible to consider the FOURIER TRANSFORM as a mapping between two vector spaces:
- space of the time representation, using * : vector space n°1
- space of the frequency representation, using x : vector space n°2
Because vector space n°2 was created on the basis of a simple change of vector base ... it is obvious that there exists an isomorphism between vector space n°1 and vector space n°2!
Basic properties of the Fourier Transform
The Fourier Transform of the impulse response of a linear system L is called the TRANSFER FUNCTION of L:
NB: the transformation of the Dirac impulse into a constant vector made of "1" is obvious: isomorphisms transform the neutral element of internal operation n°1 (here: the convolution) into the neutral element of internal operation n°2 (here: the multiplication) ...
It can also be demonstrated that multiplication is transformed into convolution via a FOURIER TRANSFORM.
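A one-line numerical check of this property (grid size chosen arbitrarily):

import numpy as np

N = 16
dirac = np.zeros(N)
dirac[0] = 1.0

# the Fourier transform of the Dirac impulse is the constant vector made of "1"
assert np.allclose(np.fft.fft(dirac), np.ones(N))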
The inverse transformation is also a scalar product ... with Diracs at every delayed location - beware: the delayed Diracs are written in the frequency co-ordinates, where a delay becomes an exponential ...
This transformation is called the INVERSE FOURIER TRANSFORM.
Let us just notice the duality of localisation / delocalisation when changing from vector space n°1 to vector space n°2.
Examples:
- A sinusoid is totally delocalised in time: the FOURIER TRANSFORM localises it into 2 points (by construction, cf. above).
- A Dirac impulse is totally localised in time and becomes totally delocalised in frequency.
- A square signal of width T is localised into a finite time support T, and it becomes a delocalised sin(x)/x.
NB1: one can notice that the shorter T, the larger 1/T.
If T tends to zero, then square(t) looks like an impulse signal, and its Fourier transform looks like the constant 1(f).
NB2: square(t) is a very IMPORTANT signal in signal processing, because in practice it is not possible to acquire a signal during an infinite duration: signal capture (through sensors ...) leads to a signal that is known during a finite duration ...
One can consider the acquired signal as the multiplication of the theoretical signal (which is infinite in time) by a square(t) signal of length T, where T is the duration of the acquisition.
But because the multiplication x becomes the convolution * through a Fourier Transform, the Fourier Transform of the acquired signal will be the convolution of a sin(x)/x signal with the Fourier Transform of the theoretical (infinite-duration) signal.
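The sketch below illustrates this on a square signal of width T (grid sizes are arbitrary): the magnitude of its discrete Fourier transform is a sampled sin(x)/x-like shape whose zeros fall every 1/T in normalized frequency.

import numpy as np

N = 1024                               # observation grid
T = 64                                 # width of the square signal
square = np.zeros(N)
square[:T] = 1.0

spectrum = np.abs(np.fft.fft(square))  # sampled |sin(x)/x|-like shape

# zeros of the lobe pattern occur every N/T bins, i.e. every 1/T in frequency
zero_bins = np.arange(1, T) * (N // T)
assert np.allclose(spectrum[zero_bins], 0.0, atol=1e-9)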
In particular, for a piece of sinusoid (duration = T), one gets:
This leads to what is called the "intrinsic frequency resolution of the Fourier transform": one cannot put several sin(x)/x shapes together on the same diagram without blending them ... except every 1/T.
Beware! If your signal exists during 10^-3 s ... then you will get a relevant point of the Fourier Transform ... only every 1000 Hz!
The longer the capture duration, the better the frequency resolution ... but the worse the time localisation of your extracted information ...
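For instance (all numbers below are hypothetical): sampling at 100 kHz during T = 1 ms gives 100 samples, and the discrete Fourier transform then yields one point every 1/T = 1000 Hz.

import numpy as np

fs = 100_000.0                       # sampling rate (hypothetical)
T = 1e-3                             # capture duration : 1 ms
n = int(fs * T)                      # 100 observed samples

t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5_000.0 * t)  # a 5 kHz sinusoid captured during 1 ms only

freqs = np.fft.rfftfreq(n, d=1 / fs)
assert np.isclose(freqs[1] - freqs[0], 1.0 / T)   # one relevant point every 1000 Hz

# the spectrum peak sits at bin 5 : 5 kHz / 1000 Hz
assert np.argmax(np.abs(np.fft.rfft(x))) == 5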
For more questions or applications, please feel free to contact us.