Class Description

Textbook: Differential Equations and Linear Algebra by Stephen W. Goode and Scott A. Annin

2.1: Matrix Definitions and Properties

  • $m \times n$ matrix: $m$ rows, $n$ columns
  • $3 \times 4$ matrix in the example below
    $$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix}$$

$A = [a_{ij}]$

Equality

Matrices A and B are equal if the following are true:

  1. Same dimensions
  2. $a_{ij} = b_{ij}$ for all $i, j$ with $1 \le i \le m$, $1 \le j \le n$

Vectors

  • Row Vector: A $1 \times n$ matrix
    $\mathbf{a} = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \end{bmatrix}$

  • Column Vector: An $n \times 1$ matrix
    $\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}$

  • The elements of a row or column vector are called the components

  • Denoted via an arrow or in bold: $\vec{v}$ or $\mathbf{v}$

  • A matrix written as a list of vectors arranged in a row consists of column vectors, and a matrix written as a list of vectors arranged in a column consists of row vectors

    • Example
      $\mathbf{a}_1 = \begin{bmatrix} 1 \\ 7 \end{bmatrix}, \quad \mathbf{a}_2 = \begin{bmatrix} 0 \\ 5 \end{bmatrix}, \quad \mathbf{a}_3 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$

$[\mathbf{a}_1 \; \mathbf{a}_2 \; \mathbf{a}_3] = \begin{bmatrix} 1 & 0 & 2 \\ 7 & 5 & 4 \end{bmatrix}$

Transposition

  • Interchange row vectors and column vectors
    $(A^T)_{ij} = a_{ji}$
  • Example

$A = \begin{bmatrix} 1 & 0 & 2 \\ 7 & 5 & 4 \end{bmatrix}$
$A^T = \begin{bmatrix} 1 & 7 \\ 0 & 5 \\ 2 & 4 \end{bmatrix}$

Square Matrices

  • Square Matrix: An $n \times n$ matrix
  • Main Diagonal: $a_{ii}$, $1 \le i \le n$
  • Trace: $\mathrm{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}$
  • Lower Triangular: $a_{ij} = 0$ when $i < j$
    $A = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 4 & 0 \\ 2 & 2 & 7 \end{bmatrix}$
  • Upper Triangular: $a_{ij} = 0$ when $i > j$
    $A = \begin{bmatrix} 3 & 3 & 4 \\ 0 & 5 & 1 \\ 0 & 0 & 9 \end{bmatrix}$
  • Diagonal Matrix: $d_{ij} = 0$ whenever $i \ne j$
    $D = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 9 \end{bmatrix}$
  • Symmetric: $A^T = A$
    $A = \begin{bmatrix} 0 & 2 & 3 \\ 2 & 0 & 4 \\ 3 & 4 & 1 \end{bmatrix}$
  • Skew-Symmetric: $A^T = -A$ (this forces zeros on the main diagonal)
    $A = \begin{bmatrix} 0 & 2 & 3 \\ -2 & 0 & 4 \\ -3 & -4 & 0 \end{bmatrix}$

2.2: Matrix Algebra

Matrix Addition

  • $A + B = [a_{ij} + b_{ij}]$ (defined only when $A$ and $B$ have the same dimensions)

Properties

  1. A+B=B+A (Commutative)
  2. A+(B+C)=(A+B)+C (Associative)

Scalar Multiplication

  • Scalar: Real or complex number
  • Scalar Multiplication: If $s$ is a scalar, then $sA = [s\,a_{ij}]$

Properties

  • (if s and t are scalars while A and B are matrices)
  1. 1A=A
  2. s(A+B)=sA+sB
  3. (s+t)A=sA+tA
  4. s(tA)=(st)A=(ts)A=t(sA)

Subtraction

$$A - B = A + (-1)B$$

Zero Matrix

  • Matrix full of all zeros, denoted $0_{m \times n}$
  • Denoted with just a 0 if dimensions are clear

Properties

  1. A+0=A
  2. AA=0
  3. 0A=0

Multiplication

  • If $A = [a_{ij}]$ is an $m \times n$ matrix, $B = [b_{ij}]$ is an $n \times p$ matrix, and $C = AB$, then (see the sketch below)
    $$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}, \qquad 1 \le i \le m, \quad 1 \le j \le p$$
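
A minimal sketch of this definition in plain Python (the helper name `matmul` and the sample matrices are assumptions for illustration, not from the text):

```python
# Sketch of the definition c_ij = sum_k a_ik * b_kj using nested lists.
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    # entry (i, j) is the dot product of row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 0, 2],
     [7, 5, 4]]          # 2 x 3
B = [[1, 1],
     [0, 2],
     [3, 0]]             # 3 x 2
print(matmul(A, B))      # [[7, 1], [19, 17]]
```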

Properties

  1. A(BC)=(AB)C
  2. A(B+C)=AB+AC
  3. (A+B)C=AC+BC
  4. $AB \ne BA$ in general
  • Proof (of property 3)
    $((A+B)C)_{ij} = \sum_{k=1}^{n}(a_{ik} + b_{ik})c_{kj} = \sum_{k=1}^{n} a_{ik}c_{kj} + \sum_{k=1}^{n} b_{ik}c_{kj}$
    $= (AC)_{ij} + (BC)_{ij}$
    $= (AC + BC)_{ij}$

Thus $(A+B)C = AC + BC$

Identity Matrix

  • $I_n$: Main diagonal entries are 1’s, with zeros everywhere else
  • Example
    $$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Kronecker Delta

  • A function that gives the entries of the identity matrix
    $$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$

Properties of Identity Matrix

  1. $A_{m \times n}I_n = A_{m \times n}$
  2. $I_mA_{m \times p} = A_{m \times p}$
  • Proof: Use δij in proof and definition of multiplication

Properties of Transpose

  1. $(A^T)^T = A$
  2. $(A + C)^T = A^T + C^T$
  3. $(AB)^T = B^TA^T$
  • Proof (of 3)
    $((AB)^T)_{ij} = (AB)_{ji}$
    $= \sum_{k=1}^{n} a_{jk}b_{ki}$
    $= \sum_{k=1}^{n} b_{ki}a_{jk} = \sum_{k=1}^{n} (B^T)_{ik}(A^T)_{kj}$
    $= (B^TA^T)_{ij}$

Properties of Triangular Matrices

  • The product of two lower triangular matrices is a lower triangular matrix (and the same applies to upper triangular matrices)

The Algebra and Calculus of Matrix Functions

  • Matrices can have functions as their elements instead of just scalars
  • If A and B are matrices:
  1. $\frac{dA}{dt} = \left[\frac{da_{ij}(t)}{dt}\right]$
  2. $\frac{d}{dt}(AB) = A\frac{dB}{dt} + \frac{dA}{dt}B$
  3. $\int_a^b A(t)\,dt = \left[\int_a^b a_{ij}(t)\,dt\right]$

2.3: Systems of Linear Equations

System of Linear Equations

$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$
$\vdots$
$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$

  • System Coefficients: $a_{ij}$

  • System Constants: $b_i$

    • Both are scalars
    • The coefficients multiply the unknowns $x_1, x_2, \dots, x_n$
  • Homogeneous: If $b_i = 0$ for all $i$

  • Solution: Ordered n-tuple with values for the unknowns, $(x_1, x_2, x_3, \dots, x_n)$

  • Solution Set: Set of all solutions to the system

  • If there are two equations with two unknowns, we have two lines, so there can only be the following solutions:

    1. No solution
    2. One solution (one intersection point)
    3. Infinitely many solutions (the lines overlap or are the same)
  • Similarly, with three equations and three unknowns, we have three planes, so there can only be the following solutions

    1. No solution
    2. One solution (Three planes intersect at one point)
    3. Infinitely Many Solutions (the three planes intersect at one line)
    4. Infinitely Many Solutions (the three planes are the same)
  • All systems can only have one of the three above solution possibilities (no solution, one solution, or infinitely many solutions)

  • Consistent: At least one solution to system

  • Inconsistent: No solution to a system

  • Matrix of Coefficients
    $$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

  • Augmented Matrix:
    $$A^{\#} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix}$$

Vector Formulation

If we have the following:
$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$
$\vdots$
$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$

The vector formulation:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$
$$A\mathbf{x} = \mathbf{b}$$

  • Right-Hand Side Vector: b
  • Vector of Unknowns: x

Notation

  • The set of all ordered n-tuples of real numbers $(c_1, c_2, c_3, \dots, c_n)$ is denoted $\mathbb{R}^n$
  • For real scalar values:
    $$(x_1, x_2, \dots, x_n) \longleftrightarrow \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \longleftrightarrow \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
  • Tuples, row vectors, and column vectors are essentially interchangeable in this context

Differential Equations

$$\frac{d\mathbf{x}}{dt} = A(t)\mathbf{x}(t) + \mathbf{b}(t)$$

2.4: Row-Echelon Matrices and Elementary Row Operations

  • Method for solving a system by reducing a system of equations to a new system with the same solution set, but easier to solve

Row-Echelon Matrix (REF)

Let $A$ be an $m \times n$ matrix satisfying the following conditions:

  1. All zero rows of A (if any) are grouped at the bottom
  2. The leftmost non-zero entry of every non-zero row is 1 (called the leading 1 or pivotal 1)
  3. The leading 1 of a non-zero row below the first row is to the right of the leading 1 in the row above it

$$\begin{bmatrix} 0 & 1 & * & * & * & * \\ 0 & 0 & 0 & 1 & * & * \\ 0 & 0 & 0 & 0 & 1 & * \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

  • The entries marked $*$ after a leading 1 can be any value
  • Pivotal Columns are columns where there is a leading 1

Elementary Row Operations

  1. Permute rows $i$ and $j$: $R_i \leftrightarrow R_j$
  2. Multiply row $i$ by a non-zero scalar $k$: $R_i \to kR_i$
  3. Add the multiple $kR_j$ of row $j$ to row $i$: $R_i \to R_i + kR_j$, $i \ne j$, $k \in \mathbb{R}$
  • $A \sim B$ denotes that $B$ was obtained from $A$ using the above operations
  • The elementary row operations are reversible

Reduced Row Echelon Form (RREF)

  • Same as REF but with only 0’s above every leading 1

Rank

  • $\mathrm{rank}(A)$ = number of pivotal columns = number of leading 1’s = number of non-zero rows in any REF of $A$
  • Every row-equivalent REF of a matrix has the same rank
  • The RREF of a matrix is unique, but a matrix can have many different row-equivalent REFs (see the sketch below)
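
A minimal sketch with sympy, whose `rref()` returns the unique RREF together with the pivotal column indices (the sample matrix is an assumption for illustration):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])
R, pivots = A.rref()   # unique RREF and pivotal column indices
print(R)
print(pivots)          # (0, 1): two pivotal columns
print(A.rank())        # 2, the same as len(pivots)
```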

2.5: Gaussian Elimination

  • Gaussian Elimination: To solve a system, use elementary row operations to convert a matrix to REF. Convert matrix back into corresponding equations and solve.

  • Gauss-Jordan Elimination: Convert to RREF and solve.

  • Always either no solution, infinite solutions, or one solution

  • If the last column of the augmented matrix is pivotal (a row of the form $[0 \; 0 \; \cdots \; 0 \mid 1]$), then there are no solutions, since the system is inconsistent

  • Homogeneous ($\mathbf{b} = \mathbf{0}$): Always has at least one solution, the trivial solution $\mathbf{x} = \mathbf{0}$

  • Free variables: Correspond to non-pivotal columns; if a consistent system has free variables, it has infinitely many solutions

  • Non-free variables: Correspond to pivotal columns

Theorem 2.5.9

Let $A$ be the $m \times n$ coefficient matrix and $A^{\#}$ the augmented matrix.

  1. If $\mathrm{rank}(A) < \mathrm{rank}(A^{\#})$, then the system is inconsistent
  2. If $\mathrm{rank}(A) = \mathrm{rank}(A^{\#})$, then the system is consistent
  • $\mathrm{rank}(A) = n$ if and only if there is exactly one solution
  • $\mathrm{rank}(A) < n$ if and only if there are infinitely many solutions (free variables exist)

A short sketch classifying a system by these ranks follows.
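
A hedged sketch of this rank test (the matrices and right-hand sides are assumptions for illustration):

```python
import sympy as sp

def classify(A, b):
    # Compare rank(A) with rank(A#) per Theorem 2.5.9
    rA, rAug, n = A.rank(), A.row_join(b).rank(), A.cols
    if rA < rAug:
        return "inconsistent"
    return "unique solution" if rA == n else "infinitely many solutions"

A = sp.Matrix([[1, 1], [1, 1]])
print(classify(A, sp.Matrix([2, 3])))          # inconsistent
print(classify(A, sp.Matrix([2, 2])))          # infinitely many solutions
print(classify(sp.eye(2), sp.Matrix([2, 3])))  # unique solution
```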

Corollary 2.5.11

For a homogeneous system, if $m < n$, then there are infinitely many solutions.

Proof

We know $\mathrm{rank}(A) = \mathrm{rank}(A^{\#})$ since the system is homogeneous.
We also know $\mathrm{rank}(A) \le m$ for any system. If $m < n$, then $\mathrm{rank}(A) < n$ by transitivity.
By Theorem 2.5.9, the system has infinitely many solutions.

Remark

The converse is not true: a homogeneous system can still have infinitely many solutions when $m \ge n$.

2.6: The Inverse of a Square Matrix

When $A$ and $B$ are both $n \times n$ matrices satisfying
$$AB = I_n \quad \text{and} \quad BA = I_n,$$
this chapter is about finding what $B$ is and whether $B$ even exists.

Potential application of $B$: Solving a system $A\mathbf{x} = \mathbf{b}$
$A\mathbf{x} = \mathbf{b}$
$(BA)\mathbf{x} = B\mathbf{b}$
$\mathbf{x} = B\mathbf{b}$

We can solve for x simply by multiplying B and b. However, in practice this is slow, so it isn’t really used much.

Theorem 2.6.1

Theorem: There is only one matrix $B$ for a corresponding $A$.

Proof: Suppose $B$ and $C$ both satisfy
$AB = BA = I_n$
$AC = CA = I_n$
Then
$C = CI_n = C(AB)$
$C = (CA)B = I_nB = B$
$C = B$
(uniqueness proof)

$$AA^{-1} = A^{-1}A = I_n$$
If $A^{-1}$ exists, then it is called the inverse of $A$, and $A$ is called invertible

  • Nonsingular Matrix: Sometimes called invertible
  • Singular Matrix: Sometimes called non-invertible or degenerate

Remark: $A^{-1}$ does not mean $\frac{1}{A}$

Theorem 2.6.5

Theorem: If $A^{-1}$ exists, then the $n \times n$ system of linear equations $A\mathbf{x} = \mathbf{b}$ has the unique solution $\mathbf{x} = A^{-1}\mathbf{b}$ for every $\mathbf{b} \in \mathbb{R}^n$

Proof: Verify that $\mathbf{x} = A^{-1}\mathbf{b}$ is a solution by direct substitution. To show that $\mathbf{x}$ is unique, suppose $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ and so on are all solutions, so $A\mathbf{x}_1 = A\mathbf{x}_2 = A\mathbf{x}_3 = \mathbf{b}$. But multiplying by $A^{-1}$ reveals the following:
$$\mathbf{x}_1 = \mathbf{x}_2 = \mathbf{x}_3 = \cdots = A^{-1}\mathbf{b}$$
Thus there is only one solution, since we know $A^{-1}$ is also unique.

Theorem 2.6.6

  • This theorem shows when a matrix is invertible and how to efficiently compute the inverse

Theorem: An n×n matrix A is invertible if and only if rank(A)=n

Proof:
Let’s prove ($A^{-1}$ exists $\Rightarrow$ $\mathrm{rank}(A) = n$) first. If $A^{-1}$ exists, then by Theorem 2.6.5, any $n \times n$ linear system $A\mathbf{x} = \mathbf{b}$ has a unique solution. Thus by Theorem 2.5.9, we know that $\mathrm{rank}(A) = n$.

Now we prove the converse:
$\mathrm{rank}(A) = n \Rightarrow A$ is invertible
We must show that there exists an $n \times n$ matrix $X$ such that the following is true:
$$AX = I_n = XA$$

Given $\mathrm{rank}(A) = n$, each column of $A$ is pivotal and every row of an REF of $A$ is non-zero, so the RREF of $A$ is $I_n$. Since $\mathrm{rank}(A) = n$, each system $A\mathbf{x}_i = \mathbf{e}_i$ (the $i$th column of $I_n$) has exactly one solution, so there is exactly one matrix $X$ with $AX = I_n$. We can then use the Gauss-Jordan method to solve for $X$.

Now we show that $XA = I_n$ as well:
$I_nA = A$
$(AX)A = A$
$AXA - A = 0_n$
$A(XA - I_n) = 0_n$
$A(XA - I_n) = 0_n$ doesn’t by itself imply $XA - I_n = 0_n$ for all matrices, since two non-zero matrices can be multiplied to yield $0$. But in this case $XA - I_n = 0_n$ due to the following reasons:

Let $\mathbf{y}_i$ be the columns of the $n \times n$ matrix $XA - I_n$. Then
$$A\mathbf{y}_i = \mathbf{0}, \qquad i = 1, 2, 3, \dots, n$$
$\mathrm{rank}(A) = n$, so each system in the above has a unique solution. So since each system is homogeneous, each unique solution $\mathbf{y}_i$ must be $\mathbf{0}$ (the trivial solution). Thus $XA - I_n = 0_n$, i.e.,

$$XA = I_n$$

Corollary 2.6.7

Corollary: If $A\mathbf{x} = \mathbf{b}$ has a unique solution, then $A^{-1}$ exists

Gauss-Jordan Technique

  • To get the $i$th column of $A^{-1}$, augment $A$ with the $i$th column vector of the identity matrix and reduce. To get the whole inverse at once, augment $A$ with the entire identity matrix and reduce to RREF; the result is the identity matrix augmented with $A^{-1}$, as sketched below.

$$(A \mid I_n) \sim \cdots \sim (I_n \mid Y)$$
$AY = I_n$
$Y = A^{-1}$
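
A minimal sketch of the technique using sympy's row reduction (the sample matrix is an assumption for illustration):

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [3, 4]])
n = A.rows
aug = A.row_join(sp.eye(n))   # (A | I_n)
R, _ = aug.rref()             # row-reduce to (I_n | Y) when A is invertible
Y = R[:, n:]                  # the right-hand block is A^{-1}
print(Y)                      # Matrix([[-2, 1], [3/2, -1/2]])
print(A * Y == sp.eye(n))     # True: AY = I_n
```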

Properties of the Inverse

If both A and B are invertible

  1. $I_n^{-1} = I_n$
  2. $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$
  3. $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$
  4. $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$

Corollary

$$(A_1A_2\cdots A_k)^{-1} = A_k^{-1}A_{k-1}^{-1}\cdots A_1^{-1}$$

Theorem 2.6.12

Theorem: Let $A$ and $B$ be $n \times n$ matrices. If $AB = I_n$, then both $A$ and $B$ are invertible and $B = A^{-1}$

Proof
$A(B\mathbf{b}) = I_n\mathbf{b} = \mathbf{b}$
For every $\mathbf{b}$, $A\mathbf{x} = \mathbf{b}$ has the solution $\mathbf{x} = B\mathbf{b}$, which implies $\mathrm{rank}(A) = n$. So $A$ is invertible by Theorem 2.6.6, and $B = (A^{-1}A)B = A^{-1}(AB) = A^{-1}$.

3.4: Summary of Determinants

Formulas for Determinants

  1. If $A = [a_{11}]$, then $\det(A) = a_{11}$

  2. If $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, then $\det(A) = a_{11}a_{22} - a_{12}a_{21}$

  3. Cofactor expansion along row $i$ or column $j$:

     $\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}$

     $\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}$

$$C_{ij} = (-1)^{i+j}M_{ij}$$
$M_{ij}$ is the determinant of the matrix obtained by deleting the $i$th row and $j$th column of $A$.

Example

$$A = \begin{bmatrix} 3 & 5 \\ 2 & 7 \end{bmatrix}$$
Matrix of minors: $M = \begin{bmatrix} 7 & 2 \\ 5 & 3 \end{bmatrix}$
Matrix of cofactors: $C = \begin{bmatrix} 7 & -2 \\ -5 & 3 \end{bmatrix}$
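
A recursive sketch of cofactor expansion along the first row (the function name `det` is an assumption; illustrative, not efficient for large matrices):

```python
# det(A) = sum_j a_1j * C_1j, with C_1j = (-1)^(1+j) * M_1j
def det(A):
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j in range(len(A)):
        # minor M_1j: delete row 1 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[3, 5],
           [2, 7]]))   # 3*7 - 5*2 = 11
```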

Properties of Determinants

Let A and B both be n×n matrices

  1. If $B$ is obtained by permuting two rows (or columns) of $A$, then $\det(B) = -\det(A)$
  2. If $B$ is obtained by multiplying any row (or column) of $A$ by a scalar $k$, then $\det(B) = k\det(A)$
  3. If $B$ is obtained by adding a multiple of any row (or column) of $A$ to another row (or column) of $A$, then
    $\det(B) = \det(A)$
  4. For any scalar $k$,
    $\det(kA) = k^n\det(A)$
  5. $\det(A^T) = \det(A)$
  6. Let $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n$ denote the row vectors of $A$. If the $i$th row vector of $A$ is the sum of two vectors, $\mathbf{a}_i = \mathbf{b}_i + \mathbf{c}_i$, then $\det(A) = \det(B) + \det(C)$, where $B = [\mathbf{a}_1, \dots, \mathbf{b}_i, \dots, \mathbf{a}_n]^T$ and $C = [\mathbf{a}_1, \dots, \mathbf{c}_i, \dots, \mathbf{a}_n]^T$
  7. If $A$ has a row (or column) of zeros, then $\det(A) = 0$
  8. If two rows (or columns) of $A$ are scalar multiples of one another, then $\det(A) = 0$
  9. $\det(AB) = \det(A)\det(B)$
  10. If $A$ is invertible, then $\det(A) \ne 0$ and $\det(A^{-1}) = \frac{1}{\det(A)}$

Basic Theoretical Results

  1. The volume of a parallelepiped is $|\det(A)|$, where the rows of $A$ are the three edge vectors of the parallelepiped

Theorem 3.2.5

  • An $n \times n$ matrix $A$ is invertible if and only if $\det(A) \ne 0$
  • An $n \times n$ linear system $A\mathbf{x} = \mathbf{b}$ has a unique solution if and only if $\det(A) \ne 0$

Corollary 3.2.6

An $n \times n$ homogeneous linear system $A\mathbf{x} = \mathbf{0}$ has an infinite number of solutions if and only if $\det(A) = 0$

Adjoint Method

$$A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A)$$
$\mathrm{adj}(A)$ is the transpose of the matrix of cofactors of $A$

Cramer’s Rule

If $\det(A) \ne 0$, then the unique solution to $A\mathbf{x} = \mathbf{b}$ is $\mathbf{x} = (x_1, x_2, x_3, \dots, x_n)$, where $x_i = \frac{\det(B_i)}{\det(A)}$ and $B_i$ is obtained by replacing the $i$th column vector of $A$ with $\mathbf{b}$
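
A small sketch of Cramer's rule with sympy (the sample system is an assumption for illustration):

```python
import sympy as sp

def cramer(A, b):
    d = A.det()
    assert d != 0, "Cramer's rule requires det(A) != 0"
    xs = []
    for i in range(A.cols):
        Bi = A.copy()
        Bi[:, i] = b          # B_i: column i replaced by b
        xs.append(Bi.det() / d)
    return xs

A = sp.Matrix([[2, 1], [1, 3]])
b = sp.Matrix([3, 5])
print(cramer(A, b))                        # [4/5, 7/5]
print(A * sp.Matrix(cramer(A, b)) == b)    # True
```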

4.2: Definition of a Vector Space

  • Vector Space: Nonempty set $V$ with two operations
    • Addition
    • Multiplication by scalars

Axioms

  1. The vector space is closed under both addition and multiplication by scalars:
    $\forall \mathbf{v}, \mathbf{w} \in V$, $\mathbf{v} + \mathbf{w} \in V$
    $\forall k \in \mathbb{R}$, $\forall \mathbf{v} \in V$, $k\mathbf{v} \in V$
  2. Commutative: $\forall \mathbf{v}, \mathbf{w} \in V$, $\mathbf{v} + \mathbf{w} = \mathbf{w} + \mathbf{v}$
  3. Associative: $\forall \mathbf{v}, \mathbf{w}, \mathbf{u} \in V$, $(\mathbf{v} + \mathbf{w}) + \mathbf{u} = \mathbf{v} + (\mathbf{w} + \mathbf{u})$
  4. $\exists\, \mathbf{0} \in V$ such that $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$ for all $\mathbf{u} \in V$
  5. $\forall \mathbf{v} \in V$, $\exists\, {-\mathbf{v}}$ s.t. $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$
  6. $1\mathbf{v} = \mathbf{v}$ for all $\mathbf{v} \in V$
  7. $r(s\mathbf{v}) = (rs)\mathbf{v}$
  8. $(r + s)\mathbf{v} = r\mathbf{v} + s\mathbf{v}$
  9. $r(\mathbf{v} + \mathbf{w}) = r\mathbf{v} + r\mathbf{w}$

Theorem

Let $V$ be a vector space

  1. The zero vector is unique in $V$
  2. $r\mathbf{0} = \mathbf{0}$
  3. $0\mathbf{v} = \mathbf{0}$
  4. Every $\mathbf{v} \in V$ has a unique additive inverse, $-\mathbf{v}$
  5. If $r\mathbf{v} = \mathbf{0}$, then $r = 0$ or $\mathbf{v} = \mathbf{0}$

Function Example

Matrix Example

Polynomial Example

4.3: Subspaces

Subspace: Let $V$ be a vector space. A non-empty subset $S \subseteq V$ is called a subspace if $S$ is also a vector space (closed under addition and multiplication by scalars).

Proposition

$S$ is a subspace $\iff$ $S$ is closed under both addition and multiplication by scalars

Observation

If $S \subseteq V$ and $S$ is a subspace of $V$, then $\mathbf{0} \in S$

Examples

Nullspace

The solution set of a homogeneous system is the nullspace

4.4: Spanning Sets

Let $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\} \subseteq V$. We say $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ spans $V$ if every vector in $V$ is a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$:

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 + \cdots + c_k\mathbf{v}_k = \mathbf{b}$$
where $\mathbf{b}$ represents any vector in $V$

Definition

Given $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$, the set of all linear combinations is the span of $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$

$\mathrm{span}(\varnothing) = \{\mathbf{0}\}$
$\mathrm{span}(\{\mathbf{v}\}) = \{r\mathbf{v} \mid r \in \mathbb{R}\}$

Observation

For $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k \in V$,
$\mathrm{span}(\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\})$ is a subspace of $V$

Terminology

We can also say $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ spans a subspace $W$ if $W = \mathrm{span}(\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\})$

We say $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is a spanning set of $W$

Example

4.5: Linear Dependence and Independence

Minimal Spanning Set

  • Minimal Spanning Set: The smallest set of vectors that spans a vector space

  • A minimal spanning set of $V = \mathbb{R}^2$ is $\{(1,0), (0,1)\}$
    $$\mathrm{span}(\{(1,0),(0,1)\}) = \mathrm{span}(\{(1,0),(0,1),(1,2)\}) = \mathbb{R}^2$$

Theorem 4.5.2

If you have a spanning set, you can remove a vector from a spanning set if it is a linear combination of the other vectors and still get a spanning set.

Linear Dependence/Independence

Let $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ be a non-empty subset of $V$. $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is linearly dependent if there exist scalars $(c_1, c_2, \dots, c_k) \in \mathbb{R}^k$, with at least one $c_j \ne 0$, such that $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$

$\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is linearly independent if $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0} \Rightarrow c_1 = c_2 = \cdots = c_k = 0$

Example

Theorem 4.5.6

A set of vectors (with at least two vectors) is linearly dependent $\iff$ there is at least one vector that is a linear combination of the other vectors

Proposition 4.5.8

  1. Any set of vectors containing the zero vector is linearly dependent
  2. Any set of two vectors is linearly dependent if and only if the vectors are proportional

Corollary 4.5.14

Any nonempty, finite set of linearly dependent vectors contains a linearly independent subset with the same linear span

Proof

By Theorem 4.5.6, there is a vector that is a linear combination of the other vectors. If we delete that vector, we still have the same span. If the resulting subset is linearly independent, then we’re done. If it is still linearly dependent, then we repeat the process of removing a vector that is a linear combination of the others.

Corollary 4.5.17

For a set of vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ where $\mathbf{v}_i \in \mathbb{R}^n$, let $A = [\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k]$, an $n \times k$ matrix

  1. If $k > n$, then the vectors are linearly dependent (since there are infinitely many solutions due to free variables, Corollary 2.5.11)
  2. If $k = n$, then the vectors are linearly dependent if and only if $\det(A) = 0$ (Corollary 3.2.6)
  3. If $k < n$, nothing can be concluded without row reducing; a sketch of the rank test follows
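
A numeric sketch of the rank test (the vectors are assumptions for illustration):

```python
import numpy as np

# Stack the vectors as the columns of A (n x k); the columns are
# linearly dependent exactly when rank(A) < k.
v1, v2, v3 = [1, 2, 0], [0, 1, 1], [1, 4, 2]   # note v3 = v1 + 2*v2
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))                 # 2 < 3: dependent
print(np.isclose(np.linalg.det(A), 0.0))        # True (the k = n case)
```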

Wronskian

Let $f_1, f_2, \dots, f_k \in C^{k-1}(I)$ be functions that are differentiable at least $k-1$ times.
$$W[f_1, f_2, \dots, f_k](x) = \begin{vmatrix} f_1 & f_2 & \cdots & f_k \\ f_1' & f_2' & \cdots & f_k' \\ \vdots & \vdots & & \vdots \\ f_1^{(k-1)} & f_2^{(k-1)} & \cdots & f_k^{(k-1)} \end{vmatrix}$$

Order matters:
$$W[f_1, f_2, f_3, \dots, f_k](x) = -W[f_2, f_1, f_3, \dots, f_k](x)$$

Theorem 4.5.23

If $W[f_1, f_2, f_3, \dots, f_k](x_0) \ne 0$ for some $x_0$ in $I$, then $f_1, f_2, f_3, \dots, f_k$ are linearly independent on $I$

Proof

Suppose the following holds for all $x$ in $I$:
$$c_1f_1(x) + c_2f_2(x) + \cdots + c_kf_k(x) = 0$$

If we differentiate repeatedly, we get the following system:
$c_1f_1 + c_2f_2 + c_3f_3 + \cdots + c_kf_k = 0$
$c_1f_1' + c_2f_2' + c_3f_3' + \cdots + c_kf_k' = 0$
$\vdots$
$c_1f_1^{(k-1)} + c_2f_2^{(k-1)} + c_3f_3^{(k-1)} + \cdots + c_kf_k^{(k-1)} = 0$

This is a linear system in $c_1, \dots, c_k$, and if its determinant (the Wronskian) is nonzero at some point, then there is only one solution (the trivial one) by Theorem 3.2.5

Remarks

  • If the Wronskian is zero, we don’t know if the functions are linearly dependent or independent
  • The Wronskian only needs to be nonzero at one point for us to conclude that the functions are independent (see the sketch below)
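
A small symbolic sketch (the functions are chosen as an assumption for illustration):

```python
import sympy as sp

x = sp.symbols('x')
f1, f2 = sp.exp(-3*x), sp.exp(5*x)
# Wronskian matrix: rows are the functions and their first derivatives
W = sp.Matrix([[f1, f2],
               [sp.diff(f1, x), sp.diff(f2, x)]]).det()
print(sp.simplify(W))   # 8*exp(2*x): nonzero, so f1, f2 are
                        # linearly independent by Theorem 4.5.23
```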

4.6: Bases and Dimension

Definition

A set {v1,v2,,vk} in a vector space V is a basis of V if

  1. {v1,v2,,vk} is linearly independent
  2. span{v1,v2,,vk}=V

A vector space is called finite dimensional if it admits a finite basis. Otherwise, V is infinite dimensional.

Note

All minimal spanning sets form a basis and all bases are minimal spanning sets

Example

$\mathbb{R}^n$, $M_{k \times n}(\mathbb{R})$, $P_n(\mathbb{R})$ are all finite dimensional

Example

$P(\mathbb{R})$, $C^{(k)}(\mathbb{R})$, $F(\mathbb{R})$ are all infinite dimensional vector spaces

Example

$\mathbb{R}^n$: $\quad \mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \dots, \mathbf{e}_n = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$
$\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3, \dots, \mathbf{e}_n\}$ is the standard basis for $\mathbb{R}^n$

Standard Basis: A set of vectors for a vector space where each vector has zeros in all of its components except one

Example

$M_2(\mathbb{R})$: $\quad E_{11} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, E_{12} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, E_{21} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, E_{22} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$
$$a\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + c\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + d\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
$\Rightarrow a = b = c = d = 0$

$\{E_{11}, E_{12}, E_{21}, E_{22}\}$ is the standard basis for $M_2(\mathbb{R})$

Example

$\{1, x, x^2, x^3, \dots, x^n\}$ is the standard basis for $P_n(\mathbb{R})$

Observation

If $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ is a basis of $V$, then every vector $\mathbf{v} \in V$ can be written uniquely as $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$

Proof

Suppose $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$ and $\mathbf{v} = d_1\mathbf{v}_1 + d_2\mathbf{v}_2 + \cdots + d_k\mathbf{v}_k$. Subtracting,
$$(d_1 - c_1)\mathbf{v}_1 + (d_2 - c_2)\mathbf{v}_2 + \cdots + (d_k - c_k)\mathbf{v}_k = \mathbf{0}$$

Since the vectors are linearly independent:
$d_1 - c_1 = d_2 - c_2 = \cdots = d_k - c_k = 0$
$d_1 = c_1, d_2 = c_2, \dots, d_k = c_k$

Theorem 4.6.4

If a vector space $V$ has a basis with exactly $n$ vectors, then any set $\{\mathbf{w}_1, \mathbf{w}_2, \dots, \mathbf{w}_k\}$ of $k > n$ vectors is linearly dependent in $V$

Proof

Let $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ be a basis of the vector space $V$

Let $\{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_m\}$ be a set of $m$ arbitrary vectors in $V$ with $m > n$

Need to show (NTS): $\{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_m\}$ is linearly dependent.

Since $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ is a basis, it is a spanning set, so every vector in $\{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_m\}$ can be written as a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$.

There must exist $a_{ij}$ such that
$\mathbf{u}_1 = a_{11}\mathbf{v}_1 + a_{21}\mathbf{v}_2 + \cdots + a_{n1}\mathbf{v}_n$
$\mathbf{u}_2 = a_{12}\mathbf{v}_1 + a_{22}\mathbf{v}_2 + \cdots + a_{n2}\mathbf{v}_n$
$\vdots$
$\mathbf{u}_m = a_{1m}\mathbf{v}_1 + a_{2m}\mathbf{v}_2 + \cdots + a_{nm}\mathbf{v}_n$

NTS that $c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \cdots + c_m\mathbf{u}_m = \mathbf{0}$ has a nontrivial solution, which shows $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_m$ is linearly dependent.

Combine the last two displays:
$$c_1(a_{11}\mathbf{v}_1 + \cdots + a_{n1}\mathbf{v}_n) + c_2(a_{12}\mathbf{v}_1 + \cdots + a_{n2}\mathbf{v}_n) + \cdots + c_m(a_{1m}\mathbf{v}_1 + \cdots + a_{nm}\mathbf{v}_n) = \mathbf{0}$$

Rearrange:
$$(a_{11}c_1 + a_{12}c_2 + \cdots + a_{1m}c_m)\mathbf{v}_1 + (a_{21}c_1 + a_{22}c_2 + \cdots + a_{2m}c_m)\mathbf{v}_2 + \cdots + (a_{n1}c_1 + a_{n2}c_2 + \cdots + a_{nm}c_m)\mathbf{v}_n = \mathbf{0}$$

We know $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ is linearly independent, so the following must hold:
$a_{11}c_1 + a_{12}c_2 + \cdots + a_{1m}c_m = 0$
$a_{21}c_1 + a_{22}c_2 + \cdots + a_{2m}c_m = 0$
$\vdots$
$a_{n1}c_1 + a_{n2}c_2 + \cdots + a_{nm}c_m = 0$

This is a homogeneous $n \times m$ system with $n < m$. So by Corollary 2.5.11, it has infinitely many solutions, in particular a nontrivial one, and the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_m$ are linearly dependent.

Corollary 4.6.5

All bases of a finite dimensional vector space have the same number of vectors

Proof

Let there be a basis with $m$ vectors and another basis with $n$ vectors.
If $m > n$, then by Theorem 4.6.4 the first set of vectors is linearly dependent (not a basis). If $m < n$, then the second set of vectors is linearly dependent. Thus $m = n$ is the only way for both to be linearly independent.

Observation

If A is an invertible n×n matrix, then the columns of A form a basis of Rn

Proof

NTS every $\mathbf{b} \in \mathbb{R}^n$ belongs to the span of the columns of $A$

In other words, NTS $A\mathbf{x} = \mathbf{b}$ is consistent:
$\mathbf{x} = A^{-1}\mathbf{b}$ is a solution of $A\mathbf{x} = \mathbf{b}$

Now NTS the columns of $A$ are linearly independent, i.e., NTS $A\mathbf{x} = \mathbf{0}$ has only the trivial solution:
$A\mathbf{x} = \mathbf{0}$
$A^{-1}A\mathbf{x} = A^{-1}\mathbf{0} = \mathbf{0}$
$\mathbf{x} = \mathbf{0}$

Definition

If V is a finite dimensional vector space, then dim(V)=number of vectors in any basis of V

Convention:
dim({0})=0

Examples

$\dim(\mathbb{R}^n) = n$
$\dim(P_n(\mathbb{R})) = n + 1$
$\dim(M_{n \times k}) = nk$

Corollary 4.6.6

If dim(V)=n, then any spanning set of V must have at least n vectors

Proof

If the spanning set had fewer than $n$ vectors, then by Corollary 4.5.14 it would contain a basis with fewer than $n$ vectors, which contradicts Corollary 4.6.5

Theorem 4.6.10

If $\dim(V) = n$, then any set of $n$ linearly independent vectors in $V$ forms a basis of $V$

Proof

Let $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ be the linearly independent set. For any $\mathbf{v} \in V$, the set $\{\mathbf{v}, \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ has $n+1$ vectors, so it is linearly dependent by Theorem 4.6.4.
Then the following is true for scalars not all zero:

$$c_0\mathbf{v} + c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$

We know $c_0 \ne 0$ by contradiction: if $c_0 = 0$, then $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$ with $\mathbf{v}_1, \dots, \mathbf{v}_n$ linearly independent forces $c_1 = c_2 = \cdots = c_n = 0$, contradicting that not all the coefficients are zero.

$$\mathbf{v} = -\frac{1}{c_0}(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n)$$

Every $\mathbf{v} \in V$ can be written as a linear combination of the $n$ linearly independent vectors in $V$, so they span $V$. Since the vectors span $V$ and are linearly independent, they also form a basis.

Theorem 4.6.12

If dim(V)=n, then any spanning set with exactly n vectors is also a basis of V

Corollary 4.6.14

If $W$ is a subspace of $V$ and $V$ is a finite dimensional vector space, then $\dim(W) \le \dim(V)$
If $W$ is a subspace of $V$ and $\dim(W) = \dim(V)$, then $W = V$

6.6: Linear Transformations

Appendix A: Review of Complex Numbers

  • Complex Number: Has a real part and an imaginary part
    $z = a + ib$
    $\mathrm{Re}(z) = a$
    $\mathrm{Im}(z) = b$

  • Conjugate: If we have $z = a + ib$, then the conjugate is $\bar{z} = a - ib$
    $\bar{\bar{z}} = z$
    $\bar{z}z = z\bar{z} = a^2 + b^2$

  • Modulus/Absolute Value: $|z| = \sqrt{a^2 + b^2}$
    $|z|^2 = a^2 + b^2 = z\bar{z}$

Complex Valued-Functions

Complex valued functions are of the following form:
$$w(x) = u(x) + iv(x)$$

  • Euler’s Formula
    Derivation involves using the Maclaurin expansion for $e^x$
    $$e^{ib} = \cos b + i\sin b$$

$$e^{(a+ib)x} = e^{ax}(\cos bx + i\sin bx)$$

$x^{a+bi} = e^{(a+ib)\ln x}$
$x^r = e^{r\ln x}$

Differentiation of Complex-Valued Functions

$w(x) = u(x) + iv(x)$
$$\frac{d}{dx}(e^{rx}) = re^{rx}$$
where $r$ is a complex number
$$\frac{d}{dx}(x^r) = rx^{r-1}$$
where $r$ is a complex number

7.1: The Eigenvalue/Eigenvector Problem

  • If $A$ is an $n \times n$ matrix, consider
    $$A\mathbf{v} = \lambda\mathbf{v}$$
    The scalars $\lambda$ for which nontrivial solutions exist are called eigenvalues of $A$. The corresponding non-zero vectors $\mathbf{v}$ are called eigenvectors of $A$

  • A way to formulate this is by interpreting $A$ as the matrix of a linear transformation $T: \mathbb{C}^n \to \mathbb{C}^n$
    $$T(\mathbf{v}) = A\mathbf{v}$$

  • Geometrically, the linear transformation leaves the direction of $\mathbf{v}$ unchanged, but stretches $\mathbf{v}$ by a factor of $\lambda$.

Solution to the Problem

$I$ is the identity matrix:
$$(A - \lambda I)\mathbf{v} = \mathbf{0}$$
According to Corollary 3.2.6, nontrivial solutions exist only when
$$\det(A - \lambda I) = 0$$

  1. Find the scalars $\lambda$ with $\det(A - \lambda I) = 0$
  2. If $\lambda_1, \lambda_2, \dots, \lambda_k$ are the distinct eigenvalues obtained from above, then solve the $k$ systems of linear equations to find the eigenvectors corresponding to each eigenvalue
  • Solve by solving the system $(A - \lambda_i I)\mathbf{v} = \mathbf{0}$; a short numeric sketch follows
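
A numeric sketch of both steps at once via numpy (the matrix is an assumption for illustration):

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 2.0, 2.0]])
lams, V = np.linalg.eig(A)   # eigenvalues and eigenvectors together
print(lams)                   # 3 and -2 (order may vary)
for lam, v in zip(lams, V.T):
    # confirm Av = lambda*v for each eigenpair
    print(np.allclose(A @ v, lam * v))   # True, True
```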

7.2: General Results for Eigenvalues and Eigenvectors

$$p(\lambda) = \det(A - \lambda I) = (-1)^n(\lambda - \lambda_1)^{m_1}(\lambda - \lambda_2)^{m_2}(\lambda - \lambda_3)^{m_3}\cdots(\lambda - \lambda_k)^{m_k}$$
$$m_1 + m_2 + \cdots + m_k = n$$

Definition

The Eigenspace $E_i$ is the set of all vectors $\mathbf{v}$ satisfying $A\mathbf{v} = \lambda_i\mathbf{v}$

The Eigenspace contains the zero vector

Theorem 7.2.3

  1. For each $i$, $E_i$ is a subspace of $\mathbb{C}^n$
  2. $1 \le \dim(E_i) \le m_i$
  • Algebraic Multiplicity: $m_i$
  • Geometric Multiplicity: $\dim(E_i)$

Theorem 7.2.5

Let $\lambda_1, \lambda_2, \dots, \lambda_m$ be distinct eigenvalues corresponding to eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_m$

Eigenvectors corresponding to distinct eigenvalues are linearly independent

Note 1: By definition of linear independence, $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_m$ are distinct. However, the same $A$ can have other eigenvectors that are not linearly independent and correspond to non-distinct eigenvalues

Note 2: It is impossible for linearly dependent eigenvectors to correspond to distinct eigenvalues

Note 3: Eigenvectors corresponding to non-distinct eigenvalues can be either linearly independent or linearly dependent

Proof

Proof by induction
Base Case: {v1} is linearly independent
Inductive hypothesis: Suppose {v1,v2,,vk} is linearly independent

Need to show (NTS): {v1,v2,,vk,vk+1} is also linearly independent

Corollary 7.2.6

Let $E_1, E_2, \dots, E_k$ be eigenspaces of the $n \times n$ matrix $A$. In each eigenspace, choose a set of linearly independent eigenvectors. Then the union of these linearly independent sets is also linearly independent.

Proof

Proof by contradiction. Assume the union of the linearly independent sets is linearly dependent:
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$
with not all $c_i$ zero. Group the terms by eigenspace, letting $\mathbf{w}_i$ be the part of the sum coming from $E_i$:
$$\mathbf{w}_1 + \mathbf{w}_2 + \cdots + \mathbf{w}_k = \mathbf{0}$$

Since some coefficient is nonzero and each set chosen within an eigenspace is linearly independent, some $\mathbf{w}_j \ne \mathbf{0}$. But the nonzero $\mathbf{w}_i$ are eigenvectors for distinct eigenvalues, so by Theorem 7.2.5 they are linearly independent and their sum cannot be $\mathbf{0}$. This is a contradiction.

Definition

An $n \times n$ matrix $A$ with $n$ linearly independent eigenvectors is nondefective

Any $n$ linearly independent eigenvectors of $A$ form an eigenbasis of $A$

$A$ is defective if $A$ has fewer than $n$ linearly independent eigenvectors

Note: $A$ cannot have more than $n$ linearly independent eigenvectors, but $A$ can still have more than $n$ eigenvectors

Corollary 7.2.10

If an $n \times n$ matrix has $n$ distinct eigenvalues, then it is nondefective.

Note: if A does not have n distinct eigenvalues, it may still be nondefective

Proof

Use Theorem 7.2.5

Theorem 7.2.11

For an n×n matrix A

$A$ is nondefective $\iff$ $\dim(E_i) = m_i$ for every $i$

or equivalently

$$\dim(E_1) + \dim(E_2) + \cdots + \dim(E_k) = n$$

7.3: Diagonalization

Definition

Let $A$ and $B$ be $n \times n$ matrices. $A$ and $B$ are similar if there exists an invertible matrix $S$ such that $B = S^{-1}AS$

Theorem 7.3.3

Similar matrices have the same eigenvalues (including multiplicities)

They also have the same characteristic polynomial

Proof

$\det(B - \lambda I) = \det(S^{-1}AS - \lambda I) = \det(S^{-1}AS - \lambda S^{-1}S)$
$= \det(S^{-1}(A - \lambda I)S) = \det(S^{-1})\det(A - \lambda I)\det(S)$
$= \frac{1}{\det(S)}\det(A - \lambda I)\det(S) = \det(A - \lambda I)$

Theorem 7.3.4

An n×n matrix A is similar to a diagonal matrix iff A is nondefective

Proof

First suppose $A$ is similar to a diagonal matrix $D$:
$S^{-1}AS = D$
$AS = SD$
$[A\mathbf{v}_1, A\mathbf{v}_2, \dots, A\mathbf{v}_n] = [\lambda_1\mathbf{v}_1, \lambda_2\mathbf{v}_2, \dots, \lambda_n\mathbf{v}_n]$
So the columns $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ of $S$ are eigenvectors of $A$, and they are linearly independent since $S$ is invertible. Thus $A$ is nondefective.

Conversely, suppose $A$ is nondefective, with linearly independent eigenvectors $\mathbf{v}_1, \dots, \mathbf{v}_n$. Then
$AS = A[\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n] = [\lambda_1\mathbf{v}_1, \lambda_2\mathbf{v}_2, \dots, \lambda_n\mathbf{v}_n]$
$AS = SD$
where $D = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$.
$S$ is invertible since the columns of $S$ are linearly independent. So the following is true:
$S^{-1}AS = D$
$A$ is similar to a diagonal matrix

Definition

A matrix is diagonalizable if it is similar to a diagonal matrix

Solving Systems of Differential Equations

$$\frac{dx_1}{dt} = a_{11}x_1 + a_{12}x_2$$
$$\frac{dx_2}{dt} = a_{21}x_1 + a_{22}x_2$$
the above can be represented as
$$\mathbf{x}' = A\mathbf{x}$$

where

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \mathbf{x}' = \begin{bmatrix} x_1' \\ x_2' \end{bmatrix}, \qquad A = [a_{ij}]$$

Let $S = [\mathbf{v}_1, \mathbf{v}_2]$, where $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent eigenvectors of $A$

$\mathbf{x} = S\mathbf{y}$
$\mathbf{x}' = S\mathbf{y}'$
$S\mathbf{y}' = AS\mathbf{y}$
$\mathbf{y}' = (S^{-1}AS)\mathbf{y}$
$$\begin{bmatrix} y_1' \\ y_2' \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$$
$y_1' = \lambda_1y_1, \qquad y_2' = \lambda_2y_2$
$y_1(t) = c_1e^{\lambda_1t}, \qquad y_2(t) = c_2e^{\lambda_2t}$
$$\mathbf{x}(t) = c_1e^{\lambda_1t}\mathbf{v}_1 + c_2e^{\lambda_2t}\mathbf{v}_2$$
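
A numeric sketch of this recipe (the matrix and initial condition are assumptions for illustration):

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 2.0, 2.0]])
x0 = np.array([1.0, 0.0])
lams, S = np.linalg.eig(A)    # columns of S are eigenvectors
c = np.linalg.solve(S, x0)    # constants from x(0) = S c

def x(t):
    # x(t) = S @ [c1*e^{l1 t}, c2*e^{l2 t}] = c1 e^{l1 t} v1 + c2 e^{l2 t} v2
    return S @ (c * np.exp(lams * t))

print(x(0.0))                 # [1., 0.]: recovers x0
```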

7.4: An Introduction to the Matrix Exponential Function

If $A$ is an $n \times n$ matrix of constants, the matrix exponential function is as follows:
$$e^{At} = I_n + At + \frac{1}{2!}(At)^2 + \frac{1}{3!}(At)^3 + \cdots + \frac{1}{k!}(At)^k + \cdots$$

Properties of the Matrix Exponential Function

  1. If $A$ and $B$ are $n \times n$ matrices satisfying $AB = BA$, then
    $$e^{(A+B)t} = e^{At}e^{Bt}$$
  2. For all $n \times n$ matrices $A$, $e^{At}$ is invertible and
    $$(e^{At})^{-1} = e^{(-A)t} = e^{-At}$$
    $$e^{At}e^{-At} = I_n$$

More results

If $A = \mathrm{diag}(d_1, d_2, \dots, d_n)$ then $e^{At} = \mathrm{diag}(e^{d_1t}, e^{d_2t}, \dots, e^{d_nt})$

Theorem 7.4.3

If $A$ is not a diagonal matrix, but is diagonalizable with $S^{-1}AS = D$, then
$$e^{At} = Se^{Dt}S^{-1}$$
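
A quick numeric check of this identity against scipy's general-purpose expm (the matrix is an assumption for illustration):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [ 2.0, 2.0]])
t = 0.5
lams, S = np.linalg.eig(A)
eAt = S @ np.diag(np.exp(lams * t)) @ np.linalg.inv(S)   # S e^{Dt} S^{-1}
print(np.allclose(eAt, expm(A * t)))                     # True
```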

1.2: Basic Terminology and Ideas

  • Definition of Linear Differential Equation
    $$a_0(x)y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_n(x)y = F(x)$$
    where $a_0, a_1, \dots, a_n$ and $F$ are functions of $x$ only
  • The order of the above equation is $n$
  • A nonlinear differential equation is one that does not fit the above form

Examples of Linear Differential Equations

Order 3:
$$y''' + e^{3x}y'' + x^3y' + (\cos x)y = \ln x$$
Order 1:
$$xy' - \frac{2}{1+x^2}y = 0$$

Examples of Nonlinear Differential Equations

$$y'' + x^4\cos y - xy = e^{x^2}$$
$$y'' + y^2 = 0$$
Both are order 2 (nonlinear because of the $\cos y$ and $y^2$ terms)

Solutions to Differential Equations

Definition 1.2.4

A function $y = f(x)$ that is (at least) $n$ times differentiable on an interval $I$ is called a solution to the differential equation on $I$ if the substitution $y = f(x), y' = f'(x), \dots, y^{(n)} = f^{(n)}(x)$ reduces the differential equation to an identity valid for all $x$ in $I$

Definition 1.2.8

A solution to an n-th order differential equation on an interval I is called the general solution on I if the following is satisfied

  1. The solution contains n constants c1,c2,,cn
  2. All solutions to the differential equation can be obtained by assigning appropriate values to the constants

Not all differential equations have a general solution

Initial-Value Problems

An n-th order differential equation together with n auxiliary conditions of the form

$$y(x_0) = y_0, \quad y'(x_0) = y_1, \quad \dots, \quad y^{(n-1)}(x_0) = y_{n-1}$$
where $y_0, y_1, \dots, y_{n-1}$ are constants

Theorem 1.2.12

For the initial value problem
$$y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = F(x)$$
$$y(x_0) = y_0, \quad y'(x_0) = y_1, \quad \dots, \quad y^{(n-1)}(x_0) = y_{n-1}$$

if a1,a2,,an,F are continuous on I, then there is a unique solution on I

Example

Prove that the general solution to the differential equation
$$y'' + \omega^2y = 0, \qquad -\infty < x < \infty$$
is $y(x) = c_1\cos\omega x + c_2\sin\omega x$

Solution

First, verify that $y(x) = c_1\cos\omega x + c_2\sin\omega x$ is a solution to the differential equation on $(-\infty, \infty)$

Then, to show that every solution is of that form, use the theorem that states that there is only one solution to an initial value problem.

Suppose $y_1 = f(x)$ is an arbitrary solution; it is then the unique solution to the IVP
$$y_1'' + \omega^2y_1 = 0, \quad y_1(0) = f(0), \quad y_1'(0) = f'(0)$$

We can find $c_1$ and $c_2$ so that $y_2 = c_1\cos\omega x + c_2\sin\omega x$ satisfies $y_2(0) = f(0)$, $y_2'(0) = f'(0)$:

$$c_1 = f(0), \qquad c_2 = \frac{f'(0)}{\omega}$$
$$y_2(x) = f(0)\cos\omega x + \frac{f'(0)}{\omega}\sin\omega x$$

Notice that both $y_1$ and $y_2$ solve the same IVP. Thus they must be the same:
$$y_1(x) = y_2(x)$$

Since $f(x)$ is an arbitrary solution to the differential equation,
$$f(x) = f(0)\cos\omega x + \frac{f'(0)}{\omega}\sin\omega x = c_1\cos\omega x + c_2\sin\omega x$$

1.4: Separable Differential Equations

Definition

A first-order differential equation is called separable if it can be written in the following form:
$$p(y)\frac{dy}{dx} = q(x)$$

1.6: First Order Linear Differential Equations

Definition

$$a(x)\frac{dy}{dx} + b(x)y = r(x)$$

$$\frac{dy}{dx} + p(x)y = q(x)$$
First-order linear differential equations can be represented in either of the above forms (the second is the standard form)

Solving the Differential Equation

$h(x)$ is called the integrating factor; it is chosen so that $h'(x) = h(x)p(x)$:
$$h(x) = e^{\int p(x)\,dx}$$

There can be multiple integrating factors, but we only need one (which means we only need one antiderivative of $p(x)$ to obtain the integrating factor)

$$h(x)\left(\frac{dy}{dx} + p(x)y\right) = h(x)q(x)$$
$$h(x)\frac{dy}{dx} + h(x)p(x)y = h(x)q(x)$$

By the product rule,

$$\frac{d}{dx}\big(h(x)y(x)\big) = h(x)q(x)$$

$$h(x)y(x) = \int h(x)q(x)\,dx$$

$$y(x) = \frac{1}{h(x)}\int h(x)q(x)\,dx$$
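
A symbolic sketch of this recipe on an assumed example, $y' + 2y = x$ (not from the text), checked against sympy's dsolve:

```python
import sympy as sp

x, c = sp.symbols('x c')
p, q = 2, x
h = sp.exp(sp.integrate(p, x))          # integrating factor e^{2x}
y = (sp.integrate(h * q, x) + c) / h    # adding c gives the general solution
print(sp.expand(y))                     # c*exp(-2*x) + x/2 - 1/4

yf = sp.Function('y')
print(sp.dsolve(yf(x).diff(x) + 2*yf(x) - x, yf(x)))   # the same family
```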

8.1: Linear Differential Equations

The mapping $D: C^1(I) \to C^0(I)$ defined by $D(f) = f'$ is a linear transformation

$$D^k(f) = \frac{d^kf}{dx^k}$$

$L$, a linear differential operator of order $n$:
$$L = D^n + a_1D^{n-1} + \cdots + a_{n-1}D + a_n$$
$$Ly = y^{(n)} + a_1y^{(n-1)} + \cdots + a_{n-1}y' + a_ny$$

Note that in general, $L_1L_2 \ne L_2L_1$

Note that $L_1L_2$ means the composition $L_1 \circ L_2$ of the linear transformations. You CANNOT just treat $L_1$ and $L_2$ as polynomials and multiply them together.

Example

$$L = D^2 + 4xD - 3x$$
Find $L(x^2)$

Solution

$Ly = y'' + 4xy' - 3xy$
$L(x^2) = 2 + 8x^2 - 3x^3$

Example

Find the kernel of $L = D - 2x$

Solution

Finding the kernel of $L$ is synonymous with finding all functions that satisfy $Ly = 0$:
$$y' - 2xy = 0$$
Use the integrating factor $e^{-x^2}$ to get $(e^{-x^2}y)' = 0$
$$e^{-x^2}y = c$$
$$\mathrm{Ker}(L) = \{ce^{x^2} : c \in \mathbb{R}\}$$

Linear Differential Equations

Homogeneous Linear DE’s are of the following form:
$$y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = 0$$

Nonhomogeneous Linear DE’s are of the following form:
$$y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = F(x)$$
$$Ly = F(x)$$

Theorem 8.1.3

Let $a_1, a_2, \dots, a_n$ and $F$ be functions of $x$ continuous on $I$. For any $x_0$ in $I$, the initial value problem (IVP)
$$Ly = F(x)$$
$$y(x_0) = y_0, \quad y'(x_0) = y_1, \quad \dots, \quad y^{(n-1)}(x_0) = y_{n-1}$$

has a unique solution on I

Theorem 8.1.4

The set of all solutions to the following $n$th order homogeneous linear DE on $I$ is a vector space of dimension $n$:
$$y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = 0$$

Proof

Rewrite the above as $Ly = 0$.
We know from Chapter 6 that the kernel of any linear transformation from $V$ to $W$ is a subspace of $V$. So the solution space of the homogeneous linear DE is a vector space.

To show the dimension is $n$, use a proof by induction.

Note: Any set of $n$ linearly independent solutions $y_1, y_2, \dots, y_n$ to $y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_{n-1}(x)y' + a_n(x)y = 0$ is a basis of the solution space of the homogeneous linear DE

General Solution:
$$y(x) = c_1y_1(x) + c_2y_2(x) + \cdots + c_ny_n(x)$$

Example

Find all solutions of the form $y(x) = e^{rx}$ for the DE $y'' - 2y' - 15y = 0$

Solution

$y(x) = e^{rx}, \quad y'(x) = re^{rx}, \quad y''(x) = r^2e^{rx}$
$$r^2 - 2r - 15 = (r + 3)(r - 5) = 0$$
$$y_1(x) = e^{-3x}, \qquad y_2(x) = e^{5x}$$
The Wronskian $W[y_1, y_2](x) = 8e^{2x} \ne 0$ for all $x$, so $y_1, y_2$ are linearly independent. From Theorem 8.1.4, we know $y_1, y_2$ form a basis for all solutions to the differential equation, since $\dim(\mathrm{span}\{y_1, y_2\}) = 2$ and the dimension of the solution space of the DE is 2 as well.

Thus the general solution is the following:
$$y(x) = c_1e^{-3x} + c_2e^{5x}$$

Theorem 8.1.6

Let $y_1, y_2, y_3, \dots, y_n$ be solutions of the $n$th order DE $Ly = 0$.
If $W[y_1, y_2, \dots, y_n](x_0) = 0$ for some point $x_0$ in $I$, then $y_1, y_2, \dots, y_n$ is linearly dependent on $I$

Theorem 8.1.8

Let $y_1, y_2, \dots, y_n$ be linearly independent solutions to $Ly = 0$ on $I$ and let $y = y_p$ be a particular solution to $Ly = F$ on $I$. Then every solution to $Ly = F$ on $I$ is of the form
$$y = c_1y_1 + c_2y_2 + \cdots + c_ny_n + y_p$$
for appropriate constants $c_1, c_2, \dots, c_n$

Proof

$Ly_p = F$
Let $u$ be any solution to $Lu = F$. Then
$L(u - y_p) = Lu - Ly_p = 0$
so $u - y_p$ is a solution to the homogeneous equation $Ly = 0$. Hence
$$u - y_p = c_1y_1 + c_2y_2 + \cdots + c_ny_n$$
since $y_1, \dots, y_n$ is a linearly independent set of solutions spanning the solution space of $Ly = 0$.
$$u = \underbrace{c_1y_1 + c_2y_2 + \cdots + c_ny_n}_{y_c \text{ (complementary function)}} + y_p$$
$$y(x) = y_c(x) + y_p(x)$$

Theorem 8.1.10

If $y = u_p$ and $y = v_p$ are particular solutions to $Ly = f(x)$ and $Ly = g(x)$, respectively, then $y = u_p + v_p$ is a solution to $Ly = f(x) + g(x)$

Proof

$$L(u_p + v_p) = L(u_p) + L(v_p) = f(x) + g(x)$$

8.2: Constant Coefficient Homogeneous Linear Differential Equations

For the differential equation
$$y^{(n)} + a_1y^{(n-1)} + \cdots + a_{n-1}y' + a_ny = 0$$

if $a_1, a_2, \dots, a_n$ are constant, then we can write it as follows:

$$P(D)y = 0$$
$$P(D) = D^n + a_1D^{n-1} + \cdots + a_{n-1}D + a_n$$

  • $P(D)$ is the polynomial differential operator

  • Auxiliary Polynomial
    $$P(r) = r^n + a_1r^{n-1} + \cdots + a_{n-1}r + a_n$$

  • Auxiliary Equation
    $$P(r) = 0$$

Theorem 8.2.2

If P(D) and Q(D) are polynomial differential operators, then
P(D)Q(D)=Q(D)P(D)

  • Polynomial differential operators are commutative
  • Note: When we write $P(D)Q(D)f$, we mean $P(D)\big(Q(D)f\big)$
  • Note: The polynomial differential operators commute because polynomials commute!
  • Note: You CAN treat the linear transformations as polynomials and multiply them together

Theorem 8.2.4

If $P(D) = P_1(D)P_2(D)\cdots P_k(D)$, where each $P_i(D)$ is a polynomial differential operator, then, for each $i$, $1 \le i \le k$, any solution to $P_i(D)y = 0$ is also a solution to $P(D)y = 0$

Lemma 8.2.5

Consider the differential operator $(D - r)^m$, where $m$ is a positive integer, and $r$ is a real or complex number. For any $u \in C^m(I)$,

$$(D - r)^m(e^{rx}u) = e^{rx}D^m(u)$$

Theorem 8.2.6

The differential equation $(D - r)^my = 0$, where $m$ is a positive integer and $r$ is real or complex, has the following $m$ solutions that are linearly independent:
$$e^{rx}, xe^{rx}, \dots, x^{m-1}e^{rx}$$

The above functions also form a basis of $\ker((D - r)^m)$ and a basis of the solution space of $(D - r)^my = 0$

Proof

Using the above lemma, we get
$$(D - r)^m(e^{rx}x^j) = e^{rx}D^m(x^j)$$
$$j \in \{0, 1, 2, \dots, m-1\}$$
But since $m > j$, $D^m(x^j) = 0$, so
$$(D - r)^m(e^{rx}x^j) = e^{rx} \cdot 0 = 0$$

Complex Roots of the Auxiliary Equation

If the roots of the auxiliary equation are complex, then the solutions are
$$e^{(a \pm bi)x}, xe^{(a \pm bi)x}, x^2e^{(a \pm bi)x}, \dots, x^{m-1}e^{(a \pm bi)x}$$
For real-valued solutions, use Euler’s formula
$$e^{(a+ib)x} = e^{ax}(\cos bx + i\sin bx)$$
$$f_1 = x^ke^{(a+bi)x} = x^ke^{ax}(\cos(bx) + i\sin(bx))$$
$$f_2 = x^ke^{(a-bi)x} = x^ke^{ax}(\cos(bx) - i\sin(bx))$$
$$0 \le k \le m-1$$

$$y_1(x) = \frac{1}{2}(f_1(x) + f_2(x)) = x^ke^{ax}\cos bx$$
$$y_2(x) = \frac{1}{2i}(f_1(x) - f_2(x)) = x^ke^{ax}\sin bx$$

These are the real-valued solutions to the differential equation:
$$e^{ax}\cos bx, \; e^{ax}\sin bx, \; xe^{ax}\cos bx, \; xe^{ax}\sin bx, \dots, x^{m-1}e^{ax}\cos bx, \; x^{m-1}e^{ax}\sin bx$$

General Result

For the differential equation
$$P(D)y = 0$$
$$(D - r_1)^{m_1}(D - r_2)^{m_2}\cdots(D - r_k)^{m_k}y = 0$$

  1. If $r_i$ is real, the following are linearly independent solutions:
    $$e^{r_ix}, xe^{r_ix}, \dots, x^{m_i-1}e^{r_ix}$$
  2. If $r_i$ is complex ($r_i = a + bi$), then the following functions are linearly independent solutions corresponding to $a \pm bi$:
    $$e^{ax}\cos bx, xe^{ax}\cos bx, \dots, x^{m_i-1}e^{ax}\cos bx$$
    $$e^{ax}\sin bx, xe^{ax}\sin bx, \dots, x^{m_i-1}e^{ax}\sin bx$$

Special Case

For the differential equation
$$(D - r_1)(D - r_2)\cdots(D - r_n)y = 0$$
with the $r_i$ distinct, the following are solutions:
$$f_1(x) = e^{r_1x}, \quad f_2(x) = e^{r_2x}, \quad \dots, \quad f_n(x) = e^{r_nx}$$

Proof

$f_1, f_2, \dots, f_n$ are linearly independent, so with $W = \mathrm{span}(f_1, f_2, \dots, f_n)$ and $S$ the solution space,
$$W \subseteq S, \qquad \dim(W) = \dim(S) = n$$
hence $W = S$.

You can also use the Wronskian to show linear independence (via the Vandermonde determinant)

So the general solution of the differential equation is $c_1e^{r_1x} + c_2e^{r_2x} + \cdots + c_ne^{r_nx}$

Example

Find the general solution to
$$D^3(D-2)^2(D^2+1)^2y = 0$$

Solution

$$P(r) = r^3(r-2)^2(r^2+1)^2$$

$r = 0$ (multiplicity 3):
$$1, \quad x, \quad x^2$$
$r = 2$ (multiplicity 2):
$$e^{2x}, \quad xe^{2x}$$
$r = \pm i$ (multiplicity 2):
$$\cos x, \quad x\cos x, \quad \sin x, \quad x\sin x$$

The general solution is an arbitrary linear combination of these nine functions.

Example

Find a basis of the solution space for
$$y'' - 6y' + 25y = 0$$

$P(r) = r^2 - 6r + 25$
$= r^2 - 2 \cdot 3r + 9 + 16$
$= (r-3)^2 + 16$
$= (r-3)^2 - (4i)^2$
$= (r - 3 - 4i)(r - 3 + 4i)$

Roots are $3 + 4i$ and $3 - 4i$, so the complex-valued basis is:
$$\{e^{(3+4i)x}, e^{(3-4i)x}\}$$
Real-valued basis:
$f_1 = e^{(3+4i)x} = e^{3x}(\cos 4x + i\sin 4x)$
$f_2 = e^{(3-4i)x} = e^{3x}(\cos 4x - i\sin 4x)$

$$\frac{1}{2}(f_1 + f_2) = e^{3x}\cos 4x$$
$$\frac{1}{2i}(f_1 - f_2) = e^{3x}\sin 4x$$

Basis: $\{e^{3x}\cos 4x, \; e^{3x}\sin 4x\}$
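
As a quick check, sympy's dsolve recovers the same basis for this equation:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(y(x).diff(x, 2) - 6*y(x).diff(x) + 25*y(x), y(x))
print(sol)   # y(x) = (C1*sin(4*x) + C2*cos(4*x))*exp(3*x)
```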

8.3: The Method of Undetermined Coefficients: Annihilators

According to Theorem 8.1.8, the general solution to the non-homogeneous differential equation
P(D)y=F(x)
is of the form
y(x)=yc(x)+yp(x)

The previous section showed how to find the solutions to yc, the homogeneous linear differential equation. This section will explore how to find yp, a particular solution to the non-homogeneous linear differential equation.

Insert Part about Annihilators Here

Table

Just use this table here
Table with results of the annihilator method

8.4: Complex-Valued Trial Solutions

Alternative method for solving the following constant coefficient differential equation:
$$y'' + a_1y' + a_2y = F(x)$$
where $F(x) = x^ke^{ax}\sin bx$ or $F(x) = x^ke^{ax}\cos bx$

Theorem 8.4.1

If $y(x) = u(x) + iv(x)$ is a complex-valued solution to
$$y'' + a_1y' + a_2y = F(x) + iG(x)$$
then
$$u'' + a_1u' + a_2u = F(x) \quad \text{and} \quad v'' + a_1v' + a_2v = G(x)$$

Example

Find the general solution to
$$y'' + 9y = 5\cos(2x)$$

Note that $F(x) = 5\cos(2x) = \mathrm{Re}(5e^{2ix})$

$$z'' + 9z = 5e^{2ix}$$
$z_p = z_p(x)$

Note that
$$y_p(x) = \mathrm{Re}(z_p(x))$$

Since $2i$ is not a root of $r^2 + 9$, use the first row of the table to get the following:
$z_p(x) = Ae^{2ix}$
$z_p'(x) = 2iAe^{2ix}$
$z_p''(x) = A(2i)^2e^{2ix} = -4Ae^{2ix}$

$-4Ae^{2ix} + 9Ae^{2ix} = 5e^{2ix}$
$(5A - 5)e^{2ix} = 0 \Rightarrow 5A - 5 = 0 \Rightarrow A = 1$

So
$z_p(x) = e^{2ix}$
$y_p(x) = \mathrm{Re}(e^{2ix}) = \cos(2x)$
Since $y_c(x) = c_1\cos 3x + c_2\sin 3x$ (roots $\pm 3i$), the general solution is $y(x) = c_1\cos 3x + c_2\sin 3x + \cos(2x)$.

Example

$$y'' + y' - 6y = 4\cos(2x)$$
Consider the complex version of the above $F(x)$:
$$z'' + z' - 6z = 4e^{2ix}$$
Consider the homogeneous version of the above:
$$z'' + z' - 6z = 0$$
$$r^2 + r - 6 = (r - 2)(r + 3)$$
$$r = 2, \quad r = -3$$
$$z(x) = c_1e^{2x} + c_2e^{-3x}, \qquad c_1, c_2 \in \mathbb{C}$$

Find $y_p$ by finding $z_p$. Note $2i$ is not a root of $(r-2)(r+3)$, so we get the following:
$z_p(x) = Ae^{2ix}, \quad A \in \mathbb{C}$
$z_p'(x) = 2iAe^{2ix}$
$z_p''(x) = -4Ae^{2ix}$
$(-4A + 2iA - 6A)e^{2ix} = 4e^{2ix}$
$(2i - 10)Ae^{2ix} = 4e^{2ix}$
$(i - 5)A = 2$
$$A = -\frac{5+i}{13}$$
So
$$z_p(x) = -\frac{5+i}{13}e^{2ix}$$
$$y_p(x) = \mathrm{Re}(z_p(x)) = \mathrm{Re}\left(-\frac{5+i}{13}e^{2ix}\right) = \mathrm{Re}\left(-\frac{1}{13}\big(5\cos 2x - \sin 2x + i(\cos 2x + 5\sin 2x)\big)\right)$$
$$y_p(x) = -\frac{1}{13}(5\cos 2x - \sin 2x)$$
$$y(x) = y_c(x) + y_p(x)$$
$$y(x) = c_1e^{2x} + c_2e^{-3x} - \frac{1}{13}(5\cos 2x - \sin 2x)$$
with $c_1, c_2 \in \mathbb{R}$ for the real-valued general solution. A quick symbolic check of $y_p$ follows.
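
This sketch substitutes $y_p$ back into the equation:

```python
import sympy as sp

# Verify that y_p = -(1/13)(5 cos 2x - sin 2x) solves y'' + y' - 6y = 4 cos 2x
x = sp.symbols('x')
yp = -sp.Rational(1, 13) * (5*sp.cos(2*x) - sp.sin(2*x))
residual = yp.diff(x, 2) + yp.diff(x) - 6*yp - 4*sp.cos(2*x)
print(sp.simplify(residual))   # 0
```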

Example

If we wanted to solve the following:
$$y'' + y' - 6y = 4\sin 2x$$
we can reuse the above $z_p$ and take $y_p = \mathrm{Im}(z_p)$, since
$$4\sin(2x) = \mathrm{Im}(4e^{2ix})$$

8.6: RLC Circuits

Components of Electric Circuit

  • Voltage Source
    • $E(t)$
  • Resistor
    • Resistance ($R$) measured in ohms ($\Omega$)
    • Voltage Drop: $V_{\text{drop}} = IR$
  • Capacitor
    • Capacitance ($C$) measured in farads (F)
    • Voltage Drop: $V_{\text{drop}} = \frac{q}{C}$
  • Inductor
    • Inductance ($L$) measured in henrys (H)
    • Voltage Drop: $V_{\text{drop}} = L\,\frac{dI}{dt}$

Representing the Circuit as a DE

Consider a circuit with a voltage source, one resistor, one inductor, and one capacitor:
$$IR + \frac{q}{C} + L\frac{dI}{dt} = E(t)$$

$$I = I(t), \quad q = q(t), \quad q'(t) = I(t), \quad I'(t) = q''(t)$$

$$Rq' + \frac{q}{C} + Lq'' = E(t)$$

$$q'' + \frac{R}{L}q' + \frac{q}{LC} = \frac{E(t)}{L}$$

The above is a 2nd order linear differential equation with constant coefficients.

q(t)=qc(t)+qp(t)

If $E(t) = 0$:
$$q'' + \frac{R}{L}q' + \frac{1}{LC}q = 0$$
$$r^2 + \frac{R}{L}r + \frac{1}{LC} = 0$$

$$r = \frac{-R \pm \sqrt{R^2 - \frac{4L}{C}}}{2L}$$

  1. Underdamped if $R^2 < 4L/C$
    $$q_c = c_1e^{\left(-\frac{R}{2L} + i\frac{\sqrt{4L/C - R^2}}{2L}\right)t} + c_2e^{\left(-\frac{R}{2L} - i\frac{\sqrt{4L/C - R^2}}{2L}\right)t}$$
    $$q_c = e^{-\frac{R}{2L}t}(c_1\cos\mu t + c_2\sin\mu t)$$
    $$\mu = \frac{\sqrt{\frac{4L}{C} - R^2}}{2L}$$

  2. Critically Damped if $R^2 = 4L/C$
    $$q_c = c_1e^{-\frac{Rt}{2L}} + c_2te^{-\frac{Rt}{2L}}$$

  3. Overdamped if $R^2 > 4L/C$

$$q_c = e^{-\frac{Rt}{2L}}\left(c_1e^{\mu t} + c_2e^{-\mu t}\right)$$

$$\mu = \frac{\sqrt{R^2 - \frac{4L}{C}}}{2L}$$
In all three cases, $\lim_{t \to \infty} q_c(t) = 0$

So after some time, $q(t) \approx q_p(t)$

$q_p(t)$ is also known as the steady-state charge. A small classification helper follows.
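
A hedged sketch classifying the damping by comparing $R^2$ with $4L/C$ (the parameter values are assumptions for illustration):

```python
import math

def damping(R, L, C):
    disc = R**2 - 4*L/C
    if disc < 0:
        mu = math.sqrt(4*L/C - R**2) / (2*L)
        return f"underdamped, mu = {mu:.4g}"
    if disc == 0:
        return "critically damped"
    return "overdamped"

print(damping(R=2.0, L=1.0, C=0.25))   # R^2 = 4 < 4L/C = 16: underdamped
```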

9.1: First-Order Linear Systems

A system of differential equations of the following form is called a first-order linear system:

$$\frac{dx_1}{dt} = a_{11}(t)x_1(t) + a_{12}(t)x_2(t) + \cdots + a_{1n}(t)x_n(t) + b_1(t)$$
$$\frac{dx_2}{dt} = a_{21}(t)x_1(t) + a_{22}(t)x_2(t) + \cdots + a_{2n}(t)x_n(t) + b_2(t)$$
$$\vdots$$
$$\frac{dx_n}{dt} = a_{n1}(t)x_1(t) + a_{n2}(t)x_2(t) + \cdots + a_{nn}(t)x_n(t) + b_n(t)$$

where $a_{ij}(t)$ and $b_i(t)$ are specified functions on an interval $I$

If $b_1 = b_2 = \cdots = b_n = 0$, then the system is called homogeneous

Note that any $n$th order linear differential equation can be replaced by an equivalent system of first-order differential equations

Example

Convert the following system of differential equations to a first-order system:
$$\frac{dx}{dt} - ty = \cos t, \qquad \frac{d^2y}{dt^2} - \frac{dx}{dt} + x = e^t$$

Let $x_1 = x$, $x_2 = y$, $x_3 = \frac{dy}{dt} = \frac{dx_2}{dt}$

$$\frac{dx_1}{dt} = tx_2 + \cos t$$
$$\frac{dx_2}{dt} = x_3$$
$$\frac{dx_3}{dt} = -x_1 + \frac{dx_1}{dt} + e^t$$
$$\frac{dx_3}{dt} = -x_1 + (tx_2 + \cos t) + e^t$$
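
The converted system can be handed straight to a numeric solver; a sketch with scipy (the initial conditions are assumptions for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x1, x2, x3 = u
    dx1 = t*x2 + np.cos(t)
    dx2 = x3
    dx3 = -x1 + dx1 + np.exp(t)
    return [dx1, dx2, dx3]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])   # numeric values of x, y, dy/dt at t = 1
```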

9.2: Vector Formulation

The following first-order system
$$\frac{dx_1}{dt} = a_{11}(t)x_1(t) + a_{12}(t)x_2(t) + \cdots + a_{1n}(t)x_n(t) + b_1(t)$$
$$\frac{dx_2}{dt} = a_{21}(t)x_1(t) + a_{22}(t)x_2(t) + \cdots + a_{2n}(t)x_n(t) + b_2(t)$$
$$\vdots$$
$$\frac{dx_n}{dt} = a_{n1}(t)x_1(t) + a_{n2}(t)x_2(t) + \cdots + a_{nn}(t)x_n(t) + b_n(t)$$
can be written as follows:
$$\mathbf{x}'(t) = A(t)\mathbf{x}(t) + \mathbf{b}(t)$$

$$\mathbf{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}, \qquad \mathbf{x}'(t) = \begin{bmatrix} x_1'(t) \\ x_2'(t) \\ \vdots \\ x_n'(t) \end{bmatrix}$$
$$A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{bmatrix}, \qquad \mathbf{b}(t) = \begin{bmatrix} b_1(t) \\ b_2(t) \\ \vdots \\ b_n(t) \end{bmatrix}$$

$\mathbf{x}$, $\mathbf{x}'$, $\mathbf{b}$ are column $n$-vector functions.

Let $V_n(I)$ be the set of all column $n$-vector functions on the interval $I$

$$\mathbf{x}, \mathbf{x}', \mathbf{b} \in V_n(I)$$

Theorem 9.2.1

Vn(I) is a vector space.

Definition 9.2.2

If $\mathbf{x}_1(t), \mathbf{x}_2(t), \dots, \mathbf{x}_n(t)$ are all vectors in $V_n(I)$,
then the Wronskian is
$$W[\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n](t) = \det([\mathbf{x}_1(t), \mathbf{x}_2(t), \dots, \mathbf{x}_n(t)])$$

Theorem 9.2.4

If $W[\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n](t_0)$ is nonzero at any point $t_0$ in $I$, the vector-valued functions $\mathbf{x}_1(t), \mathbf{x}_2(t), \dots, \mathbf{x}_n(t)$ are linearly independent on $I$

Example

Show that $\mathbf{x}_1(t) = \begin{bmatrix} e^t \\ e^t \end{bmatrix}$ and $\mathbf{x}_2(t) = \begin{bmatrix} e^t \\ -e^t \end{bmatrix}$ are linearly independent.

$$W[\mathbf{x}_1, \mathbf{x}_2] = \begin{vmatrix} e^t & e^t \\ e^t & -e^t \end{vmatrix} = -2e^{2t} \ne 0$$

9.3: General Results for First-Order Linear Differential Systems

Theorem 9.3.1

Let $A(t)$ and $\mathbf{b}(t)$ be continuous on $I$.
The initial value problem
$$\mathbf{x}'(t) = A(t)\mathbf{x}(t) + \mathbf{b}(t), \qquad \mathbf{x}(t_0) = \mathbf{x}_0$$
has a unique solution on $I$

Homogeneous Vector Differential Equations

$$\mathbf{x}'(t) = A(t)\mathbf{x}(t)$$

Theorem 9.3.2

Let $A(t)$ be an $n \times n$ matrix function continuous on $I$. The set of all solutions of the homogeneous vector differential equation $\mathbf{x}'(t) = A(t)\mathbf{x}(t)$ is a vector space of dimension $n$

  • Any set of $n$ linearly independent solutions to the homogeneous vector differential equation is called a fundamental solution set of $\mathbf{x}' = A\mathbf{x}$
  • The corresponding matrix $X(t) = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n]$ is a fundamental matrix
  • The fundamental solution set is just a basis of the solution space of $\mathbf{x}' = A\mathbf{x}$

Theorem 9.3.4

Let $A(t)$ be an $n \times n$ matrix function that is continuous on $I$. If $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \dots, \mathbf{x}_n$ is a linearly independent set of solutions to $\mathbf{x}' = A\mathbf{x}$ on $I$, then
$$W[\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n] \ne 0$$
at every point $t$ in $I$

  • This means that to see if $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n$ is a fundamental set, we only need to compute the Wronskian at one point. If $W[\mathbf{x}_1, \dots, \mathbf{x}_n](t_0) \ne 0$, then the solutions are linearly independent (and form a basis/fundamental set), but if $W[\mathbf{x}_1, \dots, \mathbf{x}_n](t_0) = 0$, the solutions are linearly dependent on $I$.
  • The general solution to $\mathbf{x}' = A\mathbf{x}$ is just the linear combination of the elements of the basis

Proof

Show the contrapositive: If $W[\mathbf{x}_1, \dots, \mathbf{x}_n](t_0) = 0$ at some point $t_0$ in $I$, then $\mathbf{x}_1, \dots, \mathbf{x}_n$ is linearly dependent.

If $W[\mathbf{x}_1, \dots, \mathbf{x}_n](t_0) = 0$, then the constant vectors $\mathbf{x}_1(t_0), \mathbf{x}_2(t_0), \dots, \mathbf{x}_n(t_0)$ are linearly dependent by Corollary 4.5.17.

Then there exist $c_1, c_2, \dots, c_n$, not all zero, such that
$$c_1\mathbf{x}_1(t_0) + c_2\mathbf{x}_2(t_0) + \cdots + c_n\mathbf{x}_n(t_0) = \mathbf{0}$$

Let $\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t)$. Then $\mathbf{x}(t)$ is the unique solution to
$$\mathbf{x}' = A(t)\mathbf{x}(t), \qquad \mathbf{x}(t_0) = \mathbf{0}$$

We know that $\mathbf{x}(t) = \mathbf{0}$ is a solution to the IVP above, so by uniqueness
$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t) = \mathbf{0}$$

But not all $c_1, c_2, \dots, c_n$ are zero, so $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n$ are linearly dependent on $I$.

Example

$$\mathbf{x}' = A\mathbf{x}$$
$$A = \begin{bmatrix} 1 & -2 \\ 2 & 1 \end{bmatrix}$$
$$\mathbf{x}_1(t) = \begin{bmatrix} e^t\cos 2t \\ e^t\sin 2t \end{bmatrix}, \qquad \mathbf{x}_2(t) = \begin{bmatrix} -e^t\sin 2t \\ e^t\cos 2t \end{bmatrix}$$

Verify that $\{\mathbf{x}_1, \mathbf{x}_2\}$ is a fundamental set of solutions for the vector DE and write the general solution to the vector DE

Solution

  1. Find $\mathbf{x}_1'$ and $\mathbf{x}_2'$ and validate that $\mathbf{x}_1' = A\mathbf{x}_1$ and $\mathbf{x}_2' = A\mathbf{x}_2$
  2. Then show that the Wronskian is never zero, so $\{\mathbf{x}_1, \mathbf{x}_2\}$ is linearly independent and a fundamental set of solutions

Nonhomogeneous Vector Differential Equations

Let $A(t)$ be a matrix function that is continuous on $I$ and let $\{\mathbf{x}_1, \dots, \mathbf{x}_n\}$ be a fundamental set on $I$ for $\mathbf{x}'(t) = A(t)\mathbf{x}(t)$. If $\mathbf{x}_p(t)$ is a particular solution to the nonhomogeneous vector differential equation
$$\mathbf{x}'(t) = A(t)\mathbf{x}(t) + \mathbf{b}(t)$$

on $I$, then every solution to the above vector DE is of the form
$$\mathbf{x}(t) = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_n\mathbf{x}_n + \mathbf{x}_p$$

9.4: Vector Differential Equations: Nondefective Coefficient Matrix

Theorem

If $A$ is a constant, diagonalizable $n \times n$ matrix, then it is straightforward to find a basis/fundamental set for $\mathbf{x}' = A\mathbf{x}$.

Let $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ be $n$ linearly independent eigenvectors with $A\mathbf{v}_j = \lambda_j\mathbf{v}_j$ (the eigenvectors are distinct, but not necessarily the eigenvalues). Then $\{e^{\lambda_1t}\mathbf{v}_1, e^{\lambda_2t}\mathbf{v}_2, \dots, e^{\lambda_nt}\mathbf{v}_n\}$ is a fundamental set of solutions.

Proof

Since $A$ is diagonalizable, there is an invertible $n \times n$ matrix $S$ such that
$$S^{-1}AS = D$$
where $D$ is a diagonal matrix.

$$A = SDS^{-1}$$
$$\mathbf{x}' = \frac{d}{dt}\mathbf{x} = A\mathbf{x}$$
$$\frac{d}{dt}\mathbf{x} = SDS^{-1}\mathbf{x}$$
$$S^{-1}\frac{d}{dt}(\mathbf{x}) = D(S^{-1}\mathbf{x})$$
$\frac{d}{dt}$ is a linear transformation, so
$$\frac{d}{dt}(S^{-1}\mathbf{x}) = D(S^{-1}\mathbf{x})$$
Let $\mathbf{y}(t) = S^{-1}\mathbf{x}(t)$:
$$\frac{d}{dt}\mathbf{y}(t) = D\mathbf{y}$$
$$y_1' = \lambda_1y_1, \quad y_2' = \lambda_2y_2, \quad \dots, \quad y_n' = \lambda_ny_n$$
$$y_1 = c_1e^{\lambda_1t}, \quad y_2 = c_2e^{\lambda_2t}, \quad \dots, \quad y_n = c_ne^{\lambda_nt}, \qquad c_1, c_2, \dots, c_n \in \mathbb{R}$$
$$\mathbf{y}(t) = \begin{bmatrix} c_1e^{\lambda_1t} \\ c_2e^{\lambda_2t} \\ \vdots \\ c_ne^{\lambda_nt} \end{bmatrix}$$
$$\mathbf{x}(t) = S\mathbf{y} = S\begin{bmatrix} c_1e^{\lambda_1t} \\ c_2e^{\lambda_2t} \\ \vdots \\ c_ne^{\lambda_nt} \end{bmatrix}$$
$$\mathbf{x}(t) = c_1e^{\lambda_1t}\mathbf{v}_1 + c_2e^{\lambda_2t}\mathbf{v}_2 + \cdots + c_ne^{\lambda_nt}\mathbf{v}_n$$

Example

Find a fundamental set of solutions of $\mathbf{x}' = A\mathbf{x}$
$$A = \begin{bmatrix} -1 & 2 \\ 2 & 2 \end{bmatrix}$$

Solution

$$\begin{vmatrix} -1-\lambda & 2 \\ 2 & 2-\lambda \end{vmatrix} = (\lambda+1)(\lambda-2) - 4 = (\lambda-3)(\lambda+2) = 0$$
Eigenvalues: $3, -2$

$$E_3(A) = \mathrm{nullspace}\begin{bmatrix} -4 & 2 \\ 2 & -1 \end{bmatrix} = \mathrm{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right\}$$
$$E_{-2}(A) = \mathrm{nullspace}\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} = \mathrm{span}\left\{\begin{bmatrix} -2 \\ 1 \end{bmatrix}\right\}$$

$$S = \begin{bmatrix} 1 & -2 \\ 2 & 1 \end{bmatrix}$$
General Solution:
$$\mathbf{x}(t) = S\begin{bmatrix} c_1e^{3t} \\ c_2e^{-2t} \end{bmatrix}$$
Fundamental Set:
$$\left\{e^{3t}\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \; e^{-2t}\begin{bmatrix} -2 \\ 1 \end{bmatrix}\right\}$$
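
A numeric check of this example (the finite-difference step h is an assumption of the sketch):

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 2.0, 2.0]])

def x1(t): return np.exp(3*t) * np.array([1.0, 2.0])
def x2(t): return np.exp(-2*t) * np.array([-2.0, 1.0])

t, h = 0.3, 1e-6
for x in (x1, x2):
    # compare a central-difference derivative with A x(t)
    dx = (x(t + h) - x(t - h)) / (2*h)
    print(np.allclose(dx, A @ x(t), atol=1e-4))   # True, True

X = np.column_stack([x1(0.0), x2(0.0)])
print(np.linalg.det(X))   # 5.0: nonzero Wronskian, so a fundamental set
```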