
Part II Dynamical Systems Michaelmas Term 2014
lecturer: Professor Peter Haynes (phh1@cam.ac.uk)
November 14, 2014
These lecture notes, plus examples sheets and any other extra material will be made
available at http://www.damtp.cam.ac.uk/user/phh/dynsys.html.
Note that in 2014-15 this is being lectured as a D course. There is no change in the
syllabus from 2013-14, but the 2015 examination questions will be of D-course type.
Informal Introduction
Course Structure
Informal Introduction
Section 1: Introduction and Basic Definitions [3]
Section 2: Fixed Points of Flows in R2 [3]
Section 3: Stability [2]
Section 4: Existence and Stability of Periodic Orbits in R2 [5]
Section 5: Bifurcations of Flows [3]
Section 6: Fixed Points and Bifurcations of Maps [2]
Section 7: Chaos [6]
Books
There are many excellent texts. The following are those listed in the Schedules.
• P.A. Glendinning Stability, Instability and Chaos [CUP].
A very good text written in clear language.
• D.K. Arrowsmith & C.M. Place Introduction to Dynamical Systems [CUP].
Also very good and clear, covers a lot of ground.
• P.G. Drazin Nonlinear Systems [CUP].
Covers a great deal of ground in some detail. Good on the maps part of the course.
Could be the book to go to when others fail to satisfy.
• J. Guckenheimer & P. Holmes Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields [Springer]
Comprehensive treatment of most of the course material and beyond. The style is
mathematically more sophisticated than that of the lectures.
• D.W. Jordan & P. Smith Nonlinear Ordinary Differential Equations [OUP].
A bit long in the tooth and not very rigorous but has some very useful material
especially on perturbation theory.
Other books:
• S.H.Strogatz Nonlinear Dynamics and Chaos [Perseus Books, Cambridge, MA.]
An excellent informal treatment, emphasising applications. Inspirational!
• R.Grimshaw An Introduction to Nonlinear Ordinary Differential Equations [CRC
Press].
Very good on stability of periodic solutions. Quite technical in parts.
Motivation
A 'dynamical system' is a system whose configuration is described by a state space, together
with a mathematically specified rule for evolution in time. Time may be continuous, in which
case the rule for evolution is typically a differential equation, or discrete, in which case
the rule takes the form of a map from the state space to itself. The study of dynamical
systems originates in Newtonian dynamics, e.g. planetary systems, but is relevant to any
system in physics, biology, economics, etc. where the notion of time evolution is relevant.
In this course we shall be concerned with nonlinear dynamical systems, i.e. the rule for time
evolution takes the form of a nonlinear equation. Of course the evolution of some simple
systems such as a pendulum following simple harmonic motion can be expressed in terms
of linear equations: but to know how the period of the pendulum changes with amplitude
the linear equations are not adequate – we must solve a nonlinear equation. Solving a
linear system usually requires a simple set of mathematical tasks, such as determining
eigenvalues. Nonlinear systems have an amazingly rich structure, and most importantly
they do not in general have analytical solutions, or at least none expressible in terms of
elementary (or even non-elementary) functions. Thus in general we rely on a geometric
approach, which allows the determination of important characteristics of the solution
without the need for explicit solution. (The geometry here is the geometry of solutions
in the state space.) Much of the course will be taken up with such ideas.
We also study the stability of various simple solutions. A simple special solution (e.g. a
steady or periodic state) is not of much use if small perturbations destroy it (e.g. a pencil
balanced exactly on its point). So we need to know what happens to solutions that start
near such a special solution. This involves linearizing, which allows the classification of
fixed and periodic points (the latter corresponding to periodic oscillations). We shall also
develop perturbation methods, which allow us to find good approximations to solutions
that are close to well-understood simple solutions.
Many nonlinear systems depend on one or more parameters. Examples include the simple
equation $\dot{x} = \mu x - x^3$, where the parameter $\mu$ can take positive or negative values. If
$\mu > 0$ there are three stationary points, while if $\mu < 0$ there is only one. The point $\mu = 0$
is called a bifurcation point, and we shall see that we can classify bifurcations and develop
a general method for determining the solutions near such points.
Many of the systems we shall consider correspond to second order differential equations, and we shall see that these have relatively simple long-time solutions (fixed and
periodic points, essentially). In the last part of the course we shall look at some aspects of
third order (and time-dependent second order) systems, which can exhibit “chaos”. These
systems are usually treated by the study of maps (of the line or the plane) which can be
related to the dynamics of the differential system. Maps can be treated in a rigorous
manner and there are some remarkable theorems (such as Sharkovsky’s on the order of
appearance of periodic orbits in one-humped maps of the interval) that can be proved.
A simple example of a continuous-time dynamical system is the Lotka-Volterra system
describing two competing populations (e.g. r=rabbits, s=sheep):
$$\dot{r} = r(a - br - cs), \qquad \dot{s} = s(d - er - fs),$$
where a, b, c, d, e, f are (positive in this example) constants. This is a second order system
which is autonomous (time does not appear explicitly). The system lives in the state
space or phase space (r, s) ∈ [0, ∞) × [0, ∞). We regard r, s as continuous functions of
time and the dynamical system is said to describe a flow in the state space, which takes
a point describing the configuration at one time to that describing it at a later time. The
solutions follow curves in the phase space called trajectories.
Typical analysis looks at fixed points. These are at $(r, s) = (0, 0)$, $(r, s) = (0, d/f)$,
$(r, s) = (a/b, 0)$, and a solution with $r, s \neq 0$ as long as $bf \neq ce$. Assuming, as can
be proved, that at long times the solution tends to one of these, we can look at local
approximations near the fixed points. Near $(0, d/f)$, write $u = s - d/f$; then approximately
$\dot{r} = r(a - cd/f)$, $\dot{u} = -du - der/f$, so the solution tends to this point ($r = u = 0$) if
$a/c < d/f$ (so this fixed point is stable), but not otherwise. The concept of stability is
more involved than naive ideas would suggest and so we will be considering the nature
of stability. We use bifurcation theory to study the change in stability as parameters are
varied. As the stability of fixed points changes the nature of the phase portraits, i.e.
patterns of solution curves or trajectories, changes. For the Lotka-Volterra system there
are three distinct phase portraits possible, depending on the parameters. (Exercise.)
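The qualitative analysis above is easy to explore numerically. Below is a minimal Python sketch (not part of the original notes) that integrates the competing-populations system and locates the coexistence fixed point; the particular positive constants are arbitrary illustrative assumptions, not values from the notes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only; the notes keep a, b, c, d, e, f general.
a, b, c, d, e, f = 3.0, 1.0, 2.0, 2.0, 1.0, 1.0

def competition(t, u):
    r, s = u
    return [r*(a - b*r - c*s), s*(d - e*r - f*s)]

# Fixed points: (0,0), (0, d/f), (a/b, 0) and, if bf != ce, the interior point below.
interior = np.linalg.solve([[b, c], [e, f]], [a, d])
print('interior fixed point (r, s) =', interior)

sol = solve_ivp(competition, (0, 50), [0.1, 0.1], rtol=1e-8)
print('state at t = 50:', sol.y[:, -1])   # settles near one of the stable fixed points
```

Repeating the integration for different parameter choices gives the distinct phase portraits mentioned above.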
Other Lotka-Volterra models have different properties, for example the struggle between
sheep s and wolves w:
$$\dot{w} = w(-a + bs), \qquad \dot{s} = s(c - dw).$$
This system turns out to have periodic orbits.
In 2 dimensions periodic orbits are common for topological reasons, so it will be useful to
investigate their stability. Consider the system
$$\dot{x} = -y + \mu x(1 - x^2 - y^2), \qquad \dot{y} = x + \mu y(1 - x^2 - y^2).$$
In polar coordinates $r, \theta$, with $x = r\cos\theta$, $y = r\sin\theta$, we have
$$\dot{r} = \mu(r - r^3), \qquad \dot\theta = 1.$$
The case µ = 0 is special since there are infinitely many periodic orbits. This is nonhyperbolic or not structurally stable. Any small change to the value of µ makes a qualitative
change in the phase portrait.
The stability of periodic orbits can be studied in terms of maps. If a solution curve (e.g.
in a 3-D phase space) crosses a plane at a point $x_n$ and then crosses again at $x_{n+1}$, this
defines a map of the plane into itself (the Poincaré map).
Maps also arise naturally as approximations to flows, e.g. the equation $\dot{x} = \mu x - x^3$ can be
approximated using Euler's method (with $x_n = x(n\,\delta t)$) to give $x_{n+1} = x_n(\mu\,\delta t + 1) - x_n^3\,\delta t$.
Poincaré maps for 3D flows can have many interesting properties including chaos. A
famous example is the Lorenz equations $\dot{x} = \sigma(y - x)$, $\dot{y} = rx - y - xz$, $\dot{z} = -bz + xy$
(and the corresponding 2-D Poincaré map). For maps, even 1D maps such as the logistic
map $x_{n+1} = \mu x_n(1 - x_n)$ can have chaotic behaviour.
The Matlab Demo ’lorenz’ shows trajectories of the Lorenz equations.
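For readers without Matlab, the following Python sketch (not from the notes) integrates the Lorenz equations and iterates the logistic map. The Lorenz parameters are the classical chaotic choices σ = 10, r = 28, b = 8/3; this is an assumption, since the demo's settings are not specified here.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 8.0/3.0          # classical chaotic parameter values (assumed)

def lorenz(t, u):
    x, y, z = u
    return [sigma*(y - x), r*x - y - x*z, -b*z + x*y]

sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], max_step=0.01)
print('end point:', sol.y[:, -1])          # a point near the strange attractor

# Logistic map x_{n+1} = mu x_n (1 - x_n): chaotic behaviour for mu = 4.
x, mu = 0.2, 4.0
for n in range(10):
    x = mu*x*(1 - x)
    print(n + 1, x)
```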
1 Introduction and Basic Definitions

1.1 Elementary concepts
We need some notation to describe our equations.
Define a State Space (or Phase Space) E ⊆ Rn (E is sometimes denoted by X). Then
the state of the system is denoted by x ∈ E. The state depends on the time t and the
(ordinary) differential equation gives a rule for the evolution of x with t:
$$\dot{x} \equiv \frac{dx}{dt} = f(x, t), \qquad (1)$$
where $f : E \times \mathbb{R} \to E$ is a vector field.
If $\partial f/\partial t \equiv 0$ the equation is autonomous. The equation is of order n. N.B. a system of n first order equations as above is equivalent to an nth order equation in a
single dependent variable. If $d^n x/dt^n = g(x, dx/dt, \ldots, d^{n-1}x/dt^{n-1})$ then we write
$y = (x, dx/dt, \ldots, d^{n-1}x/dt^{n-1})$ and $\dot{y} = (y_2, y_3, \ldots, y_n, g)$.
Non-autonomous equations can be made (formally) autonomous by defining $y \in E \times \mathbb{R}$ by
$y = (x, t)$, so that $\dot{y} = g(y) \equiv (f(y), 1)$. (We shall assume the system is autonomous unless otherwise
stated.)
Example 1. The second order system $\ddot{x} + \dot{x} + x = 0$ can be written $\dot{x} = y$, $\dot{y} = -x - y$, so
$(x, y) \in \mathbb{R}^2$.
In this course we adopt a geometric viewpoint: rather than solving equations in terms
of “elementary” (a.k.a. tabulated) functions, look for general properties of the solutions.
Since almost all equations cannot be solved in terms of elementary functions, this is more
productive!
1.2 Initial Value Problems
Typically, seek solutions to (1) understood as an initial value problem:
Given an initial condition x(t0 ) = x0 (x0 ∈ E, t0 ∈ I ⊆ R), find a differentiable function
x(t) for t ∈ I which remains in E for t ∈ I and satisfies the initial condition and the
differential equation.
For an autonomous system we can alternatively define the solution in terms of a flow φt :
Definition 1 (Flow). The map $\phi_t(x)$ such that $\phi_t(x_0)$ is the solution at time t of $\dot{x} = f(x)$ starting at $x_0$ when $t = 0$ is called the flow through $x_0$ at $t = 0$. Thus $\phi_0(x_0) = x_0$,
$\phi_s(\phi_t(x_0)) = \phi_{s+t}(x_0)$, etc. (continuous semi-group). We sometimes write $\phi^f_t(x_0)$ to
identify the particular dynamical system leading to this flow.
Does such a solution exist? And is it unique?
Existence is guaranteed for many sensible functions by the Cauchy-Peano theorem:
Theorem 1 (Cauchy-Peano). If $f(x, t)$ is continuous and $|f| < M$ in the domain $D: \{|t - t_0| < \alpha,\ |x - x_0| < \beta\}$, then the initial value problem above has a solution for
$|t - t_0| < \min(\alpha, \beta/M)$.
But uniqueness is guaranteed only for stronger conditions on f .
Example 2 (Unique solution). $\dot{x} = |x|$, $x(t_0) = x_0$. Then $x(t) = x_0 e^{t-t_0}$ ($x_0 > 0$),
$x(t) = x_0 e^{t_0-t}$ ($x_0 < 0$), $x(t) = 0$ ($x_0 = 0$). Here f is not differentiable, but it is
continuous.
Example 3 (Non-unique solution). $\dot{x} = |x|^{1/2}$, $x(t_0) = x_0$. We still have f continuous.
Solving gives $x(t) = (t + c)^2/4$ ($x > 0$) or $x(t) = -(c - t)^2/4$ ($x < 0$). So for $x_0 > 0$, for
example, we have $x(t) = (t - t_0 + \sqrt{4x_0})^2/4$ ($t > t_0$). However for $x_0 = 0$ we have two
solutions: $x(t) = 0$ and $x(t) = (t - t_0)|t - t_0|/4$, both valid for all t and both matching
the initial condition at $t = t_0$.
Why are these different? Because in the second case the derivative of $|x|^{1/2}$ is not bounded at
the origin. To guarantee uniqueness of solutions we need a stronger property than continuity;
the function must be Lipschitz.
Definition 2 (Lipschitz property). A function f defined on a subset of $\mathbb{R}^n$ satisfies a
Lipschitz condition at a point $x_0$ with Lipschitz constant L if $\exists (L, a)$ such that $\forall x, y$
with $|x - x_0| < a$, $|y - x_0| < a$, $|f(x) - f(y)| \le L|x - y|$.
Note that Differentiable → Lipschitz → Continuous.
We can now state the result (discussed in Part IA also):
Theorem 2 (Uniqueness theorem). Consider an initial value problem for the system (1)
with $x = x_0$ at $t = t_0$. If f satisfies a Lipschitz condition at $x_0$ then the solution $\phi_{t-t_0}(x_0)$
exists and is unique and continuous in a neighbourhood of $(x_0, t_0)$.
Note that uniqueness and continuity do not mean that solutions exist for all time!
Example 4 (Finite time blowup). $\dot{x} = x^3$, $x \in \mathbb{R}$, $x(0) = 1$. This is solved by
$x(t) = 1/\sqrt{1 - 2t}$, so $x \to \infty$ as $t \to \tfrac{1}{2}$.
This does not contradict the earlier result [why not?].
From now on consider differentiable functions f unless stated otherwise.
1.3 Trajectories and Flows
Consider the o.d.e. $\dot{x} = f(x)$ with $x(0) = x_0$, or equivalently the flow $\phi_t(x_0)$.
Definition 3 (Orbit). The orbit of $\phi_t$ through $x_0$ is the set $O(x_0) \equiv \{\phi_t(x_0) : -\infty < t < \infty\}$. This is also called the trajectory through $x_0$.
Definition 4 (Forwards orbit). The forwards orbit of $\phi_t$ through $x_0$ is $O^+(x_0) \equiv \{\phi_t(x_0) : t \ge 0\}$; the backwards orbit $O^-$ is defined similarly for $t \le 0$.
Note that flows and maps can be linked by considering $x_{n+1} = \phi_{\delta t}(x_n)$.
1.4 Invariant and Limit Sets
We work by considering the phase space E and the flow $\phi_t(x_0)$, regarded as a trajectory
(directed line) in the phase space. We are mostly interested in special sets of trajectories,
arising as long-time limits of solutions from general initial conditions. These are called invariant
sets.
Definition 5 (Invariant set). A set of points $\Lambda \subset E$ is invariant under f if $x \in \Lambda \Rightarrow O(x) \subset \Lambda$. (We can also define forward and backward invariant sets in the obvious way.)
Clearly O(x) is invariant. Special cases are:
Definition 6 (Fixed point). The point $x_0$ is a fixed point (equilibrium, stationary
point, critical point) if $f(x_0) = 0$. Then $x = x_0$ for all time and $O(x_0) = \{x_0\}$.
Definition 7 (Periodic point). A point $x_0$ is a periodic point if $\phi_T(x_0) = x_0$ for some
$T > 0$, but $\phi_t(x_0) \neq x_0$ for $0 < t < T$. The set $\{\phi_t(x_0) : 0 \le t < T\}$ is called a periodic
orbit through $x_0$. T is the period of the orbit. If a periodic orbit C is isolated, so that
there are no other periodic orbits in a sufficiently small neighbourhood of C, the periodic
orbit is called a limit cycle.
Example 5 (Family of periodic orbits). Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} -y^3 \\ x^3 \end{pmatrix}.$$
This has solutions of the form $x^4 + y^4 = \text{const.}$, so all orbits except the fixed point at the
origin are periodic.
Example 6 (Limit cycle). Now consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} -y + x(1 - x^2 - y^2) \\ x + y(1 - x^2 - y^2) \end{pmatrix}.$$
Here we have $\dot{r} = r(1 - r^2)$, where $r^2 = x^2 + y^2$. There are no fixed points except the
origin and there is a unique limit cycle $r = 1$.
Definition 8 (Homoclinic and heteroclinic orbits). If $x_0$ is a fixed point and $\exists y \neq x_0$
such that $\phi_t(y) \to x_0$ as $t \to \pm\infty$, then $O(y)$ is called a homoclinic orbit. If there
are two fixed points $x_0, x_1$ and $\exists y \neq x_0, x_1$ such that $\phi_t(y) \to x_0$ ($t \to -\infty$), $\phi_t(y) \to x_1$ ($t \to +\infty$), then $O(y)$ is a heteroclinic orbit. A closed sequence of heteroclinic orbits
is called a heteroclinic cycle (sometimes also called a heteroclinic orbit!).
When the phase space has dimension greater than 2 more exotic invariant sets are possible.
Example 7 (2-Torus). Let $\theta_1, \theta_2$ be coordinates on the surface of a 2-torus, such that
$\dot\theta_1 = \omega_1$, $\dot\theta_2 = \omega_2$. If $\omega_1, \omega_2$ are not rationally related the trajectory comes arbitrarily close
to any point on the torus.
Example 8 (Strange Attractor). Anything more complicated than the above is called a strange
attractor. Examples include the Lorenz attractor.
Try the Matlab lorenz demo again. Note the term 'attractor' is justified by the fact that
trajectories tend to a set which occupies a 'small' part of phase space (a part which has
dimension less than 3). The term 'strange' is justified by the complex structure of the
attractor, which has a fractal nature.
We have to be careful in defining how invariant sets arise as limits of trajectories. It is not
enough to use a definition like "the set of points y s.t. $\phi_t(x) \to y$ as $t \to \infty$", as that does
not include e.g. limit cycles. Instead we use the following:
Definition 9 (Limit set). The ω-limit set of x, denoted by ω(x), is defined by
$$\omega(x) \equiv \{y : \phi_{t_n}(x) \to y \text{ for some sequence of times } t_1, t_2, \ldots, t_n, \ldots \to \infty\}.$$
We can also define the α-limit set using sequences $\to -\infty$.
The ω-limit set ω(x) has nice properties when O(x) is bounded. In particular, ω(x) is:
(a) Non-empty [every sequence of points in a closed bounded domain has at least one accumulation point];
(b) Invariant under f [obvious from the definition];
(c) Closed [think about the limit of a sequence of points in ω(x)] and bounded;
(d) Connected [assume not, and consider two points in the two separate parts of ω(x), then a sequence where alternate members tend to each of the two points. Sufficiently far along the sequence, the parts of the orbit between alternate members then contain points that lie outside of ω(x)].
1.5 Topological equivalence and structural stability
What do we mean by saying that two flows (or maps) have essentially the same (topological/geometric) structure? Or that the structure of a flow changes at a bifurcation?
Definition 10 (Topological Equivalence). Two flows $\phi^f_t(x)$ and $\phi^g_t(y)$ are topologically
equivalent if there is a homeomorphism $h(x) : E^f \to E^g$ (i.e. a continuous bijection
with continuous inverse) and a time-increasing function $\tau(x, t)$ (i.e. a continuous, monotonic
function of t) with
$$\phi^f_t(x) = h^{-1} \circ \phi^g_{\tau(x,t)} \circ h(x) \quad\text{and}\quad \tau(x, t_1 + t_2) = \tau(x, t_1) + \tau(\phi^f_{t_1}(x), t_2).$$
In other words it is possible to find a map h from one phase space to the other, and a map
τ from time in one phase space to time in the other, in such a way that the evolution of
the two systems is the same. Clearly topological equivalence maps fixed points to fixed
points, and periodic orbits to periodic orbits – though not necessarily of the same period.
A stronger condition is topological conjugacy, where time is preserved, i.e. τ (x, t) = t.
Example 9. The dynamical systems
$$\dot{r} = -r, \quad \dot\theta = 1 \qquad\text{and}\qquad \dot\rho = -2\rho, \quad \dot\psi = 0$$
are topologically equivalent with $h(0) = 0$, $h(r, \theta) = (r^2, \theta + \ln r)$ for $r \neq 0$ in polar
coordinates, and $\tau(x, t) = t$. To show this, integrate the ODEs to get
$$\phi^f_t(r_0, \theta_0) = (r_0 e^{-t}, \theta_0 + t), \qquad \phi^g_t(\rho_0, \psi_0) = (\rho_0 e^{-2t}, \psi_0)$$
and check
$$h \circ \phi^f_t = (r_0^2 e^{-2t}, \theta_0 + \ln r_0) = \phi^g_t \circ h.$$
(Note that in this case the two systems are also topologically conjugate.)
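The check above can be automated; the following sympy sketch (an illustration, not part of the notes) verifies symbolically that $h \circ \phi^f_t = \phi^g_t \circ h$ for Example 9.

```python
import sympy as sp

t = sp.symbols('t', real=True)
r0, th0 = sp.symbols('r0 theta0', positive=True)

phi_f = (r0*sp.exp(-t), th0 + t)                       # flow of rdot = -r, thetadot = 1
phi_g = lambda rho0, psi0: (rho0*sp.exp(-2*t), psi0)   # flow of rhodot = -2 rho, psidot = 0
h = lambda r, th: (r**2, th + sp.log(r))               # the homeomorphism (r != 0)

lhs = h(*phi_f)             # h applied after the f-flow
rhs = phi_g(*h(r0, th0))    # g-flow applied after h
print([sp.simplify(sp.expand_log(u - v, force=True)) for u, v in zip(lhs, rhs)])  # [0, 0]
```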
Example 10. The dynamical systems
$$\dot{r} = 0, \quad \dot\theta = 1 \qquad\text{and}\qquad \dot{r} = 0, \quad \dot\theta = r + \sin^2\theta$$
are topologically equivalent. This should be obvious because the trajectories are the same
and so we can put $h(x) = x$, and then stretch the timescale.
Definition 11 (Structural Stability). The vector field f [system $\dot{x} = f(x)$ or flow
$\phi^f_t(x)$] is structurally stable if $\exists \epsilon > 0$ s.t. $f + \delta$ is topologically equivalent to f $\forall \delta(x)$ with
$|\delta| + \sum_i |\partial\delta/\partial x_i| < \epsilon$.
Examples: the first system above is structurally stable. The second is not (since the
periodic orbits are destroyed by a small perturbation giving $\dot{r} \neq 0$).
2 Flows in $\mathbb{R}^2$

2.1 Linearization
In analyzing the behaviour of nonlinear systems the first step is to identify the fixed
points. Then near these fixed points, the behaviour should be approximately linear. In fact near a
fixed point $x_0$ s.t. $f(x_0) = 0$, let $y = x - x_0$; then $\dot{y} = Ay + O(|y|^2)$, where
$$A_{ij} = \left.\frac{\partial f_i}{\partial x_j}\right|_{x=x_0}$$
is the linearization of f about $x_0$. The matrix A is also written Df, the Jacobian
matrix of f at $x_0$. We hope that in general the flow near $x_0$ is topologically equivalent
to the linearized problem. This is not always true, as shown below.
2.2 Classification of fixed points
Consider the general linear system $\dot{x} = Ax$, where A is a constant matrix. We need the
eigenvalues $\lambda_{1,2}$ of the matrix, given by $\lambda^2 - \lambda\,\mathrm{Tr}A + \mathrm{Det}A = 0$. This has solutions
$$\lambda = \tfrac{1}{2}\mathrm{Tr}A \pm \sqrt{\tfrac{1}{4}(\mathrm{Tr}A)^2 - \mathrm{Det}A}.$$
We can then classify the roots into classes.
• Saddle point ($\mathrm{Det}A < 0$). Roots are real and of opposite sign. E.g. $\begin{pmatrix} -2 & 0 \\ 0 & 3 \end{pmatrix}$, $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$; canonical form $\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ with $\lambda_1\lambda_2 < 0$.

• Node ($(\mathrm{Tr}A)^2 > 4\,\mathrm{Det}A > 0$). Roots are real and either both positive ($\mathrm{Tr}A > 0$: unstable, repelling node), or both negative ($\mathrm{Tr}A < 0$: stable, attracting node). E.g. $\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$, $\begin{pmatrix} 0 & 1 \\ -1 & -3 \end{pmatrix}$; canonical form $\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ with $\lambda_1\lambda_2 > 0$.

• Focus (Spiral) ($(\mathrm{Tr}A)^2 < 4\,\mathrm{Det}A$). Roots are complex and either both have positive real part ($\mathrm{Tr}A > 0$: unstable, repelling focus), or both have negative real part ($\mathrm{Tr}A < 0$: stable, attracting focus). E.g. $\begin{pmatrix} 2 & 1 \\ -2 & 3 \end{pmatrix}$, $\begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix}$; canonical form $\begin{pmatrix} \lambda & \omega \\ -\omega & \lambda \end{pmatrix}$ with eigenvalues $\lambda \pm i\omega$.

Degenerate cases occur when the two eigenvalues are equal ($(\mathrm{Tr}A)^2 = 4\,\mathrm{Det}A \neq 0$), giving either Star/Stellar nodes, e.g. $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, or Improper nodes, e.g. $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$.
In all these cases the fixed point is hyperbolic.
Definition 12 (Hyperbolic fixed point). A fixed point x of a dynamical system is hyperbolic iff all the eigenvalues of the linearization A of the system about x have non-zero real
part.
This definition holds for higher dimensions too.
Thus the nonhyperbolic cases, which are of great importance in bifurcation theory, are
those for which at least one eigenvalue has zero real part. These are of three kinds:
• $A = 0$. Both eigenvalues are zero.

• $\mathrm{Det}A = 0$. Here one eigenvalue is zero and we have a line of fixed points, e.g. $\begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix}$.

• $\mathrm{Tr}A = 0$, $\mathrm{Det}A > 0$ (Centre). Here the eigenvalues are $\pm i\omega$ and trajectories are closed curves, e.g. $\begin{pmatrix} 0 & 2 \\ -1 & 0 \end{pmatrix}$.
All this can be summarized in a diagram.
To find the canonical form, find the eigenvectors of A and use them as basis vectors (possibly
generalised if the eigenvalues are equal), when the eigenvalues are real. For complex eigenvalues in $\mathbb{R}^2$
we have two complex eigenvectors $e, e^*$, so use $\{\mathrm{Re}(e), \mathrm{Im}(e)\}$ as a basis. This can help
in drawing trajectories. But note the classification is independent of the basis.
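As a quick illustration of the Tr–Det classification above, here is a small Python helper (a sketch, not part of the notes; the degenerate equal-eigenvalue cases are lumped in with nodes).

```python
import numpy as np

def classify_fixed_point(A, tol=1e-12):
    """Classify a 2x2 linearization x' = Ax by Tr A and Det A."""
    tr, det = np.trace(A), np.linalg.det(A)
    if det < -tol:
        return 'saddle'
    if abs(det) <= tol:
        return 'non-hyperbolic (zero eigenvalue)'
    if abs(tr) <= tol:
        return 'non-hyperbolic (centre)'
    kind = 'node' if tr*tr - 4*det >= 0 else 'focus'   # equal eigenvalues counted as nodes
    return ('stable ' if tr < 0 else 'unstable ') + kind

for A in ([[-2, 0], [0, 3]], [[2, 0], [0, 1]], [[0, 1], [-1, -3]], [[-1, 1], [-1, -1]]):
    print(A, '->', classify_fixed_point(np.array(A, dtype=float)))
```

Applied to the example matrices above it reproduces the saddle, unstable node, stable node and stable focus respectively.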
Centres are special cases in the context of general flows; but Hamiltonian systems have centres
generically. These systems have the form $\dot{x} = (H_y, -H_x)$ for some $H = H(x, y)$. At a
fixed point $\nabla H = 0$, and the matrix
$$A = \begin{pmatrix} H_{xy} & H_{yy} \\ -H_{xx} & -H_{xy} \end{pmatrix} \quad\Rightarrow\quad \mathrm{Tr}A = 0.$$
Thus all fixed points are saddles or centres. Clearly also $\dot{x}\cdot\nabla H = 0$, so H is constant on
all trajectories, i.e. trajectories are contours of $H(x, y)$.
2.3 Effect of nonlinear terms
For a general nonlinear system $\dot{x} = f(x)$, we start by locating the fixed points $x_0$, where
$f(x_0) = 0$. Then what does the linearization of the system about the fixed point $x_0$ tell us
about the behaviour of the nonlinear system?
We can show (e.g. Glendinning Ch. 4) that if
(i) $x_0$ is hyperbolic; and
(ii) the nonlinear corrections are $O(|x - x_0|^2)$,
then the nonlinear system and the linearized system are topologically conjugate.
We thus discuss separately hyperbolic and non-hyperbolic fixed points.
2.3.1 Stable and Unstable Manifolds
For the linearized system we can separate the phase space into different domains corresponding to different behaviours in time.
Definition 13 (Invariant subspaces). The stable, unstable and centre subspaces
of the linearization of f at the fixed point $x_0$ are the three linear subspaces $E^s$, $E^u$, $E^c$,
spanned by the subsets of (possibly generalised) eigenvectors of A whose eigenvalues have
real parts $< 0$, $> 0$, $= 0$ respectively.
Note that a hyperbolic fixed point has no centre subspace.
These concepts can be extended simply into the nonlinear domain for hyperbolic fixed
points. We suppose that the fixed point is at the origin and that f is expandable in a
Taylor series. We can write x˙ = Ax + f (x), where f = O(|x|2 ). We need the Stable (or
Invariant) Manifold Theorem.
Theorem 3 (Stable (or Invariant) Manifold Theorem). Suppose 0 is a hyperbolic fixed
point of $\dot{x} = f(x)$, and that $E^u$, $E^s$ are the unstable and stable subspaces of the linearization
of f about 0. Then there exist local stable and unstable manifolds $W^u_{loc}(0)$, $W^s_{loc}(0)$, which
have the same dimension as $E^u$, $E^s$ and are tangent to $E^u$, $E^s$ at 0, such that for $x \neq 0$
but in a sufficiently small neighbourhood of 0,
$$W^u_{loc} = \{x : \phi_t(x) \to 0 \text{ as } t \to -\infty\},$$
$$W^s_{loc} = \{x : \phi_t(x) \to 0 \text{ as } t \to +\infty\}.$$
Proof: rather involved; see Glendinning, p.96. The trick is to produce a near identity
change of coordinates. Suppose that for each x, x = y + z, where y ∈ E u and z ∈ E s .
The linearized stable manifold is therefore y = 0 and the linearized unstable manifold
is z = 0. We look for the stable manifold in the form y = S(z); then make a change
of variable ξ = y − S(z) so that the transformed equation has ξ = 0 as an invariant
manifold. The function S(z) can be expanded as a power series, and the idea is to check
that the expansion can be performed to all orders, giving a finite (i.e. non-zero) radius of
convergence. The unstable manifold can be constructed in a similar way.
The local stable manifold $W^s_{loc}$ can be extended to a global invariant manifold $W^s$ by
following the flow backwards in time from points in $W^s_{loc}$. Correspondingly $W^u_{loc}$ can also
be extended to a global invariant manifold.
It is easy to find approximations to the stable and unstable manifolds of a saddle point
in $\mathbb{R}^2$. The stable (say) manifold must tend to the origin and be tangent to the stable
subspace $E^s$ (i.e. to the eigenvector corresponding to the negative eigenvalue). (It is
often easiest, though not necessary, to change to coordinates such that $x = 0$ or $y = 0$
is tangent to the manifold.) Then for example if we want to find the manifold (for 2D
flows just a trajectory) that is tangent to $y = 0$ at the origin for the system $\dot{x} = f(x, y)$,
$\dot{y} = g(x, y)$, write $y = p(x)$; then
$$g(x, p(x)) = \dot{y} = p'(x)\dot{x} = p'(x)f(x, p(x)),$$
which gives a nonlinear ODE for p(x). In general this cannot be solved exactly, but we
can find a (locally convergent) series expansion in the form $p(x) = a_2 x^2 + a_3 x^3 + \ldots$, and
solve term by term.
Example 11. Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} x \\ -y + x^2 \end{pmatrix}.$$
This can be solved exactly to give $x = x_0 e^t$, $y = \frac{1}{3}x_0^2 e^{2t} + (y_0 - \frac{1}{3}x_0^2)e^{-t}$, or $y(x) = \frac{1}{3}x^2 + (y_0 - \frac{1}{3}x_0^2)x_0 x^{-1}$. Two obvious invariant
curves are $x = 0$ and $y = \frac{1}{3}x^2$, and $x = 0$ is clearly the stable manifold. The linearization
about 0 gives the matrix $A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ and so the unstable manifold must be tangent to
$y = 0$; $y = \frac{1}{3}x^2$ fits the bill. To find it constructively write $y(x) = a_2 x^2 + a_3 x^3 + \ldots$. Then
$$\frac{dy}{dt} = \dot{x}\frac{dy}{dx} = (2a_2 x + 3a_3 x^2 + \ldots)x = -a_2 x^2 - a_3 x^3 + \ldots + x^2.$$
Equating coefficients, we find $a_2 = \frac{1}{3}$, $a_3 = 0$, etc.
Example 12. Now
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} x - xy \\ -y + x^2 \end{pmatrix};$$
there is no simple form for the unstable manifold (the stable manifold is still $x = 0$). The unstable manifold has the form $y = ax^2 + bx^3 + cx^4 + \ldots$, where [exercise] $a = \frac{1}{3}$, $b = 0$, $c = \frac{2}{45}$, etc. Note that this infinite series (in
powers of $x^2$) has a finite radius of convergence since the unstable manifold of the origin
is attracted to a stable focus at (1, 1).
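The term-by-term construction of the unstable manifold can be checked with a few lines of sympy; the sketch below (an illustration, not part of the notes) solves the invariance condition $g(x, p(x)) = p'(x)f(x, p(x))$ order by order for Example 12 and should reproduce $a = 1/3$, $b = 0$, $c = 2/45$.

```python
import sympy as sp

x = sp.symbols('x')
N = 6                                      # truncation order (enough for the first few terms)
coeffs = sp.symbols('a2:%d' % N)           # a2, a3, a4, a5
p = sum(c*x**k for k, c in zip(range(2, N), coeffs))   # trial manifold y = p(x)

f = x - x*p                                # xdot restricted to the manifold (Example 12)
g = -p + x**2                              # ydot restricted to the manifold

residual = sp.expand(g - sp.diff(p, x)*f)  # invariance condition, must vanish order by order
eqs = [residual.coeff(x, k) for k in range(2, N)]
print(sp.solve(eqs, coeffs, dict=True)[0]) # expect {a2: 1/3, a3: 0, a4: 2/45, a5: 0}
```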
2.3.2 Nonlinear terms for non-hyperbolic cases
We now suppose that there is at least one eigenvalue on the imaginary axis. We concentrate
on $\mathbb{R}^2$; the generalization is not difficult. There are three possibilities:
(i) A has eigenvalues $\pm i\omega$. The linear system is a centre. The nonlinear systems have
different forms for different r.h.s.'s:
• Stable focus: $\dot{x} = -y - x^3$, $\dot{y} = x - y^3$.
• Unstable focus: $\dot{x} = -y + x^3$, $\dot{y} = x + y^3$.
• Nonlinear centre: $\dot{x} = -y - 2x^2 y$, $\dot{y} = x + 2y^2 x$.
(ii) A has one zero eigenvalue, the other e.v. non-zero, e.g. (a) $\dot{x} = x^2$, $\dot{y} = -y$ [Saddle-node],
(b) $\dot{x} = x^3$, $\dot{y} = -y$ [Nonlinear Saddle].
(iii) Two zero eigenvalues. Here almost anything is possible. Change to polar coordinates.
Find the lines as $r \to 0$ on which $\dot\theta = 0$. Between each of these lines we can have three different
types of behaviour. (See diagram.)
2.4 Sketching phase portraits
This often involves some good luck and good judgement! Nonetheless there are some
guidelines which, if followed, will give a good chance of success. The general procedure is
as follows:
• 1. Find the fixed points, and find any obvious invariant lines, e.g. $x = 0$ when $\dot{x} = x h(x, y)$, etc.
• 2. Calculate the Jacobian and hence find the type of fixed point. (Accurate calculation
of eigenvalues etc. for nodes may not be needed for sufficiently simple systems - just find
the type.) Do find eigenvectors for saddles.
• 3. If fixed points are non-hyperbolic get the local picture by considering the nonlinear terms.
• 4. If still puzzled, find the nullclines, where $\dot{x}$ or $\dot{y}$ (or $\dot{r}$ or $\dot\theta$) are zero.
• 5. Construct the global picture by joining up local trajectories near fixed points (especially
saddle separatrices) and put in arrows.
• 6. Use the results of Section 4 to decide whether there are periodic orbits.
Example 13 (worked example). Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} x(1 - y) \\ -y + x^2 \end{pmatrix}.$$
The Jacobian is $A = \begin{pmatrix} 1 - y & -x \\ 2x & -1 \end{pmatrix}$. Fixed points are at $(0, 0)$ (saddle) and $(\pm 1, 1)$, where $A = \begin{pmatrix} 0 & \mp 1 \\ \pm 2 & -1 \end{pmatrix}$; $(\mathrm{Tr}A)^2 = 1 < 4\,\mathrm{Det}A = 8$, so these are stable
foci. $x = 0$ is a trajectory, $\dot{x} = 0$ on $y = 1$, and $\dot{y} \lessgtr 0$ when $y \gtrless x^2$.
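A short sympy check of the worked example (a sketch, not part of the notes):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Matrix([x*(1 - y), -y + x**2])
J = f.jacobian([x, y])

for fp in sp.solve(list(f), [x, y], dict=True):
    A = J.subs(fp)
    print(fp, 'TrA =', A.trace(), 'DetA =', A.det(), 'eigenvalues:', list(A.eigenvals()))
# (0,0): eigenvalues 1, -1 (saddle); (+/-1, 1): (-1 +/- i*sqrt(7))/2 (stable foci)
```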
3 Stability

3.1 Definitions of stability
If we find a fixed point, or more generally an invariant set, of a dynamical system we want
to know what happens to the system under small perturbations away from the invariant
set. We also want to know which invariant sets will be approached at large times. If in
some sense the solution stays “nearby”, or the set is approached after long times, then
we call the set stable. There are several differing definitions of stability. We will consider
stability of whole invariant sets (and not just of points in those sets). This shortens the
discussion.
Consider an invariant set Λ in a general (autonomous) dynamical system described by a
flow φt . (This could be a fixed point, periodic orbit, torus etc.) We need a definition of
points near the set Λ:
Definition 14 (Neighbourhood of a set Λ). For $\delta > 0$ the neighbourhood $N_\delta(\Lambda) = \{x : \exists y \in \Lambda \text{ s.t. } |x - y| < \delta\}$.
We also need to define the concept of a flow trajectory 'tending to' Λ.
Definition 15 (Flow tending to Λ). The flow $\phi_t(x) \to \Lambda$ iff $\min_{y\in\Lambda}|\phi_t(x) - y| \to 0$ as $t \to \infty$.
Definition 16 (Lyapunov stability) [LS]. The set Λ is Lyapunov stable if $\forall \epsilon > 0\ \exists \delta > 0$ s.t. $x \in N_\delta(\Lambda) \Rightarrow \phi_t(x) \in N_\epsilon(\Lambda)\ \forall t \ge 0$. ("Start near, stay near".)
Definition 17 (Quasi-asymptotic stability) [QAS]. The set Λ is quasi-asymptotically
stable if $\exists \delta > 0$ s.t. $x \in N_\delta(\Lambda) \Rightarrow \phi_t(x) \to \Lambda$ as $t \to \infty$. ("Get arbitrarily close
eventually".)
Definition 18 (Asymptotic stability) [AS]. The set Λ is asymptotically stable if it is
both Lyapunov stable and quasi-asymptotically stable.
Example 14 (The origin is LS but not QAS). Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} -y \\ x \end{pmatrix}.$$
All limit sets are circles or the origin.
Example 15 (QAS but not LS). Consider
$$\begin{pmatrix} \dot{r} \\ \dot\theta \end{pmatrix} = \begin{pmatrix} r(1 - r^2) \\ \sin^2(\theta/2) \end{pmatrix}.$$
The point $r = 1$, $\theta = 0$ is a saddle-node.
Example 16. Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} y - \mu x \\ -2\mu y \end{pmatrix} \qquad (\mu > 0).$$
Here $y = y_0 e^{-2\mu t}$, $x = x_0 e^{-\mu t} + y_0\mu^{-1}(e^{-\mu t} - e^{-2\mu t})$. Thus for $t \ge 0$, $|y| \le |y_0|$, $|x| \le |x_0| + \frac{1}{4}\mu^{-1}|y_0|$, and so $x^2 + y^2 \le (x_0^2 + y_0^2)(1 + \mu^{-1} + \mu^{-2}/16)$. This proves Lyapunov stability. Furthermore the solution clearly tends to
the origin as $t \to \infty$.
This example is instructive because for µ sufficiently small the solution can grow to
large values before eventually decaying. To require that the norm of the solution decays
monotonically is a stronger result, only applicable to a small number of problems.
If an invariant set is not LS and not QAS we say it is unstable (or according to some
books, nonstable).
There may be more than one asymptotically stable limit set. In that case we want to
know what parts of the phase space lead to which limit sets being approached. Then we
define the basin of attraction (or domain of stability):
Definition 19. If Λ is an asymptotically stable invariant set, the basin of attraction
of Λ is $B(\Lambda) \equiv \{x : \phi_t(x) \to \Lambda \text{ as } t \to \infty\}$. If $B(\Lambda) = \mathbb{R}^n$ then Λ is globally attracting
(or globally stable). Note that $B(\Lambda)$ is an open set.
When there are many fixed points the basin of attraction can be quite complicated.
When Λ is an isolated fixed point (x0 , say) we can investigate its stability by linearizing
the system about x0 (see previous section of notes).
Theorem 4 (Stability of hyperbolic fixed points). If 0 is a hyperbolic stable focus or
stable node then it is asymptotically stable. If 0 is a hyperbolic fixed point with at least
one eigenvalue with Reλ > 0, then it is unstable.
3.2 Lyapunov functions
We can prove much about stability of a fixed point (which, for convenience, will be taken to
be at the origin) if we can find a suitable positive function V of the independent variables
that is zero at the origin and decreases monotonically under the flow φt . Then under
certain reasonable conditions we can show that V → 0 so that the appropriately defined
distance from the origin of the solution similarly tends to zero. This is a Lyapunov
function, defined precisely by
Definition 20 (Lyapunov function). Let E be a closed connected region of $\mathbb{R}^n$ containing
the origin. A function $V : \mathbb{R}^n \to \mathbb{R}$ which is continuously differentiable except perhaps at
the origin is a Lyapunov function for a flow φ if (i) $V(0) = 0$, (ii) V is positive definite ($V(x) > 0$ when $x \neq 0$), and (iii) $V(\phi_t(x)) \le V(x)\ \forall x \in E$ (or equivalently
$\dot{V} \le 0$ on trajectories).
Then we have the following theorems:
Theorem 5 (Lyapunov's First Theorem [L1]). Suppose that a dynamical system $\dot{x} = f(x)$
has a fixed point at the origin. If a Lyapunov function exists, as defined above, then the
origin is Lyapunov stable.
Theorem 6 (Lyapunov's Second Theorem [L2]). If in addition $\dot{V} < 0$ for $x \neq 0$ then V
is a strict Lyapunov function and the origin is asymptotically stable.
An important point is that level sets (e.g. contours in 2-D) of V(x) can be used to define
neighbourhoods of the origin. In particular, for sufficiently small ε, there is an α such
that $V(x) < \alpha$ implies that $|x| < \epsilon$, and correspondingly there is a β such that $V(x) > \beta$
implies that $|x| > \epsilon$.
Proof of First Theorem: We want to show that for any sufficiently small neighbourhood
U of the origin, there is a neighbourhood V s.t. if $x_0 \in V$, then $\phi_t(x_0) \in U$ for all positive t.
Let $U = \{x : |x| < \epsilon\} \subseteq E$ and let $\alpha = \min\{V(x) : |x| = \epsilon\}$. Clearly $\alpha > 0$ from the
definition of V. Now consider the set $U_1 = \{x \in U : V(x) < \alpha\}$. Certainly $U_1$ contains
the origin, but furthermore there is a $\delta > 0$ such that $V = \{x : |x| < \delta\} \subseteq U_1$. Then
$x \in V$ implies that $V(\phi_t(x)) < \alpha\ \forall t \ge 0$, since V does not increase along trajectories.
Hence $\phi_t(x)$ does not leave $U_1$, and hence it does not leave U, as required.
Proof of Second Theorem: [L1] implies that x remains in the domain U. For any initial point $x_0 \neq 0$, $V(x_0) > 0$ and $\dot{V} < 0$ along trajectories. Thus $V(\phi_t(x_0))$ decreases
monotonically and is bounded below by 0. Then $V \to \alpha \ge 0$; suppose $\alpha > 0$. Then
$|x|$ is bounded away from zero (by continuity of V), and so $\dot{V} < -b < 0$ for some $b > 0$. Thus
$V(\phi_t(x_0)) < V(x_0) - bt$ and so is certainly negative after a finite time. This is a
contradiction, and so $\alpha = 0$. Hence $V \to 0$ as $t \to \infty$ and so $|x| \to 0$ (again by continuity).
This proves asymptotic stability.
To prove results about instability just reverse the sense of time. Sometimes we can
demonstrate asymptotic stability even when V is not a strict Lyapunov function. For this
we need another theorem, La Salle's Invariance Principle:
Theorem 7 (La Salle's Invariance Principle). If V is a Lyapunov function for a flow φ,
then $\forall x$ such that $\phi_t(x)$ is bounded, $\exists c$ s.t. $\omega(x) \subseteq M_c \equiv \{x : V(\phi_t(x)) = c\ \forall t \ge 0\}$. (Or:
$\phi_t(x) \to$ an invariant subset of the set $\{y : \dot{V}(y) = 0\}$.)
Proof: choose a point x and let $c = \inf_{t\ge 0} V(\phi_t(x))$. By assumption $\phi_t(x)$ remains finite
and ω(x) is not empty. Then if $y \in \omega(x)$, ∃ a sequence of times $t_n$ s.t. $\phi_{t_n}(x) \to y$ as
$n \to \infty$; then by continuity of V, and since $\dot{V} \le 0$ on trajectories, we have $V(y) = c$. We
need to prove that $y \in M_c$ (i.e. that $V(\phi_t(y)) = c$ for $t \ge 0$). Suppose to the contrary
that $\exists s$ s.t. $V(\phi_s(y)) < c$. Then for all z sufficiently close to y we have $V(\phi_s(z)) < c$. But
if $z = \phi_{t_n}(x)$ for sufficiently large n we have $V(\phi_{t_n+s}(x)) < c$, which is a contradiction.
This proves the theorem.
As a corollary we note that if V is a Lyapunov function on a bounded domain D and
the only invariant subset of $\{y : \dot{V}(y) = 0\}$ is the origin, then the origin is asymptotically
stable.
The following are examples of the use of the Lyapunov theorems.
Example 17 (Finding the basin of attraction). Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} -x + xy^2 \\ -2y + yx^2 \end{pmatrix}.$$
We can ask: what is the best condition on x which guarantees that $x \to 0$ as $t \to \infty$? Consider
$V(x, y) = \frac{1}{2}(x^2 + b^2 y^2)$ for constant b. Then $\dot{V} = -(x^2 + 2b^2 y^2) + (1 + b^2)x^2 y^2$. We can
then show easily that $\dot{V} < 0$ if $V < (3 + 2\sqrt{2})b^2/2(1 + b^2)$ [check by setting $z = by$ and using polars for $(x, z)$].
The domain of attraction of the origin certainly includes the union of these sets over all
values of b.
Example 18 (Damped pendulum). Here
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} y \\ -ky - \sin x \end{pmatrix}$$
with $k > 0$. We can choose $V = \frac{1}{2}y^2 + 1 - \cos x$; clearly V is positive definite provided we identify x and $x + 2\pi$,
and $\dot{V} = -ky^2 \le 0$. So certainly the origin is Lyapunov stable by [L1]. But we cannot use
[L2] since $\dot{V}$ is not negative definite. Nonetheless from La Salle's principle we see that the
set $M_c$ is contained in the set of complete orbits satisfying $y = 0$. The only such orbits
are $(0, 0)$ ($c = 0$) and $(\pi, 0)$ ($c = 2$). So these points are the only possible members of ω(x).
Since the origin is Lyapunov stable we conclude that for all points x s.t. $V < 2$ the only
member of ω(x) is the origin, and so this point is asymptotically stable.
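The Lyapunov-function calculations in Examples 17 and 18 are easy to verify symbolically; here is a minimal sympy sketch (not from the notes):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
k, b = sp.symbols('k b', positive=True)

# Example 18 (damped pendulum): V = y^2/2 + 1 - cos x; expect Vdot = -k*y**2.
f = sp.Matrix([y, -k*y - sp.sin(x)])
V = y**2/2 + 1 - sp.cos(x)
Vdot = (sp.Matrix([V]).jacobian([x, y]) * f)[0]    # dV/dt along trajectories
print(sp.simplify(Vdot))

# Example 17: V = (x^2 + b^2 y^2)/2; expect -(x^2 + 2 b^2 y^2) + (1 + b^2) x^2 y^2.
g = sp.Matrix([-x + x*y**2, -2*y + y*x**2])
W = (x**2 + b**2*y**2)/2
print(sp.expand((sp.Matrix([W]).jacobian([x, y]) * g)[0]))
```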
Special cases are gradient flows.
Definition 21 A system is called a gradient system or gradient flow if we can write
x˙ = −∇V (x).
In this case we have V˙ = −|∇V |2 ≤ 0, with V˙ < 0 except at the fixed points which
have |∇V | = 0. Thus we can use La Salle’s principle to show that Ω = {ω(x) : x ∈ E}
consists only of the fixed points. Note that this does NOT mean that all the fixed
points are asymptotically stable (see diagram). These ideas can be extended to more
general systems of the form x˙ = −h∇V , where h(x) is a strictly positive continuously
differentiable function. [Proof: exercise].
3.3 Bounding functions
Even when we cannot find a Lyapunov function in the exact sense, we can sometimes
find positive definite functions V s.t. V˙ < 0 outside some neighbourhood of the origin.
We call these bounding functions. They are used to show that x remains in some
neighbourhood of the origin.
Theorem 8. Let $V : E \subseteq \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function, such that
for each $k > 0$, $V_k = \{x \in E : V(x) < k\}$ is a simply connected bounded domain with
$V_k \subset V_{k'}$ if $k < k'$. Then if there is a simply connected compact domain $D \subset E$ such that,
for all $x \notin D$, $\dot{V}(x) < -\delta < 0$ for some $\delta > 0$, and there is a $\kappa > 0$ such that $D \subset V_\kappa$,
then all orbits eventually enter and remain inside the set $V_\kappa$.
Proof: exercise (see diagram).
4 Existence and stability of periodic orbits in $\mathbb{R}^2$
Example 19 (Damped pendulum with torque). Consider the system $\ddot\theta + k\dot\theta + \sin\theta = F$,
$k > 0$, $F > 0$. We would like to know whether there is a periodic orbit of this equation.
We can find a bounding function of the form $V = \frac{1}{2}p^2 + 1 - \cos\theta$ ($p = \dot\theta$). Then $\dot{V} = p\dot{p} + \dot\theta\sin\theta = p(F - kp)$. Thus $\dot{V} < 0$ unless $0 < p < F/k$. The maximum value of V on the
boundary of this domain is $V_{max} = \frac{1}{2}(F/k)^2 + 2$. By analogy with the previous result on bounding
functions we see that all orbits eventually enter and remain in the region $V < V_{max}$.
What happens within this region? We can look for fixed points: these are at $p = 0$,
$\sin\theta = F$. So there are 2 fixed points if $F < 1$, and none if $F > 1$. In the first case one fixed
point is stable (node or focus depending on k) and the other is a saddle. In the second
case what can happen? Either there is a closed orbit or, possibly, a space-filling curve.
There are some nontrivial theorems that we can use to answer this.
4.1 The Poincaré Index
Closed curves can be distinguished by the number of rotations of the vector field f as the
curve is traversed. This property of a curve in a vector field is very useful in understanding
the phase portrait.
Definition 22 (Poincaré Index). Consider a vector field $f = (f_1, f_2)$. At any point the
direction of f is given by $\psi = \tan^{-1}(f_2/f_1)$ (with the usual conventions). Now let x
traverse a closed curve C; ψ will increase by some multiple (possibly negative) of 2π.
This multiple is called the Poincaré index $I_C$ of C.
This index can be put in integral form. We have
$$I_C = \frac{1}{2\pi}\oint_C d\psi = \frac{1}{2\pi}\oint_C d\tan^{-1}\frac{f_2}{f_1} = \frac{1}{2\pi}\oint_C \frac{f_1\,df_2 - f_2\,df_1}{f_1^2 + f_2^2}.$$
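The integral can also be evaluated numerically as a winding number; the sketch below (an illustration, not part of the notes) does this for a saddle, a stable focus and a saddle-node, in agreement with results 6 and 7 listed below.

```python
import numpy as np

def poincare_index(f, radius=0.5, centre=(0.0, 0.0), n=4000):
    """Winding number of the direction of f around a circle enclosing the fixed point."""
    t = np.linspace(0.0, 2*np.pi, n)
    x = centre[0] + radius*np.cos(t)
    y = centre[1] + radius*np.sin(t)
    f1, f2 = f(x, y)
    psi = np.unwrap(np.arctan2(f2, f1))      # continuous angle psi along the curve
    return round((psi[-1] - psi[0]) / (2*np.pi))

print(poincare_index(lambda x, y: (x, -y)))            # saddle: -1
print(poincare_index(lambda x, y: (-x - y, x - y)))    # stable focus: +1
print(poincare_index(lambda x, y: (x**2, -y)))         # saddle-node: 0
```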
In fact the index is most easily worked out by hand. There are several results about Poincaré
indices that are easily proved and that make them easier to calculate.
1. The index takes only integer values, and is continuous when the vector
field has no zeroes. It therefore is the same for two curves which can be deformed
into each other without crossing any fixed point.
2. The index of any curve not enclosing any fixed point is zero. This is because
it can be shrunk to zero size.
3. The index of a curve enclosing a number of fixed points is the sum of the
indices for curves enclosing the fixed points individually. This is because
the curve can be deformed into small curves surrounding each fixed point together
with connecting lines along which the integral cancels.
4. The index of a curve for x˙ = f (x) is the same as that of the system
x˙ = −f (x). Proof: Consider effect on integral representation of the change f → −f .
5. The index of a periodic orbit is +1. The vector field is tangent to the orbit at
every point.
6. The index of a saddle is −1, and of a node or focus +1. By inspection, or
by noting that in complex notation a saddle can be written, in suitable coordinates,
as $\dot{x} = x$, $\dot{y} = -y$, or $\dot{z} = \bar{z}$. For a node or focus, a curve can be found such that
trajectories cross it in the same direction.
7. Indices of more complicated, non hyperbolic points can be found by
adding the indices for the simpler fixed points that may appear under
perturbation. This is because a small change in the system does not change the
index round a curve where the vector field is smooth. e.g. index of a saddle-node
x˙ = x2 , y˙ = −y is zero, since indices of saddle and node cancel.
The most important result that follows from the above is that any periodic orbit contains
at least one fixed point. In fact the total number of nodes and foci enclosed by a periodic
orbit must exceed the total number of saddles enclosed by the orbit by one. Proof: simple
exercise.
4.2 Poincaré-Bendixson Theorem
This remarkable result, which only holds in R2 , is very useful for proving the existence of
periodic orbits.
Theorem 9 (Poincaré-Bendixson). Consider the system $\dot{x} = f(x)$, $x \in \mathbb{R}^2$, and suppose
that f is continuously differentiable. If the forward orbit $O^+(x)$ remains in a compact
(closed and bounded) set D (i.e. once the orbit enters D then it stays in D) containing
no fixed points, then ω(x) contains a periodic orbit.
We can apply this directly to the pendulum equation with F > 1 to show that there is at
least one (stable) periodic orbit. However we cannot rule out multiple periodic orbits.
The proof of the Poincaré-Bendixson Theorem is quite complicated. Before starting it is
useful to establish a preliminary result concerning multiple intersections of a trajectory
with a transversal. A transversal is a line or curve (typically not closed) that trajectories
cross from one side to the other. A transversal can be defined in a neighbourhood of any
point that is not a fixed point. (See diagram.)
In $\mathbb{R}^2$ transversals have the important property that, if a trajectory has multiple intersections with the transversal, then successive intersections move monotonically along the
transversal. (See diagram.) Note that this property has no counterpart in higher dimensions.
Proof of the Poincaré-Bendixson Theorem: We first note that since $O^+(x)$ remains in a
compact set D, then ω(x) is non-empty, with $\omega(x) \subset D$.
Choose $y \in \omega(x)$ and then $z \in \omega(y)$. Note that $y \in D$ and hence $z \in D$, and that neither
is a fixed point (by assumption). We show first that $O^+(y)$ is a periodic orbit. Pick a
local transversal Σ through z; then $O^+(y)$ makes intersections with Σ arbitrarily close to
z. [Choose a set of points on $O^+(y)$ that come arbitrarily close to z. Given continuity
of f(x) in the neighbourhood of z, the trajectory through each point must have a nearby
intersection with Σ.]
If $O^+(y)$ intersects Σ at z then $O^+(y)$ is closed. [$z \in O^+(y)$; if the next intersection of $O^+(y)$
is not at z then successive intersections must move away from z by the monotonicity
property. This is inconsistent with $z \in \omega(y)$.]
If $O^+(y)$ does not intersect Σ at z then suppose one intersection is at $y_1$ and the next
is at $y_2$. But $y_1 \in \omega(x)$ and $y_2 \in \omega(x)$, implying that there is an intersection of $O^+(x)$
with Σ that is arbitrarily close to $y_1$, a later intersection that is arbitrarily close to $y_2$,
and later than that, an intersection that is arbitrarily close to $y_1$. But this is inconsistent
with the monotonicity property, so $y_1 = y_2$ and $O^+(y)$ is closed.
Furthermore $z = y_1 = y_2$, otherwise z cannot be in ω(y). So $z \in O^+(y)$ and hence
$z \in \omega(x)$ (because the latter is invariant). Hence $O^+(y) \subset \omega(x)$. In fact $O^+(y) = \omega(x)$,
since if $\hat{y} \notin O^+(y)$ then it cannot be the case that there is an infinite sequence of points
in $O^+(x)$ that come arbitrarily close to $\hat{y}$. (But note also that ω(x) may be different for
different x.)
Example 20. Consider the system
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} x - y + 2x^2 + axy - x(x^2 + y^2) \\ y + x + 2xy - ax^2 - y(x^2 + y^2) \end{pmatrix}.$$
In polar coordinates we get $\dot{r} = r + 2r^2\cos\theta - r^3$, $\dot\theta = 1 - ar\cos\theta$. Then $\dot{r} > 0$ for $r < \sqrt{2} - 1$, and $\dot{r} < 0$ for
$r > \sqrt{2} + 1$. Thus the trajectories enter the annulus $\sqrt{2} - 1 \le r \le \sqrt{2} + 1$. For any fixed
points we have $1 + 2r\cos\theta - r^2 = 0$, $1 - ar\cos\theta = 0$. Hence $x = 1/a$, $r = \sqrt{1 + 2/a}$.
So there are fixed points in the annulus only if $\sqrt{2} - 1 < \sqrt{1 + 2/a} < \sqrt{2} + 1$. The first
inequality is always satisfied, the second requires $1/a < 1 + \sqrt{2}$, i.e. $a > \sqrt{2} - 1$. Thus if
$a < \sqrt{2} - 1$ then there must be a periodic orbit in the annulus.
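A numerical check of the trapping annulus (an illustrative sketch, not part of the notes; the value of a is an arbitrary choice below the threshold $\sqrt{2} - 1$):

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 0.2                                    # any a < sqrt(2) - 1 ~ 0.414

def rhs(t, u):
    x, y = u
    r2 = x*x + y*y
    return [x - y + 2*x*x + a*x*y - x*r2,
            y + x + 2*x*y - a*x*x - y*r2]

for u0 in ([0.05, 0.0], [3.0, 0.0]):       # start inside and outside the annulus
    sol = solve_ivp(rhs, (0, 100), u0, rtol=1e-8, atol=1e-10)
    print(u0, '-> final r =', np.hypot(*sol.y[:, -1]))
# both runs end with sqrt(2) - 1 <= r <= sqrt(2) + 1, consistent with Poincare-Bendixson
```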
4.3 Dulac's criterion and the divergence test
Suppose that $\dot{x} = f(x)$ has a periodic orbit C. Then f is tangent to C at every point, and
since there are no fixed points on C we have that $\rho(x)f(x)\cdot n = 0$ everywhere on C, where
ρ is a $C^1$ function and n is the unit outward normal. Then
$$\oint_C \rho f\cdot n\,d\ell = \int_A \nabla\cdot(\rho f)\,dA = 0.$$
Thus unless ρ and f are such that $\nabla\cdot(\rho f)$ takes different signs within the periodic orbit,
we have a contradiction. This is a special case of
Theorem 10 (Dulac’s negative criterion). Consider a dynamical system x˙ = f , x ∈ Rn .
If there is a C 1 scalar function ρ(x), such that ∇·(ρf ) < 0 everywhere (or > 0 everywhere)
in some domain E ⊆ Rn , then there are no invariant sets of dimension n−1 wholly within
E and enclosing finite volume.
Proof: suppose there is such a set S enclosing a volume $V \subset E$. Then since S is invariant,
$\rho(x)f(x)\cdot n = 0$, where n is the outward normal to S. But
$$\int_S \rho f\cdot n\,dS = 0 = \int_V \nabla\cdot(\rho f)\,dV.$$
But $\nabla\cdot(\rho f)$ is one-signed by assumption and so the integral on the right-hand side cannot
be zero.
Often it is only necessary to take $\rho = 1$. If the conditions of the theorem are satisfied in $\mathbb{R}^2$ then
there are no periodic orbits in E; if in $\mathbb{R}^3$, there are no invariant 2-tori, etc.
Example 21 (Lorenz system). Here we are in $\mathbb{R}^3$ and the equations are
$$\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{pmatrix} = \begin{pmatrix} \sigma y - \sigma x \\ rx - y - zx \\ -bz + xy \end{pmatrix}.$$
Hence $\nabla\cdot f = -\sigma - 1 - b < 0$. Thus this system has no invariant sets of dimension 2 that
enclose non-zero volume. Note that periodic orbits, which are not of dimension 2 and do
not enclose a volume, are not excluded.
Example 22. Return to Example 19 (damped pendulum with torque) and consider the
phase space $-\infty < p < \infty$, $0 \le \theta < 2\pi$ to be the surface of a cylinder. We have seen
that when $F > 1$ there are no fixed points and so, by Poincaré-Bendixson, there must be
a periodic orbit in the region $V < V_{max}$. However $\nabla\cdot f = -k < 0$ and so, by Dulac's
criterion, there are no periodic orbits not encircling the cylinder. Orbits that do encircle
the cylinder (orbits of 'rotational' type) are not ruled out, since they have no 'interior'
and so the theorem does not apply. But now suppose there are two periodic orbits (both
necessarily encircling the cylinder). Consider the integral of $\nabla\cdot f$ over the region between
the two orbits (in an extension of the standard form of Dulac's criterion) to obtain a
contradiction. Thus there is a unique periodic orbit, of rotational type, when $F > 1$.
Since it is unique and trajectories enter the region $V < V_{max}$ it must be stable.
Example 23 (Predator-Prey equations). These take the general form
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} x(A - a_1 x + b_1 y) \\ y(B - a_2 y + b_2 x) \end{pmatrix},$$
where $a_1, a_2 \ge 0$. The lines $x = 0$, $y = 0$ are invariant.
Are there periodic orbits in the first quadrant? Consider $\rho = (xy)^{-1}$. Then
$$\nabla\cdot(\rho f) = \frac{\partial}{\partial x}\big(y^{-1}(A - a_1 x + b_1 y)\big) + \frac{\partial}{\partial y}\big(x^{-1}(B - a_2 y + b_2 x)\big) = -a_1 y^{-1} - a_2 x^{-1} < 0.$$
So there are no periodic orbits unless $a_1 = a_2 = 0$. In this case we potentially have an infinite number of
periodic orbits, which are contours of $U(x, y) = A\ln y - B\ln x + b_1 y - b_2 x$, provided these
are closed curves. [When is this true?]
Another useful result excluding periodic orbits relies on the fact that on such orbits the
vector field f is everywhere tangent to the orbit and is never zero.
Theorem 11 (Gradient criterion). Consider a dynamical system $\dot{x} = f$ where f is defined
throughout a simply connected domain $E \subset \mathbb{R}^2$. If there exists a positive function ρ(x)
such that $\rho f = \nabla\psi$ for some single-valued function ψ, then there are no periodic orbits
lying entirely within E.
Proof: if $\rho f = \nabla\psi$ then for a periodic orbit C, $\oint_C \rho f\cdot d\ell = \oint_C \nabla\psi\cdot d\ell = 0$. But this is
impossible as $\rho f\cdot d\ell$ has the same sign everywhere on C.
Example 24. Consider the system
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} 2x + x^2 y - y^3 \\ -2y - xy^2 + x^3 \end{pmatrix}.$$
It is hard to apply Dulac here. But in fact $e^{xy}f = \nabla\big(e^{xy}(x^2 - y^2)\big)$ and so there are no periodic orbits.
4.4 Near-Hamiltonian flows
Many systems of importance have a Hamiltonian structure; that is, they may be written
in the form
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} H_y \\ -H_x \end{pmatrix}.$$
Then it is easy to see that $\frac{dH}{dt} = \dot{x}H_x + \dot{y}H_y = 0$, so that the curves $H = \text{const.}$ are
invariant. If these curves are closed then they are periodic orbits, but not limit cycles
since they are not isolated. The Jacobian of f at a fixed point is
$$\begin{pmatrix} H_{xy} & H_{yy} \\ -H_{xx} & -H_{xy} \end{pmatrix}.$$
Thus the trace is always zero and fixed points are either saddles or (nonlinear) centres.
Example 25. Consider $\ddot{x} + x - x^2 = 0$. Writing $y = \dot{x}$, we have an equation in Hamiltonian
form, with $H = \frac{1}{2}(y^2 + x^2) - \frac{1}{3}x^3$. There are two fixed points, at $y = 0$, $x = 0, 1$, which
are a centre and a saddle respectively. Using symmetry about the x-axis we can construct
the phase portrait.
Writing a system in Hamiltonian form with perturbations is a useful approach to finding
conditions for periodic orbits. Suppose we have the system
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} H_y + g_1(x, y) \\ -H_x + g_2(x, y) \end{pmatrix}; \qquad (2)$$
then we can see that
$$\frac{dH}{dt} = g_2\dot{x} - g_1\dot{y}. \qquad (3)$$
If there is a closed orbit C, we have from the above that $\oint_C dH = \oint_C g_2\,dx - g_1\,dy = 0$. If we
can show that this line integral cannot vanish, we can deduce that there are no periodic
orbits.
Example 26. Consider
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} y \\ -x + x^2 + \epsilon y(a - x) \end{pmatrix} \qquad (\epsilon > 0).$$
Choose the same H as in the previous Example 25; then for a periodic orbit C we must have $\oint_C y^2(a - x)\,dt = 0$.
We can see from the equation that the extrema of x are reached when $y = 0$, and that
the fixed points of the system are still at $(0, 0)$ (focus/node) and $(1, 0)$ (saddle) (Exercise:
verify this). The index results show that no periodic orbit can enclose both fixed points,
so the maximum value of x on a periodic orbit is 1. Thus if $a > 1$ there are no periodic
orbits (however small ε). (Note that $\dot{x} = y$ implies that if $x > 1$ somewhere on an orbit
then $(1, 0)$ must be enclosed by that orbit.)
If the flow is nearly Hamiltonian, and the value of H is such that the Hamiltonian orbit is
closed, we can derive an approximate relation for the rate of change of H. From equation
(3) we have exactly $dH/dt = g_2\dot{x} - g_1\dot{y}$. If $g_1, g_2$ are very small, then H changes very
slowly and trajectories are almost closed. If we average over a period of the Hamiltonian
flow, then the effect of fluctuations in the r.h.s. around the (almost-closed) orbit is
averaged out and we deduce an equation for the slow variation of H:
$$\frac{dH}{dt} \approx \mathcal{F}(H) = \langle g_2\dot{x} - g_1\dot{y}\rangle = \Delta H/P(H), \qquad (4)$$
where the brackets denote the average over a period of the Hamiltonian flow, with the
quantities to be averaged evaluated for the Hamiltonian flow itself. ∆H is the predicted
(small) change in H over one period and P(H) is the period of an orbit in the Hamiltonian
flow as a function of the value that the Hamiltonian takes on that orbit. (The error made in
assuming that the $g_i = 0$ when calculating $\dot{x}$ as a function of x (and H) leads to errors
of order $|g_i|^2$.) If the reduced system described by this equation has a fixed point, this
corresponds to a periodic orbit of the nearly-Hamiltonian flow. If it has no fixed points then
the nearly-Hamiltonian flow has no periodic orbits. This method is known as the energy
balance or Melnikov method, though the latter term is usually used for the application of
the method to non-autonomous perturbations to Hamiltonian flows.
Example 27. Consider the same example as above but now with $\epsilon \ll 1$. Then for the
Hamiltonian flow (with $\epsilon = 0$) we have orbits described by $y = \pm\sqrt{2H - x^2 + \frac{2}{3}x^3}$ (with
H here corresponding to the value of the Hamiltonian function on a particular orbit). The
equation (4) for the slow variation of H is then
$$\frac{dH}{dt} = \frac{\Delta H}{P(H)} = \frac{\epsilon}{P(H)}\oint (a - x)\,y^2\,dt = \frac{2\epsilon\int_{x_{min}}^{x_{max}} (a - x)\big(2H - x^2 + \frac{2}{3}x^3\big)^{1/2}\,dx}{2\int_{x_{min}}^{x_{max}} \big(2H - x^2 + \frac{2}{3}x^3\big)^{-1/2}\,dx},$$
where $x_{min}$, $x_{max}$ are the extrema of x on the orbit. The denominator of the r.h.s. of this
equation is an explicit expression for P(H). The integrals are written in terms of x by
noting that, for this system, $y\,dt = dx$. Setting the l.h.s. $= 0$ gives the possible values of
H, for a specified value of a, for which the perturbed system has a periodic orbit. The
integrals in general cannot be done exactly, except for the special case where $H = 1/6$, for
which the corresponding orbit of the unperturbed Hamiltonian system is the homoclinic
orbit passing through (1, 0). The corresponding value of dH/dt is $< 0$ for $a < 1/7$ and $> 0$
for $a > 1/7$. For H small, i.e. for a small closed orbit around the fixed point at (0, 0), it
is clear that dH/dt has the same sign as a. These results suggest that there is no stable
periodic orbit for $a < 0$ (trajectories spiral slowly into the origin), that for $0 < a < 1/7$
there is a stable periodic orbit, and that for $a > 1/7$ there is no periodic orbit (trajectories
move slowly outward to the homoclinic orbit through the fixed point and then rapidly outward
along an unbounded trajectory of the unperturbed system). (We see from this that the bound
$a = 1$, deduced earlier for the existence of periodic orbits, is not a very good estimate, at
least at small ε!)
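The zero of the energy-balance integral can be located numerically; the following sketch (not part of the notes; the value of a is an arbitrary choice in $(0, 1/7)$) finds the value of H at which $\oint (a - x)y^2\,dt$ vanishes, i.e. the predicted periodic orbit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def turning_points(H):
    """Roots x_min < 0 < x_max of 2H - x^2 + (2/3)x^3 = 0, valid for 0 < H < 1/6."""
    g = lambda x: 2*H - x**2 + (2.0/3.0)*x**3
    return brentq(g, -1.0, 0.0), brentq(g, 0.0, 1.0)

def M(H, a):
    """Numerator of (4): proportional to the orbit average of (a - x) y^2."""
    xmin, xmax = turning_points(H)
    integrand = lambda x: (a - x)*np.sqrt(max(2*H - x**2 + (2.0/3.0)*x**3, 0.0))
    return quad(integrand, xmin, xmax)[0]

a = 0.1                                   # 0 < a < 1/7, so a stable periodic orbit is expected
Hstar = brentq(lambda H: M(H, a), 1e-4, 1.0/6.0 - 1e-6)
print('periodic orbit predicted at H =', Hstar)
# As a increases towards 1/7 the zero moves towards H = 1/6 (the homoclinic orbit);
# for a > 1/7, M(H, a) > 0 throughout (0, 1/6) and no periodic orbit is predicted.
```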
4.5 Stability of Periodic Orbits

4.5.1 Floquet multipliers and Lyapunov exponents
While individual points on a periodic orbit are not fixed points and it therefore does not
make sense to consider their stability, we can consider the stability of the whole orbit as
an invariant set. We can develop a theory (Floquet theory) for determining whether an
orbit is asymptotically stable to infinitesimal disturbances. Consider again ẋ = f(x) (in Rⁿ) and suppose there is a periodic orbit x = x̂(t). Letting x = x̂(t) + ξ(t), and linearizing, we find

ξ̇ = A(t)ξ,   where A(t) = Df|_{x=x̂(t)}.     (5)
This is a linear ordinary differential equation with periodic coefficients, and there is much
theory concerning it. We want to know what happens to an initial disturbance ξ(0) after
one period P of the original orbit. Integrating equation (5) from t = 0 to t = P , we can
exploit linearity to write the relation ξ(P ) = Fξ(0), where F is a matrix that depends only
on Df on the orbit and not on ξ(0). The eigenvalues of this matrix are called Floquet
multipliers. One of them is always unity for an autonomous system, because equation (5) is always solved by ξ(t) = f(x̂(t)). (This represents a perturbation along the orbit.)
Another way to find the Floquet multipliers is via a map. We construct a local transversal subspace Σ through x̂(0). Then all trajectories close enough to the periodic orbit intersect Σ in the same direction as the periodic orbit. Successive intersections of trajectories with Σ define a map (the Poincaré [Return] Map) Φ : Σ → Σ. If z₀ is the intersection of x̂ with Σ then Φ(z₀) = z₀, so that z₀ is a fixed point. Linearizing about this point we have Φ(z) = z₀ + (DΦ)(z − z₀), where DΦ is an (n − 1) × (n − 1) matrix. Then the Floquet
multipliers can be defined as the eigenvalues of DΦ. This method suppresses the trivial
unit eigenvalue described above. It is easy to prove that the choice of intersection with
the periodic orbit does not affect the eigenvalues of DΦ [Exercise].
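As a concrete illustration (an invented example, not one from the notes), the Floquet multipliers can be computed numerically by integrating the variational equation (5) alongside the orbit over one period and taking the eigenvalues of the resulting matrix F. The system below has the known periodic orbit x = cos t, y = sin t with period 2π, and the computed multipliers come out as approximately 1 (the trivial one) and e^{−4π}:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative system with a known periodic orbit:
#   xdot = x - y - x*(x^2+y^2),  ydot = x + y - y*(x^2+y^2)
# has the limit cycle x = cos t, y = sin t with period P = 2*pi.
def f(u):
    x, y = u
    r2 = x * x + y * y
    return np.array([x - y - x * r2, x + y - y * r2])

def Df(u):
    x, y = u
    return np.array([[1 - 3 * x * x - y * y, -1 - 2 * x * y],
                     [1 - 2 * x * y, 1 - x * x - 3 * y * y]])

def rhs(t, w):
    # w = (x, y, Phi flattened); Phi solves Phi' = Df(x(t)) Phi, Phi(0) = I
    u, Phi = w[:2], w[2:].reshape(2, 2)
    return np.concatenate([f(u), (Df(u) @ Phi).ravel()])

P = 2 * np.pi
w0 = np.concatenate([[1.0, 0.0], np.eye(2).ravel()])
sol = solve_ivp(rhs, (0, P), w0, rtol=1e-10, atol=1e-12)
F = sol.y[2:, -1].reshape(2, 2)     # monodromy matrix: xi(P) = F xi(0)
print(np.linalg.eigvals(F))         # eigenvalues ~ 1 (trivial) and exp(-4*pi)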
We can define the stability of the orbit to small perturbations in terms of the multipliers
µi , i = 1, 2, . . . , (n − 1).
Definition 23 (Hyperbolicity). A periodic orbit is hyperbolic if none of the µi lie on
the unit circle.
Then we have the theorem on stability analogous to that for fixed points:
Theorem 12
(i) A periodic orbit x̂(t) is asymptotically stable (a sink) if all the µᵢ satisfy |µᵢ| < 1.
(ii) If at least one of the µᵢ has modulus greater than unity then the orbit is unstable (i.e. not Lyapunov stable).
Proof: very similar to that for fixed points: simple exercise.
Definition 24 (Floquet exponents). The Floquet Exponents λᵢ of a periodic orbit are defined as λᵢ = P⁻¹ log|µᵢ|, where the µᵢ are the Floquet multipliers and P is the period.
These are a measure of the rate of separation of nearby orbits. The term ’Lyapunov
exponent’ is sometimes used for this special case of periodic orbits, and is widely used for
the extension to any type of trajectory, i.e. the Lyapunov exponent measures the rate of
separation of nearby trajectories.
In R² there is only one non-trivial µ, which must be real and positive. Recall the proof of the Poincaré–Bendixson Theorem – the monotonicity property requires that a trajectory close to a periodic orbit either moves systematically towards it (µ < 1) or systematically away from it (µ > 1).
In this case we have

µ = exp( ∫_0^P ∇·f(x̂(t)) dt ).

Proof: consider an infinitesimal rectangle of length δs and width δξ at x̂(0), with the four corners of the rectangle following trajectories for t > 0. Then A(0) = δξ δs. By the standard result for the evolution of area, Ȧ = ∫_{∂A} f·n dS ∼ (∇·f) A. Thus A(P)/A(0) = µ = exp( ∫_0^P ∇·f(x̂(t)) dt ), as required.
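A small self-contained numerical check of this formula, using the Van der Pol system in the Liénard form that appears in Section 4.6 below (β = 1 is chosen purely for illustration): the divergence is ∇·f = −F′(x) = −β(x² − 1), and accumulating its integral around one numerically computed period gives µ < 1, confirming stability of the limit cycle.

import numpy as np
from scipy.integrate import solve_ivp

beta = 1.0
F = lambda x: beta * (x**3 / 3 - x)

def rhs(t, u):
    x, y, s = u            # s accumulates the divergence integral
    return [y - F(x), -x, -beta * (x * x - 1.0)]

def cross(t, u):           # upward crossings of the section x = 0
    return u[0]
cross.direction = 1

sol = solve_ivp(rhs, (0, 200), [2.0, 0.0, 0.0], events=cross, rtol=1e-10, atol=1e-12)
t_ev, u_ev = sol.t_events[0], sol.y_events[0]
P = t_ev[-1] - t_ev[-2]                   # period, after transients have died out
mu = np.exp(u_ev[-1, 2] - u_ev[-2, 2])    # exp of div f integrated over one period
print(P, mu)                              # mu < 1: the limit cycle is stable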
*Remark*: when we are in Rn , n ≥ 3 the µi may be complex; this leads to a wide variety
of possible ways in which periodic orbits can lose stability:
• µ passes through +1. This is similar to bifurcations of fixed points (saddle-node,
pitchfork etc). (See later.)
• µ passes through e^{±2ikπ/n} (k, n coprime). This leads to a new orbit that has a period approximately n times the original. In particular when n = 2 we have twice the period.
• µ passes through e^{±2iνπ}, ν irrational. The new solution is a 2-torus.
Example 28 (Damped Pendulum with Torque) Here we have ∇ · f = −k, and so µ < 1
and the periodic orbit that we have already shown to exist is thus stable.
4.6 Example – the Van der Pol oscillator
This much studied equation can be derived from elementary electric circuit theory, incorporating a nonlinear resistance. It takes the form
Ẍ + (X² − β)Ẋ + X = 0,
which is the equation for a damped oscillator. If β > 0 then we have negative damping
for small X, positive damping for large X. Thus we expect that oscillations grow to finite
amplitude and then stabilize. It is convenient to scale the equation by writing X = √β x, so that

ẍ + β(x² − 1)ẋ + x = 0.     (6)
This equation is a special case of the Liénard equation ẍ + f(x)ẋ + g(x) = 0. For analysis it is convenient to use the Liénard transformation: we write y = ẋ + F(x), where F(x) = ∫_0^x f(s) ds. Then in terms of x, y we have

ẋ = y − F(x),   ẏ = −g(x).     (7)
For the Van der Pol system, we have g(x) = x, F(x) = β((1/3)x³ − x). If β > 0 then it is easily seen that the (only) fixed point, at the origin, is unstable. It is hard to apply Poincaré–Bendixson, however, since the distance from the origin does not decrease monotonically at large distances. However, we expect that there is a stable limit cycle. We can use qualitative methods to show this, but first we look at the cases of small and large β.
(a) β ≪ 1. In this case the system is nearly Hamiltonian, with H = ½(x² + y²), and g₁ = −F(x), g₂ = 0. Hence the Hamiltonian flow is parametrized by x = √(2H) sin t, y = √(2H) cos t, and the period P(H) is 2π. Thus the energy balance method yields

ΔH = −∫_0^{2π} y Ḟ(x) dt = β∫_0^{2π} (x² − x⁴/3) dt = β∫_0^{2π} (2H sin²t − (4H²/3) sin⁴t) dt = 2πβ(H − H²/2).
Clearly ∆H > 0 if H < 2 and ∆H < 0 if H > 2, so the equation (4) has a stable fixed
point at H = 2, corresponding to a stable periodic orbit with x ≈ 2 sin t.
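The average above is easy to confirm with a computer algebra system; a minimal sympy check of the integral (reproducing ΔH = 2πβ(H − H²/2)):

import sympy as sp

# Verify the averaged energy equation for the Van der Pol system, beta << 1,
# using the unperturbed orbit x = sqrt(2H) sin t, y = sqrt(2H) cos t.
t, H, beta = sp.symbols('t H beta', positive=True)
x = sp.sqrt(2 * H) * sp.sin(t)
y = sp.sqrt(2 * H) * sp.cos(t)
F = beta * (x**3 / 3 - x)
dH = -sp.integrate(y * sp.diff(F, t), (t, 0, 2 * sp.pi))
print(sp.simplify(dH))   # pi*beta*H*(2 - H), i.e. 2*pi*beta*(H - H**2/2)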
(b) β ≫ 1. If x² is not close to unity then the damping term is very large, so we might expect ẋ = O(β⁻¹); in fact if we write Y = yβ⁻¹ we get

ẋ = β(Y − (1/3)x³ + x),   Ẏ = −xβ⁻¹,

so Ẏ is very small and Y varies only slowly. Either |ẋ| ≫ 1 or else Y ≈ (1/3)x³ − x. Furthermore the latter, which defines a curve in the x, Y plane, is 'stable' only if the gradient of the curve is positive, i.e. only if x² > 1 in this particular case.
So the trajectory follows a branch of the curve Y = (1/3)x³ − x with positive gradient (the slow manifold) until it runs out, and then moves quickly (with ẋ ≫ Ẏ) to another branch
of the curve with positive gradient. The geometry of the curve allows a periodic orbit.
Periodic behaviour of this kind is called a relaxation oscillation.
Assuming that there is such a periodic orbit (to be proved below) we can find an approximation to the period. Almost all the time is taken on the slow manifold, so the period
P is given by
P = 2∫_A^B dt = 2∫_{−2/3}^{2/3} dY/Ẏ = 2∫_{−2/3}^{2/3} (−β/x) dY = 2β∫_{−2}^{−1} ((1 − x²)/x) dx = β(3 − 2 ln 2),

where the Y-integral is taken along the left-hand branch of the slow manifold (on which x runs from −2 to −1).
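A rough numerical check of this estimate (assuming the Liénard form (7); the parameter values are arbitrary): the measured period divided by β tends, slowly, towards 3 − 2 ln 2 ≈ 1.614 as β grows.

import numpy as np
from scipy.integrate import solve_ivp

def period(beta):
    F = lambda x: beta * (x**3 / 3 - x)
    def rhs(t, u):
        x, y = u
        return [y - F(x), -x]
    def cross(t, u):          # upward crossings of the section x = 0
        return u[0]
    cross.direction = 1
    sol = solve_ivp(rhs, (0, 40 * beta), [2.0, 0.0], events=cross,
                    method='LSODA', rtol=1e-8, atol=1e-8)
    tc = sol.t_events[0]
    return tc[-1] - tc[-2]    # time between the last two crossings

for beta in [10.0, 50.0, 100.0]:
    print(beta, period(beta) / beta)   # tends (slowly) to 3 - 2*ln 2 = 1.6137...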
We now prove that there is a periodic orbit for all positive β. As a preliminary step note
the general pattern of trajectories by considering the four regions of the plane bounded by
the nullclines. These regions are (I): x > 0, y > F(x), (II): x > 0, y < F(x), (III): x < 0, y < F(x) and (IV): x < 0, y > F(x). If the trajectory starts in region I then x is increasing and y is decreasing. Thus the trajectory must cross the curve y = F(x) and enter region II. Now both x and y are decreasing, and ẏ decreases in magnitude as x decreases. Thus the trajectory must cross into region III, where y is increasing and x is decreasing. Hence
the trajectory enters region IV and, continuing the argument, eventually enters region I
again.
To construct a proof, first consider the function V_k(x, y) = ½{x² + (y − k)²}. Then

V̇_k = x(y − F(x)) − yx + kx = −xF(x) + kx = −β((1/3)x⁴ − x²) + kx.

Note in particular that V̇₀ = −β((1/3)x⁴ − x²), so that V₀ is increasing along trajectories for |x| < √3 and is decreasing for |x| > √3. We deduce that V̇₀ > 0 if V₀ < 3/2 and hence that trajectories starting in x² + y² < 3 eventually enter the region x² + y² > 3.
We now show that there is a region (containing x² + y² < 3) that trajectories cannot leave. See diagram below. (x² + y² < 3 is shown as a dotted circle.)
Consider a trajectory starting at A = (−x₀, y₀) with x₀ > √3 and y₀ > 0. Choose y₀ sufficiently large that y − F(x) stays positive until the trajectory reaches x = −√3 at B. Note that V̇₀ < 0 for −x₀ < x < −√3, so B lies within the circle passing through A and centred at O. (The points B2 and C2 are marked as lying on this circle.) But ẏ > 0 for −x₀ < x < −√3, so B lies above B1 = (−√3, y₀).
[Diagram: the trapping region, showing the trajectory through A, B, C, D, the points B1, B2, B3, C1, C2, C3, D3, E1, the lines y = y₀, y = y₁, y = −y₀, x = x₀, x = −x₀, and the circle x² + y² = 3.]
Now consider −√3 < x < √3. V̇₀ > 0 in this region, so the trajectory moves outward across circles centred on the origin. In particular the entire trajectory in this region lies outside the circular arc B1C1. Hence ẋ > y₀ − (2/3)β in this region. The change in V₀ along the trajectory from x = −√3 to x = √3 is given by
ΔV₀ = ∫_{x=−√3}^{x=√3} β(x² − (1/3)x⁴) dt ≤ (1/(y₀ − (2/3)β)) ∫_{x=−√3}^{x=√3} β(x² − (1/3)x⁴) dx = 4√3 β / {5(y₀ − (2/3)β)}.
Hence at x = √3 the trajectory lies within the circle centred on the origin with radius (x₀² + y₀² + 8√3β/{5(y₀ − (2/3)β)})^{1/2}, shown by the circular arc B3C3D3. Hence the intersection D of the trajectory with x = x₀ lies below the point D3 = (x₀, y₁), where y₁² = y₀² + 8√3β/{5(y₀ − (2/3)β)}.
Now consider the circle centred on the y-axis and passing through D3 and E1 = (x₀, −y₀). This circle is a contour of the function V_k(x, y) where k = ½(y₁ − y₀). Using the previously derived expression, V̇_k < 0 in x > 0 provided that β((1/3)x₀⁴ − x₀²) > kx₀. Since k = ½(y₁ − y₀) < 2√3β/{5y₀(y₀ − (2/3)β)}, this condition is satisfied provided that β((1/3)x₀² − 1)x₀ > 2√3β/{5y₀(y₀ − (2/3)β)}, i.e. it becomes easier to satisfy as y₀ becomes larger.
From the above we deduce that there is a continuous curve, defined by the trajectory ABCD, the line DD3 (note that ẋ > 0 on this line) and the circle D3E1, that trajectories cannot cross from the 'inside'. Now complete this curve to a closed curve by reflecting about the origin. We deduce that no trajectories leave the enclosed region. It follows from Poincaré–Bendixson that there is at least one closed orbit in this enclosed region, with the orbit enclosing the region x² + y² < 3.
We can also prove that there is a unique closed orbit C0 . Assume that there is a second
closed orbit C1 , shown with C0 in the diagram below. In each case only half the orbit is
shown, with the other half obtained by reflection in the origin.
[Diagram: half of each orbit is shown, with points A0, B0, C0 on the orbit C0 and A1, B1, C1 on the orbit C1, together with the lines y = y₀, y = y₁, y = −y₀ and y = −y₁.]
If C0 is closed then the value of the function V₀(x, y) must be the same at A0 as at C0. By previous results we have that V₀ increases on the section of orbit with −√3 < x < √3 and decreases on the section with x > √3. Thus we have

V₀|B0 − V₀|A0 = V₀|B0 − V₀|C0 > 0.
Now consider the change in V₀ along the orbit C1. For the part of the orbit A1B1

V₀|B1 − V₀|A1 = ∫_{A1}^{B1} V̇₀ dt = ∫_{A1}^{B1} V̇₀ dx/(y − F(x)) < ∫_{A0}^{B0} V̇₀ dx/(y − F(x)) = V₀|B0 − V₀|A0,

where the inequality follows from the fact that V̇₀ is a function of x alone, and that, for given x, y − F(x) is larger on C1 than on C0. For the part of the orbit B1C1
V₀|B1 − V₀|C1 = ∫_{C1}^{B1} V̇₀ dt = ∫_{C1}^{B1} (V̇₀/x) dy > ∫_{C0}^{B0} (V̇₀/x) dy = V₀|B0 − V₀|C0,

where the inequality follows from the fact that V̇₀/x is an increasing function of x and that, for given y such that −y₀ < y < ȳ₀, the value of x is greater on C1 than on C0, with the integral along C1 also having positive contributions from −y₁ < y < −y₀ and ȳ₀ < y < ȳ₁. It follows that
V₀|B1 − V₀|C1 > V₀|B0 − V₀|C0 = V₀|B0 − V₀|A0 > V₀|B1 − V₀|A1.

Hence V₀|C1 < V₀|A1 and the orbit C1 is not closed.
5 Bifurcations
5.1 Introduction
We return now to the notion of dynamical systems depending on one or more parameters
µ1 , µ2 , . . .. We are interested in parameter values for which the system is not structurally
stable. Recall the definitions:
Definition 25 (Topological equivalence). Two vector fields f, g and associated flows φ^f, φ^g are topologically equivalent if ∃ a homeomorphism (1–1, continuous, with continuous inverse) h : Rⁿ → Rⁿ, and a map τ(t, x) → R, strictly increasing in t, s.t.

τ(t + s, x) = τ(s, x) + τ(t, φ^f_s(x)),   and   φ^g_{τ(t,x)}(h(x)) = h(φ^f_t(x)).

(Structural stability). The vector field f is structurally stable if for all twice differentiable vector fields v ∃ ǫ_v > 0 such that f is topologically equivalent to f + ǫv for all 0 < ǫ < ǫ_v.
It turns out that if for a given f (x, µ) we vary the set of parameters µ then we will have
structural stability in general except on certain sets in µ-space with dimension less than
the entire space. We define a bifurcation point as a point in µ-space where f is not
structurally stable. A bifurcation (change in structure of the solution) will occur when
µ is varied to pass through such a point. A bifurcation diagram is a plot of, e.g., the
location of the fixed points and the amplitudes of the periodic orbits as functions of µ.
5.2 Stationary bifurcations in R²
5.2.1 One-dimensional bifurcations
Stationary bifurcations occur when one eigenvalue of the Jacobian at a given fixed point
is zero. These are best understood initially in one dimension. Suppose we have a 1-D dynamical system ẋ = f(x, µ), and that when µ = 0 the equation has a fixed point at the origin, which is non-hyperbolic. Thus we have f(0, 0) = fx(0, 0) = 0 (subscripts denote
partial derivatives). There are three possible types of bifurcation involving one parameter.
The first is the generic case, and the others occur under more restrictive conditions. In
each case we can sketch the bifurcation diagram.
1. Saddle-Node Bifurcation. ẋ = µ − x². We have x = +√µ (stable) and x = −√µ (unstable) when µ > 0, a saddle-node at 0 when µ = 0 and no fixed points for µ < 0.
2. Transcritical Bifurcation. ẋ = µx − x². There are fixed points at x = 0 and x = µ, which exchange stability at µ = 0.
3. Pitchfork Bifurcation. ẋ = µx ∓ x³. Fixed point at x = 0, also at x = ±√(±µ) when ±µ > 0. The (... − x³) case is called supercritical, the other case subcritical; in the supercritical case the bifurcating solutions are both stable, in the subcritical case the bifurcating solutions are both unstable.
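These three normal forms are easy to explore by computer; the following sympy sketch (illustrative only, not part of the notes) lists the fixed points and the sign of fx there, which determines their stability, at µ = 1:

import sympy as sp

x, mu = sp.symbols('x mu', real=True)

# Fixed points and stability (sign of df/dx) for the three normal forms at mu = 1.
normal_forms = {
    'saddle-node': mu - x**2,
    'transcritical': mu * x - x**2,
    'supercritical pitchfork': mu * x - x**3,
}
for name, f in normal_forms.items():
    f1 = f.subs(mu, 1)
    fps = sp.solve(sp.Eq(f1, 0), x)
    info = [(fp, sp.sign(sp.diff(f1, x).subs(x, fp))) for fp in fps]
    print(name, info)   # sign -1 => stable fixed point, +1 => unstable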
More insight is gained by considering variations of two independent parameters. Suppose
we have
x˙ = µ1 + µ2 x − x2 .
This includes both families (1) and (2) as special cases. Fixed points are at
x = (µ₂ ± √(µ₂² + 4µ₁))/2,   provided that µ₂² + 4µ₁ > 0.

There is a single non-hyperbolic fixed point on the parabola µ₂² + 4µ₁ = 0, and no fixed point if µ₂² + 4µ₁ < 0.
Clearly passing through any point of the parabola yields a saddle-node bifurcation. To see
a transcritical bifurcation, it is necessary for the path in parameter space to be tangential
to the parabola. e.g. if we vary µ2 at fixed µ1 then in general the only bifurcations are
saddle-nodes, except when µ1 = 0.
We say that the saddle-node bifurcation is codimension 1 (i.e. in µ-space the bifurcation set has dimension one less than that of the entire parameter space). The o.d.e. ẋ = µ − x² is a universal unfolding of the saddle-node ẋ = −x² (i.e. it captures the structure in µ-space around the bifurcation point). We can also say that ẋ = µ − x² is the normal
form for the saddle-node bifurcation, in the sense that generic vector fields near the
saddle-node can be reduced to this form by a near-identity diffeomorphism.
Example 29 (Reduction to normal form.) Consider the two-parameter family
ẋ = µ₁ + µ₂x − x²

and try a change of variable y = x − α, so that

ẏ = (µ₁ + µ₂α − α²) + y(µ₂ − 2α) − y².

Choosing α = µ₂/2 gives ẏ = (µ₁ + µ₂²/4) − y², which is in the normal form. The more
general two-parameter system

ẋ = µ₁ + µ₂x − Cx²

can be reduced, by scaling time so that T = Ct, to

dx/dT = µ₁/C + (µ₂/C)x − x²

and so to the normal form.
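The change of variable is easily checked with sympy (a minimal sketch, not part of the notes):

import sympy as sp

x, y, mu1, mu2, alpha = sp.symbols('x y mu1 mu2 alpha')

# Shift x = y + alpha in xdot = mu1 + mu2*x - x**2, then pick alpha = mu2/2:
f = mu1 + mu2 * x - x**2
ydot = sp.expand(f.subs(x, y + alpha))
print(sp.collect(ydot, y))                   # (mu1 + mu2*alpha - alpha**2) + (mu2 - 2*alpha)*y - y**2
print(sp.expand(ydot.subs(alpha, mu2 / 2)))  # mu1 + mu2**2/4 - y**2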
We can now treat the case of general f (x, µ) provided that (as we assume) f can be
expanded in a Taylor series in both x and µ near the non-hyperbolic (bifurcation) point
(0, 0). At this point we already have f (0, 0) = 0 = fx (0, 0). Now suppose, what is
generally true, that fxx ≠ 0, fµ ≠ 0 at this point. Now we expand f in a double Taylor
series about (0, 0):
f(x, µ) = f(0, 0) + x fx(0, 0) + µ fµ(0, 0) + (x²/2) fxx(0, 0) + xµ fxµ(0, 0) + (µ²/2) fµµ(0, 0) + O(x³, x²µ, xµ², µ³).

Rearranging, we have

f(x, µ) = (µ fµ + O(µ²)) + x(µ fxµ) + (x²/2) fxx + O(x³, x²µ, xµ², µ³)
        = µ₁ + µ₂ x + (x²/2) fxx + O(x³, x²µ, xµ², µ³),
and this is in the correct form to be reduced to the standard saddle-node equation. Note
that in the second form the µi denote parameters that can be freely varied. The coefficient
of the x2 term is not regarded as freely varying since we have made the assumption that
this is non-zero.
Following this approach allows us to identify the two special cases mentioned previously,
non-generic in the space of all problems, but nonetheless of considerable importance.
• Transcritical bifurcation. If the system is such that f (0, µ) = 0 for all µ (or
can be put into this form by a change of variable), then fµ (0, 0) = 0, and we have
instead of the above
f(x, µ) = µ₂ x + (x²/2) fxx + O(x³, x²µ, xµ²)
2
with all the higher order terms vanishing when x = 0. This is in the normal form
for a transcritical bifurcation.
• Pitchfork bifurcation. If the system has a symmetry; that is, if the equation is unchanged under an operation on the space variables whose square is the identity, then simple bifurcations are pitchforks. In R the only such operation is x → −x; for the equations to be invariant f must be odd in x. Then expanding in the same way we get f(x, µ) = µ₂x + (1/3!) fxxx x³ + O(x⁵, µx³, ...). (In higher dimensions symmetries can take more complicated forms. For example, the system ẋ = µx − xy, ẏ = −y + x² has symmetry under x → −x, y → y.)
The saddle-node bifurcation is robust under small changes of parameters as shown above.
But transcritical and pitchfork bifurcations depend on the vanishing of terms, and therefore change under small perturbations.
ẋ = ǫ + µx − x²

[Bifurcation diagrams for ǫ < 0, ǫ = 0 and ǫ > 0: only saddle-node (S/N) bifurcations occur.]

ẋ = ǫ + µx − x³

[Bifurcation diagrams for ǫ < 0 and ǫ > 0 (mirror images of each other) and ǫ = 0: the pitchfork breaks up into a continuous branch plus a disconnected saddle-node (S/N).]

ẋ = µx + ǫx² − x³

[Bifurcation diagrams for ǫ < 0, ǫ = 0 and ǫ > 0: the pitchfork breaks up into a transcritical (T/C) bifurcation together with a saddle-node (S/N).]
The most general 'unfolding' of the pitchfork bifurcation takes two parameters:

ẋ = ǫ₁ + µx + ǫ₂x² − x³.

(Diagram: exercise)
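A quick way to see how a small imperfection ǫ breaks up the pitchfork is to plot the real roots of ǫ + µx − x³ = 0 against µ for a few values of ǫ; the sketch below (illustrative only; the plotting choices are arbitrary) does this with numpy and matplotlib.

import numpy as np
import matplotlib.pyplot as plt

# Bifurcation diagram of xdot = eps + mu*x - x**3 for a few values of eps.
mu = np.linspace(-1.0, 1.5, 400)
for eps, style in [(0.0, 'k.'), (0.05, 'r.'), (-0.05, 'b.')]:
    for m in mu:
        roots = np.roots([-1.0, 0.0, m, eps])      # -x^3 + mu*x + eps = 0
        real = roots[np.abs(roots.imag) < 1e-9].real
        plt.plot([m] * len(real), real, style, markersize=1)
plt.xlabel('mu'); plt.ylabel('fixed points x*')
plt.show()   # for eps != 0 one branch is smooth and a separate saddle-node appears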
5.2.2 Bifurcations in R²
A fixed point for a 2-D dynamical system is non-hyperbolic if there is at least one purely
imaginary (or zero) eigenvalue.
Consider the situation where x˙ = f (x, µ) has a non-hyperbolic fixed point at the origin,
for some value µ0 of µ. There are four cases:
• (i) λ₁ = 0, Re(λ₂) ≠ 0. This is a simple, or steady-state bifurcation, and is essentially the same as the 1D examples shown above. We show this below when we discuss the centre manifold.
• (ii) λ₁,₂ = ±iω, Re(λⱼ) ≠ 0, j ≠ 1, 2. This is an oscillatory or Hopf bifurcation, and leads to the growth of oscillations.
• (iii) and (iv) There are two zero eigenvalues. The canonical form of the matrix A is either the zero matrix (0 0; 0 0) (double-zero bifurcation) or (0 1; 0 0) (Takens–Bogdanov bifurcation). They have quite different properties and are not seen generically as they need two separate conditions on the parameters to be satisfied.
Note that there are extra technical requirements on the way the eigenvalues change with
µ (e.g. for (i) we must have dλ₁/dµ ≠ 0 at µ = µ₀). We shall look at only (i) and (ii) in
detail.
5.3 The Centre Manifold
Consider first the simple bifurcation identified above. By analogy with the hyperbolic
case, when µ = µ0 the linear system has a subspace on which the solutions decay (or
grow) exponentially, and another in which the dynamics is non-hyperbolic (the centre
eigenspace as defined below). For example, for a saddle node we have x˙ = x2 , y˙ =
−y, so solutions decay on the y-axis and the dynamics is non-hyperbolic on the x-axis.
By analogy with the stable and unstable manifolds and their relation to the stable and
unstable subspaces in the hyperbolic case, we might expect a manifold to exist (the
centre manifold), which is tangent to the centre eigenspace at the origin, and on which
the dynamics correspond to that in the centre eigenspace.
Example 30 Consider the non-hyperbolic system

ẋ = x² + xy + y²,   ẏ = −y + x² + xy.
The linear system has y˙ = −y and so trajectories approach y = 0, which is the centre
eigenspace. In this space x˙ = x2 . It turns out that there is an invariant manifold tangent
to y = 0 at the origin on which x˙ ∼ x2 , (the Centre Manifold or CM).
This is formalised in the Centre Manifold Theorem below, but note that CM’s are not like
stable and unstable manifolds in that they are not unique (see Example Sheet 2, Q1).
We can find the CM in this example, assuming existence, by expansion. Suppose it is of the form y = p(x) ≡ a₂x² + a₃x³ + a₄x⁴ + .... Then ẏ = −p + x² + xp = ẋ pₓ = (x² + xp + p²)pₓ. This o.d.e. for p(x) cannot be solved in general, but substitute the expansion for p, equate coefficients and get a₂ = 1, a₃ = −1, a₄ = 0. Thus the CM is given by p(x) = x² − x³ + O(5)*. The dynamics on the CM is then given by replacing y by p in the equation for x:

ẋ = x² + xp + p² = x² + x³ − x⁴ + O(6)* + x⁴ − 2x⁵ + O(6) = x² + x³ − 2x⁵ + O(6)*,

and so close to the origin we have ẋ ∼ x² as expected, so that we have a saddle-node for the nonlinear system.
* Notation: The notation O(m) is used to mean terms that are of total power m in the relevant variables, e.g. if the variables are a and b then the terms a³, a²b, ab² and b³ are all O(3).
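The coefficient matching is mechanical and can be delegated to sympy; a minimal sketch (not part of the notes) reproducing a₂ = 1, a₃ = −1, a₄ = 0:

import sympy as sp

x, a2, a3, a4 = sp.symbols('x a2 a3 a4')

# Centre manifold y = p(x) for  xdot = x^2 + x*y + y^2,  ydot = -y + x^2 + x*y:
# on the manifold ydot = p'(x) * xdot; match powers of x.
p = a2 * x**2 + a3 * x**3 + a4 * x**4
lhs = -p + x**2 + x * p                        # ydot with y = p(x)
rhs = sp.diff(p, x) * (x**2 + x * p + p**2)    # p'(x) * xdot
resid = sp.expand(lhs - rhs)
print(sp.solve([resid.coeff(x, n) for n in (2, 3, 4)], [a2, a3, a4]))
# -> {a2: 1, a3: -1, a4: 0}, i.e. p(x) = x^2 - x^3 + O(5)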
The existence of a centre manifold is guaranteed under appropriate conditions by the
Centre Manifold Theorem:
Theorem 13 (Centre Manifold Theorem). Given a dynamical system x˙ = f (x) in
Rn with a non-hyperbolic fixed point at the origin O, let E c be the (generalised) linear
eigenspace corresponding to eigenvalues of A = Df |0 with zero real part (the centre subspace), and E h the complement of E c (the hyperbolic subspace). Choose a coordinate
system (c, h), c ∈ E c , h ∈ E h and write the o.d.e. in the form
ċ = C(c, h),   ḣ = H(c, h).
Then ∃ a function p : E c → E h with graph h = p(c), called the centre manifold which
has properties:
• (i) is tangent to E c at 0;
• (ii) is locally invariant under f ;
• (iii) the dynamics is topologically equivalent to

ċ = C(c, p(c)),   ḣ = (∂H/∂h)|₀ h;
• (iv) p(c) can be approximated by a polynomial in c in some neighbourhood of O.
Thus in Example 30, c = x, h = y and p(x) = x² − x³ + ...; the local dynamics is equivalent to ẋ = x² + xp(x) + p(x)², ẏ = −y.
This is very helpful for non-hyperbolic systems, but we believe that such reductions ought
to be possible near, and not just at, bifurcation points. We can use the centre manifold
theorem by means of a trick.
To our original system x˙ = f (x, µ), which now has a hyperbolic fixed point at 0, adjoin the
equations µ˙ = 0. We now have a system in R(n+m) , where m is the number of parameters.
In this new system the terms giving the linearized growth rates as functions of µ will be nonlinear, and so we are at a non-hyperbolic fixed point of the extended system, and can
use the CM theorem to reduce the system.
Example 31 Consider the system

ẋ = µ + x² + xy + y²,   ẏ = 2µ − y + x² + xy.
Regarding µ as a variable we have

( ẋ )   ( 0 1  0 ) ( x )
( µ̇ ) = ( 0 0  0 ) ( µ ) + nonlinear terms.
( ẏ )   ( 0 2 −1 ) ( y )
The centre (generalised) eigenspace is y = 2µ, which is a plane in R³ (note that in the linearised system all initial conditions tend to this plane), and the CM is of the form

y = p(x, µ) = 2µ + a₂₀x² + a₁₁xµ + a₀₂µ² + ....

On the CM we have ẏ = 2µ − p + x² + xp = (∂p/∂x)(µ + x² + xy + y²) + (∂p/∂µ)·0, or

2µ + (2µ + a₂₀x² + a₁₁xµ + a₀₂µ² + ...)(x − 1) + x²
  = (2a₂₀x + a₁₁µ + ...)[µ + x² + x(2µ + a₂₀x² + a₁₁xµ + a₀₂µ² + ...) + (2µ + a₂₀x² + a₁₁xµ + a₀₂µ² + ...)²].
Equating coefficients, we find [x²]: a₂₀ = 1, [xµ]: a₁₁ = 0, [µ²]: a₀₂ = 0. Thus the CM is given by y = p(x, µ) = 2µ + x² + O(3), and the dynamics on the extended CM is given by

ẋ = µ + x² + x(2µ + ...) + 4µ² + ...,   µ̇ = 0.

This is clearly (cf. the one-parameter families above) a saddle-node bifurcation when µ = 0.
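The same coefficient matching for the extended centre manifold can be checked with sympy (a sketch, with names chosen only for illustration):

import sympy as sp

x, mu, a20, a11, a02 = sp.symbols('x mu a20 a11 a02')

# Extended centre manifold y = p(x, mu) for Example 31 (with mudot = 0):
p = 2 * mu + a20 * x**2 + a11 * x * mu + a02 * mu**2
xdot = mu + x**2 + x * p + p**2
resid = sp.expand((2 * mu - p + x**2 + x * p) - sp.diff(p, x) * xdot)
conds = [resid.coeff(x, 2).coeff(mu, 0),   # [x^2]
         resid.coeff(x, 1).coeff(mu, 1),   # [x mu]
         resid.coeff(x, 0).coeff(mu, 2)]   # [mu^2]
print(sp.solve(conds, [a20, a11, a02]))    # -> {a20: 1, a11: 0, a02: 0}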
The corresponding general (and important) result is that for a dynamical system in Rⁿ, with a simple bifurcation at µ₀ (i.e. the system has a single zero eigenvalue at µ = µ₀, and (wlog) all eigenvalues are in the left-hand half-plane for µ = µ₀⁻, and there is just one eigenvalue in the right-hand half-plane for µ = µ₀⁺), there is a centre manifold, the dynamics is essentially one-dimensional and the bifurcation is generically a saddle-node.
5.4 Oscillatory/Hopf Bifurcations in R²
The simplest model of a Hopf (oscillatory) bifurcation is given by the system (in polar coordinates) ṙ = µr − r³, θ̇ = 1. For µ < 0 we have a stable focus, and for µ > 0 an unstable focus and a stable periodic orbit r = √µ. This is a supercritical Hopf. If instead we have ṙ = µr + r³ then the periodic orbit exists when µ < 0, and is unstable (subcritical Hopf).
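In Cartesian coordinates the supercritical model reads ẋ = µx − y − x(x² + y²), ẏ = x + µy − y(x² + y²); a short numerical sketch (illustrative parameter values, not from the notes) confirms that trajectories settle onto the circle r = √µ:

import numpy as np
from scipy.integrate import solve_ivp

mu = 0.25

def rhs(t, u):
    x, y = u
    r2 = x * x + y * y
    return [mu * x - y - x * r2, x + mu * y - y * r2]

sol = solve_ivp(rhs, (0, 200), [1.0, 0.0], rtol=1e-9)
r_final = np.hypot(sol.y[0, -1], sol.y[1, -1])
print(r_final, np.sqrt(mu))   # both ~ 0.5: the stable limit cycle has r = sqrt(mu)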
It turns out that generically all dynamical systems can be put into this form in the
neighbourhood of a Hopf bifurcation. We need a technical definition of this bifurcation.
Definition 26 (Hopf bifurcation). Suppose we have a dynamical system ẋ = f(x, µ) = (f(x, µ), g(x, µ)) which at µ = 0 has a fixed point at 0, and has linearization A satisfying det A > 0, Tr A = 0 [so that the linearization has eigenvalues λ₁,₂(0) = ±iω], and that d(Re λ₁,₂)/dµ > 0 at µ = 0. Then provided the constant γ, defined as the value at µ = 0 of

(1/16)[ fxxx + gyyy + fxyy + gxxy + ω⁻¹( fxy(fxx + fyy) − gxy(gxx + gyy) − fxx gxx + fyy gyy ) ],

is not equal to zero, there is a Hopf bifurcation at µ = 0, and there is a stable limit cycle for µ = 0⁺ if γ < 0 (supercritical Hopf bifurcation) and an unstable limit cycle for µ = 0⁻ if γ > 0 (subcritical Hopf bifurcation).
Remark: because A is not singular at µ = 0, there is no change in the number of fixed
points for small |µ|. The exact form of γ above is complicated, but you will not be
required to remember it or derive it and examples will typically consider equations that
are in an easily recognizable form. Rather than derive the formula (see §8.9 of Glendinning
if interested) we show how the equation can be brought into the normal form at µ = 0 by
a near identity diffeomorphism.
Take µ = 0 and choose canonical coordinates so that A = (0 −ω; ω 0). Then writing (in these coordinates) z = x + iy, the linear part of the problem is ż = iωz. Then the full equation must take the form

ż = iωz + α₁z² + α₂zz* + α₃z*² + O(3).

Define a new complex variable ξ by ξ = z + a₁z² + a₂zz* + a₃z*². We try to choose a₁,₂,₃ so that ξ̇ = iωξ + O(3). In fact ξ̇ = ż(1 + 2a₁z + a₂z*) + ż*(a₂z + 2a₃z*), so correct to O(3),

iω(z + a₁z² + a₂zz* + a₃z*²) = (iωz + α₁z² + α₂zz* + α₃z*²)(1 + 2a₁z + a₂z*) + (−iωz* + α₁*z*² + α₂*zz* + α₃*z²)(a₂z + 2a₃z*).
Equating coefficients of the quadratic terms we get
iωa1 = α1 + 2iωa1 ; iωa2 = α2 + iωa2 − iωa2 ; iωa3 = α3 − 2iωa3
and clearly these equations can be solved. Thus in the transformed system there are
no quadratic terms. Attempting the same procedure at cubic order, we find that all
the cubic terms can be removed except the term ∝ z 2 z ∗ [Exercise]. Thus after all the
transformations have been completed we are left with the equation
z˙ = iωz + νz 2 z ∗ + h.o.t., or r˙ = Re(ν)r3 , θ˙ = ω + Im(ν)r2
and γ ∝ Re(ν).
To find the canonical equation when µ ≠ 0 we can either use the same ideas on the extended CM that we used for the simple bifurcation, or just add the relevant linear terms, which can be shown to yield the same result in non-degenerate cases.
Note that the normal form is appropriate only when r is sufficiently small. But note that
since the constant part of θ˙ is of order unity there is a neighbourhood of the origin in
which there are no fixed points.
Example 32 Find the nature of the Hopf bifurcation for the system ż = (µ + iω)z + αz² + β|z|².
Clearly the bifurcation point is at µ = 0, so choose this value and, guided by the above analysis, choose ξ = z + az² + b|z|². Then

ξ̇ = (iωz + αz² + β|z|²)(1 + 2az + bz*) + (−iωz* + α*z*² + β*|z|²)bz
  = iωz + (α + 2iωa)z² + β|z|² + |z|²z(2aβ + αb + bβ*) + other cubic
  = iω(ξ − az² − b|z|²) + (α + 2iωa)z² + β|z|² + |z|²z(2aβ + αb + bβ*) + other cubic.

Now choose iωa = −α, iωb = β. Then the quadratic terms vanish, and

ξ̇ = iωξ + (iω)⁻¹(|β|² − αβ)|ξ|²ξ + other cubic + O(4),

so that Re(ν) = −Im(αβ)/ω.
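The identity Re(ν) = −Im(αβ)/ω is easily verified with sympy for particular values of α, β and ω (a sketch; the numerical values are chosen arbitrarily):

import sympy as sp

# Check Re(nu) = -Im(alpha*beta)/omega for Example 32: with
# i*omega*a = -alpha and i*omega*b = beta, the |z|^2 z coefficient is
# nu = 2*a*beta + alpha*b + b*conj(beta).
omega = sp.Rational(3, 2)                 # arbitrary test values
alpha = 1 + 2 * sp.I
beta = sp.Rational(1, 2) + sp.I
a = -alpha / (sp.I * omega)
b = beta / (sp.I * omega)
nu = 2 * a * beta + alpha * b + b * sp.conjugate(beta)
print(sp.simplify(sp.re(nu)), -sp.im(alpha * beta) / omega)   # both equal -4/3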
In fact it can be shown by successive transformations that the canonical form for the
dynamics on the CM for a Hopf bifurcation is z˙ = zF (|z|2 ), where F is a complex valued
function with F (0) = iω. This allows the treatment of degenerate situations for which
Re(ν) = 0.
For higher dimensional systems the CM theorem can be invoked to show that in the
neighbourhood of a Hopf bifurcation there is a 2-dimensional (extended) CM on which
the dynamics can be put into the above form.
5.5 *Bifurcations of periodic orbits*
(a) Homoclinic bifurcation
This is the simplest ‘global’ bifurcation and occurs when the stable and unstable manifolds
of a saddle point intersect at a critical value µ0 of the parameter.
(b) 'Andronov bifurcation'
A different type of bifurcation arises when a saddle-node develops on a periodic orbit ('Andronov bifurcation').
In each of the above cases, periodic orbits are created (or destroyed) as a result of the
bifurcation.