Fundamental Matrix Solution

Let Φ(t) denote the fundamental solution matrix or state transition matrix (STM) that contains n linearly independent solutions of eqn (4) with the initial conditions Φ(0)=I where I is the n×n unit matrix.

From: Encyclopedia of Vibration, 2001

PARAMETRIC EXCITATION

S.C. Sinha, A. David, in Encyclopedia of Vibration, 2001

Floquet Theory

Consider the linear periodic system given by eqn (4). Let Φ(t) denote the fundamental solution matrix or state transition matrix (STM) that contains n linearly independent solutions of eqn (4) with the initial conditions Φ(0)=I where I is the n×n unit matrix. Then the following statements hold:

1.

$$\Phi(t+T) = \Phi(T)\,\Phi(t), \qquad 0 \le t \le T,$$ and, consequently,

2.

$$\Phi(t+jT) = \bigl[\Phi(T)\bigr]^{j}\,\Phi(t), \qquad 0 \le t \le T, \quad j = 2, 3, \ldots$$

3.

$$x(t) = \Phi(t)\,x(0), \qquad t \ge 0.$$ These results imply that, if the solution is known for the first period, it can be constructed for all time t. The matrix Φ(T) is called the Floquet transition matrix (FTM). The next statement considers the stability of eqn (4).

4.

Let $\zeta_i$ (i = 1, …, n) denote the eigenvalues of Φ(T). System (4) is asymptotically stable if all $\zeta_i$ lie inside the unit circle of the complex plane. The system is unstable if at least one of the eigenvalues of the FTM has magnitude greater than one. The eigenvalues $\zeta_i$ are called the Floquet multipliers. The above statements briefly summarize the results of Floquet theory.
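These statements can be illustrated numerically; the sketch below (the Mathieu-type coefficients are an assumed example, not from the text) integrates the STM over one period and reads off the Floquet multipliers as eigenvalues of the FTM:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mathieu-type system (an assumed illustrative choice, not from the text):
# x'' + (p + q*cos(t)) x = 0, written as Phi' = A(t) Phi with period T = 2*pi.
p, q = 1.2, 0.4
T = 2 * np.pi

def A(t):
    return np.array([[0.0, 1.0],
                     [-(p + q * np.cos(t)), 0.0]])

def rhs(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

# Integrate the STM from Phi(0) = I over one period to obtain the FTM Phi(T).
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
FTM = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(FTM)

# Statement 4: asymptotic stability iff all |zeta_i| < 1.  Here tr A(t) = 0,
# so Liouville's formula gives zeta_1 * zeta_2 = det Phi(T) = 1 exactly.
unstable = np.any(np.abs(multipliers) > 1.0 + 1e-9)
```

Because the trace of A(t) vanishes here, the product of the multipliers is exactly one, so this particular system can never be asymptotically stable; only the stable/unstable dichotomy of statement 4 applies.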

URL: https://www.sciencedirect.com/science/article/pii/B0122270851000382

Handbook of Dynamical Systems

Wolf-Jürgen Beyn, ... Björn Sandstede, in Handbook of Dynamical Systems, 2002

Remark

The Floquet multipliers are the eigenvalues of the monodromy matrix V(1), where V(t) is the fundamental solution matrix of the homogeneous linear equation, that is, V(t) satisfies

$$\dot V(t) = T_0\, f_x\bigl(x_0(t), \alpha_0\bigr)\, V(t), \qquad V(0) = I.$$

Due to periodicity, V(1) always has an eigenvalue equal to 1, called the trivial multiplier. For the numerical computation of Floquet multipliers see Fairgrieve and Jepson [27] and Doedel et al. [25].
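The trivial multiplier can be exhibited numerically; the sketch below uses an assumed example system with a known limit cycle (not the problem treated in this chapter) and, for simplicity, does not rescale the period to 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: x' = x - y - x*(x^2+y^2), y' = x + y - y*(x^2+y^2) has the
# periodic orbit x0(t) = (cos t, sin t) with period T = 2*pi.
T = 2 * np.pi

def Df(x, y):
    return np.array([[1 - 3*x*x - y*y, -1 - 2*x*y],
                     [1 - 2*x*y, 1 - x*x - 3*y*y]])

def rhs(t, v_flat):
    V = v_flat.reshape(2, 2)
    return (Df(np.cos(t), np.sin(t)) @ V).ravel()

# Fundamental solution of the variational (homogeneous linear) equation.
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)          # monodromy matrix
floquet = np.linalg.eigvals(M)
# One multiplier is the trivial multiplier 1; the other, exp(-4*pi) < 1,
# reflects the orbital stability of this limit cycle.
```

The trivial multiplier appears because the tangent vector of the periodic orbit itself solves the variational equation.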

Above we have assumed α to be fixed. However, in practice we use Keller's method (see Section 2.2) to trace out a branch of periodic solutions. In particular, this allows calculation past folds along such a branch. In the current function space application, Keller's continuation equation takes the form

$$(7)\qquad \int_0^1 \bigl(x(t) - x_{k-1}(t)\bigr)^{*}\, \dot x_{k-1}(t)\, dt + (T - T_{k-1})\,\dot T_{k-1} + (\alpha - \alpha_{k-1})\,\dot\alpha_{k-1} = \Delta s.$$

The complete computational formulation then consists of Equations (4)–(7), which are to be solved for x(·), T, and α. These equations correspond to a (generalized) boundary value problem (BVP), and are solved by a numerical boundary value technique. The most widely used discretization method for boundary value problems in ordinary differential equations is the method of orthogonal collocation with piecewise polynomials. In particular, orthogonal collocation is used in software such as COLSYS (Ascher et al. [3]), AUTO (Doedel [17], Doedel et al. [20]), COLDAE (Ascher and Spiteri [2]), and CONTENT (Kuznetsov and Levitin [52]). Its high accuracy (de Boor and Swartz [14]) and its known mesh-adaption techniques (Russell and Christiansen [67]) make this method particularly suitable for difficult problems.
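Keller's continuation idea can be sketched in finite dimensions; the following minimal pseudo-arclength sketch (an assumed scalar fold problem, standing in for the function-space equation (7)) shows how the arclength condition allows calculation past a fold:

```python
import numpy as np

# Minimal pseudo-arclength (Keller) sketch on an assumed scalar fold problem
# F(x, alpha) = x^2 + alpha = 0 (branch x = +/- sqrt(-alpha), fold at (0, 0));
# plain parameter continuation in alpha fails at the fold.
def F(x, a):
    return x * x + a

def newton_step(u, u_prev, du_prev, ds):
    x, a = u
    G = np.array([F(x, a),                         # the equation itself
                  (u - u_prev) @ du_prev - ds])    # Keller's arclength condition
    J = np.array([[2 * x, 1.0],
                  [du_prev[0], du_prev[1]]])
    return u - np.linalg.solve(J, G)

u = np.array([1.0, -1.0])      # start on the branch: x = 1, alpha = -1
du = np.array([-1.0, 0.0])     # initial tangent guess
ds = 0.1
branch = []
for _ in range(40):
    u_prev = u.copy()
    u = u + ds * du            # predictor
    for _ in range(20):        # Newton corrector on the augmented system
        u = newton_step(u, u_prev, du, ds)
    du = (u - u_prev) / np.linalg.norm(u - u_prev)   # secant tangent
    branch.append(u.copy())

xs = np.array([b_[0] for b_ in branch])
# The continuation passes the fold: x changes sign along the computed branch.
```

The augmented Jacobian stays nonsingular at the fold precisely because the arclength row is transversal to the nullvector of $F_u$, which is the point of Keller's method.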

URL: https://www.sciencedirect.com/science/article/pii/S1874575X0280025X

Handbook of Dynamical Systems

David Cai, ... Kenneth T.R. McLaughlin, in Handbook of Dynamical Systems, 2002

Remark

In this section we have replaced t in the defocusing NLS equation with -t, and the equation becomes

$$i q_t + q_{xx} - 2|q|^2 q = 0.$$

The above Riemann-Hilbert problem (7.2)-(7.4) is one formulation of the integral equations of inverse scattering theory, as mentioned in Section 3. Indeed, if we set

$$(7.6)\qquad \psi(\lambda, x, t) \equiv m(\lambda, x, t)\, e^{-i\lambda x \sigma_3},$$

then it turns out that for $\lambda \in \mathbb{C} \setminus \mathbb{R}$, ψ is a fundamental matrix solution of differential Equation (3.3), normalized by the following two conditions:

$$(7.7)\qquad \psi\, e^{i\lambda x \sigma_3} \to I, \quad \text{as } x \to +\infty,$$

$$(7.8)\qquad \psi\, e^{i\lambda x \sigma_3} \text{ remains bounded as } x \to -\infty.$$

The jump relation (7.3) expresses the fact that, while ψ(λ, x, t) satisfying (7.7)-(7.8) is defined a priori only for Im λ ≠ 0, it turns out that ψ(λ, x, t) has continuous boundary values for $\lambda \in \mathbb{R}$,

$$\psi_{\pm}(\lambda, x, t) = \lim_{\varepsilon \downarrow 0} \psi(\lambda \pm i\varepsilon, x, t).$$

Since $\psi_+$ and $\psi_-$ represent two fundamental matrix solutions of (3.3), they must be related, i.e.,

$$\psi_+(\lambda, x, t) = \psi_-(\lambda, x, t)\, v(\lambda, t),$$

for some jump matrix v which is independent of x.

URL: https://www.sciencedirect.com/science/article/pii/S1874575X02800339

Periodic Solutions of Singular Perturbation Problems1

E.R. RANG, in International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, 1963

A Formula for Periodic Solutions

We include here a formula, found helpful in the analysis, for periodic solutions of a nonhomogeneous linear system with periodic coefficients. Consider the vector equation

$$(10)\qquad \frac{dz}{dt} = P(t)\,z + r(t),$$

where P(t), r(t) are real, periodic with period T₀, and continuous for all real t. Let Φ(t) be a fundamental matrix solution of the homogeneous system

$$(11)\qquad \frac{dz}{dt} = P(t)\,z.$$

Then by direct calculation, one finds that equation (10) has a unique periodic solution of period T 0 which can be written in the form

$$(12)\qquad z(t) = \left[\Phi^{-1}(t + T_0) - \Phi^{-1}(t)\right]^{-1} \int_t^{t+T_0} \Phi^{-1}(\tau)\, r(\tau)\, d\tau,$$

provided that the matrix [Φ(t + T₀) − Φ(t)] is nonsingular, that is, that the homogeneous equation (11) has no nontrivial periodic solution of period T₀.
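Formula (12) is easy to check numerically in the scalar case; the example below (P(t) ≡ −1, r(t) = cos t, T₀ = 2π) is an assumed illustration, not from the text:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of formula (12) for an assumed scalar example (not from the
# text): z' = -z + cos t, i.e. P(t) = -1, r(t) = cos t, T0 = 2*pi, with
# fundamental solution Phi(t) = exp(-t), hence Phi^{-1}(t) = exp(t).
T0 = 2 * np.pi

def z_periodic(t):
    integral, _ = quad(lambda tau: np.exp(tau) * np.cos(tau), t, t + T0)
    return integral / (np.exp(t + T0) - np.exp(t))

# The unique T0-periodic solution of z' = -z + cos t is (cos t + sin t)/2,
# so the formula should reproduce it pointwise.
ts = np.linspace(0.0, 2 * np.pi, 25)
err = max(abs(z_periodic(t) - (np.cos(t) + np.sin(t)) / 2) for t in ts)
```

Here Φ(t + T₀) − Φ(t) = e^{−t}(e^{−2π} − 1) ≠ 0, so the nonsingularity hypothesis holds and the periodic solution is unique, as the theorem requires.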

URL: https://www.sciencedirect.com/science/article/pii/B9780123956514500386

Periodically forced discontinuous systems

Michal Fečkan, Michal Pospíšil, in Poincaré-Andronov-Melnikov Analysis for Non-Smooth Systems, 2016

I.1.4 Nonlinear planar applications

Here we consider the following piecewise-nonlinear planar problem

$$(\mathrm{I.1.30})_{\varepsilon}\qquad
\left\{\begin{aligned}
\dot x &= \omega_1 (y - \delta) + \varepsilon g_1(x, y, t+\alpha, \varepsilon, \mu)\\
\dot y &= -\omega_1 x + \varepsilon g_2(x, y, t+\alpha, \varepsilon, \mu)
\end{aligned}\right. \quad \text{if } y > 0,$$
$$\left\{\begin{aligned}
\dot x &= \eta x + \omega_2 (y + \delta) - \bigl(x^2 + (y+\delta)^2\bigr)\bigl[a x + b (y + \delta)\bigr] + \varepsilon g_1(x, y, t+\alpha, \varepsilon, \mu)\\
\dot y &= -\omega_2 x + \eta (y + \delta) + \bigl(x^2 + (y+\delta)^2\bigr)\bigl[b x - a (y + \delta)\bigr] + \varepsilon g_2(x, y, t+\alpha, \varepsilon, \mu)
\end{aligned}\right. \quad \text{if } y < 0$$

with assumptions

$$(\mathrm{I.1.31})\qquad \eta, \delta, \omega_1, \omega_2, \omega, a > 0, \quad b \in \mathbb{R}, \quad \omega_2 - \frac{\eta b}{a} > 0, \quad \frac{\eta}{a} > \delta^2.$$

Note the dependence on ε in the notation (I.1.30) ε . Hence (I.1.30)0 refers to the unperturbed system.

Due to linearity, the first part of (I.1.30)₀ can be easily solved, e.g. via a matrix exponential. For the starting point $(x_0, y_0) = \bigl(0,\; \delta + \sqrt{\eta/a}\bigr)$ and t ∈ [0, t₁], the solution is

$$(\mathrm{I.1.32})\qquad \gamma_1(t) = \left(\sqrt{\tfrac{\eta}{a}}\, \sin \omega_1 t,\;\; \delta + \sqrt{\tfrac{\eta}{a}}\, \cos \omega_1 t\right).$$

Time t₁ of the first intersection with the discontinuity boundary $\Omega_0 = \{(x, y) \in \mathbb{R}^2 \mid y = 0\}$ and the point (x₁, y₁) of this intersection are obtained from the relations h(γ₁(t₁)) = 0 for h(x, y) = y and (x₁, y₁) = γ₁(t₁), respectively:

$$t_1 = \frac{1}{\omega_1} \arccos\left(-\sqrt{\tfrac{a}{\eta}}\,\delta\right), \qquad (x_1, y_1) = \left(\sqrt{\tfrac{\eta}{a} - \delta^2},\; 0\right).$$

After the transformation x = r cos θ, y + δ = r sin θ in the second part of (I.1.30)₀, we get
$$\dot r = \eta r - a r^3, \qquad \dot\theta = -\omega_2 + b r^2,$$

from which one can see that the second part of (I.1.30)₀ possesses a stable limit cycle, the circle with center (0, −δ) and radius $\sqrt{\eta/a}$, which intersects the boundary Ω₀. Now it is obvious that (x₁, y₁) is a point of this cycle and the direction of rotation remains the same as in $\Omega_+ = \{(x, y) \in \mathbb{R}^2 \mid y > 0\}$. Therefore γ₂(t) is a part of the circle, given by
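The invariance of this circle can be checked directly; the sketch below evaluates the y < 0 vector field, as reconstructed above, at assumed admissible parameter values satisfying (I.1.31):

```python
import numpy as np

# Check, for assumed admissible parameters satisfying (I.1.31), that the circle
# of radius sqrt(eta/a) centred at (0, -delta) is invariant for the y < 0 part:
# the radial component there equals eta*r - a*r^3 = 0.
eta, delta, omega2, a, b = 2.0, 1.0, 5.0, 1.0, 1.0

def f_minus(x, y):
    u = y + delta
    r2 = x * x + u * u
    fx = eta * x + omega2 * u - r2 * (a * x + b * u)
    fy = -omega2 * x + eta * u + r2 * (b * x - a * u)
    return fx, fy

R = np.sqrt(eta / a)
thetas = np.linspace(0.0, 2 * np.pi, 100)
radial = []
for th in thetas:
    x, y = R * np.cos(th), R * np.sin(th) - delta   # point on the circle
    fx, fy = f_minus(x, y)
    radial.append(fx * np.cos(th) + fy * np.sin(th))
radial = np.array(radial)
```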

$$(\mathrm{I.1.33})\qquad \gamma_2(t) = \bigl(x_1 \cos \omega_3 (t - t_1) + \delta \sin \omega_3 (t - t_1),\;\; -\delta - x_1 \sin \omega_3 (t - t_1) + \delta \cos \omega_3 (t - t_1)\bigr)$$

for t ∈ [t₁, t₂], where $\omega_3 = \omega_2 - \frac{\eta b}{a}$. Equation h(γ₂(t₂)) = 0 together with the symmetry of γ₂(t) gives the pair of equations

$$x_1 \cos \omega_3 (t_2 - t_1) + \delta \sin \omega_3 (t_2 - t_1) = -x_1, \qquad -\delta - x_1 \sin \omega_3 (t_2 - t_1) + \delta \cos \omega_3 (t_2 - t_1) = 0.$$

From these we obtain

$$t_2 = \frac{1}{\omega_3}\left[\pi + \operatorname{arccot}\frac{x_1^2 - \delta^2}{2\,\delta x_1}\right] + t_1.$$

Point (x 2, y 2) is the second intersection point of the limit cycle and Ω0, i.e.

$$(x_2, y_2) = \gamma_2(t_2) = \left(-\sqrt{\tfrac{\eta}{a} - \delta^2},\; 0\right).$$

Next, solution γ(t) continues in Ω+ following the solution of the first part of (I.1.30)0 . Thus we have

$$(\mathrm{I.1.34})\qquad \gamma_3(t) = \bigl(x_2 \cos \omega_1 (t - t_2) - \delta \sin \omega_1 (t - t_2),\;\; \delta - x_2 \sin \omega_1 (t - t_2) - \delta \cos \omega_1 (t - t_2)\bigr)$$

for t ∈ [t₂, T]. The period T, obtained from the identity γ₃(T) = (x₀, y₀), is

$$T = \frac{1}{\omega_1} \arccos\left(-\sqrt{\tfrac{a}{\eta}}\,\delta\right) + t_2.$$

The next theorem is due to Diliberto (cf. [1517 ]), and we shall use it to find the fundamental matrix solution of the variational equation.

Theorem I.1.10

Let γ(t) be a solution of the differential equation $\dot x = f(x)$, $x \in \mathbb{R}^2$. If γ(0) = p and f(p) ≠ 0, then the variational equation along γ(t),

$$\dot V = Df(\gamma(t))\, V,$$

has the fundamental matrix solution Φ(t) satisfying det Φ(0) = ‖f(p)‖2, given by

$$\Phi(t) = \bigl[f(\gamma(t)),\; V(t)\bigr]$$

where [λ 1, λ 2] stands for a matrix with columns λ 1 and λ 2, and

$$V(t) = a(t)\, f(\gamma(t)) + b(t)\, f^{\perp}(\gamma(t)),$$
$$a(t) = \int_0^t \Bigl[2\,\kappa(\gamma(s))\, \|f(\gamma(s))\| + \operatorname{div}^{\perp} f(\gamma(s))\Bigr]\, b(s)\, ds, \qquad b(t) = \frac{\|f(p)\|^2}{\|f(\gamma(t))\|^2}\; e^{\int_0^t \operatorname{div} f(\gamma(s))\, ds},$$
$$\operatorname{div} f(x) = \frac{\partial f_1(x)}{\partial x_1} + \frac{\partial f_2(x)}{\partial x_2}, \qquad \operatorname{div}^{\perp} f(x) = -\frac{\partial f_2(x)}{\partial x_1} + \frac{\partial f_1(x)}{\partial x_2},$$
$$\kappa(\gamma(t)) = \frac{1}{\|f(\gamma(t))\|^3}\, \Bigl[f_1(\gamma(t))\, \dot f_2(\gamma(t)) - f_2(\gamma(t))\, \dot f_1(\gamma(t))\Bigr],$$
where $f^{\perp} = (-f_2, f_1)^*$.
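A hedged numerical check of the theorem on an assumed example (the planar oscillator below is not from the text; all the quantities a, b, κ, div, div⊥ are computed by hand along its circular orbit):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: f(x, y) = (x - y - x*(x^2+y^2), x + y - y*(x^2+y^2)) with
# the periodic orbit gamma(t) = (cos t, sin t), p = (1, 0), f(p) = (0, 1).
# Along gamma: |f| = 1, div f = -2, kappa = 1, div_perp f = -2, so a(t) = 0 and
# b(t) = exp(-2t); the theorem then predicts V(t) = exp(-2t) * f_perp(gamma(t)).
def Df(x, y):
    return np.array([[1 - 3*x*x - y*y, -1 - 2*x*y],
                     [1 - 2*x*y, 1 - x*x - 3*y*y]])

def rhs(t, v):
    return Df(np.cos(t), np.sin(t)) @ v

# Integrate the variational equation from V(0) = f_perp(p) = (-1, 0).
sol = solve_ivp(rhs, (0.0, 2*np.pi), [-1.0, 0.0], rtol=1e-10, atol=1e-12)
V_end = sol.y[:, -1]
# Prediction at t = 2*pi: exp(-4*pi) * f_perp(gamma(2*pi)) = exp(-4*pi)*(-1, 0).
V_pred = np.exp(-4*np.pi) * np.array([-1.0, 0.0])
```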

Lemma I.1.11

Assuming (I.1.31), unperturbed system (I.1.30)0 has fundamental matrices X 1, X 2 and X 3 satisfying (I.1.14), (I.1.15) and (I.1.17), respectively, given by

$$X_1(t) = \begin{pmatrix} \cos \omega_1 t & \sin \omega_1 t \\ -\sin \omega_1 t & \cos \omega_1 t \end{pmatrix}, \qquad X_2(t) = \frac{a}{\eta}\, [\lambda_1, \lambda_2], \qquad X_3(t) = X_1(t - t_2)$$

where

$$\lambda_1 = \begin{pmatrix} U\bigl(-\delta x_1 + \delta x_1 W + x_1^2 \widetilde W\bigr) + V\bigl(\delta^2 + x_1^2 W - \delta x_1 \widetilde W\bigr) \\ U\bigl(-\delta^2 - x_1^2 W + \delta x_1 \widetilde W\bigr) + V\bigl(-\delta x_1 + \delta x_1 W + x_1^2 \widetilde W\bigr) \end{pmatrix},$$
$$\lambda_2 = \begin{pmatrix} U\bigl(x_1^2 + \delta^2 W + \delta x_1 \widetilde W\bigr) + V\bigl(-\delta x_1 + \delta x_1 W - \delta^2 \widetilde W\bigr) \\ U\bigl(\delta x_1 - \delta x_1 W + \delta^2 \widetilde W\bigr) + V\bigl(x_1^2 + \delta^2 W + \delta x_1 \widetilde W\bigr) \end{pmatrix},$$
$$U = \sin \omega_3 (t - t_1), \quad V = \cos \omega_3 (t - t_1), \quad W = e^{-2\eta (t - t_1)}, \quad \widetilde W = \frac{b}{a}\,(1 - W),$$

and saltation matrices

$$S_1 = \begin{pmatrix} 1 & -\dfrac{\delta(\omega_1 + \omega_3)}{\omega_1 x_1} \\[4pt] 0 & \dfrac{\omega_3}{\omega_1} \end{pmatrix}, \qquad S_2 = \begin{pmatrix} 1 & -\dfrac{\delta(\omega_1 + \omega_3)}{\omega_3 x_1} \\[4pt] 0 & \dfrac{\omega_1}{\omega_3} \end{pmatrix}$$

defined by (I.1.16), (I.1.20), respectively.

Proof

Matrices X 1(t) and X 3(t) are derived easily because of the linearity of function f +(x, y). Using

$$(\mathrm{I.1.35})\qquad f_+(x_1, y_1) = \begin{pmatrix} -\omega_1 \delta \\ -\omega_1 \sqrt{\frac{\eta}{a} - \delta^2} \end{pmatrix}, \quad f_-(x_1, y_1) = \begin{pmatrix} \omega_3 \delta \\ -\omega_3 \sqrt{\frac{\eta}{a} - \delta^2} \end{pmatrix}, \quad f_+(x_2, y_2) = \begin{pmatrix} -\omega_1 \delta \\ \omega_1 \sqrt{\frac{\eta}{a} - \delta^2} \end{pmatrix}, \quad f_-(x_2, y_2) = \begin{pmatrix} \omega_3 \delta \\ \omega_3 \sqrt{\frac{\eta}{a} - \delta^2} \end{pmatrix},$$

the saltation matrices are obtained directly from their definitions. Since (I.1.30)₀ is 2-dimensional and one solution of the second part is already known (the limit cycle), we can apply Theorem I.1.10 to derive the fundamental solution of this part. So we get a matrix

$$\widetilde X_2(t) = \omega_3 \begin{pmatrix} -x_1 U + \delta V & U\bigl(\delta W + x_1 \widetilde W\bigr) + V\bigl(x_1 W - \delta \widetilde W\bigr) \\ -\delta U - x_1 V & U\bigl(-x_1 W + \delta \widetilde W\bigr) + V\bigl(\delta W + x_1 \widetilde W\bigr) \end{pmatrix}$$

such that

$$\widetilde X_2^{-1}(t_1) = \frac{a}{\eta\, \omega_3} \begin{pmatrix} \delta & -x_1 \\ x_1 & \delta \end{pmatrix}$$

and $\det \widetilde X_2(t_1) = \|f_-(x_1, y_1)\|^2 = \frac{\eta}{a}\,\omega_3^2$. If X₂(t) is to satisfy (I.1.15), then clearly $X_2(t) = \widetilde X_2(t)\, \widetilde X_2^{-1}(t_1)$.

Now, we can verify the basic assumptions.

Proposition I.1.12

Assuming (I.1.31), the unperturbed system (I.1.30)₀ has a T-periodic solution with initial point $(x_0, y_0) = \bigl(0,\; \delta + \sqrt{\eta/a}\bigr)$, defined by (I.1.2) with branches γ₁(t), γ₂(t) and γ₃(t) given by (I.1.32), (I.1.33) and (I.1.34), respectively. Moreover, conditions H1), H2) and H3) are satisfied.

Proof

Condition H1) was already verified. Since ∇h(x, y) = (0, 1) for all $(x, y) \in \mathbb{R}^2$ and (I.1.35) holds, condition H2) is fulfilled as well.

Now suppose that $\dim N\bigl(I - \widetilde P_\xi(x_0, 0, \mu, \alpha)\bigr) > 1$. We recall that $f_+(x_0, y_0) \in N\bigl(I - \widetilde P_\xi(x_0, 0, \mu, \alpha)\bigr)$. Since $N\bigl(I - \widetilde P_\xi(x_0, 0, \mu, \alpha)\bigr)$ is a linear space, there is a vector

$$\bar v \in N\bigl(I - \widetilde P_\xi(x_0, 0, \mu, \alpha)\bigr)$$

such that $\langle \bar v, f_+(x_0, y_0)\rangle = 0$. Then we can write $\bar v = (0, v)^*$. Using formula (I.1.18) for $\widetilde P_\xi$ we look for the image of $\bar v$ under the mapping $\widetilde P_\xi(x_0, 0, \mu, \alpha)$. We subsequently obtain

$$S_1 X_1(t_1)\, \bar v = \frac{v}{\omega_1} \sqrt{\frac{a}{\eta}} \begin{pmatrix} \omega_1 x_1 + \dfrac{\delta^2 (\omega_1 + \omega_3)}{x_1} \\[4pt] -\delta \omega_3 \end{pmatrix}, \qquad X_2(t_2) S_1 X_1(t_1)\, \bar v = \frac{v}{\omega_1} \sqrt{\frac{a}{\eta}} \begin{pmatrix} \dfrac{\delta^2}{x_1}(\omega_1 + \omega_3) - x_1 \omega_1 Z - \delta \omega_1 \widetilde Z \\[4pt] \delta(\omega_1 + \omega_3) + \delta \omega_1 Z - x_1 \omega_1 \widetilde Z \end{pmatrix}$$

where $Z = e^{-2\eta(t_2 - t_1)}$ and $\widetilde Z = \frac{b}{a}(1 - Z)$ are the values of W and $\widetilde W$ at t = t₂,

$$S_2 X_2(t_2) S_1 X_1(t_1)\, \bar v = \frac{v}{\omega_3} \sqrt{\frac{a}{\eta}} \begin{pmatrix} -\dfrac{\delta^2}{x_1}(\omega_1 + \omega_3) - \left(\dfrac{\delta^2 \omega_1}{x_1} + \dfrac{\eta\, \omega_3}{a\, x_1}\right) Z + \delta \omega_1 \widetilde Z \\[4pt] \delta(\omega_1 + \omega_3) + \delta \omega_1 Z - x_1 \omega_1 \widetilde Z \end{pmatrix}$$

and finally

$$X_3(T) S_2 X_2(t_2) S_1 X_1(t_1)\, \bar v = v \begin{pmatrix} \dfrac{\delta}{x_1} \cdot \dfrac{\omega_1 + \omega_3}{\omega_3} + \dfrac{\delta}{x_1} \cdot \dfrac{\omega_1 + \omega_3}{\omega_3}\, Z - \dfrac{\omega_1}{\omega_3}\, \widetilde Z \\[4pt] Z \end{pmatrix}.$$

Since $Z \le \exp\{-2\eta\pi/\omega_3\} < 1$, it is obvious that $\bar v = \widetilde P_\xi(x_0, 0, \mu, \alpha)\, \bar v$ if and only if $\bar v = (0, 0)^*$. This finishes the verification of condition H3).

Because, in general, the formula for A(t) is rather awkward, we move to examples with concrete parameters.

Example I.1.13

Consider system (I.1.30)ε with

$$(\mathrm{I.1.36})\qquad a = b = \delta = 1, \quad \eta = 2, \quad \omega_1 = 1, \quad \omega_2 = 5, \qquad g(x, y, t, \varepsilon, \mu) = \begin{cases} (\sin \omega t, 0)^* & \text{if } y > 0, \\ (0, 0)^* & \text{if } y < 0. \end{cases}$$

Then we have ω₃ = 3, T = 2π, initial point $(x_0, y_0) = (0, 1 + \sqrt{2})$, and saltation matrices

$$S_1 = \begin{pmatrix} 1 & -4 \\ 0 & 3 \end{pmatrix}, \qquad S_2 = \begin{pmatrix} 1 & -\frac{4}{3} \\ 0 & \frac{1}{3} \end{pmatrix}$$

and

$$\widetilde P_\xi\bigl((x_0, y_0), 0, \mu, \alpha\bigr) = \begin{pmatrix} 1 & 1 + \frac{5}{3} e^{-2\pi} \\ 0 & e^{-2\pi} \end{pmatrix}.$$

Therefore $R_1 = \left(1 + \frac{5}{3} e^{-2\pi},\; e^{-2\pi} - 1\right)^*$ and $\psi = \left(1 - e^{-2\pi},\; 1 + \frac{5}{3} e^{-2\pi}\right)^* \in \mathbb{R}^2$.
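The concrete data of this example can be cross-checked numerically; the sketch below assumes the standard saltation-matrix formula $S = I + (f_{\text{after}} - f_{\text{before}})\,\nabla h^* / \langle \nabla h, f_{\text{before}}\rangle$, which should agree with definitions (I.1.16) and (I.1.20):

```python
import numpy as np

# Cross-check of Example I.1.13 data; h(x, y) = y, so grad h = (0, 1).
a, b, delta, eta, omega1, omega2 = 1.0, 1.0, 1.0, 2.0, 1.0, 5.0
omega3 = omega2 - eta * b / a                    # expected: 3
x1 = np.sqrt(eta / a - delta**2)                 # expected: 1

def saltation(f_before, f_after):
    # S = I + (f_after - f_before) * grad_h^T / <grad_h, f_before>
    S = np.eye(2)
    S[:, 1] += (f_after - f_before) / f_before[1]
    return S

# Crossing data (I.1.35) at the two intersection points with y = 0.
fp1 = np.array([-omega1 * delta, -omega1 * x1])  # f_plus  at (x1, 0)
fm1 = np.array([omega3 * delta, -omega3 * x1])   # f_minus at (x1, 0)
fp2 = np.array([-omega1 * delta, omega1 * x1])   # f_plus  at (-x1, 0)
fm2 = np.array([omega3 * delta, omega3 * x1])    # f_minus at (-x1, 0)

S1 = saltation(fp1, fm1)   # entering the region y < 0
S2 = saltation(fm2, fp2)   # returning to the region y > 0

# Crossing times and the period (arccot(z) = pi/2 - arctan(z)).
t1 = np.arccos(-delta * np.sqrt(a / eta)) / omega1
t2 = (np.pi + np.pi/2 - np.arctan((x1**2 - delta**2) / (2*delta*x1))) / omega3 + t1
T = np.arccos(-delta * np.sqrt(a / eta)) / omega1 + t2
```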

After some algebra we obtain

$$M(\alpha) = \frac{1}{3 e^{2\pi}\,(\omega^2 - 1)} \bigl[(\omega A + B) \sin \omega\alpha + (\omega C + D) \cos \omega\alpha\bigr]$$

for $M(\alpha) = M_\omega(\alpha)$ of (I.1.26), where

$$(\mathrm{I.1.37})\qquad
\begin{aligned}
A &= 4\sqrt{2}\, \sin\tfrac{3}{4}\pi\omega + 3 e^{2\pi}\bigl(2 + \sqrt{2}\bigr) \sin\tfrac{5}{4}\pi\omega + \bigl(3 e^{2\pi} - 3\bigr) \sin(2\pi\omega),\\
B &= \bigl(5 - 3 e^{2\pi}\bigr) - \bigl(2 + 3\sqrt{2}\, e^{2\pi}\bigr) \cos\tfrac{3}{4}\pi\omega + 4\sqrt{2}\, \cos\tfrac{5}{4}\pi\omega + \bigl(5 + 3 e^{2\pi}\bigr) \cos(2\pi\omega),\\
C &= \bigl(3 e^{2\pi} - 3\bigr) - 4\sqrt{2}\, \cos\tfrac{3}{4}\pi\omega - \bigl(2 + 3\sqrt{2}\, e^{2\pi}\bigr) \cos\tfrac{5}{4}\pi\omega + \bigl(3 - 3 e^{2\pi}\bigr) \cos(2\pi\omega),\\
D &= -\bigl(2 + 3\sqrt{2}\, e^{2\pi}\bigr) \sin\tfrac{3}{4}\pi\omega + 4\sqrt{2}\, \sin\tfrac{5}{4}\pi\omega + \bigl(5 + 3 e^{2\pi}\bigr) \sin(2\pi\omega).
\end{aligned}$$

For ω > 0, ω ≠ 1, M(α) has a simple root if and only if (ωA + B)2 + (ωC + D)2 > 0. Since A and C are 8-periodic functions,

$$\sqrt{B^2 + D^2} \le \Bigl\{\bigl[(5 + 3 e^{2\pi}) + (2 + 3\sqrt{2}\, e^{2\pi}) + 4\sqrt{2} + (5 + 3 e^{2\pi})\bigr]^2 + \bigl[(5 + 3 e^{2\pi}) + (2 + 3\sqrt{2}\, e^{2\pi}) + 4\sqrt{2}\bigr]^2\Bigr\}^{1/2} \doteq \sqrt{3}\,\bigl(1 + \sqrt{2}\bigr)\sqrt{9 e^{4\pi} + 30 e^{2\pi} + 25} \doteq 6739,$$

and, according to Figure I.1.2, we have the estimate

$$\sqrt{(\omega A + B)^2 + (\omega C + D)^2} \ge \omega \sqrt{A^2 + C^2} - \sqrt{B^2 + D^2} \ge 400\,\omega - 6739,$$

so one can see that for ω ≥ 17 the T-periodic orbit of the perturbed system (I.1.30)_ε persists for all ε ≠ 0 small. It can be proved numerically (see Figure I.1.2) that

$$(\mathrm{I.1.38})\qquad \frac{1}{|\omega^2 - 1|} \sqrt{(\omega A + B)^2 + (\omega C + D)^2} > 0$$

for ω ∈ (0, 17). We conclude:

Figure I.1.2. Graphs of the functions A² + C² and of the left-hand side of (I.1.38).

Corollary I.1.14

Consider (I.1.30)_ε with parameters (I.1.36). Then the 2π-periodic orbit persists for all ω > 0 and all ε ≠ 0 small.

Example I.1.15

Consider system (I.1.30) ε with

$$(\mathrm{I.1.39})\qquad a = b = \delta = 1, \quad \eta = 2, \quad \omega_1 = 1, \quad \omega_2 = 5, \qquad g(x, y, t, \varepsilon, \mu) = \begin{cases} \mu_1 (\sin \omega t, 0)^* & \text{if } y > 0, \\ \mu_2 (x + y, 0)^* & \text{if } y < 0. \end{cases}$$

Consequently, the Poincaré-Andronov-Melnikov function of (I.1.26) is

$$M(\alpha) = \mu_1\, \frac{1}{3 e^{2\pi}\,(\omega^2 - 1)} \bigl[(\omega A + B) \sin \omega\alpha + (\omega C + D) \cos \omega\alpha\bigr] + \mu_2 E$$

where A, B, C, D are given by (I.1.37) and

$$E = \frac{2}{975}\,\bigl(739 - 223\, e^{-2\pi}\bigr).$$

Function M(α) possesses a simple root if and only if

$$(\mathrm{I.1.40})\qquad |\mu_2| < \frac{\sqrt{(\omega A + B)^2 + (\omega C + D)^2}}{3 e^{2\pi}\,|\omega^2 - 1|\; E}\; |\mu_1|.$$

Applying Theorem I.1.6 we obtain the next result.

Corollary I.1.16

Consider (I.1.30) ε with parameters (I.1.39). If μ 1, μ 2 and ω satisfy (I.1.40), then the 2π-periodic orbit persists for ε ≠ 0 small.

Remark I.1.17

Inequality (I.1.40) means that if the periodic part of the perturbation is sufficiently large with respect to the non-periodic part, then the T-periodic trajectory persists. Note that the right-hand side of (I.1.40) can be estimated from above by

$$\frac{\sqrt{c_1 \omega^2 + c_2 \omega + c_3}}{|\omega^2 - 1|}$$

for appropriate constants c₁, c₂, c₃, and this bound tends to 0 as ω tends to +∞. Hence the larger the frequency ω, the larger |μ₁| must be, for fixed μ₂ ≠ 0, for the T-periodic orbit to persist via Theorem I.1.6.

URL: https://www.sciencedirect.com/science/article/pii/B9780128042946500035

Exponential integrators for coupled self-adjoint non-autonomous partial differential systems

Enrique Ponsoda, Sergio Blanes, in Applied Mathematics and Computation, 2014

3.2 The non-autonomous case

Suppose we are interested in approximating the solution of (3) on a time mesh, i.e. $u(x, t_l)$ for $t_l = t_0 + l h$, $l = 1, 2, \ldots, L$, and $h = T/L$. This requires computing numerical approximations to

$$Y_n(t_l),\; Y_n'(t_l), \qquad n = 1, 2, \ldots, n_0, \quad l = 1, 2, \ldots, L.$$

This can be achieved by solving numerically the following matrix equations

$$V_n'(t) = M(t, n)\, V_n(t), \quad V_{n,0} = \widehat V_n(t_{l-1}), \qquad W_n'(t) = M(t, n)\, W_n(t), \quad W_{n,0} = \widehat W_n(t_{l-1}), \qquad t \in [t_{l-1}, t_l],$$

for $l = 1, 2, \ldots, L$, $n = 1, 2, \ldots, n_0$, where $\widehat V_n(t_{l-1})$, $\widehat W_n(t_{l-1})$ are the approximate solutions from the previous step.

Solving this problem numerically using standard methods requires applying the integrator repeatedly for each value of l and n. In general, the computational cost increases considerably with n and, what is even worse, the error also grows considerably with n and can make the numerical series expansion divergent. We will propose an algorithm which circumvents these two main difficulties.

Let us now consider the IVPs (6) and (7), where M(t, n) is given by (8). If $\Phi_n(t, t_0)$ denotes the fundamental matrix solution of the homogeneous matrix equation

$$\Phi_n'(t, t_0) = M(t, n)\, \Phi_n(t, t_0), \qquad \Phi_n(t, t) = I \in \mathbb{R}^{2r \times 2r},$$

then, the solution of the homogeneous Eqs. (6),(7) can be written in the form

$$V_n(t) = \Phi_n(t, 0)\, V_n(0), \qquad W_n(t) = \Phi_n(t, 0)\, W_n(0).$$

Note that the dependence on n enters only through $\Phi_n$, because the initial conditions $V_n(0)$ and $W_n(0)$ remain the same for all $n \ge 1$.

If we denote by $V_{l,n}$, $l = 0, 1, \ldots, L$, the values of $V_n(t)$ at a mesh $t_0 = 0,\, t_1 = h,\, \ldots,\, t_L = L h$, with $h = T/L$, and $\Phi_{n,l} \equiv \Phi_n(t_l + h, t_l)$, then

$$V_{l+1,n} = \Phi_{n,l}\, V_{l,n}, \qquad l = 0, 1, \ldots, L - 1.$$

At this point it is convenient to analyze the structure of the fundamental matrix solution. It is easy to prove that, for this problem, the matrix $\Phi_{n,l}$ takes the form

$$\Phi_{n,l} = \begin{pmatrix} \nu_{n,l} & \frac{1}{n}\, \tau_{n,l} \\ n\, \xi_{n,l} & \eta_{n,l} \end{pmatrix},$$

where $\nu_{n,l}, \tau_{n,l}, \xi_{n,l}, \eta_{n,l}$ are $\mathbb{R}^{r \times r}$-valued functions bounded by a constant which does not depend on n, and $\Phi_{n,l} = I + O(h)$ in the limit $h \to 0$ (or $L \to \infty$).

Then, for the integration over a finite time interval, $t \in [0, T]$, with $h = T/L$, we have that

$$\Phi_n = \prod_{l=0}^{L-1} \Phi_{n,l} = \begin{pmatrix} A & \frac{1}{n}\, B \\ n\, C & D \end{pmatrix},$$

where A, B, C, D are $\mathbb{R}^{r \times r}$-valued functions which can also be bounded in terms of T, P(t), Q(t), but not of n. As a result, standard explicit methods like explicit Runge–Kutta methods are not appropriate for this problem: for a fixed value of h the numerical solution grows polynomially with n, and the series solution will not converge, so one has to use implicit methods.
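The closure of this block structure under products can be illustrated directly (a sketch with randomly chosen r×r blocks; the specific blocks are assumptions for the demonstration):

```python
import numpy as np

# Sketch of why the block structure [[A, (1/n) B], [n C, D]] is closed under
# products: the n-dependence stays confined to the off-diagonal scaling.
rng = np.random.default_rng(0)
r = 3
A1, B1, C1, D1 = (rng.standard_normal((r, r)) for _ in range(4))
A2, B2, C2, D2 = (rng.standard_normal((r, r)) for _ in range(4))

def phi(A, B, C, D, n):
    return np.block([[A, B / n], [n * C, D]])

for n in (1, 10, 1000):
    P = phi(A1, B1, C1, D1, n) @ phi(A2, B2, C2, D2, n)
    # Extract blocks and undo the n-scaling: the results are n-independent.
    assert np.allclose(P[:r, :r], A1 @ A2 + B1 @ C2)
    assert np.allclose(P[:r, r:] * n, A1 @ B2 + B1 @ D2)
    assert np.allclose(P[r:, :r] / n, C1 @ A2 + D1 @ C2)
    assert np.allclose(P[r:, r:], C1 @ B2 + D1 @ D2)
```

This is why the bounds on A, B, C, D above propagate through the product over l, while any method that mixes the blocks incorrectly lets the n-scaling leak into the solution.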

The matrices $\Phi_{n,l}$ for $l = 0, 1, \ldots, L-1$, $n = 1, 2, \ldots, n_0$, have to be approximated numerically, and this can lead to exceedingly costly algorithms. If we take the same time step h for all values of n, the matrices P(t), Q(t) need to be computed only once on each interval (if the same quadrature rule is used for all values of n), i.e. the same values $P(t_l + c_i h)$, $Q(t_l + c_i h)$ can be used for all n. However, as we will see, the performance of most standard methods rapidly deteriorates with n. In addition, implicit methods usually need to compute the inverse of a matrix (the problem is linear), and the algorithm has to repeat this computation $L \cdot n_0$ times.

In the following, we show that some exponential methods have many advantages for the numerical integration of this problem. The exponential methods we consider in this work are explicit, but closely related to implicit methods for linear problems [6]. They show better performance for large values of n (they converge to the exact solution in the limit where P(t), Q(t) are constant), and there is a recursive algorithm which allows one to compute all matrices $\Phi_{n,l}$, $n = 2, 3, \ldots, n_0$, from $\Phi_{1,l}$.

URL: https://www.sciencedirect.com/science/article/pii/S0096300314007152