On uniform-ultimate boundedness and periodicity results of solutions to certain second order non-linear vector differential equations

In this paper, we employ the second method of Lyapunov to examine sufficient conditions for the uniform-ultimate boundedness of solutions and the existence of at least one periodic solution to the following second order vector differential equation:

Ẍ + F(X, Ẋ)Ẋ + H(X) = P(t, X, Ẋ).


Introduction
The goal of this paper is to establish some results on the uniform-ultimate boundedness of solutions and the existence of at least one periodic solution to the following second order nonlinear differential equation:

Ẍ + F(X, Ẋ)Ẋ + H(X) = P(t, X, Ẋ). (1.1)

Eq. (1.1) can be written as the following system of first order differential equations:

Ẋ = Y,
Ẏ = −F(X, Y)Y − H(X) + P(t, X, Y), (1.2)

where X, Y : R^+ → R^n, R^+ = [0, ∞), R = (−∞, ∞); H : R^n → R^n; P : R^+ × R^n × R^n → R^n; F is an n × n continuous symmetric positive definite matrix function depending on the arguments displayed explicitly, and the dots indicate differentiation with respect to the variable t. To ensure that solutions of Eq. (1.1) exist, we assume the continuity of the functions F, H and P. Furthermore, we assume that F, H and P satisfy a Lipschitz condition in their respective arguments.
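To make the reduction in (1.2) concrete, the following is a minimal numerical sketch of a hypothetical two-dimensional instance of system (1.2); the particular choices of F, H and P are illustrative assumptions and are not taken from the paper.

```python
# A toy instance of system (1.2) with n = 2; F, H, P below are assumed
# examples chosen to satisfy the standing hypotheses, not the paper's data.
import numpy as np
from scipy.integrate import solve_ivp

def F(X, Y):
    # continuous, symmetric, positive definite matrix function
    return np.array([[3.0 + X[0]**2, 1.0],
                     [1.0, 3.0 + Y[1]**2]])

def H(X):
    # H(0) = 0, with a symmetric positive definite Jacobian
    return np.array([2.0*X[0] + np.tanh(X[0]), 2.0*X[1] + np.tanh(X[1])])

def P(t, X, Y):
    # bounded forcing, consistent with a growth bound like (3.3)
    return np.array([0.5*np.cos(t), 0.5*np.sin(t)])

def rhs(t, z):
    X, Y = z[:2], z[2:]
    return np.concatenate([Y,                                  # X' = Y
                           -F(X, Y) @ Y - H(X) + P(t, X, Y)])  # Y' as in (1.2)

sol = solve_ivp(rhs, (0.0, 50.0), [4.0, -3.0, 2.0, 1.0])
print("||(X, Y)|| at t = 50:", np.linalg.norm(sol.y[:, -1]))
```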
In the past five decades or more, the qualitative behaviour of solutions to second order and higher order, scalar and vector, linear and nonlinear differential equations has been studied by many authors. In the literature, methods such as the integral test, the frequency-domain method and the direct method of Lyapunov have been employed to study the qualitative behaviour of solutions of differential equations. However, the direct method (also called the second method) of Lyapunov has proved to be effective and is the most widely used among them (see [1]-[40]).
Our findings in the literature show that Loud [22] gave some conditions for the convergence of solutions of the second order scalar differential equation

ẍ + cẋ + g(x) = p(t, x, ẋ), (1.3)

where c is a positive constant. Later, Ezeilo [16] considered an n-dimensional form of Eq. (1.3), namely

Ẍ + CẊ + G(X) = P(t, X, Ẋ), (1.4)

where C is a real n × n constant matrix. The author established some results on the ultimate boundedness and convergence of solutions of (1.4).
Also, Tejumola [31] considered a certain second order matrix differential equation of the form

Ẍ + AẊ + H(X) = P(t, X, Ẋ),

where A is a constant n × n symmetric matrix and X, H and P are n × n continuous matrices. He established some criteria for the stability of the trivial solution when H(0) = 0 and P ≡ 0, the ultimate boundedness of all solutions, and the existence of a periodic solution when P ≠ 0. Later, Afuwape and Omeike [6] considered a more general second order differential equation of the form

Ẍ + F(Ẋ) + G(X) = P(t, X, Ẋ)

and established a convergence result for this equation by imposing certain conditions on the vectors F(Ẋ), G(X) and P(t, X, Y).
Furthermore, Omeike et al. [25] used an incomplete Lyapunov function supplemented with a signum function to establish the boundedness of solutions of Eq. (1.1). In a recent paper, Adeyanju [5] proved some results on the stability and boundedness of solutions of (1.1) using a complete Lyapunov function. The works of Omeike et al. [25], Adeyanju [5] and the papers listed above motivated the present work.

Preliminary Results and Definition
In this section, we provide some basic results that are useful in proving our main results.

Lemma 2.1 ([17], [30], [33]). Let A be a real n × n symmetric matrix. Then for any X ∈ R^n we have

δ_a‖X‖² ≤ ⟨AX, X⟩ ≤ Δ_a‖X‖²,

where δ_a and Δ_a are, respectively, the least and greatest eigenvalues of the matrix A.

Lemma 2.2 ([32]). Let H(0) = 0 and assume that the matrices A and J_h(X) are symmetric and commute for all X ∈ R^n. Then

⟨H(X), AX⟩ = ∫₀¹ ⟨J_h(σX)AX, X⟩ dσ.

Lemma 2.3 ([39]). Suppose that there exists a Lyapunov function V(t, X) defined on 0 ≤ t < ∞, ‖X‖ ≥ R (where R may be large) which satisfies:
(i) a(‖X‖) ≤ V(t, X) ≤ b(‖X‖), where a(r) and b(r) are continuous and increasing and a(r) → ∞ as r → ∞;
(ii) V̇(t, X) ≤ −c(‖X‖), where c(r) is positive and continuous.
Then the solutions of the underlying system are uniformly bounded and uniform-ultimately bounded.
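As a quick numerical illustration of Lemma 2.1 (a sketch, using a randomly generated symmetric matrix as an assumed example), the Rayleigh-quotient bounds can be verified directly:

```python
# Check of Lemma 2.1: for a real symmetric A,
# delta_a * ||X||^2 <= <AX, X> <= Delta_a * ||X||^2.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2.0                      # real symmetric matrix
eigs = np.linalg.eigvalsh(A)             # sorted eigenvalues
delta_a, Delta_a = eigs[0], eigs[-1]     # least and greatest eigenvalues

for _ in range(1000):
    X = rng.standard_normal(4)
    q, n2 = X @ A @ X, X @ X             # <AX, X> and ||X||^2
    assert delta_a*n2 - 1e-9 <= q <= Delta_a*n2 + 1e-9
print("Lemma 2.1 bounds hold on all random samples.")
```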

Ultimate Boundedness Result
Given that H(0) = 0, H(X) ≠ 0 whenever X ≠ 0, and that J_h = J_h(X) denotes the Jacobian matrix (∂h_i/∂x_j) of H(X) in Eq. (1.1), we have the following theorem.
Theorem 3.1. Suppose that all the basic assumptions imposed on F(X, Y) and H(X) hold and that, in addition, for arbitrary X, Y ∈ R^n:
(i) the matrix J_h(X) is symmetric and positive definite and its eigenvalues λ_i(J_h(X)) (i = 1, 2, ..., n) satisfy inequality (3.1), where α, δ and ε are positive constants;
(ii) inequality (3.2) holds;
(iii) there exist some positive finite constants m_1 and m_2 such that the vector P(t, X, Y) satisfies

‖P(t, X, Y)‖ ≤ m_1 + m_2(‖X‖ + ‖Y‖). (3.3)

Then all the solutions of Eq. (1.1), or equivalently of system (1.2), are uniformly bounded and uniform-ultimately bounded.
Proof. The proof of this theorem rests on the Lyapunov function V = V(t, X, Y) defined in (3.4). It is clear that the function defined by (3.4) vanishes when X = 0 and Y = 0. Applying Lemma 2.1 and Lemma 2.2 to (3.4) yields the lower estimate in (3.5); similarly, Lemma 2.1 and Lemma 2.2 give the upper estimate, and hence (3.5) holds.

The time derivative of the function V(t) along the solution path of system (1.2) is given by (3.6), where I is the n × n identity matrix. This derivative can be written as the sum of terms U_1 and U_2. The second term in U_2 can be expressed by means of a constant k_1 > 0 whose value will be given later. By Lemma 2.1 and assumption (ii) of Theorem 3.1, this term is bounded above. Using Lemma 2.1 once more and, lastly, the bound (3.3), we arrive at the estimate (3.7) for V̇(t), where δ_4 = max{α, 1 + δ}.

To conclude the proof of the theorem, we follow the same pattern of argument as in the proof of Theorem 1 of [7] and [23], or the approach of Yoshizawa in [37].
Suppose that in inequality (3.7) ‖X‖ and ‖Y‖ are chosen sufficiently large. We now prove that there exists a positive constant K such that ‖X‖² + ‖Y‖² ≤ K for t ≥ T (T > 0) for any solution (X(t), Y(t)) of system (1.2). Indeed, by (3.8) and (3.5), for any solution (X(t), Y(t)) of (1.2) there exists t_1 > 0 such that ‖X(t_1)‖² + ‖Y(t_1)‖² ≤ δ_7²; for if ‖X(t)‖² + ‖Y(t)‖² > δ_7² for all t ≥ 0, then, by (3.8), V̇(t) < 0 for all t ≥ 0, so that V(X(t), Y(t)) → −∞ as t → ∞, which contradicts (3.5). Therefore, from (3.5) there exists a constant K_1 > δ_7 such that

max_{‖X(t)‖² + ‖Y(t)‖² = δ_7²} V(X(t), Y(t)) < min_{‖X(t)‖² + ‖Y(t)‖² = K_1²} V(X(t), Y(t)).

In what follows, we establish that the solution (X(t), Y(t)) of (1.2) must satisfy inequality (3.11). Otherwise, by (3.9) there exist t_2 and t_3, with t_1 < t_2 < t_3, such that (3.14) holds for t_2 ≤ t ≤ t_3. By (3.8), inequality (3.14) implies that V(t_2) > V(t_3), which contradicts the inequality V(t_2) < V(t_3) (t_2 < t_3) obtained from (3.10), (3.12) and (3.13).
Thus, (X(t), Y(t)) must satisfy (3.11). This completes the proof of the theorem.
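As a numerical illustration of Theorem 3.1 (a sketch only, reusing the hypothetical F, H and P of the example in the introduction), trajectories started far from the origin eventually enter and remain in a fixed ball, which is the behaviour that uniform-ultimate boundedness asserts:

```python
# Trajectories of the toy system from several distant initial points; after a
# transient, ||X||^2 + ||Y||^2 stays below a bound independent of the start.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    X, Y = z[:2], z[2:]
    Fm = np.array([[3.0 + X[0]**2, 1.0], [1.0, 3.0 + Y[1]**2]])
    Hv = np.array([2.0*X[0] + np.tanh(X[0]), 2.0*X[1] + np.tanh(X[1])])
    Pv = np.array([0.5*np.cos(t), 0.5*np.sin(t)])
    return np.concatenate([Y, -Fm @ Y - Hv + Pv])

for z0 in ([10.0, -8.0, 6.0, 5.0], [-20.0, 15.0, 0.0, -9.0]):
    sol = solve_ivp(rhs, (0.0, 100.0), z0, max_step=0.05)
    r2 = (sol.y[:, sol.t > 50.0]**2).sum(axis=0)   # ||X||^2 + ||Y||^2, t > 50
    print(f"z0 = {z0}: sup over t > 50 is {r2.max():.3f}")
```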
Our next theorem deals with the case where the vector H(X) is not necessarily differentiable.
Theorem 3.2. Suppose that the basic assumptions of Theorem 3.1 hold, but that in place of condition (i) and inequality (3.2) of condition (ii) we have:
(i) there exists an n × n real continuous operator A(X, Y) satisfying (3.15) for any vectors X, Y ∈ R^n, whose eigenvalues λ_i(A(X, Y)) (i = 1, 2, ..., n) satisfy inequality (3.16), where a and Δ_1 are positive constants;
(ii) the constant α satisfies inequality (3.17).

Proof. We note here that the proof of this theorem is similar to the proof of Theorem 3.1, except for some slight modifications. Hence, we refer to the relevant parts of the proof of Theorem 3.1.
First, we define a scalar function V(t) = V(t, X, Y) by (3.18), where both α and a are as defined above. Obviously, when X = 0 and Y = 0, the function V(t) defined by (3.18) vanishes. As in the proof of Theorem 3.1, one can easily verify that

δ_8(‖X‖² + ‖Y‖²) ≤ V(t) ≤ δ_9(‖X‖² + ‖Y‖²) (3.19)

for certain positive constants δ_8 and δ_9.
Differentiating (3.18) with respect to t along the solutions of (1.2), we obtain V̇(t). Setting Y = 0 in (3.15) (and noting that H(0) = 0) and using the result in V̇(t) above, we obtain (3.20), where B is a matrix function and I is the n × n identity matrix. Again, as in the proof of Theorem 3.1, we can rewrite V̇(t) term by term. By Lemma 2.1, assumption (ii) of Theorem 3.1 and (3.16), we have ‖BY‖² ≤ (2Δ_1 + α)²‖Y‖², where B is the matrix defined earlier. Therefore, we arrive at an estimate for V̇(t) in which k_2 > 0 is a constant whose value will be determined later.
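Although condition (3.15) is stated abstractly, one common way such an operator A(X, Y) arises for a non-differentiable H is as a divided difference. The sketch below is purely illustrative (the choice h(u) = 2u + |u| and the diagonal construction are assumptions, not the paper's data): it checks numerically that H(X) − H(Y) = A(X, Y)(X − Y), with the eigenvalues of A(X, Y) confined to [1, 3] even though H fails to be differentiable at the origin.

```python
# Divided-difference operator for a diagonal, non-differentiable H (illustrative).
# With h(u) = 2u + |u|, each slope (h(x)-h(y))/(x-y) lies in [1, 3].
import numpy as np

def h(u):
    return 2.0*u + np.abs(u)

def A(X, Y):
    d = X - Y
    safe = np.where(d == 0.0, 1.0, d)           # avoid 0/0; value there is irrelevant
    slope = (h(X) - h(Y)) / safe
    return np.diag(np.where(d == 0.0, 2.0, slope))

rng = np.random.default_rng(1)
for _ in range(1000):
    X, Y = rng.standard_normal(3), rng.standard_normal(3)
    assert np.allclose(h(X) - h(Y), A(X, Y) @ (X - Y))
    lam = np.diag(A(X, Y))                      # eigenvalues of the diagonal A
    assert np.all((lam >= 1.0 - 1e-12) & (lam <= 3.0 + 1e-12))
print("H(X) - H(Y) = A(X, Y)(X - Y) with eigenvalues in [1, 3] verified.")
```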

Existence of a periodic solution
Theorem 4.1. In addition to all the conditions of Theorem 3.1, suppose that the vector P(t, X, Y) is ω-periodic in t, that is, P(t + ω, X, Y) = P(t, X, Y). Then there exists at least one ω-periodic solution (X(t), Y(t)) of system (1.2).
Proof. The proof of this theorem follows from Theorem 3.1 together with the standard result that, for a system which is ω-periodic in t, uniform-ultimate boundedness of solutions implies the existence of an ω-periodic solution (see [39]).
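The fixed-point characterisation behind such periodicity results can be explored numerically: an ω-periodic solution of (1.2) corresponds to a fixed point of the period map z ↦ φ(ω, z). The sketch below (again using the hypothetical 2π-periodic toy system, an assumption for illustration) searches for such a fixed point with scipy's fsolve:

```python
# Locate a fixed point of the period map of the toy system; the solution
# through that point is then (numerically) 2*pi-periodic.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

OMEGA = 2.0 * np.pi

def rhs(t, z):
    X, Y = z[:2], z[2:]
    Fm = np.array([[3.0 + X[0]**2, 1.0], [1.0, 3.0 + Y[1]**2]])
    Hv = np.array([2.0*X[0] + np.tanh(X[0]), 2.0*X[1] + np.tanh(X[1])])
    Pv = np.array([0.5*np.cos(t), 0.5*np.sin(t)])    # 2*pi-periodic forcing
    return np.concatenate([Y, -Fm @ Y - Hv + Pv])

def residual(z0):
    sol = solve_ivp(rhs, (0.0, OMEGA), z0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1] - z0                         # phi(omega, z0) - z0

z_star = fsolve(residual, np.zeros(4))
print("fixed point:", z_star)
print("residual norm:", np.linalg.norm(residual(z_star)))
```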

Conclusion
By constructing suitable complete Lyapunov functions, which serve as our basic tools, we have established sufficient conditions that guarantee the uniform-ultimate boundedness of solutions to a certain class of second order vector differential equations, both when H(X) is differentiable and when it is not necessarily differentiable. Also, conditions for the existence of at least one periodic solution of the equation considered are established for the two cases of H(X).

Statements and Declarations
We declare that there are no competing interests concerning this paper.