On the Gauss-Newton method for solving equations

We use a combination of the center-Lipschitz condition and the Lipschitz condition on the Fréchet derivative of the operator involved to provide a semilocal convergence analysis of the Gauss-Newton method to a solution of an equation. Using more precise estimates on the distances involved, under weaker hypotheses, and under the same computational cost, we provide an analysis of the Gauss-Newton method with the following advantages over the corresponding results in [8]: a larger convergence domain; finer error estimates on the distances involved; and at least as precise information on the location of the solution.


Introduction
In this study we are concerned with the problem of approximating a solution x* of the equation

F′(x)ᵀ F(x) = 0, (1.1)

where F is a Fréchet-differentiable operator defined on X = ℝⁿ, with values in Y = ℝᵐ (m ≥ n).
A large number of problems in applied mathematics and also in engineering are solved by finding the solutions of certain equations. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. For the sake of simplicity, assume that a time-invariant system is driven by the equation ẋ = T(x), for some suitable operator T, where x is the state. Then the equilibrium states are determined by solving equation (1.1). Similar equations are used in the case of discrete systems. The unknowns of engineering equations can be functions (difference, differential, and integral equations), vectors (systems of linear or nonlinear algebraic equations), or real or complex numbers (single algebraic equations with single unknowns). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to a solution of the equation. Iteration methods are also applied for solving optimization problems; in such cases, the iteration sequences converge to an optimal solution of the problem at hand. Since all of these methods have the same recursive structure, they can be introduced and discussed in a general framework.
We are seeking least-squares solutions of (1.1). That is, we solve the minimization problem

min_{x ∈ ℝⁿ} (1/2) ‖F(x)‖², (1.2)

and we use the famous Gauss-Newton method

x_{k+1} = x_k − [F′(x_k)ᵀ F′(x_k)]⁻¹ F′(x_k)ᵀ F(x_k) (k ≥ 0) (1.3)

to generate a sequence approximating a solution x* of (1.2). When F′(x_k) has full column rank, the iterate can equivalently be written as x_{k+1} = x_k − F′(x_k)⁺ F(x_k), where A⁺ denotes the Moore-Penrose generalized inverse of A.
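To make the iteration concrete, here is a minimal sketch of a Gauss-Newton step in the generalized-inverse form x_{k+1} = x_k − F′(x_k)⁺ F(x_k); the test function, starting point, and tolerances are our own illustration, not taken from the paper or from [8]:

```python
import numpy as np

# Illustrative test problem (not from the paper): F : R^2 -> R^2,
# so the setting m >= n holds with m = n = 2.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0,  # unit circle
                     x[1] - x[0]**2])          # parabola

def J(x):
    # Frechet derivative (Jacobian) of F
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [-2.0 * x[0], 1.0]])

def gauss_newton(x, tol=1e-10, max_iter=50):
    # x_{k+1} = x_k - F'(x_k)^+ F(x_k), where A^+ is the Moore-Penrose
    # generalized inverse; this coincides with the normal-equations
    # form of (1.3) when F'(x_k) has full column rank.
    for _ in range(max_iter):
        step = np.linalg.pinv(J(x)) @ F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

x_star = gauss_newton(np.array([1.0, 1.0]))
# A least-squares solution satisfies the normal equation (1.1):
print(np.linalg.norm(J(x_star).T @ F(x_star)))
```

The limit point satisfies F′(x)ᵀ F(x) = 0 up to the tolerance, i.e. it is a stationary point of the objective in (1.2).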
There is an extensive literature on the local as well as the semilocal convergence analysis of Newton-type methods under various conditions in the more general setting when X and Y are Banach spaces [1]- [10].
In particular, in the case of the Gauss-Newton method (1.3), Li et al. provided a semilocal convergence analysis in [8] using the Lipschitz condition.
Recently, we have successfully used in [1]-[3] a combination of Lipschitz and center-Lipschitz conditions, instead of only Lipschitz conditions, to provide a finer local and semilocal convergence analysis for Newton-type methods when the Fréchet derivative F′(x) is invertible. The main idea is derived from the observation that more precise upper bounds on the norms ‖F′(x)⁻¹ F′(x₀)‖ can be obtained if the center-Lipschitz condition is used instead of the Lipschitz condition (which is commonly used in [4]-[10]).
It turns out that these ideas can be used to study the semilocal convergence of the Gauss-Newton method (1.3). In particular, we provide a semilocal convergence analysis with the following advantages over the work by Li et al. [8]:
1. Weaker sufficient convergence conditions;
2. A larger convergence domain;
3. Finer estimates on the distances involved, which implies that fewer iterations are needed to achieve a desired error tolerance;
4. At least as precise information on the uniqueness of the solution.
The above improvements are also obtained under the same computational cost, since the computation of the Lipschitz constant L (see (1.4)) requires that of the center-Lipschitz constant L₀ (see (1.5)).
In particular, we assume there exists L > 0 such that the Lipschitz condition

‖F′(x) − F′(y)‖ ≤ L ‖x − y‖ for all x, y ∈ D (1.4)

holds. In view of (1.4), there exists L₀ > 0 such that the center-Lipschitz condition

‖F′(x) − F′(x₀)‖ ≤ L₀ ‖x − x₀‖ for all x ∈ D (1.5)

holds. Note that in [8] the same Lipschitz constant is used in (1.4) and (1.5). However,

L₀ ≤ L (1.6)

holds in general, and the ratio L₀/L can be arbitrarily small [1]-[3]. Let us provide a simple example where strict inequality holds in (1.6): for F(x) = eˣ − 1 on D = [−1, 1] with x₀ = 0, condition (1.4) holds with L = e, whereas (1.5) holds with L₀ = e − 1 < e. Using the weaker hypothesis (1.5) instead of the stronger hypothesis (1.4) used in [8] leads to the advantages of our approach over the corresponding results in [8], as stated in the abstract of this study.

Semilocal convergence analysis of the Gauss-Newton method (1.3)
Let ℝ^{m×n} be the set of all m × n matrices, and let A⁺ be the generalized inverse of A ∈ ℝ^{m×n}. Then, when m ≥ n and A is of full rank, we have A⁺ = (AᵀA)⁻¹Aᵀ. We need the following lemmas.

Lemma ([8], [9], [10]). Let A, E ∈ ℝ^{m×n}. Assume B = A + E and ‖A⁺‖ ‖E‖ < 1. Then the following hold:
(i) rank(B) ≥ rank(A);
(ii) if rank(A) = n and m ≥ n, then rank(B) = n.
Lemma ([8], [9], [10]). Let A, E ∈ ℝ^{m×n}. Assume B = A + E, ‖A⁺‖ ‖E‖ < 1, and rank(A) = n (m ≥ n). Then

‖B⁺‖ ≤ ‖A⁺‖ / (1 − ‖A⁺‖ ‖E‖).

It is convenient for us to define, for each fixed a ∈ (0, 1], functions f_a and g_a on [0, 1]. It is simple algebra to show that q is the smallest positive zero of the function g_a, with q ∈ (0, 1). We also have f_a(0) = −2 q < 0, and it follows from the intermediate value theorem that f_a has a maximal zero in (0, 1).
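Assuming the perturbation lemma takes the standard form found in the matrix-perturbation literature (the bound ‖B⁺‖ ≤ ‖A⁺‖/(1 − ‖A⁺‖‖E‖) in the spectral norm, under full column rank), it can be checked numerically; the matrices below are our own example, not from the paper:

```python
import numpy as np

# A has full column rank n = 2 (m = 4 >= n); E is a perturbation
# chosen so that the hypothesis ||A^+|| ||E|| < 1 holds.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
E = 0.1 * np.ones((4, 2))
B = A + E

# Spectral norms; note ||A^+||_2 = 1 / sigma_min(A) for full-rank A.
normA_pinv = np.linalg.norm(np.linalg.pinv(A), 2)
normE = np.linalg.norm(E, 2)
assert normA_pinv * normE < 1  # hypothesis of the lemma

# Conclusions: rank is preserved, and the generalized inverse of B
# obeys ||B^+|| <= ||A^+|| / (1 - ||A^+|| ||E||).
normB_pinv = np.linalg.norm(np.linalg.pinv(B), 2)
print(np.linalg.matrix_rank(B))                       # 2
print(normB_pinv, normA_pinv / (1 - normA_pinv * normE))
```

The bound follows from σ_min(B) ≥ σ_min(A) − ‖E‖₂ (Weyl's inequality for singular values), which is why the spectral norm is the natural choice here.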

Define also h₁, b and c for some x₀ ∈ D.
Let j ≥ 0. We get that the relevant ball is a closed set, and the corresponding quantities q̄ and h̄ given by Li and Zhang in [8] satisfy q̄ = 0.5 > q and h̄ > h.
Hence, we have expanded the applicability of the Gauss-Newton method (1.3) under the same computational cost and with a smaller ratio than in [8].
In the next result, we take advantage of the case a < 1.
The advantages obtained here can also be extended to the case of the Newton-type method x_{n+1} = x_n − F′(x_n)⁺ F(x_n) (n ≥ 0) analyzed by Ben-Israel in [4]. However, we leave the details to the motivated reader.