Robust cubically and quartically iterative techniques free from derivative
F. Soleymani (Islamic Azad)

Constructing a technique which is both accurate and derivative-free is one of the most important tasks in the field of iterative processes. Hence, in this study, convergent iterative techniques are suggested for solving single-variable nonlinear equations. Their error equations are given theoretically to show that they have cubic and quartic convergence. Per iteration, the novel schemes include three evaluations of the function while being free from derivatives as well. From the viewpoint of optimality, the developed quartic class reaches the optimal efficiency index 4^(1/3) ≈ 1.587 based on the Kung-Traub conjecture regarding the optimality of multi-point iterations without memory. Finally, the theoretical results are supported by numerical examples to elucidate the accuracy of the developed schemes.

2010 Mathematics Subject Classification: 65H05, 65B99, 41A25.


Introduction and Background Literature
Assume that f : D ⊆ R → R is a sufficiently smooth scalar function in the open interval D = (a, b). Determining the (simple or multiple) roots of nonlinear equations has attracted the attention of pure and applied mathematicians for many years. Many problems can be formulated in terms of estimating the zeros of such nonlinear functions. In general, these zeros cannot be expressed in closed form. Thus, finding an approximation to the solution of the nonlinear equation f(x) = 0 has been considered by many researchers up to now [13, 14]. For the most part, root solvers are divided into two categories: methods in which the derivatives of the function are used [8, 10, 11], and methods in which no derivative of the given function is used [7, 9]. Clearly, derivative-involved techniques are not always applicable in real-world circumstances, because the evaluation of the derivatives takes up a great deal of time and, occasionally, the functions are not differentiable in the neighborhood wherein their roots are located. Therefore, derivative-free methods have received increasing attention from researchers.
The first derivative-free scheme was proposed by Steffensen in [15] as follows, with second order of convergence:

x_{n+1} = x_n − f(x_n)^2 / (f(x_n + f(x_n)) − f(x_n)),  n = 0, 1, 2, . . .  (1.1)

Note that (1.1) is obtained through a forward finite difference approximation of the first derivative of the function in Newton's iteration; a backward finite difference could also be used in this procedure. Steffensen's method has the efficiency index 2^(1/2) ≈ 1.414. As another example, the derivative-free method of Sidi [7] of order at most two is given in the form below, with x_0, x_1, . . ., x_k as initial points to be provided by the user. Recall that this is a generalization of the secant method, reaching order two in the limit. In this method, f[x_n, x_{n−1}, . . ., x_{n−i}] denotes the divided differences of f(x). This technique is far from a good efficiency index: with three points per iteration, it possesses 1.839 as the order and only 1.225 as the index of efficiency.
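As a concrete illustration, Steffensen's iteration (1.1) can be sketched in a few lines of Python; the function name, tolerance, and iteration cap below are our own illustrative choices, not part of the original scheme.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration (1.1):
    x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)).
    Second-order convergence, two function evaluations per step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        denom = f(x + fx) - fx  # forward finite-difference surrogate for f'(x) * f(x)
        if denom == 0:
            raise ZeroDivisionError("flat finite-difference denominator")
        x = x - fx * fx / denom
    return x

# Example: root of f(x) = x^2 - 2 starting from x0 = 1.5
root = steffensen(lambda x: x * x - 2, 1.5)
```

Note that each step reuses f(x_n) both in the numerator and inside the finite difference, so only two fresh evaluations are needed per iteration, matching the efficiency index 2^(1/2) quoted above.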
Dehghan and Hajarian in [2] proposed a cubically convergent derivative-free scheme in the form below, which also includes three evaluations of the function per iteration and possesses 1.442 as the efficiency index.
In 2010, a fourth-order derivative-free method [6] was given by Liu et al. in the following form. This technique consists of three evaluations of the function per iteration to obtain fourth-order convergence, and hence 4^(1/3) ≈ 1.587 is its efficiency index. Note that (1.5) is an optimal two-step method according to the still unproved conjecture of Kung and Traub concerning the optimality of multi-point iterations without memory [5]: a multi-point method without memory consuming n evaluations per full cycle can reach at most the order 2^(n−1), i.e., the optimal efficiency index 2^((n−1)/n).
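The efficiency indices quoted throughout this section all follow from the classical definition p^(1/n), where p is the order and n is the number of evaluations per cycle; a minimal check in Python (the figures below are the ones cited in the text):

```python
def efficiency_index(order, evals):
    """Ostrowski efficiency index p^(1/n): order p, n evaluations per cycle."""
    return order ** (1.0 / evals)

# Values cited in this section:
# Steffensen:            order 2,      2 evaluations -> ~1.414
# Sidi (three points):   order ~1.839, 3 evaluations -> ~1.225
# Dehghan-Hajarian:      order 3,      3 evaluations -> ~1.442
# Liu et al. (optimal):  order 4,      3 evaluations -> ~1.587
```
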
In 2011, Soleymani and Hosseinabadi [9] provided the following cubically convergent iterative method, where f[x_n, w_n] = (f(w_n) − f(x_n))/(w_n − x_n). To see more on this topic, we refer the reader to [1, 3, 4, 12]. The contents of this paper are summarized in what follows. In Section 2, our novel contributions are constructed by considering a two-step (predictor-corrector) cycle in which the derivative in the quotient of the new Newton iteration (the corrector part) is estimated such that order three and, in particular, order four, namely optimal according to the Kung-Traub conjecture (1974), are attained. Our derivative-free method and class of methods without memory are supported with detailed proofs in this section to verify the construction theoretically. In Section 3, it will be observed that the computed results listed in Table 2 completely support the theory of convergence and the efficiency analyses discussed in Section 2 for the suggested cubically and quartically convergent methods. Moreover, it can be seen that in the majority of the problems the accuracy of the presented optimal fourth-order methods is higher than that of the respective competitors in terms of the number of significant digits gained by each method. Thus, the presented schemes (especially the optimal ones) can be of practical interest. Finally, a short conclusion is given in Section 4.

Construction of the Novel Methods
In this section, we look for accurate derivative-free methods in which there are three evaluations of the function per iteration and which possess the efficiency indices 1.442 and 1.587. To this end, we take into consideration the following two-step cycle, in which there are four evaluations per iteration. To achieve our goal, we must solve two problems in the cycle (2.1), as follows.
First, it consists of two derivative evaluations and, second, it has four evaluations per iteration. Accordingly, we approximate the first derivative of the function in the second step, f′(y_n), using all previously known data, i.e., f(x_n), f′(x_n) and f(y_n). By using Barrow's rule (the fundamental theorem of calculus), we have (2.2). We assume that the integral in (2.2) is estimated by a combination of the known values f(x_n), f(y_n), f′(x_n) in the form (2.3). At this time, the most important challenge is to attain the real-valued parameters α, β and γ as efficiently as possible such that the order of (2.1) does not decrease. By requiring that equation (2.3) be exact for f(t) = constant, f(t) = t, and f(t) = t^2, we can attain the three unknown real parameters. In fact, by substituting the known values into these three cases, we obtain a linear system of three equations in three unknowns, (2.4). By solving the linear system (2.4), we obtain (2.5). Hence, by considering (2.5) in (2.3), we have (2.6), and eventually, by applying (2.6) in (2.2), an approximation for f′(y_n) is obtained. Note that, in terms of divided differences, this reads f′(y_n) = 2f[y_n, x_n] − f′(x_n), and as a result, relation (2.1) with four evaluations per iteration is reduced to the scheme (2.8) with three evaluations per iteration. Our aim is not fulfilled yet, because (2.8) is not derivative-free. To remedy this, we should approximate the first derivative f′(x_n) efficiently; we use the backward finite difference approximation to estimate f′(x_n). Thus, we attain the contributed derivative-free method (2.9). The method (2.9) consists of three evaluations of the function per iteration while it is derivative-free and reaches order of convergence three. The forward finite difference approximation can also be considered in approximating f′(x_n) in (2.8). Theorem 1 demonstrates its error equation.
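Since the displayed formula of (2.9) is not reproduced above, the following Python sketch reconstructs it from the two approximations stated in the text, namely f′(x_n) ≈ f[x_n, w_n] with the backward node w_n = x_n − f(x_n), and f′(y_n) ≈ 2f[x_n, y_n] − f′(x_n); the exact form should be treated as our assumption.

```python
def derivative_free_third_order(f, x0, tol=1e-12, max_iter=50):
    """Sketch of the two-step scheme (2.9) as described in the text.
    f'(x_n) is replaced by the backward difference f[x_n, w_n], w_n = x_n - f(x_n),
    and f'(y_n) by 2 f[x_n, y_n] - f[x_n, w_n].  (Reconstruction: the displayed
    formula of (2.9) is not available, so this is a plausible reading.)"""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x - fx                       # backward finite-difference node
        fxw = (fx - f(w)) / (x - w)      # f[x_n, w_n] ~ f'(x_n)
        y = x - fx / fxw                 # predictor (Steffensen-type step)
        fy = f(y)
        fxy = (fx - fy) / (x - y)        # f[x_n, y_n]
        x = y - fy / (2.0 * fxy - fxw)   # corrector: f'(y_n) ~ 2 f[x_n,y_n] - f[x_n,w_n]
    return x

# Example: root of x^2 - 2 from x0 = 1.5, using 3 evaluations per iteration
root = derivative_free_third_order(lambda x: x * x - 2, 1.5)
```

Each cycle consumes exactly the three evaluations f(x_n), f(w_n), f(y_n), in line with the efficiency index 1.442 claimed for (2.9).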
Theorem 1. Let the scalar function f be sufficiently smooth in the real open domain D, and assume that α is a simple root of f. Then the technique (2.9) satisfies an error equation of the form e_{n+1} = K e_n^3 + O(e_n^4), where K, the asymptotic error constant of (2.9), depends on c_j = f^(j)(α)/j!, j ≥ 1.

Proof. We expand each term of (2.9) around the simple root α at the nth iterate. Hence, we obtain (2.11); in the same vein, we obtain (2.12). Finally, the Taylor expansion in the last step of (2.9), using relation (2.12), yields (2.13), which shows that the method (2.9) is of order three with only three evaluations per iteration. As a consequence, the efficiency index of (2.9) is 3^(1/3) ≈ 1.442, which is greater than the 1.225 of (1.2) and the 1.414 of (1.1), and equal to the 1.442 of (1.4) and (1.6). This completes the proof.
Unfortunately, due to the use of a backward (or forward) finite difference for approximating f′(x_n) in (2.8), we have obtained a cubically convergent method, which is not optimal in terms of the Kung-Traub conjecture on optimal multi-point methods without memory. To overcome this and subsequently obtain an iteration which is derivative-free and optimal with 1.587 as its index of efficiency, we apply a weight function in the second step of our two-step cycle, as in (2.14). Expanding in Taylor series around the simple root shows that, by applying the approach of weight functions as above, the order of convergence reaches four while consuming only three evaluations of the function per full cycle, provided the weight function satisfies the conditions stated below. Thus, under these conditions on the weight function, the order of convergence of the class of two-step derivative-free methods without memory (2.14) is four, and its error equation reads as (2.16). Hence, as an example from our class (2.14), we could write the contributed iterative method (2.17), in which there are only three evaluations of the function per full iteration.

Theorem 2. Let the scalar function f be sufficiently smooth in the real open domain D, and assume that α is a simple root of f. Then the technique (2.17), which is a generalization of (2.9), is of optimal order four and satisfies the error equation (2.18), whose coefficient is the asymptotic error constant of (2.17).

Proof. We find the Taylor series expansion of each term in (2.17). Symbolic computation yields relations (2.11)-(2.13) again; thus, it only remains to obtain (2.19). Now, using (2.19) and the last step of (2.17) results in (2.20). This shows that (2.17) is a fourth-order method consuming only three evaluations per iteration. Consequently, its efficiency index is 4^(1/3) ≈ 1.587, which is optimal, greater than that of (2.9), and equal to that of (1.5). This completes the proof.
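The orders claimed in Theorems 1 and 2 can be checked experimentally with the standard computational order of convergence (COC). The snippet below is a generic diagnostic; for the illustration we run it on classical Newton iteration (whose order 2 is known), rather than on the paper's schemes, so that the expected value is unambiguous.

```python
import math

def coc(xs, root):
    """Computational order of convergence from the last three errors:
    rho ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|, with e_n = x_n - root."""
    e = [abs(x - root) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Illustration: Newton's method on f(x) = x^2 - 2, quadratically convergent.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rho = coc(xs, math.sqrt(2))  # close to 2 for a second-order method
```

Applied to iterates of (2.9) or (2.17), the same estimator should return values close to 3 and 4, respectively, when the initial guess is close enough to the root.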
Note that some typical forms of the weight function G(t) which satisfy the conditions G(0) = 1, G′(0) = f[w_n, x_n] and |G″(t)| < ∞, and which make (2.14) optimal, are given in Table 1. As another instance according to Table 1, we can produce the optimal, efficient two-step three-point (meaning that three distinct points x_n, w_n, and y_n are used per computing step) derivative-free method without memory (2.21).

Table 1. Some typical formats of the weight function G(t).

Numerical Computations
The practical utility of (2.9), (2.17) and (2.21) is illustrated in this section by solving a couple of numerical examples and comparing with other well-known methods of different orders. We have used the second-order method of Steffensen (SM2), the third-order scheme of Dehghan and Hajarian (DHM3), the fourth-order scheme of Liu et al. (LM4), and our novel contributed techniques (2.9), (2.17) and (2.21). The number of iterations needed to solve a particular problem is an important factor in judging a method; hence, a process that requires fewer iterations to reach its final solution, like (2.17) and (2.21), is preferable. The test functions and their simple roots are given as follows. The results are summarized in Table 2 in terms of accuracy when the total number of evaluations is 12 (TNE = 12), i.e., six iterations of SM2 and four iterations of DHM3, LM4, (2.9), (2.17), and (2.21). In our numerical comparisons, we have used the stopping criterion |f(x_n)| ≤ 10^(−450). As Table 2 manifests, our contributed methods are accurate and efficient in contrast to the other high-order schemes. Based upon Table 2, if the initial approximations are sufficiently close to the wanted roots, then only three iterations are necessary in most practical problems for any method of our optimal class. That is to say, from the results shown in Table 2 and a number of numerical experiments, we conclude that the proposed class of two-step methods is quick. We have checked that the sequence of iterates converges to an approximation of the solution of the nonlinear equation in our numerical work. We note that the important problem of determining good starting points arises when applying iterative methods for solving nonlinear equations. Also note that quick convergence, one of the advantages of multi-point methods, can be attained only if the initial approximations are sufficiently close to the sought roots; otherwise, it is not possible to realize the expected convergence speed in practice.
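The paper's Table 2 relies on multiprecision arithmetic (stopping at |f(x_n)| ≤ 10^(−450)) and on test functions not reproduced above. As a small double-precision stand-in, the sketch below spends the same TNE = 12 budget on six Steffensen steps versus four steps of a (2.9)-style third-order scheme; the sample function f(x) = x^3 − x − 2 is our own choice, and the third-order formula is the reconstruction discussed in Section 2, not the paper's displayed method.

```python
def f(x):
    return x * x * x - x - 2.0  # sample function with a simple root near 1.5214

def steffensen_steps(x, n):
    """n Steffensen iterations (two evaluations each)."""
    for _ in range(n):
        fx = f(x)
        if x + fx == x:  # converged to machine precision
            break
        d = f(x + fx) - fx
        if d == 0.0:
            break
        x -= fx * fx / d
    return x

def third_order_steps(x, n):
    """n iterations of the (2.9)-style sketch (three evaluations each)."""
    for _ in range(n):
        fx = f(x)
        w = x - fx                    # backward finite-difference node
        if w == x:
            break
        fxw = (fx - f(w)) / (x - w)   # f[x_n, w_n]
        y = x - fx / fxw
        if y == x:
            break
        fy = f(y)
        x = y - fy / (2.0 * (fx - fy) / (x - y) - fxw)
    return x

# Equal budget of 12 function evaluations:
r2 = steffensen_steps(1.5, 6)   # 6 iterations x 2 evaluations
r3 = third_order_steps(1.5, 4)  # 4 iterations x 3 evaluations
```

In double precision both budgets reach the root to machine accuracy on this easy problem; the gap the paper reports only becomes visible in multiprecision arithmetic, where the higher-order methods gain far more correct digits per evaluation.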

Conclusions
One of the most important techniques for studying nonlinear equations is the use of iterative processes: starting from an initial approximation x_0, called the pivot, successive approximations x_n, n = 1, 2, . . ., are computed with the help of a certain iteration function until some predetermined convergence criterion is satisfied. Certainly Newton's method (second order) is the most widely used iteration for this purpose. In this case, we need to evaluate a derivative at each step, which is indeed the main difficulty. Steffensen's method (second order) can be considered a simplification of the original Newton method, but Steffensen's iteration has a low efficiency index as well. Hence, in the language used so far, convergent third- and fourth-order methods in which there are three evaluations of the function and no derivative evaluation have been discussed. The novel schemes attain 1.442 and 1.587 as efficiency indices and mostly perform well. In light of these strong points, the contributions (especially the optimal developed schemes) of this article can be viewed as powerful and robust techniques for solving one-variable nonlinear equations. Our next aim is to build optimal three-step four-point methods without memory, free from derivatives, based on the suggested class of optimal fourth-order methods (2.14) of this paper.