How to distinguish maximization from minimization of a nonlinear function with the Newton-Raphson method

Leah Pope

Answered question

2022-06-28

Actually, I don't know how to distinguish maximization from minimization of a nonlinear function with the Newton-Raphson method.

What I know is that, to find the candidate optimum points, we compute the iteration
$$x_{i+1} = x_i - \left[H_f(x_i)\right]^{-1} \nabla f(x_i).$$
Then, what is actually the difference between maximization and minimization using this method?

Answer & Explanation

Trey Ross

Beginner · 2022-06-29 · Added 30 answers

Newton-Raphson is based on a local quadratic approximation. The iterate moves to the optimum of that quadratic approximation. Whether you minimize or maximize does not depend on the iteration formula (you cannot modify it to turn minimization into maximization or vice versa) but on the shape of the approximation. The approximation is convex when $H_f(x_i)$ is positive semidefinite (psd) and concave when $H_f(x_i)$ is negative semidefinite (nsd). When $H_f(x_i)$ is psd, you expect to move to a local minimum, whereas when it is nsd, you expect to move to a local maximum.
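To make this concrete, here is a minimal Python sketch (my own illustration, not part of the original answer), assuming NumPy; the quadratic test function and its derivatives are hypothetical examples:

```python
import numpy as np

# Hypothetical example: f(x, y) = x^2 + 3y^2, whose Hessian is
# positive definite everywhere, so each Newton step targets a minimum.
def grad_f(x):
    return np.array([2.0 * x[0], 6.0 * x[1]])

def hess_f(x):
    return np.array([[2.0, 0.0],
                     [0.0, 6.0]])

x = np.array([1.0, -2.0])
H = hess_f(x)

# The signs of the Hessian's eigenvalues give the shape of the local
# quadratic model: all positive -> convex (expect a minimum),
# all negative -> concave (expect a maximum), mixed -> a saddle.
eigvals = np.linalg.eigvalsh(H)
if np.all(eigvals > 0):
    print("convex model: the step moves toward a local minimum")
elif np.all(eigvals < 0):
    print("concave model: the step moves toward a local maximum")
else:
    print("indefinite model: the step targets a saddle point")

# The Newton-Raphson step itself is identical in every case:
x_next = x - np.linalg.solve(H, grad_f(x))
print(x_next)  # [0. 0.] -- for a quadratic f, one step lands on the optimum
```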

The easiest way to think about this is for functions $\mathbb{R} \to \mathbb{R}$, so let's take $f(x) = x^3$. At $x = 1$ the local quadratic approximation is $g(x) = 1 + 3(x-1) + 3(x-1)^2$, which is convex. So if you perform an iteration of Newton-Raphson, you move to the minimum of $g$ and you hope to find a minimum of $f$.

On the other hand, if you start at $x = -1$, the local quadratic approximation is $g(x) = -1 + 3(x+1) - 3(x+1)^2$, which is concave. So if you perform an iteration of Newton-Raphson, you move to the maximum of $g$ and you hope to find a maximum of $f$.
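Both cases can be checked numerically. In one dimension the Newton step reduces to $x - f'(x)/f''(x)$, the vertex of the local quadratic $g$; a short Python sketch (my addition):

```python
# Newton-Raphson step for f(x) = x^3: x_next = x - f'(x) / f''(x)
def newton_step(x):
    f1 = 3.0 * x**2   # f'(x)
    f2 = 6.0 * x      # f''(x)
    return x - f1 / f2

# At x = 1 the quadratic model is convex (f''(1) = 6 > 0):
# the step moves to the model's minimum.
print(newton_step(1.0))   # 0.5

# At x = -1 the model is concave (f''(-1) = -6 < 0):
# the step moves to the model's maximum.
print(newton_step(-1.0))  # -0.5
```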

If the definiteness of $H_f$ does not agree with your goal (e.g., $H_f$ is nsd but you want to minimize), then the quadratic approximation is not useful. It might be better to switch to other methods such as gradient descent, as in the sketch below.
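One rough way to act on that advice, sketched in Python under the assumption that the goal is minimization (the fallback step size `lr` is an arbitrary choice of mine, not from the answer):

```python
import numpy as np

def minimize_step(x, grad_f, hess_f, lr=0.1):
    """One safeguarded step toward a minimum: take the Newton-Raphson
    step when the local quadratic model is convex, otherwise fall back
    to plain gradient descent."""
    g = grad_f(x)
    H = hess_f(x)
    if np.all(np.linalg.eigvalsh(H) > 0):   # H positive definite -> convex model
        return x - np.linalg.solve(H, g)    # Newton-Raphson step
    return x - lr * g                       # fallback: gradient descent
```

More robust refinements of this same idea (modified-Hessian and trust-region methods) exist, but the eigenvalue check captures the point of the answer.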
Yahir Crane

Beginner · 2022-06-30 · Added 9 answers

Suppose we want to find the $\hat{x} \in \mathbb{R}^k$ that maximizes the (twice continuously) differentiable function $f : \mathbb{R}^k \to \mathbb{R}$.

Well,
$$f(x+h) \approx a + b^\top h + \tfrac{1}{2} h^\top C h,$$
where $a = f(x)$, $b = \nabla f(x)$, and $C = D^2 f(x)$.

Note that $C$ will be symmetric. This implies
$$\nabla f(x+h) \approx b + Ch.$$
Thus the first-order condition for a maximum is
$$0 = b + C\hat{h},$$
which implies that
$$\hat{h} = -C^{-1} b.$$
In other words, the vector that maximizes the second-order Taylor approximation to $f$ at $x$ is
$$x + \hat{h} = x - C^{-1} b = x - \left[D^2 f(x)\right]^{-1} \nabla f(x),$$
which I am sure you can relate to your initial formula above.
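Translated directly into code, the derivation looks like the following sketch (my addition, assuming NumPy; the concave quadratic at the end is a hypothetical test function):

```python
import numpy as np

def newton_update(x, grad_f, hess_f):
    """One step x + h_hat = x - C^{-1} b, with b = grad f(x) and
    C = D^2 f(x). Solving C h = -b avoids forming the inverse."""
    b = grad_f(x)
    C = hess_f(x)
    h_hat = np.linalg.solve(C, -b)
    return x + h_hat

# Hypothetical concave example: f(x) = -(x1^2 + x2^2) + 4*x1,
# whose maximum is at (2, 0).
grad_f = lambda x: np.array([-2.0 * x[0] + 4.0, -2.0 * x[1]])
hess_f = lambda x: np.array([[-2.0, 0.0], [0.0, -2.0]])

print(newton_update(np.array([0.0, 1.0]), grad_f, hess_f))  # [2. 0.]
```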
