Intuition behind the optimization problem of ridge regression?

cherriwood5so3z

Answered question

2022-06-01

Intuition behind the optimization problem of ridge regression?
In one of the texts I am reading, it says that the regularization parameter restricts the choice of functions when the given data are not sufficient for recovering the signal. It also says that lambda acts as a relative trade-off between the norm and the loss function. I have two questions:
Why do we choose the norm as the criterion in this optimization problem?
How does lambda act as a trade-off?
My understanding is that as I increase lambda, the weights have to be reduced much more than when lambda is low, which happens only when we restrict the lower limit of the optimization problem. So how is lambda a trade-off when both the norm and the loss function decrease as lambda increases?
Please shed some light on this.

Answer & Explanation

mirselena7kifm

Beginner · 2022-06-02 · Added 1 answer

Why do we choose the norm as the criterion in this optimization problem?
Mainly because the squared l2 penalty yields a closed-form solution (unlike the l1 norm penalty), which mattered when ridge regression was proposed, well before the era of cheap intensive computing. There is nothing special about the Euclidean norm itself. The l1 norm is much more popular these days; unlike the l2 penalty, it also performs variable ("feature") selection rather than only shrinking the magnitude of the coefficients.
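To make the closed form concrete, here is the standard ridge objective and its minimizer written in LaTeX; the symbols y, X, β and I are the usual response vector, design matrix, coefficient vector and identity matrix, which I am assuming since they are not defined in the quoted text:

\hat{\beta}_{\lambda} = \arg\min_{\beta} \; \| y - X\beta \|_2^2 + \lambda \| \beta \|_2^2

\hat{\beta}_{\lambda} = \left( X^\top X + \lambda I \right)^{-1} X^\top y

Because the squared l2 penalty keeps the objective quadratic in β, setting the gradient to zero gives this explicit formula; with an l1 penalty the objective is not differentiable at zero, and no such general closed form exists.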
How does lambda act as a trade-off?
For a large penalty (λ) you heavily shrink the coefficients of the features, meaning you do not put much faith in your raw data. As λ→∞, the coefficients are shrunk to zero. For λ=0, you just get the usual least-squares estimates, i.e., you use only the loss function, without penalizing or stabilizing the results.
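A small numerical sketch of this trade-off, using the closed-form solution above on made-up data (everything here, including the data and the ridge helper, is illustrative rather than taken from the question):

import numpy as np

# Made-up data: 50 observations, 5 features (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_beta = np.array([3.0, -2.0, 1.5, 0.0, 0.5])
y = X @ true_beta + rng.normal(scale=1.0, size=50)

def ridge(X, y, lam):
    # Closed-form ridge estimate: (X'X + lam*I)^(-1) X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

for lam in [0.0, 1.0, 10.0, 100.0, 1e6]:
    beta = ridge(X, y, lam)
    loss = np.sum((y - X @ beta) ** 2)
    # As lam grows, the coefficient norm shrinks and the training loss grows.
    print(f"lambda={lam:>9}: ||beta|| = {np.linalg.norm(beta):.3f}, loss = {loss:.1f}")

For λ=0 this reproduces ordinary least squares; as λ grows, the coefficient norm falls toward zero while the residual sum of squares rises, which is exactly the trade-off described above.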
