Recent questions in Regression

Inferential Statistics · Answered question

twaishi03m 2022-12-18

Which characteristic of a data set makes a linear regression model unreasonable?

Inferential Statistics · Answered question

Bailee Richards 2022-11-26

Find the meaning of 'Sxx' and 'Sxy' in simple linear regression

Inferential Statistics · Answered question

zweifelndcuv 2022-11-24

In the least-squares regression line, the desired sum of the errors (residuals) should be

a) zero

b) positive

c) 1

d) negative

e) maximized

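A useful fact behind this question: for an ordinary least-squares fit that includes an intercept, the residuals always sum to (numerically) zero. A minimal numpy sketch, on made-up data, just to illustrate the property:

```python
import numpy as np

# Made-up data, only to illustrate the property.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit y = b0 + b1*x by least squares.
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - X @ b
# With an intercept in the model, the residuals sum to (numerically) zero.
print(residuals.sum())
```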

Inferential Statistics · Answered question

Ricky Arias 2022-11-07

Explain what this notation means: $$\underset{w}{\min}{\Vert Xw-y\Vert}_{2}^{2}$$
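This is the ordinary least-squares objective: find the weight vector $w$ that minimizes the squared Euclidean norm of the residual $Xw-y$. A small sketch with made-up data, solved via `numpy.linalg.lstsq`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # design matrix (made-up data)
y = rng.normal(size=50)

# argmin_w ||Xw - y||_2^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Objective value at the minimizer; any other w (e.g. w = 0,
# which gives ||y||^2) does at least as badly.
obj = np.sum((X @ w - y) ** 2)
print(obj <= np.sum(y ** 2))
```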

Inferential Statistics · Answered question

Messiah Sutton 2022-11-06

In a regression model with more than one independent variable, why is the standardized regression coefficient for a predictor not the same as the correlation coefficient between that predictor and $y$?

$$\hat{\beta}_{i}=\mathrm{cor}({Y}_{i},{X}_{i})\cdot \frac{\mathrm{SD}({Y}_{i})}{\mathrm{SD}({X}_{i})}$$

So

$$\mathrm{cor}({Y}_{i},{X}_{i})=\hat{\beta}_{i}\cdot \frac{\mathrm{SD}({X}_{i})}{\mathrm{SD}({Y}_{i})}$$

The formula for the standardized regression coefficient is also:

$$\mathrm{standardizedBeta}=\hat{\beta}_{i}\cdot \frac{\mathrm{SD}({X}_{i})}{\mathrm{SD}({Y}_{i})}$$

So shouldn't it be

$$\mathrm{standardizedBeta}=\mathrm{cor}({Y}_{i},{X}_{i})$$?

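One way to see where the reasoning breaks: the identity $\hat{\beta}=\mathrm{cor}(Y,X)\cdot \mathrm{SD}(Y)/\mathrm{SD}(X)$ holds only in *simple* regression. With more than one correlated predictor, the multiple-regression coefficient partials out the other variables, so the standardized beta and the marginal correlation generally differ. A sketch on simulated correlated predictors (all data made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)      # x2 correlated with x1
y = x1 + x2 + rng.normal(size=n)

# Multiple regression of y on [1, x1, x2]
X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

std_beta_x1 = b[1] * x1.std() / y.std()  # standardized coefficient of x1
r_x1 = np.corrcoef(x1, y)[0, 1]          # marginal correlation cor(x1, y)

# With a correlated second predictor in the model, these are not equal.
print(std_beta_x1, r_x1)
```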

Inferential Statistics · Answered question

akuzativo617 2022-11-03

The gradient of the regression line $x$ on $y$ is $-0.2$ and the line passes through $(0,3)$. If the equation of the line is $x=c+dy$, find the value of $c$ and $d$.

Inferential Statistics · Answered question

Kailyn Hamilton 2022-11-02

A simple formula, an example, and an explanation of what all the symbols and variables mean for basic linear regression?

Inferential Statistics · Answered question

Trace Glass 2022-10-31

How do you reduce the equation

$$p=\frac{{e}^{{\beta}_{0}+{\beta}_{1}\ast age}}{{e}^{{\beta}_{0}+{\beta}_{1}\ast age}+1}$$

to $${\mathrm{log}}_{e}\frac{p}{1-p}={\beta}_{0}+{\beta}_{1}\ast age$$

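The reduction is just algebra on the odds. Multiplying through by the denominator and then dividing $p$ by $1-p$:

```latex
p\left({e}^{{\beta}_{0}+{\beta}_{1}\ast age}+1\right)={e}^{{\beta}_{0}+{\beta}_{1}\ast age}
\;\Rightarrow\;
1-p=\frac{1}{{e}^{{\beta}_{0}+{\beta}_{1}\ast age}+1}
\;\Rightarrow\;
\frac{p}{1-p}={e}^{{\beta}_{0}+{\beta}_{1}\ast age}
\;\Rightarrow\;
{\mathrm{log}}_{e}\frac{p}{1-p}={\beta}_{0}+{\beta}_{1}\ast age
```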

Inferential Statistics · Answered question

Angel Kline 2022-10-18

Suppose we have the regression model: ${y}_{i}={\beta}_{0}+{\beta}_{1}{x}_{i}+{\epsilon}_{i}$

where ${y}_{i}$ = (${Y}_{i}$ - $\overline{Y}$) and ${x}_{i}$ = (${X}_{i}$ - $\overline{X}$).

The question is whether the regression line passes through the origin, which holds iff ${\beta}_{0}=0$. We immediately see that ${\beta}_{0}=({Y}_{i}-\overline{Y})-{\beta}_{1}({X}_{i}-\overline{X})$, where ${\beta}_{1}$ is given by $\frac{\mathrm{Cov}(X,Y)}{\mathrm{Var}(X)}$. I don't believe this quantity is guaranteed to be 0, so would the answer be that we are unable to determine whether the regression line passes through the origin?

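A helpful fact here: the OLS line always passes through $(\bar{x},\bar{y})$, and after centering both means are zero, which forces the fitted intercept to zero. A quick numerical check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(loc=5.0, size=100)
Y = 2.0 + 3.0 * X + rng.normal(size=100)

x = X - X.mean()   # centered predictor
y = Y - Y.mean()   # centered response

A = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(A, y, rcond=None)[0]

# The fitted intercept of the centered regression is numerically zero,
# so the fitted line passes through the origin.
print(b0)
```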

Inferential Statistics · Answered question

bergvolk0k 2022-10-17

Regression analysis is a statistical process for estimating the relationships among variables, and it is widely used for prediction and forecasting. So why is regression analysis also used as a statistical test?

Inferential Statistics · Answered question

Winston Todd 2022-10-15

Given this regression model: ${y}_{i}={\beta}_{0}+{\beta}_{1}{x}_{i}+{E}_{i}$.

All the assumptions are valid except that now: ${E}_{i}\sim N(0,{x}_{i}{\sigma}^{2})$

Find the maximum likelihood estimators of ${\beta}_{0}$ and ${\beta}_{1}$.

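Since the variance of ${E}_{i}$ is ${x}_{i}{\sigma}^{2}$, each squared residual in the normal log-likelihood is divided by ${x}_{i}$, so maximizing over $({\beta}_{0},{\beta}_{1})$ reduces to weighted least squares with weights $1/{x}_{i}$. A numerical sketch of the weighted normal equations, on made-up positive ${x}_{i}$ (so all variances are positive):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(1.0, 5.0, size=n)                  # x_i > 0
# Simulate y_i = 1 + 2 x_i + E_i with Var(E_i) = x_i (sigma = 1)
y = 1.0 + 2.0 * x + rng.normal(scale=np.sqrt(x))

# MLE for (b0, b1) solves: min sum_i (y_i - b0 - b1 x_i)^2 / x_i
X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / x)
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(b)  # should be close to the true (1, 2)
```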

Inferential Statistics · Answered question

ecoanuncios7x 2022-09-30

Linear regression:

$$Y=a+bX+\epsilon$$

I know $R^2$ in linear regression as a ratio of sums of squares of the $({y}_{i}-\bar{y})$ terms, or in terms of

$$S_{xy}^{2}/({S}_{xx}{S}_{yy})$$

Not sure if you have come across this form:

$${R}^{2}=\frac{\mathrm{Var}(bX)}{\mathrm{Var}(bX)+\mathrm{Var}(\epsilon)}$$?

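For simple linear regression all three expressions agree exactly (the explained-over-total sum-of-squares form, the $S$-notation form, and the variance-ratio form). A numerical check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=300)
y = 1.5 * x + rng.normal(size=300)

Sxx = np.sum((x - x.mean()) ** 2)
Syy = np.sum((y - y.mean()) ** 2)
Sxy = np.sum((x - x.mean()) * (y - y.mean()))

b = Sxy / Sxx                                 # OLS slope
yhat = y.mean() + b * (x - x.mean())          # fitted values

r2_s = Sxy**2 / (Sxx * Syy)                   # S-notation form
r2_ss = np.sum((yhat - y.mean()) ** 2) / Syy  # explained / total sum of squares
r2_var = np.var(b * x) / (np.var(b * x) + np.var(y - yhat))  # variance-ratio form

print(r2_s, r2_ss, r2_var)
```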

Inferential Statistics · Open question

streutexw 2022-08-20

Suppose we specify the regression equation to estimate the DiD as:

$Y={\beta}_{0}+{\beta}_{1}\ast [Time]+{\beta}_{3}\ast [Time\ast Intervention]+{\beta}_{4}\ast [Covariates]+\epsilon $

instead of:

$Y={\beta}_{0}+{\beta}_{1}\ast [Time]+{\beta}_{2}\ast [Intervention]+{\beta}_{3}\ast [Time\ast Intervention]+{\beta}_{4}\ast [Covariates]+\epsilon $

Our ${\beta}_{3}$ coefficient would still yield the DiD estimator, right?

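One way to probe the worry empirically: simulate a two-period, two-group setting in which the groups have different baseline levels, and compare ${\beta}_{3}$ from the full specification against the one that omits the Intervention main effect. A numpy sketch (all data simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
time = rng.integers(0, 2, size=n).astype(float)
treat = rng.integers(0, 2, size=n).astype(float)   # Intervention group dummy
# Simulated truth: group baselines differ (coef 3 on treat); true DiD effect = 2
y = 1.0 + 0.5 * time + 3.0 * treat + 2.0 * time * treat + rng.normal(size=n)

def ols(cols, y):
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b_full = ols([ones, time, treat, time * treat], y)  # with the Intervention main effect
b_drop = ols([ones, time, time * treat], y)         # without it

# Interaction coefficient under each specification
print(b_full[3], b_drop[2])
```

In this simulation the full model recovers the true effect, while dropping the group main effect lets the baseline difference load onto the interaction term.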

Inferential Statistics · Answered question

Annabathuni Seethu Keerthana 2022-07-24

In a poker hand consisting of 5 cards, find the probability of holding (a) 3 aces; (b) 4 hearts and 1 club; (c) cards of the same suit; (d) 2 aces and 3 jacks.


Inferential Statistics · Answered question

Sonia Ayers 2022-07-09

In logistic regression, the regression coefficients ($\hat{{\beta}_{0}},\hat{{\beta}_{1}}$) are calculated via the general method of maximum likelihood. For a simple logistic regression, the maximum likelihood function is given as

$\ell ({\beta}_{0},{\beta}_{1})=\prod _{i:{y}_{i}=1}p({x}_{i})\prod _{{i}^{\prime}:{y}_{{i}^{\prime}}=0}(1-p({x}_{{i}^{\prime}})).$

What is the maximum likelihood function for $2$ predictors? Or $3$ predictors?

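The likelihood keeps exactly the same product form for any number of predictors; only $p(x)$ changes to include the extra coefficients. For two predictors, for example:

```latex
p({x}_{i})=\frac{{e}^{{\beta}_{0}+{\beta}_{1}{x}_{i1}+{\beta}_{2}{x}_{i2}}}{1+{e}^{{\beta}_{0}+{\beta}_{1}{x}_{i1}+{\beta}_{2}{x}_{i2}}},
\qquad
\ell ({\beta}_{0},{\beta}_{1},{\beta}_{2})=\prod _{i:{y}_{i}=1}p({x}_{i})\prod _{{i}^{\prime}:{y}_{{i}^{\prime}}=0}(1-p({x}_{{i}^{\prime}}))
```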

Inferential Statistics · Answered question

Gretchen Schwartz 2022-07-07

Why is polynomial regression considered a kind of linear regression? For example, the hypothesis function is

$h(x;{t}_{0},{t}_{1},{t}_{2})={t}_{0}+{t}_{1}x+{t}_{2}{x}^{2},$

and the sample points are

$({x}_{1},{y}_{1}),({x}_{2},{y}_{2}),\dots $

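The short answer is that "linear" refers to linearity in the parameters ${t}_{0},{t}_{1},{t}_{2}$, not in $x$: with the expanded design matrix $[1, x, {x}^{2}]$, fitting the quadratic hypothesis is an ordinary linear least-squares problem. A sketch on made-up sample points:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, size=100)
# Simulated points from t0=1, t1=2, t2=-0.5 plus small noise
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=0.1, size=100)

# Polynomial features make the quadratic hypothesis linear in (t0, t1, t2).
X = np.column_stack([np.ones_like(x), x, x**2])
t = np.linalg.lstsq(X, y, rcond=None)[0]
print(t)  # approximately the true (1.0, 2.0, -0.5)
```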

Inferential Statistics · Answered question

logiski9s 2022-07-03

What is the difference between multi-task lasso regression and ridge regression? The optimization objective of multi-task lasso regression is

$\underset{w}{\min}\sum _{l=1}^{L}\frac{1}{{N}_{t}}\sum _{i=1}^{{N}_{t}}{J}^{l}(w,x,y)+\gamma \sum _{l=1}^{L}{\Vert {w}^{l}\Vert}_{2}$

while ridge regression is

$\underset{w}{\min}\sum _{l=1}^{L}\frac{1}{{N}_{t}}{J}^{l}(w,x,y)+\gamma {\Vert {w}^{l}\Vert}_{2}$

which looks much the same. To me, the multi-task lasso problem seems equivalent to solving a global ridge regression. So what is the difference between these two regression methods? Both of them use an ${L}_{2}$ norm. Or does it mean that in multi-task lasso regression, the shape of $W$ is $(1,n)$?


Inferential Statistics · Answered question

vittorecostao1 2022-07-01

As far as I know, linear means a polynomial of degree 1. But then I found that in one of my lectures, the lecturer says that this regression is a linear regression:

${Y}_{i}={\alpha}_{0}+{\alpha}_{1}{x}_{i}+{\alpha}_{2}{x}_{i}^{2}$

How is this a linear regression when it has quadratic terms in it?


An essential part of inferential statistics and studies of probability in engineering, economics, and related disciplines is understanding the regression problems. This concept is often used in machine learning as one needs to predict the quantity of something. It can be either a real value or discrete input variables. See the free suggestions for similar tasks, you may also find questions on multivariate regression challenges that will help you to see how the answers came about. As you take a look through at least one regression equation example, you will understand how prognosis and the concept of probability work together.