vagnhestagn

2022-09-07

In a normal linear model (with intercept), show that if the residuals satisfy ${e}_{i}=a+\beta {x}_{i}$, for $i=1\dots n$, where $x$ is a predictor in the model, then each residual is equal to zero.

How to do this

Mckenna Friedman

Beginner · 2022-09-08 · Added 10 answers

Since your regression model has an intercept, we can assume the design matrix $X$ for the regression has the form

$X=\left(\begin{array}{cc}1& {x}_{1}^{T}\\ 1& {x}_{2}^{T}\\ \vdots & \vdots \\ 1& {x}_{n}^{T}\end{array}\right)$

Note that the residual must be orthogonal to every vector in the column space of $X$. This is because the predicted value is $\hat{Y}={P}_{X}Y$ (where ${P}_{X}$ is the orthogonal projection matrix onto the column space of $X$), so the residual vector is $e=Y-\hat{Y}=(I-{P}_{X})Y$. Hence for any vector $c$ of the appropriate dimension,

${c}^{T}e=({c}^{T}-{c}^{T}{P}_{X})Y.$

Now if $c$ lies in the column space of $X$ then

${P}_{X}c=c$

or

${c}^{T}={c}^{T}{P}_{X}^{T}={c}^{T}{P}_{X}$, since ${P}_{X}$ is symmetric,

and it follows that for any $c$ in the column space of $X$,

${c}^{T}e=0.$

Now, your condition implies that $e$ itself lies in the column space of $X$ (it is a linear combination of the intercept column and the column for $x$), and hence must be orthogonal to itself, i.e., ${e}^{T}e=\Vert e{\Vert}^{2}=0$, i.e., $e=0$.
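As a quick numerical check of the orthogonality fact used above (a sketch with numpy and made-up data; the coefficients and sample size are arbitrary), one can fit an OLS model with an intercept and verify that ${X}^{T}e=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)  # synthetic response with noise

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), x])

# OLS fit via least squares; residuals e = y - X @ beta_hat
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat

# Residuals are orthogonal to every column of X: X^T e = 0
print(np.allclose(X.T @ e, 0.0))  # prints True (up to floating-point error)
```

In particular, orthogonality to the intercept column means the residuals sum to zero, and orthogonality to the $x$ column rules out any linear trend in the residuals — which is exactly what the proof exploits.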

dalllc

Beginner · 2022-09-09 · Added 1 answer

If our linear model is given by

${y}_{i}={w}_{0}+{w}_{1}{x}_{i}+{e}_{i}$

we can substitute ${e}_{i}={\beta}_{0}+{\beta}_{1}{x}_{i}$ to obtain

${y}_{i}={w}_{0}+{w}_{1}{x}_{i}+({\beta}_{0}+{\beta}_{1}{x}_{i})$

$\implies {y}_{i}=[{w}_{0}+{\beta}_{0}]+[{w}_{1}+{\beta}_{1}]{x}_{i}$

$\implies {y}_{i}={\stackrel{~}{w}}_{0}+{\stackrel{~}{w}}_{1}{x}_{i}+{\stackrel{~}{e}}_{i},$

in which ${\stackrel{~}{e}}_{i}=0$. Since the error here is not random but exactly linear in ${x}_{i}$, the model absorbs it entirely into the fitted coefficients, so the residual term satisfies ${\stackrel{~}{e}}_{i}\equiv 0$.

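This substitution argument can be sketched numerically (using numpy, with hypothetical coefficients chosen only for illustration): if the response is exactly affine in $x$, the fitted line absorbs the linear "error" and the residuals vanish.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 25)

# Hypothetical coefficients: w0, w1 are the original model's,
# b0, b1 are the coefficients of the linear "error" e_i = b0 + b1*x_i.
w0, w1 = 1.5, -0.7
b0, b1 = 0.4, 2.0
y = (w0 + w1 * x) + (b0 + b1 * x)  # response is exactly affine in x

# Refit OLS with an intercept: the linear error is absorbed.
X = np.column_stack([np.ones_like(x), x])
w_tilde, *_ = np.linalg.lstsq(X, y, rcond=None)
e_tilde = y - X @ w_tilde

print(np.allclose(w_tilde, [w0 + b0, w1 + b1]))  # prints True
print(np.allclose(e_tilde, 0.0))                 # prints True
```

The fitted coefficients come out as $[{w}_{0}+{\beta}_{0},\,{w}_{1}+{\beta}_{1}]$, matching the derivation, and the new residuals are identically zero.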
