Mariah Sparks

2022-07-22

Show that $\sum {x}_{i}{e}_{i}=0$ and also that $\sum {\hat{y}}_{i}{e}_{i}=0$. I believe that solving the first sum will make the solution to the second one clearer. So far I have proved that $\sum {e}_{i}=0$.

Kitamiliseakekw

Beginner · 2022-07-23 · Added 23 answers

Here's one way of viewing it. We want to write

$\left[\begin{array}{c}{y}_{1}\\ \vdots \\ {y}_{n}\end{array}\right]=\hat{\alpha}\left[\begin{array}{c}1\\ \vdots \\ 1\end{array}\right]+\hat{\beta}\left[\begin{array}{c}{x}_{1}\\ \vdots \\ {x}_{n}\end{array}\right]+\left[\begin{array}{c}{\hat{\epsilon}}_{1}\\ \vdots \\ {\hat{\epsilon}}_{n}\end{array}\right]$

and choose the values of $\hat{\alpha}$ and $\hat{\beta}$ that minimize ${\hat{\epsilon}}_{1}^{2}+\cdots +{\hat{\epsilon}}_{n}^{2}$. The sum of the first two terms on the right is $[{\hat{y}}_{1},\dots ,{\hat{y}}_{n}{]}^{T}$.

That means the point $\left[\begin{array}{c}{\hat{y}}_{1}\\ \vdots \\ {\hat{y}}_{n}\end{array}\right]=\hat{\alpha}\left[\begin{array}{c}1\\ \vdots \\ 1\end{array}\right]+\hat{\beta}\left[\begin{array}{c}{x}_{1}\\ \vdots \\ {x}_{n}\end{array}\right]$ is closer to $\left[\begin{array}{c}{y}_{1}\\ \vdots \\ {y}_{n}\end{array}\right]$ in ordinary Euclidean distance than is any other point in the plane spanned by $\left[\begin{array}{c}1\\ \vdots \\ 1\end{array}\right]$ and $\left[\begin{array}{c}{x}_{1}\\ \vdots \\ {x}_{n}\end{array}\right]$. The point in a plane that is closest to $\mathbf{y}$ is the point you get by dropping a perpendicular from $\mathbf{y}$ to the plane. That means $\hat{\epsilon}=\mathbf{y}-\hat{\mathbf{y}}$ is perpendicular to the two columns that span the plane, and thus perpendicular to every linear combination of them, such as $\hat{\mathbf{y}}$. "Perpendicular" means the dot product is zero. Q.E.D.
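The same identities also fall out of plain calculus, which answers the original question directly. Setting the partial derivatives of the squared-error sum to zero gives the normal equations:

$\frac{\partial }{\partial \hat{\alpha}}\sum_{i}(y_{i}-\hat{\alpha}-\hat{\beta}x_{i})^{2}=-2\sum_{i}e_{i}=0,\qquad \frac{\partial }{\partial \hat{\beta}}\sum_{i}(y_{i}-\hat{\alpha}-\hat{\beta}x_{i})^{2}=-2\sum_{i}x_{i}e_{i}=0.$

Those two facts then give the second sum for free:

$\sum_{i}\hat{y}_{i}e_{i}=\sum_{i}(\hat{\alpha}+\hat{\beta}x_{i})e_{i}=\hat{\alpha}\sum_{i}e_{i}+\hat{\beta}\sum_{i}x_{i}e_{i}=0.$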

The vector of fitted values $\hat{\mathbf{y}}=\left[\begin{array}{c}{\hat{y}}_{1}\\ \vdots \\ {\hat{y}}_{n}\end{array}\right]$ is the orthogonal projection of the vector $\mathbf{y}=\left[\begin{array}{c}{y}_{1}\\ \vdots \\ {y}_{n}\end{array}\right]$ onto the column space of the design matrix $X=\left[\begin{array}{cc}1& {x}_{1}\\ \vdots & \vdots \\ 1& {x}_{n}\end{array}\right]$.

The orthogonal projection is a linear transformation whose matrix is the "hat matrix" $H=X({X}^{T}X{)}^{-1}{X}^{T}$, an $n\times n$ matrix of rank $2$. Observe that if $\mathbf{w}$ is orthogonal to that column space then ${X}^{T}\mathbf{w}=0$ so $H\mathbf{w}=0$, and if $\mathbf{w}$ is in the column space, then $\mathbf{w}=Xu$ for some $u\in {\mathbb{R}}^{2}$, and so $H\mathbf{w}=\mathbf{w}$.

It follows that $\hat{\epsilon}=\mathbf{y}-\hat{\mathbf{y}}=(I-H)\mathbf{y}$ is orthogonal to the column space. Since $\hat{\mathbf{y}}$ is in the column space, $\hat{\epsilon}$ is orthogonal to $\hat{\mathbf{y}}$.
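The hat-matrix argument above is easy to verify numerically. The sketch below (the x and y values are made up purely for illustration) builds $X$, forms $H=X(X^{T}X)^{-1}X^{T}$, and checks that the residual vector is orthogonal to $\mathbf{1}$, to $x$, and to $\hat{\mathbf{y}}$:

```python
import numpy as np

# Arbitrary illustrative data (not from the question)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

# Design matrix X = [1, x] and hat matrix H = X (X^T X)^{-1} X^T
X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T

y_hat = H @ y   # fitted values: orthogonal projection of y onto col(X)
e = y - y_hat   # residuals: (I - H) y, orthogonal to col(X)

# All three sums vanish up to floating-point error
print(np.sum(e))          # ~0
print(np.sum(x * e))      # ~0
print(np.sum(y_hat * e))  # ~0
```

Any data and any number of predictors would do: the orthogonality is a property of the projection, not of the particular sample.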
