Recent questions in Linear algebra

Linear algebra · Answered question

Krish Schmitt 2022-09-30

Suppose $V$ is an $n$-dimensional linear vector space, and $\{{s}_{1},{s}_{2},...,{s}_{n}\}$ and $\{{e}_{1},{e}_{2},...,{e}_{n}\}$ are two orthonormal bases with basis transformation matrix $U$ such that ${e}_{i}=\sum _{j}{U}_{ij}{s}_{j}$.

Now consider the ${n}^{2}$-dimensional vector space $V\otimes V$ (Kronecker product) with the corresponding basis sets $\{{s}_{1}{s}_{1},{s}_{1}{s}_{2},...,{s}_{n}{s}_{n}\}$ and $\{{e}_{1}{e}_{1},{e}_{1}{e}_{2},...,{e}_{n}{e}_{n}\}$. Can we find the basis transformation matrix for this space in terms of $U$?
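The expected answer can be checked numerically: since ${e}_{i}=\sum _{j}{U}_{ij}{s}_{j}$, we get ${e}_{i}{e}_{k}=\sum _{j,l}{U}_{ij}{U}_{kl}\,{s}_{j}{s}_{l}$, i.e. the product basis transforms with the Kronecker product $U\otimes U$. A NumPy sketch (the orthogonal $U$ below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# An arbitrary orthonormal basis change U (via QR of a random matrix).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# In s-coordinates, s_j is the j-th standard basis vector, so
# e_i = sum_j U_ij s_j is simply the i-th row of U.
# Candidate transformation on the product space: the Kronecker product.
W = np.kron(U, U)

# Check: coordinates of e_i e_k in the {s_j s_l} basis
# match row (i*n + k) of U kron U.
for i in range(n):
    for k in range(n):
        assert np.allclose(np.kron(U[i], U[k]), W[i * n + k])
```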

Linear algebra · Answered question

Chelsea Lamb 2022-09-26

If the matrix of a linear transformation $T:{\mathbb{R}}^{N}\to {\mathbb{R}}^{N}$ with respect to some basis is symmetric, what does that say about the transformation? Is there a way to interpret the transformation geometrically in a nice/simple way?
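The standard answer here is the spectral theorem: a real symmetric matrix is diagonalized by an orthogonal matrix, so the transformation scales space along some orthonormal set of axes. A small numerical illustration with an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2          # an arbitrary real symmetric matrix

w, Q = np.linalg.eigh(A)   # real eigenvalues w, orthonormal eigenvectors Q

assert np.allclose(Q.T @ Q, np.eye(4))        # Q is orthogonal
assert np.allclose(Q @ np.diag(w) @ Q.T, A)   # A = Q diag(w) Q^T
# Geometrically: rotate into the eigenbasis (Q^T), scale axis i by w_i,
# rotate back (Q).
```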

Linear algebra · Answered question

Julia Chang 2022-09-25

Give the standard matrix of the linear transformation that first sends $(x, y, z)$ to $(y, y, z)$ and then rotates the resulting vector 90 degrees counterclockwise about the origin in the plane $x = y$.

Linear algebra · Answered question

Colten Andrade 2022-09-21

Given a matrix $Y\in {\mathbb{R}}^{m\times n}$, find a transformation matrix $\mathrm{\Theta}\in {\mathbb{R}}^{n\times p}$ such that

$$\frac{1}{m}{\mathrm{\Theta}}^{T}{Y}^{T}Y\mathrm{\Theta}={I}_{p\times p},$$

where ${I}_{p\times p}$ is the identity matrix.

My attempt: $\frac{1}{\sqrt{m}}Y\mathrm{\Theta}$ is an orthogonal matrix, and I tried to find a $\mathrm{\Theta}$ that satisfies this, but that doesn't work.
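Assuming ${Y}^{T}Y$ has rank at least $p$, one way to build such a $\mathrm{\Theta}$ is from the eigendecomposition ${Y}^{T}Y=V\mathrm{\Lambda}{V}^{T}$: take $\mathrm{\Theta}=\sqrt{m}\,{V}_{p}{\mathrm{\Lambda}}_{p}^{-1/2}$, keeping the top $p$ eigenpairs. A sketch (the data here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 50, 6, 4
Y = rng.standard_normal((m, n))

# Eigendecomposition of the symmetric PSD Gram matrix Y^T Y.
w, V = np.linalg.eigh(Y.T @ Y)   # ascending eigenvalues
w, V = w[::-1], V[:, ::-1]       # sort descending

# Theta = sqrt(m) * V_p * Lambda_p^{-1/2}: column j is scaled by 1/sqrt(w_j).
Theta = np.sqrt(m) * V[:, :p] / np.sqrt(w[:p])

# Verify the whitening condition (1/m) Theta^T Y^T Y Theta = I_p.
assert np.allclose(Theta.T @ (Y.T @ Y) @ Theta / m, np.eye(p))
```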

Linear algebra · Answered question

Celinamg8 2022-09-20

Given a vector $u:=[{u}_{1},\dots ,{u}_{n}{]}^{\mathrm{T}}$, I am trying to find a coordinate transformation matrix $Q\in {\mathbb{R}}^{n\times n}$, which is nonsingular, satisfying:

$$\begin{array}{r}\left[\begin{array}{c}0\\ \vdots \\ 0\\ ||u||\end{array}\right]=Qu.\end{array}$$
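One concrete nonsingular (indeed orthogonal) choice for $Q$ is a Householder reflection that maps $u$ onto $||u||{e}_{n}$; a sketch, assuming $u\ne 0$:

```python
import numpy as np

def householder_to_last_axis(u):
    """Orthogonal Q with Q @ u = (0, ..., 0, ||u||)^T (u must be nonzero)."""
    u = np.asarray(u, dtype=float)
    n = u.size
    target = np.zeros(n)
    target[-1] = np.linalg.norm(u)
    v = u - target
    if np.allclose(v, 0):          # u already lies along e_n
        return np.eye(n)
    # Reflection across the hyperplane orthogonal to v.
    return np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)

u = np.array([3.0, 4.0, 12.0])
Q = householder_to_last_axis(u)
assert np.allclose(Q @ u, [0, 0, np.linalg.norm(u)])
assert np.allclose(Q.T @ Q, np.eye(3))   # orthogonal, hence nonsingular
```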

Linear algebra · Answered question

ahmed zubair 2022-09-14

A trust fund has $200,000 to invest. Three alternative investments have been identified, earning income of 10 percent, 7 percent, and 8 percent, respectively. A goal has been set to earn an annual income of $16,000 on the total investment. One condition set by the trust is that the combined investment in alternatives 2 and 3 should be triple the amount invested in alternative 1. Determine the amount of money that should be invested in each option to satisfy the requirements of the trust fund. Solve by the Gauss-Jordan method.

What are the equations formed in this question?
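With ${x}_{1},{x}_{2},{x}_{3}$ the amounts in the three alternatives, the conditions read ${x}_{1}+{x}_{2}+{x}_{3}=200000$, $0.10{x}_{1}+0.07{x}_{2}+0.08{x}_{3}=16000$, and ${x}_{2}+{x}_{3}=3{x}_{1}$ (i.e. $-3{x}_{1}+{x}_{2}+{x}_{3}=0$). A quick check of this system, solved here with NumPy rather than by hand Gauss-Jordan elimination:

```python
import numpy as np

# x1 +      x2 +      x3 = 200000   (total investment)
# 0.10x1 + 0.07x2 + 0.08x3 = 16000  (income goal)
# -3x1 +    x2 +      x3 = 0        (alternatives 2+3 triple alternative 1)
A = np.array([[1.0, 1.0, 1.0],
              [0.10, 0.07, 0.08],
              [-3.0, 1.0, 1.0]])
b = np.array([200000.0, 16000.0, 0.0])

x = np.linalg.solve(A, b)
assert np.allclose(x, [50000, 100000, 50000])  # amounts in options 1, 2, 3
```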

Linear algebra · Answered question

tamola7f 2022-09-13

Every matrix represents a linear transformation, but depending on the characteristics of the matrix, the linear transformation it represents can be limited to a specific type. For example, an orthogonal matrix represents a rotation (and possibly a reflection). Is there something similar for triangular matrices? Do they represent any specific type of transformation?

Linear algebra · Answered question

nar6jetaime86 2022-09-13

Find the matrix $T$ of the following linear transformation:

$T:{R}_{2}(x)\to {R}_{2}(x)$ defined by $T(a{x}^{2}+bx+c)=2ax+b$
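Assuming the basis $\{{x}^{2},x,1\}$, a polynomial $a{x}^{2}+bx+c$ has coordinates $(a,b,c)$ and its image $2ax+b$ has coordinates $(0,2a,b)$, which fixes the matrix below; a numeric check:

```python
import numpy as np

# Matrix of T(ax^2 + bx + c) = 2ax + b in the basis {x^2, x, 1}:
# column j holds the coordinates of T applied to the j-th basis vector.
T = np.array([[0, 0, 0],    # x^2 coefficient of the image
              [2, 0, 0],    # x   coefficient: 2a
              [0, 1, 0]])   # 1   coefficient: b

a, b, c = 3, -5, 7          # arbitrary test polynomial 3x^2 - 5x + 7
image = T @ np.array([a, b, c])
assert np.array_equal(image, [0, 2 * a, b])   # i.e. 6x - 5
```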

Linear algebra · Answered question

Addison Parker 2022-09-12

Without constructing, find the coordinates of the points of intersection of the graph of the function y = -0.6x + 3 with the coordinate axes.
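Without plotting: set $x=0$ for the intersection with the y-axis and $y=0$ for the intersection with the x-axis. A one-line check:

```python
# Intersections of y = -0.6x + 3 with the coordinate axes.
slope, intercept = -0.6, 3.0

y_cross = intercept              # at x = 0: point (0, 3)
x_cross = -intercept / slope     # at y = 0: x = 3 / 0.6 = 5, point (5, 0)

assert y_cross == 3.0
assert abs(x_cross - 5.0) < 1e-12
```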

Linear algebra · Answered question

Julius Blankenship 2022-09-12

Consider ${\mathbb{K}}^{n}$ and ${\mathbb{K}}^{m}$, both with the $\|\cdot\|_{1}$-norm, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$.

Let $\|T\|=\inf \{M\ge 0:\|T(x)\|\le M\|x\|\ \mathrm{\forall}x\in {\mathbb{K}}^{n}\}$ be the operator norm of a linear transformation $T:{\mathbb{K}}^{n}\to {\mathbb{K}}^{m}$.

Show that the operator norm of $T$ is also given by

$$\|T\|=\max \Big\{\sum _{i=1}^{m}|{a}_{ij}| : 1\le j\le n\Big\}=:\|A\|_{1},$$

where $A$ is the transformation matrix of $T$ and ${a}_{ij}$ is its entry in the $i$-th row and $j$-th column.
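The claim can be sanity-checked numerically: $\|Ax\|_{1}\le \|A\|_{1}\|x\|_{1}$ for random $x$, and the bound is attained at the standard basis vector of the maximizing column. A sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 5))

col_sums = np.abs(A).sum(axis=0)
norm1 = col_sums.max()                          # claimed operator norm
assert np.isclose(norm1, np.linalg.norm(A, 1))  # NumPy's induced 1-norm agrees

# ||Ax||_1 <= norm1 * ||x||_1 for random x ...
for _ in range(1000):
    x = rng.standard_normal(5)
    assert np.linalg.norm(A @ x, 1) <= norm1 * np.linalg.norm(x, 1) + 1e-12

# ... with equality at the standard basis vector of the worst column.
j = col_sums.argmax()
e_j = np.eye(5)[:, j]
assert np.isclose(np.linalg.norm(A @ e_j, 1), norm1)
```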

Linear algebra · Answered question

tuzkutimonq4 2022-09-11

Convert the following matrix:

$$\left[\begin{array}{cccccccc}0& 0& 1& 1& 0& 0& 1& 1\\ 0& 0& 0& 0& 1& 1& 1& 1\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right]$$

To the following:

$$\left[\begin{array}{cccccccc}0& 0& 1& 1& 0& 0& 1& 1\\ 0& 0& 0& 0& 1& 1& 1& 1\\ 0& 1& 0& 1& 0& 1& 0& 1\end{array}\right]$$

Linear algebra · Answered question

engausidarb 2022-09-11

Real symmetric matrices ${S}_{ij}$ can always be put in a standard diagonal form ${s}_{i}{\delta}_{ij}$ under an orthogonal transformation. Similarly, real antisymmetric matrices ${A}_{ij}$ can always be put in a standard band diagonal form with diagonal matrix entries ${a}_{i}\left(\begin{array}{cc}0& 1\\ -1& 0\end{array}\right)$ (with a $0$ diagonal entry when the dimension of the matrix is odd), again under an orthogonal transformation.
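The symmetric half of this statement can be checked directly with an eigendecomposition (for the antisymmetric half one would use, e.g., a real Schur decomposition instead); a sketch for the symmetric case:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5))
S = (M + M.T) / 2           # arbitrary real symmetric matrix

s, O = np.linalg.eigh(S)    # eigenvalues s_i, orthogonal O

assert np.allclose(O.T @ O, np.eye(5))         # O is orthogonal
assert np.allclose(O.T @ S @ O, np.diag(s))    # O^T S O = diag(s_i)
```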

Linear algebra · Answered question

moidu13x8 2022-09-09

How does one find the matrix of a linear transformation $f:{R}^{n}\to Ma{t}^{(n,n)}$?

Let's say we do $f:{R}^{2}\to Ma{t}^{(2,2)}$ given by $f(x,y)=\left[\begin{array}{cc}x& 2y\\ x+y& x\end{array}\right]$

We calculate the image of the canonical basis: $f(1,0)=\left[\begin{array}{cc}1& 0\\ 1& 1\end{array}\right]$ and $f(0,1)=\left[\begin{array}{cc}0& 2\\ 1& 0\end{array}\right]$

Now the problematic part: when calculating the matrix of a map ${R}^{n}\to {R}^{m}$, the approach is to place the transposed images of the standard basis as columns, $(f(1,0,...,0{)}^{T}|f(0,1,...,0{)}^{T}|f(0,0,...,1{)}^{T})$. We can solve the $R\to Mat$ problem by using $({A}^{T}|{B}^{T}|{C}^{T}...)$, of course, but is there any way to flatten the matrices into vectors so that we can build something like $({A}^{T}|{B}^{T}|{C}^{T}...)$?
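One common way to "shrink" the matrix values is to fix a basis of $Ma{t}^{(2,2)}$ and store the flattened images $\mathrm{vec}(f({e}_{i}))$ as columns; the resulting $4\times 2$ matrix then represents $f$ as a map ${R}^{2}\to {R}^{4}$. A sketch for the example above:

```python
import numpy as np

def f(x, y):
    return np.array([[x, 2 * y],
                     [x + y, x]])

# Columns: row-major flattenings of f applied to the standard basis of R^2.
M = np.column_stack([f(1, 0).ravel(), f(0, 1).ravel()])   # shape (4, 2)

x, y = 3.0, -2.0
# Multiplying by M and reshaping recovers f(x, y).
assert np.allclose((M @ np.array([x, y])).reshape(2, 2), f(x, y))
```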

Linear algebra · Answered question

vballa15ei 2022-09-08

Let $T:V\to W$ be a linear transformation of two finite dimensional vector spaces $V$,$W$ (both over a field $F$).

My assignment is to show that there exist a basis $B$ of $V$ and a basis $C$ of $W$ such that the matrix of $T$ with respect to the bases $B$ and $C$ actually is a given matrix $A$.

Since $V$ and $W$ are vector spaces, they of course have bases, and I can represent a transformation from $V$ to $W$ with respect to a pair of bases. But how do I show that (some specific?) bases exist such that the transformation with respect to these bases has a given $A$ as its transformation matrix?

Assume $A$ is the transformation matrix of $T$. Then we take one vector in $V$ expressed in (some basis) $B$, and when we transform it, we get some vector in $W$ expressed in (some basis) $C$. So $A$ does at least change the basis of a vector, but since $V$ and $W$ might be of different dimension, I don't really know where to go from here. Am I on the wrong track?

Linear algebra · Answered question

Spactapsula2l 2022-09-07

Linear transformation, T, such that:

$T:{M}_{22}\to {M}_{22}$

$T\left(\left[\begin{array}{cc}{x}_{11}& {x}_{12}\\ {x}_{21}& {x}_{22}\end{array}\right]\right)=\left[\begin{array}{cc}{x}_{12}-5{x}_{21}-{x}_{22}& -{x}_{11}-2{x}_{12}+3{x}_{21}+4{x}_{22}\\ -3{x}_{21}& -{x}_{11}-{x}_{12}+{x}_{21}+3{x}_{22}\end{array}\right]$

What is the matrix that represents this ${M}_{22}\to {M}_{22}$ transformation?

Is it

$\left[\begin{array}{cccc}0& 1& -5& -1\\ -1& -2& 3& 4\\ 0& 0& -3& 0\\ -1& -1& 1& 3\end{array}\right]$?

If so, how could this be multiplied by a $2\times 2$ matrix to give another $2\times 2$ matrix? ($2\times 2$ matrices cannot be multiplied by $4\times 4$ matrices.)
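The $4\times 4$ matrix acts on vectorized $2\times 2$ matrices, not on the matrices themselves: flatten the input to a 4-vector (here row-major, matching the ordering ${x}_{11},{x}_{12},{x}_{21},{x}_{22}$), multiply, and reshape back. A check that the matrix above does reproduce $T$:

```python
import numpy as np

A = np.array([[ 0,  1, -5, -1],
              [-1, -2,  3,  4],
              [ 0,  0, -3,  0],
              [-1, -1,  1,  3]])

def T(X):
    x11, x12, x21, x22 = X[0, 0], X[0, 1], X[1, 0], X[1, 1]
    return np.array([[x12 - 5 * x21 - x22, -x11 - 2 * x12 + 3 * x21 + 4 * x22],
                     [-3 * x21,            -x11 - x12 + x21 + 3 * x22]])

rng = np.random.default_rng(5)
X = rng.standard_normal((2, 2))
# vec -> multiply -> unvec, instead of a (shape-incompatible) matrix product.
assert np.allclose((A @ X.ravel()).reshape(2, 2), T(X))
```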

Linear algebra · Answered question

cuuhorre76 2022-09-05

Star operator in the simplest form

Let $E$ together with $g$ be an inner product space (over the field $\mathbb{R}$) with $\text{dim}E=n<\mathrm{\infty}$; let $\{{e}_{1},\cdots ,{e}_{n}\}$ be an orthonormal basis of $E$ and $\{{e}^{1},\cdots ,{e}^{n}\}$ its dual basis (for ${E}^{\ast}$). Now we define $\omega :={e}^{1}\wedge \cdots \wedge {e}^{n}$ as the volume element of $E$.

I have proved that ${g}^{\flat}:E\to {E}^{\ast}$ with the rule $({g}^{\flat}(u))(v)=g(u,v)$ for all $u,v\in E$ is an isomorphism.

Convention: $\stackrel{~}{u}:={g}^{\flat}(u)$

I wish to prove that for any $p$-form $\theta \in {\mathrm{\Lambda}}^{p}(E)$ there exists a unique element $\eta \in {\mathrm{\Lambda}}^{n-p}(E)$ such that

$\eta ({u}_{1},\cdots ,{u}_{n-p})\,\omega =\theta \wedge {\stackrel{~}{u}}_{1}\wedge \cdots \wedge {\stackrel{~}{u}}_{n-p}\phantom{\rule{2em}{0ex}}\mathrm{\forall}{u}_{1},\cdots ,{u}_{n-p}\in E$

How can I do this?

Of course, I guess one should define an inner product ${g}_{{\mathrm{\Lambda}}^{k}(E)}$ on ${\mathrm{\Lambda}}^{k}(E)$ for every $0<k<n$ and then use ${g}_{{\mathrm{\Lambda}}^{k}(E)}^{\flat}$. What is your method, and how does it work?

Linear algebra · Answered question

Alfredeim 2022-09-05

Is the differential $\mathrm{d}\overrightarrow{r}$ a sensible mathematical object?

When doing differential geometry, physicists often use

$\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{i}\text{}{\overrightarrow{e}}_{i}$

for many different things. For instance, they define the holonomic basis $\{{\overrightarrow{e}}_{a}^{\text{}\mathrm{\prime}}\}$ relative to a coordinate system $\{{x}^{\prime a}\}$ by imposing

$\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{\prime a}\text{}{\overrightarrow{e}}_{a}^{\text{}\mathrm{\prime}}\phantom{\rule{thickmathspace}{0ex}}\u27f9\phantom{\rule{thickmathspace}{0ex}}{\overrightarrow{e}}_{a}^{\text{}\mathrm{\prime}}=\frac{\mathrm{\partial}\overrightarrow{r}}{\mathrm{\partial}{x}^{\prime a}}$

and they compute the quadratic form of the metric $\mathrm{d}{s}^{2}$ as $\mathrm{d}\overrightarrow{r}\cdot \mathrm{d}\overrightarrow{r}$ .

Computing the differential of a vector field ($\overrightarrow{r}={x}^{i}\,{\overrightarrow{e}}_{i}$, in this case) feels strange, as in differential geometry differentials are usually considered to be alternating $k$-forms, so it would only make sense to talk about the differential of a scalar field (i.e., its exterior derivative).

Not only that, the "true" definitions of holonomic bases and $\mathrm{d}{s}^{2}$ don't use this $\mathrm{d}\overrightarrow{r}$ at all.

EDIT: in fact, taking the derivative of $\overrightarrow{r}$, or of any other vector field, is something we are not allowed to do on a general differentiable manifold without a connection, so we obviously wouldn't define a holonomic basis like that. A holonomic basis would basically be the basis formed by the tangent vectors $\mathrm{\partial}/\mathrm{\partial}{x}^{\prime a}$.

After thinking about it, I thought the differential of a vector field might just be

$\mathrm{d}\overrightarrow{\phi}=({\mathrm{\nabla}}_{i}{\phi}^{j})\,{\overrightarrow{e}}_{j}\otimes \mathrm{d}{x}^{i},$

so maybe $\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{i}\,{\overrightarrow{e}}_{i}$ means $\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{i}\otimes {\overrightarrow{e}}_{i}$? How is $\mathrm{d}\overrightarrow{r}$ rigorously defined, otherwise?

Linear algebra · Answered question

Gaige Haynes 2022-09-05

Why do determinants have their particular form?

I know that for a matrix $A$, if $\det(A)=0$ then the matrix does not have an inverse, and hence the associated system of equations does not have a unique solution. However, why do the determinant formulas have the form they do? Why all the complicated cofactor expansions and alternating signs?

To sum it up: I know what determinants do, but it's unclear to me why. Is there an intuitive explanation that can be attached to a cofactor expansion?
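To make the cofactor expansion concrete, here is a direct (inefficient, $O(n!)$) implementation of expansion along the first row, checked against NumPy:

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)  # alternating signs
    return total

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
assert np.isclose(det_cofactor(A), np.linalg.det(A))
```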

Linear algebra · Answered question

Modelfino0g 2022-09-05

Given ax+by+c=0, what is the set of all operations on this equation that do not alter the plotted line?

Operations such as $f(x)+a-a$ are obvious candidates for such a set. However, e.g., for the line $y=-x$, it seems to me to be non-trivial that ${x}^{3}+{y}^{3}=0$ will plot the same line but ${x}^{2}+{y}^{2}=0$ won't. Translation between coordinate systems also seems to be a non-trivial example. Is there any way to designate such a set? (Could this be generalized to other types of curves?)

The following are some more thoughts on the question:

It would be interesting for finding alternate equation forms that make certain properties of a curve clearer. For instance, $\frac{x}{a}+\frac{y}{b}=1$ makes the abscissa and ordinate at the origin immediately obvious. But we know that under some types of algebra, $ax+by+c=0$ might fail to be representable as $\frac{x}{a}+\frac{y}{b}=1$. So we're led to think that these two equations plot a line by virtue of legitimate operations between them.

The equation of a plane also seems to be nicely related to the general form of a line: if ${r}_{0}=({x}_{0},{y}_{0})$ and $r=(x,y)$ are two vectors pointing to the plane and the normal is $n=({n}_{x},{n}_{y})$, and $\circ $ between vectors is the dot product, then $(x-{x}_{0},y-{y}_{0})\circ n=(x-{x}_{0})\ast {n}_{x}+(y-{y}_{0})\ast {n}_{y}={n}_{x}\ast x+{n}_{y}\ast y-({x}_{0}{n}_{x}+{y}_{0}{n}_{y})=a\ast x+b\ast y+c=0$

The idea is to be able to see how the form of an equation can be altered, not the content of the variables. It seems odd to me that very complicated equations can have the same plotted curve as simple forms, but that this property wouldn't appear by virtue of the equations themselves, or of the set of valid operations on them. This might seem weird, but, say, it is never immediately obvious that $ax+by+c=0$ plots a line, or that ${x}^{2}+{y}^{2}={r}^{2}$ plots a circle, unless we actually do the plotting, and $ax+by+c=0$ seems far less fundamental than $y=mx+b$.

Note that in the case of a circle, the Pythagorean theorem seems to be its clearest representation with the methods of analytic geometry, and the moment an equation can be said to share some sort of operation set with the Pythagorean theorem, we know we're speaking of a circle. It seems that if we could somehow draw the operation set of a circle, we would get something like the Pythagorean theorem, and that this operation set gets somehow deformed in order to give a representation on the Cartesian plane. For a translated circle with center $(h,k)$, ${x}^{2}-2xh+{h}^{2}+{y}^{2}-2yk+{k}^{2}={r}^{2}$ means absolutely nothing to us, but the form $(x-h{)}^{2}+(y-k{)}^{2}={r}^{2}$ is clear as day.

Linear algebra · Answered question

manudyent7 2022-09-05

Trouble with the definition of the cross product

I am trying to understand the definition of the cross product given by Wikipedia

The article says that we can define the cross product c of two vectors u,v given a suitable "dot product" ${\eta}^{mi}$ as follows

${c}^{m}:=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}{\eta}^{mi}{\u03f5}_{ijk}{u}^{j}{v}^{k}$

To demonstrate my current understanding of this definition, I will introduce some notation and terminology. Then I will show where my confusion arises with an example. I do apologize in advance for the length of this post.

Let M be a smooth Riemannian manifold on ${\mathbb{R}}^{3}$ with the metric tensor g. Pick a coordinate chart (U,$\varphi $) with $\varphi $ a diffeomorphism. We define a collection $\beta =\{{b}_{i}:U\to TM|i\in \{1,2,3\}\}$ of vector fields, called coordinate vectors, as follows

${b}_{i}(x):={\textstyle (}x,{\textstyle (}{\delta}_{x}\circ \frac{\mathrm{\partial}{\varphi}^{-1}}{\mathrm{\partial}{q}_{i}}\circ \varphi {\textstyle )}(x){\textstyle )}$

where ${\delta}_{x}:{\mathbb{R}}^{3}\to {T}_{x}M$ denotes the canonical bijection. The coordinate vectors induce a natural basis ${\gamma}_{x}$ at each point $x\in U$ for the tangent space ${T}_{x}M$. Let $[{g}_{x}{]}_{S}$ denote the matrix representation of the metric tensor at the point x in the standard basis for ${T}_{x}M$ and let $[{g}_{x}{]}_{{\gamma}_{x}}$ denote the matrix representation in the basis ${\gamma}_{x}$.

My understanding of the above definition of the cross product now follows. Let $u,v\in {T}_{x}M$ be tangent vectors and let

$[u{]}_{{\gamma}_{x}}=\left[\begin{array}{c}{u}_{1}\\ {u}_{2}\\ {u}_{3}\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}[v{]}_{{\gamma}_{x}}=\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]$

denote the coordinates of u,v in the basis ${\gamma}_{x}$. Then we define the mth coordinate of the cross product $u\times v\in {T}_{x}M$ in the basis ${\gamma}_{x}$ as

$${\textstyle (}[u\times v{]}_{{\gamma}_{x}}{\textstyle )}_{m}:=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}{\textstyle (}[{g}_{x}{]}_{{\gamma}_{x}}{\textstyle )}_{mi}\,{\u03f5}_{ijk}{u}_{j}{v}_{k}$$

Now I will demonstrate my apparent misunderstanding with an example. Let the manifold M be the usual Riemannian manifold on ${\mathbb{R}}^{3}$ and let $\varphi $ be given by

$\varphi ({x}_{1},{x}_{2},{x}_{3})=({x}_{1},{x}_{2},{x}_{3}-{x}_{1}^{2}-{x}_{2}^{2})$

${\varphi}^{-1}({q}_{1},{q}_{2},{q}_{3})=({q}_{1},{q}_{2},{q}_{3}+{q}_{1}^{2}+{q}_{2}^{2})$

The Jacobian matrix J of ${\varphi}^{-1}$ is

$J=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}2{q}_{1}& 2{q}_{2}& 1\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}{J}^{-1}=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}-2{q}_{1}& -2{q}_{2}& 1\end{array}\right]$

And the matrix representation of the metric tensor in the basis ${\gamma}_{x}$ is

$[{g}_{x}{]}_{{\gamma}_{x}}={J}^{T}[{g}_{x}{]}_{S}J=\left[\begin{array}{ccc}1+4{q}_{1}^{2}& 4{q}_{1}{q}_{2}& 2{q}_{1}\\ \text{}4{q}_{1}{q}_{2}& 1+4{q}_{2}^{2}& 2{q}_{2}\\ \text{}2{q}_{1}& 2{q}_{2}& 1\end{array}\right]$

Now choose $x=(1,1,-1)$. The coordinates of $x$ are then $\varphi (x)=(1,1,-3)$, so with ${q}_{1}={q}_{2}=1$ the three matrices above become

$J=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}2& 2& 1\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}{J}^{-1}=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}-2& -2& 1\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}[{g}_{x}{]}_{{\gamma}_{x}}=\left[\begin{array}{ccc}5& 4& 2\\ \text{}4& 5& 2\\ \text{}2& 2& 1\end{array}\right]$

Now we compute the cross product in the basis ${\gamma}_{x}$. Using my understanding of the definition as outlined above, I get

$[u\times v{]}_{{\gamma}_{x}}=\left[\begin{array}{c}36\\ \text{}35\\ \text{}16\end{array}\right]$

If we instead compute the cross product in the standard basis, then using my understanding of the definition, I get

$[u\times v{]}_{S}=\left[\begin{array}{c}0\\ \text{}-1\\ \text{}2\end{array}\right]$

Naturally, these results ought to agree if we perform a change of basis on $[u\times v{]}_{{\gamma}_{x}}$. Doing just that, I get

$[u\times v{]}_{S}=J[u\times v{]}_{{\gamma}_{x}}=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}2& 2& 1\end{array}\right]\left[\begin{array}{c}36\\ \text{}35\\ \text{}16\end{array}\right]=\left[\begin{array}{c}36\\ \text{}35\\ \text{}158\end{array}\right]$

Clearly, these do not agree. I can think of several reasons for this. Perhaps the definition given on Wikipedia is erroneous or only works for orthogonal coordinates. Perhaps I am misinterpreting the definition given on Wikipedia. Or maybe I have made an error somewhere in my calculation. My question is then as follows. How should I interpret the definition given on Wikipedia, and how should one express that definition using the notation provided here?
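As a partial sanity check of the calculation (not of the definition itself), the metric in the ${\gamma}_{x}$ basis at ${q}_{1}={q}_{2}=1$ can be verified numerically from the Jacobian:

```python
import numpy as np

q1, q2 = 1.0, 1.0
J = np.array([[1, 0, 0],
              [0, 1, 0],
              [2 * q1, 2 * q2, 1]], dtype=float)

# With [g_x]_S = I (the usual metric on R^3), [g_x]_{gamma_x} = J^T J.
g = J.T @ J
assert np.allclose(g, [[5, 4, 2],
                       [4, 5, 2],
                       [2, 2, 1]])
# The stated inverse Jacobian also checks out.
assert np.allclose(np.linalg.inv(J), [[1, 0, 0],
                                      [0, 1, 0],
                                      [-2, -2, 1]])
```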

Finding detailed linear algebra problems and solutions has always been difficult, because textbooks rarely provide enough of them. Since linear algebra is used not only by engineering students but by anyone who has to work with specific calculations, we have provided a plethora of questions and answers in their original form. They will help you see the logic as you work through complex problems and understand the basic concepts of linear algebra more clearly. If you need additional help, or would like to connect several solutions, compare more than one solution as you approach your task.