Recent questions in Alternate coordinate systems

Linear algebra · Answered question

5nu1miq8u 2022-12-14

What relationship does linear acceleration have with angular acceleration?

Linear algebra · Answered question

hEorpaigh3tR 2022-12-04

The coordinates of the origin are ...........

A. (0, 1)

B. (0, 0)

C. (0, -1)

D. (1, 0)


Linear algebra · Open question

willinghamkids4 2022-11-07

Two groups of friends go to a baseball game, and each plans to share snacks. If 3 drinks and 2 orders of churros cost $16.00, and 5 drinks and 5 orders of churros cost $31.25, how much do 1 drink and 1 order of churros cost?
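For reference, this is a 2×2 linear system, and a minimal sketch of solving it (variable names are mine) is:

```python
import numpy as np

# 3 drinks + 2 churros = 16.00, 5 drinks + 5 churros = 31.25
A = np.array([[3.0, 2.0],
              [5.0, 5.0]])
b = np.array([16.00, 31.25])
drink, churro = np.linalg.solve(A, b)
print(drink, churro)    # 3.5  2.75
print(drink + churro)   # 6.25: cost of one drink plus one order of churros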

Linear algebra · Answered question

Addison Parker 2022-09-12

Without plotting, find the coordinates of the points where the graph of the function y = -0.6x + 3 intersects the coordinate axes.
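A quick numeric check of the two intercepts (setting x = 0 for the y-axis and solving y = 0 for the x-axis):

```python
# The line y = -0.6x + 3
def f(x):
    return -0.6 * x + 3

# y-axis intersection: set x = 0
y_intercept = (0, f(0))        # (0, 3.0)
# x-axis intersection: solve -0.6x + 3 = 0  =>  x = 3 / 0.6
x_intercept = (3 / 0.6, 0)     # (5.0, 0)
print(y_intercept, x_intercept)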

Linear algebra · Answered question

manudyent7 2022-09-05

Trouble with the definition of the cross product

I am trying to understand the definition of the cross product given by Wikipedia

The article says that we can define the cross product c of two vectors u,v given a suitable "dot product" ${\eta}^{mi}$ as follows

${c}^{m}:=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}{\eta}^{mi}{\epsilon}_{ijk}{u}^{j}{v}^{k}$

To demonstrate my current understanding of this definition, I will introduce some notation and terminology. Then I will show where my confusion arises with an example. I do apologize in advance for the length of this post.

Let M be a smooth Riemannian manifold on ${\mathbb{R}}^{3}$ with the metric tensor g. Pick a coordinate chart (U,$\varphi $) with $\varphi $ a diffeomorphism. We define a collection $\beta =\{{b}_{i}:U\to TM|i\in \{1,2,3\}\}$ of vector fields, called coordinate vectors, as follows

${b}_{i}(x):={\textstyle (}x,{\textstyle (}{\delta}_{x}\circ \frac{\mathrm{\partial}{\varphi}^{-1}}{\mathrm{\partial}{q}_{i}}\circ \varphi {\textstyle )}(x){\textstyle )}$

where ${\delta}_{x}:{\mathbb{R}}^{3}\to {T}_{x}M$ denotes the canonical bijection. The coordinate vectors induce a natural basis ${\gamma}_{x}$ at each point $x\in U$ for the tangent space ${T}_{x}M$. Let $[{g}_{x}{]}_{S}$ denote the matrix representation of the metric tensor at the point x in the standard basis for ${T}_{x}M$, and let $[{g}_{x}{]}_{{\gamma}_{x}}$ denote the matrix representation in the basis ${\gamma}_{x}$.

My understanding of the above definition of the cross product now follows. Let $u,v\in {T}_{x}M$ be tangent vectors and let

$[u{]}_{{\gamma}_{x}}=\left[\begin{array}{c}{u}_{1}\\ {u}_{2}\\ {u}_{3}\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}[v{]}_{{\gamma}_{x}}=\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]$

denote the coordinates of u,v in the basis ${\gamma}_{x}$. Then we define the mth coordinate of the cross product $u\times v\in {T}_{x}M$ in the basis ${\gamma}_{x}$ as

$\left([u\times v]_{\gamma_{x}}\right)_{m}:=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}\left([g_{x}]_{\gamma_{x}}\right)_{mi}\epsilon_{ijk}u_{j}v_{k}$

Now I will demonstrate my apparent misunderstanding with an example. Let the manifold M be the usual Riemannian manifold on ${\mathbb{R}}^{3}$ and let $\varphi $ be given by

$\varphi ({x}_{1},{x}_{2},{x}_{3})=({x}_{1},{x}_{2},{x}_{3}-{x}_{1}^{2}-{x}_{2}^{2})$

${\varphi}^{-1}({q}_{1},{q}_{2},{q}_{3})=({q}_{1},{q}_{2},{q}_{3}+{q}_{1}^{2}+{q}_{2}^{2})$

The Jacobian matrix J of ${\varphi}^{-1}$ is

$J=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}2{q}_{1}& 2{q}_{2}& 1\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}{J}^{-1}=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}-2{q}_{1}& -2{q}_{2}& 1\end{array}\right]$

And the matrix representation of the metric tensor in the basis ${\gamma}_{x}$ is

$[{g}_{x}{]}_{{\gamma}_{x}}={J}^{T}[{g}_{x}{]}_{S}J=\left[\begin{array}{ccc}1+4{q}_{1}^{2}& 4{q}_{1}{q}_{2}& 2{q}_{1}\\ \text{}4{q}_{1}{q}_{2}& 1+4{q}_{2}^{2}& 2{q}_{2}\\ \text{}2{q}_{1}& 2{q}_{2}& 1\end{array}\right]$

Now choose $x=(1,1,-1)$. The coordinates of x are evidently $\varphi (x)=(1,1,1)$ and the three matrices above become

$J=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}2& 2& 1\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}{J}^{-1}=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}-2& -2& 1\end{array}\right]$ $\text{}\text{}\text{}\text{}\text{}\text{}[{g}_{x}{]}_{{\gamma}_{x}}=\left[\begin{array}{ccc}5& 4& 2\\ \text{}4& 5& 2\\ \text{}2& 2& 1\end{array}\right]$

Now we compute the cross product in the basis ${\gamma}_{x}$. Using my understanding of the definition as outlined above, I get

$[u\times v{]}_{{\gamma}_{x}}=\left[\begin{array}{c}36\\ \text{}35\\ \text{}16\end{array}\right]$

If we instead compute the cross product in the standard basis, then using my understanding of the definition, I get

$[u\times v{]}_{S}=\left[\begin{array}{c}0\\ \text{}-1\\ \text{}2\end{array}\right]$

Naturally, these results ought to agree if we perform a change of basis on $[u\times v{]}_{{\gamma}_{x}}$. Doing just that, I get

$[u\times v{]}_{S}=J[u\times v{]}_{{\gamma}_{x}}=\left[\begin{array}{ccc}1& 0& 0\\ \text{}0& 1& 0\\ \text{}2& 2& 1\end{array}\right]\left[\begin{array}{c}36\\ \text{}35\\ \text{}16\end{array}\right]=\left[\begin{array}{c}36\\ \text{}35\\ \text{}158\end{array}\right]$

Clearly, these do not agree. I can think of several reasons for this. Perhaps the definition given on Wikipedia is erroneous or only works for orthogonal coordinates. Perhaps I am misinterpreting the definition given on Wikipedia. Or maybe I have made an error somewhere in my calculation. My question is then as follows. How should I interpret the definition given on Wikipedia, and how should one express that definition using the notation provided here?

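For what it is worth, the discrepancy disappears numerically under one plausible reading of Wikipedia's formula: taking ${\eta}^{mi}$ to be the *inverse* metric and scaling the Levi-Civita symbol by $\sqrt{\det g}$ to make it a tensor. This is only a sketch of that reading, not a definitive resolution; the vectors u, v below are arbitrary sample values, since the question does not state the ones used.

```python
import numpy as np

# Jacobian of phi^{-1} at the chart coordinates q = phi(x) = (1, 1, 1)
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [2.0, 2.0, 1.0]])
g = J.T @ J                  # metric in the coordinate basis gamma_x
g_inv = np.linalg.inv(g)

# Arbitrary sample tangent vectors, given in gamma_x components
u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])

# Levi-Civita symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# Reading: c^m = g^{mi} * sqrt(det g) * epsilon_{ijk} u^j v^k
w = np.einsum('ijk,j,k->i', eps, u, v)       # epsilon_{ijk} u^j v^k
c = np.sqrt(np.linalg.det(g)) * (g_inv @ w)  # c in gamma_x components

# Push both sides to the standard basis and compare
lhs = J @ c
rhs = np.cross(J @ u, J @ v)
print(np.allclose(lhs, rhs))   # True
```

Under this reading the change of basis works out identically for any u, v, since $J(J^{T}J)^{-1}w = J^{-T}w$ matches the cofactor identity $(Ju)\times (Jv)=\det (J)\,J^{-T}(u\times v)$ when $\det J>0$.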

Linear algebra · Answered question

Skye Vazquez 2022-09-05

Vector projection in polar coordinates

In Euclidean space with a Cartesian coordinate system, we know the vector projection of a vector u onto a vector v is simply

$\overrightarrow{P}=\frac{(\overrightarrow{u}\cdot \overrightarrow{v})\phantom{\rule{thinmathspace}{0ex}}\overrightarrow{v}}{\overrightarrow{v}\cdot \overrightarrow{v}}$

What would the vector projection be in polar or spherical coordinates, or other alternate coordinate systems?

Suppose we have vectors defined as

$\overrightarrow{u}={u}^{\alpha}{e}_{\alpha}$

$\overrightarrow{v}={v}^{\beta}{e}_{\beta}$

Does the projection vector of u onto v become

$\overrightarrow{P}=\frac{({u}^{\alpha}{v}^{\beta}{e}_{\alpha \beta})\overrightarrow{v}}{{v}^{\alpha}{v}^{\beta}{e}_{\alpha \beta}}$

where ${e}_{\alpha}$ denotes the basis vectors and ${e}_{\alpha \beta}$ the metric tensor.

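As a hedged numerical sketch (not an authoritative answer): the proposed formula, with the metric supplying both dot products, can be checked in plane polar coordinates, where the coordinate basis gives $g=\mathrm{diag}(1,{r}^{2})$. All values below are illustrative.

```python
import numpy as np

# Point (r, theta) and the polar coordinate basis e_r, e_theta
# (e_theta has length r); B holds the basis as columns
r, th = 2.0, 0.7
B = np.column_stack([[np.cos(th), np.sin(th)],
                     [-r * np.sin(th), r * np.cos(th)]])
g = B.T @ B                       # metric: diag(1, r^2)

u = np.array([1.0, 0.3])          # components u^alpha in the polar basis
v = np.array([-0.5, 0.8])

# Proposed formula: the scalar coefficient uses the metric; the result
# keeps polar components
coeff = (u @ g @ v) / (v @ g @ v)
P_polar = coeff * v

# Same projection done entirely in Cartesian components
u_c, v_c = B @ u, B @ v
P_cart = (u_c @ v_c) / (v_c @ v_c) * v_c

print(np.allclose(B @ P_polar, P_cart))   # True
```

The agreement is automatic here because $g={B}^{T}B$, so ${u}^{T}gv$ equals the Cartesian dot product of the pushed-forward vectors.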

Linear algebra · Answered question

Alfredeim 2022-09-05

Is the differential $\mathrm{d}\overrightarrow{r}$ a sensible mathematical object?

When doing differential geometry, physicists often use

$\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{i}\text{}{\overrightarrow{e}}_{i}$

for many different things. For instance, they define the holonomic basis $\{{\overrightarrow{e}}_{a}^{\text{}\mathrm{\prime}}\}$ relative to a coordinate system $\{{x}^{\prime a}\}$ by imposing

$\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{\prime a}\text{}{\overrightarrow{e}}_{a}^{\text{}\mathrm{\prime}}\phantom{\rule{thickmathspace}{0ex}}\u27f9\phantom{\rule{thickmathspace}{0ex}}{\overrightarrow{e}}_{a}^{\text{}\mathrm{\prime}}=\frac{\mathrm{\partial}\overrightarrow{r}}{\mathrm{\partial}{x}^{\prime a}}$

and they compute the quadratic form of the metric $\mathrm{d}{s}^{2}$ as $\mathrm{d}\overrightarrow{r}\cdot \mathrm{d}\overrightarrow{r}$ .

Computing the differential of a vector field ($\overrightarrow{r}={x}^{i}{\overrightarrow{e}}_{i}$, in this case) feels strange, as in differential geometry differentials are usually considered to be alternating k-forms, so it would only make sense to talk about the differential of a scalar field (i.e., its exterior derivative).

Not only that, the "true" definitions of holonomic bases and $\mathrm{d}{s}^{2}$ don't use this $\mathrm{d}\overrightarrow{r}$ at all.

EDIT: in fact, taking the derivative of $\overrightarrow{r}$, or any other vector field, is something we are not allowed to do in a general differentiable manifold without a connection, so we obviously wouldn't define a holonomic basis like that. A holonomic basis would basically be the basis formed by the tangent vectors $\mathrm{\partial}/\mathrm{\partial}{x}^{\prime a}$.

After thinking about it, I thought the differential of a vector field might just be

$\mathrm{d}\overrightarrow{\phi}=({\mathrm{\nabla}}_{i}{\phi}^{j})\text{}{\overrightarrow{e}}_{j}\otimes \mathrm{d}{x}^{i},$

so maybe $\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{i}\text{}{\overrightarrow{e}}_{i}$ means $\mathrm{d}\overrightarrow{r}=\mathrm{d}{x}^{i}\otimes {\overrightarrow{e}}_{i}$? How is $\mathrm{d}\overrightarrow{r}$ rigorously defined, otherwise?

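One part of the physicists' recipe can at least be checked concretely: reading $\mathrm{d}{s}^{2}=\mathrm{d}\overrightarrow{r}\cdot \mathrm{d}\overrightarrow{r}$ componentwise as ${g}_{ab}={\overrightarrow{e}}_{a}\cdot {\overrightarrow{e}}_{b}$ with ${\overrightarrow{e}}_{a}=\mathrm{\partial}\overrightarrow{r}/\mathrm{\partial}{x}^{\prime a}$. A small finite-difference sketch in plane polar coordinates (my own example) recovers $\mathrm{diag}(1,{r}^{2})$:

```python
import numpy as np

# Position field in Cartesian components, as a function of the polar
# coordinates q = (r, theta)
def r_vec(q):
    r, th = q
    return np.array([r * np.cos(th), r * np.sin(th)])

q0 = np.array([1.5, 0.4])
h = 1e-6

# Holonomic basis vectors e_a = partial r_vec / partial q^a,
# approximated by central finite differences
e = np.column_stack([
    (r_vec(q0 + h * np.eye(2)[a]) - r_vec(q0 - h * np.eye(2)[a])) / (2 * h)
    for a in range(2)
])

# "ds^2 = dr . dr" read componentwise: g_ab = e_a . e_b
g = e.T @ e
print(np.round(g, 6))   # approximately diag(1, r^2) = diag(1, 2.25)
```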

Linear algebra · Answered question

Gaige Haynes 2022-09-05

Why do determinants have their particular form?

I know that for a matrix A, if det(A) = 0 then the matrix does not have an inverse, and hence the associated system of equations does not have a unique solution. However, why do the determinant formulas have the form they do? Why all the complicated cofactor expansions and alternating signs?

To sum it up: I know what determinants do, but it's unclear to me why. Is there an intuitive explanation that can be attached to a cofactor expansion?

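A tiny example of the fact the question takes as given, for concreteness (my own numbers): a singular matrix collapses two different inputs onto the same output, which is exactly why no unique solution exists.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # second row is twice the first
print(np.linalg.det(A))          # ~ 0: A is singular

# Two different vectors with the same image => no unique solution
x1 = np.array([1.0, 0.0])
x2 = np.array([-1.0, 1.0])
print(np.allclose(A @ x1, A @ x2))   # True: both map to (1, 2)
```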

Linear algebra · Answered question

Modelfino0g 2022-09-05

Given ax+by+c=0, what is the set of all operations on this equation that do not alter the plotted line?

Operations such as $f(x)+a-a$ are obvious candidates for such a set. However, e.g., for the line y=−x, it seems to me to be non-trivial that ${x}^{3}+{y}^{3}=0$ will plot the same line but ${x}^{2}+{y}^{2}=0$ won't. Translation between coordinate systems also seems to be a non-trivial example. Is there any way to designate such a set? (Could this be generalized to other types of curves?)

The following are some more thoughts on the question:

This would be useful for finding alternate equation forms that make certain properties of a curve clearer. For instance, $\frac{x}{a}+\frac{y}{b}=1$ makes the intercepts on the two axes immediately obvious. But we know that under some types of algebra, $ax+by+c=0$ might fail to be representable as $\frac{x}{a}+\frac{y}{b}=1$. So we are led to think that these two equations plot a line by virtue of legitimate operations between them.

The equation of a plane also seems to be nicely related to the general form of a line, if ${r}_{0}=({x}_{0},{y}_{0})$ and r=(x,y) are two vectors pointing to the plane and the normal is $n=({n}_{x},{n}_{y})$. If $\circ $ between vectors is the dot product, $(x-{x}_{0},y-{y}_{0})\circ n=(x-{x}_{0})\ast {n}_{x}+(y-{y}_{0})\ast {n}_{y}={n}_{x}\ast x+{n}_{y}\ast y-({x}_{0}{n}_{x}+{y}_{0}{n}_{y})=a\ast x+b\ast y+c=0$

The idea is to be able to see how the form of an equation can be altered, not the content of the variables. It seems odd to me that very complicated equations could have the same plotted curve as simple forms, but that this property wouldn't appear by virtue of the equations themselves, or the set of valid operations on them. This might seem weird, but note that it is never immediately obvious that ax+by+c=0 plots a line, or that ${x}^{2}+{y}^{2}={r}^{2}$ plots a circle, unless we actually do the plotting, and ax+by+c=0 seems far less fundamental than y=mx+b.

Note that in the case of a circle, the Pythagorean theorem seems to be its clearest representation within the methods of analytic geometry, and the moment an equation can be said to share some sort of operation set with the Pythagorean theorem, we know we are speaking of a circle. It seems that if we could somehow draw the operation set of a circle, we would get something like the Pythagorean theorem, and that this operation set gets deformed in order to give a representation on the Cartesian plane. For a translated circle with center (h,k), ${x}^{2}-2xh+{h}^{2}+{y}^{2}-2yk+{k}^{2}={r}^{2}$ means almost nothing to us, but the form $(x-h{)}^{2}+(y-k{)}^{2}={r}^{2}$ is clear as day.

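A small illustration of the non-trivial example above (a sketch, assuming sympy is available): over the reals, ${x}^{3}+{y}^{3}$ factors with a second factor that is a sum of squares, which is why its real zero set is exactly the line y=−x.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# x^3 + y^3 = (x + y)(x^2 - xy + y^2)
factored = sp.factor(x**3 + y**3)
print(factored)

# The quadratic factor is a sum of squares: (x - y/2)^2 + (3/4) y^2,
# so over the reals it vanishes only at the origin, and the real zero
# set of x^3 + y^3 is exactly the line x + y = 0
quad = x**2 - x*y + y**2
print(sp.simplify(quad - ((x - y/2)**2 + sp.Rational(3, 4) * y**2)))  # 0
```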

Linear algebra · Answered question

cuuhorre76 2022-09-05

Star operator in the simplest form

Let E together with g be an inner product space (over the field ℝ) with $\text{dim}E=n<\mathrm{\infty}$, let $\{{e}_{1},\cdots ,{e}_{n}\}$ be an orthonormal basis of E, and let $\{{e}^{1},\cdots ,{e}^{n}\}$ be its dual basis (for ${E}^{\ast}$). Now we define $\omega :={e}^{1}\wedge \cdots \wedge {e}^{n}$ as the volume element of E.

I have proved that ${g}^{\flat}:E\to {E}^{\ast}$ with the rule $({g}^{\flat}(u))(v)=g(u,v)$ for all $u,v\in E$ is an isomorphism.

Convention: $\stackrel{~}{u}:={g}^{\flat}(u)$

I wish to prove that for any p-form $\theta \in {\mathrm{\Lambda}}^{p}(E)$, there exists a unique element $\eta \in {\mathrm{\Lambda}}^{n-p}(E)$ such that

$\eta ({u}_{1},\cdots ,{u}_{n-p})\omega =\theta \wedge {\stackrel{~}{u}}_{1}\cdots \wedge {\stackrel{~}{u}}_{n-p}\phantom{\rule{2em}{0ex}}\mathrm{\forall}{u}_{1},\cdots ,{u}_{n-p}\in E$

How can I do this?

I suspect one should define an inner product ${g}_{{\mathrm{\Lambda}}^{k}(E)}$ on ${\mathrm{\Lambda}}^{k}(E)$ for every 0<k<n and then use ${g}_{{\mathrm{\Lambda}}^{k}(E)}^{\flat}$. What is your method, and how does it work?

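In case it helps, here is one standard route, sketched without proof and only as a guess at what is intended. First put an inner product on 1-forms via $\langle \alpha ,\beta \rangle :=g(({g}^{\flat}{)}^{-1}\alpha ,({g}^{\flat}{)}^{-1}\beta )$, then extend it to decomposable elements of ${\mathrm{\Lambda}}^{p}(E)$ by

$\langle {\alpha}^{1}\wedge \cdots \wedge {\alpha}^{p},\ {\beta}^{1}\wedge \cdots \wedge {\beta}^{p}\rangle :=\det {\left[\langle {\alpha}^{i},{\beta}^{j}\rangle \right]}_{i,j=1}^{p}$

For fixed $\theta \in {\mathrm{\Lambda}}^{p}(E)$, the map sending $\zeta \in {\mathrm{\Lambda}}^{n-p}(E)$ to the coefficient of $\omega $ in $\theta \wedge \zeta $ is linear, so nondegeneracy of this inner product yields a unique $\eta \in {\mathrm{\Lambda}}^{n-p}(E)$ with $\theta \wedge \zeta =\langle \eta ,\zeta \rangle \omega $ for all $\zeta $. Taking $\zeta ={\stackrel{~}{u}}_{1}\wedge \cdots \wedge {\stackrel{~}{u}}_{n-p}$ and using the standard identity $\langle \eta ,{\stackrel{~}{u}}_{1}\wedge \cdots \wedge {\stackrel{~}{u}}_{n-p}\rangle =\eta ({u}_{1},\cdots ,{u}_{n-p})$ then recovers the displayed equation.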

Linear algebra · Answered question

Zackary Duffy 2022-09-04

Deriving the distance of closest approach between ellipsoid and line (prev. "equation of a 3-dimensional line in spherical coordinates")

I am currently trying to solve the problem of calculating the smallest distance between a given ellipsoid centered at the origin of the coordinate system and a given line (located... somewhere).

After chasing a few promising but non-functional methods, I have settled on trying to use spherical coordinates: determine the formula for the distance of the ellipsoid surface from the center, do the same for the line, subtract the two, and use gradient descent on the resulting function to approach a minimum (hopefully the global one).

However, while that makes the ellipsoid calculations easy, I have found no concise way of expressing a line in spherical coordinates in equation form. I have found old questions on similar topics proposing the use of Euler angles and the like, but that does not seem to be the solution (possibly because I haven't managed to appreciate it). So, asking here: is there any way to derive an equation for a line in 3-dimensional space in spherical coordinates?

Alternate methods for the task at hand are appreciated too. For the record, my previous lead was using a cylindrical coordinate system with the line as its x-axis, but the resulting formula for the ellipsoid turned out to be bogus.

Edit: may have figured out a solution for the bigger problem that does not rely on the spherical equation - see my own answer to the question. Title changed accordingly.

Edit 2: Scratch that. Fell into the same trap again; that solution is not going to work.

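Since gradient descent kept falling into traps, a brute-force parametric baseline may be useful as a sanity check (all inputs below are illustrative). The point-to-line distance has a closed form, so only the ellipsoid surface needs a grid:

```python
import numpy as np

# Illustrative inputs: ellipsoid semi-axes (a, b, c) centered at the
# origin, and the line q0 + t*d (here a unit sphere and a vertical line,
# so the true distance is 2 - 1 = 1)
a, b, c = 1.0, 1.0, 1.0
q0 = np.array([2.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
d_hat = d / np.linalg.norm(d)

# Parameterize the surface by spherical angles and grid it; the distance
# from a surface point p to the line is the norm of the component of
# (p - q0) perpendicular to d, which avoids gridding the line parameter t
th, ph = np.meshgrid(np.linspace(0, np.pi, 400),
                     np.linspace(0, 2 * np.pi, 800), indexing='ij')
P = np.stack([a * np.sin(th) * np.cos(ph),
              b * np.sin(th) * np.sin(ph),
              c * np.cos(th)], axis=-1)
rel = P - q0
perp = rel - (rel @ d_hat)[..., None] * d_hat
dist = np.linalg.norm(perp, axis=-1).min()
print(dist)   # close to 1.0 for this sphere/line pair
```

The accuracy is limited by the angular grid, but the result can seed a local refinement without the risk of starting in a bad basin.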

Linear algebra · Answered question

Koronicaqn 2022-09-04

What is the Mathematical equivalent of the definition of vectors in physics?

Physics has an additional requirement for defining physical quantities as vectors, i.e., they must be invariant under rotation of the coordinate system; for example, quantities like force, velocity, etc.

How does it correspond to the mathematical formulation? Is it a subset of vector spaces? Is it an alternate characterisation to the axioms of vector space and can it be proved from the former axioms?

From wikipedia:

A vector space over a field F is a set V together with two operations that satisfy the eight axioms.

The first operation, called vector addition or simply addition, $+:V\times V\to V$, takes any two vectors v and w and assigns to them a third vector, commonly written as v + w and called the sum of these two vectors. (Note that the resultant vector is also an element of the set V.) The second operation, called scalar multiplication, $\cdot :F\times V\to V$, takes any scalar a and any vector v and gives another vector av. (Similarly, the vector av is an element of the set V.)

Please try to respond in layman's terms as I am just starting with university level maths. If the proof involves advanced concepts, I am satisfied with reference to the topic in which I might encounter this in advanced studies.


Linear algebra · Answered question

Modelfino0g 2022-09-04

Using index notation to write ${d}^{2}=0$ in terms of a torsion free connection.

Let (M,g) be a Riemannian manifold and let $\omega $ be a 1-form on M. I want to rewrite ${d}^{2}\omega =0$ in terms of the Levi-Civita connection.

I can show the following:

$d\omega (X,Y)=({\mathrm{\nabla}}_{X}\omega )(Y)-({\mathrm{\nabla}}_{Y}\omega )(X),$

which in index notation reads

$(d\omega {)}_{ab}=2{\mathrm{\nabla}}_{[a}{\omega}_{b]}.$

Similarly, for a 2-form, $\mu $, we have:

$d\mu (X,Y,Z)=({\mathrm{\nabla}}_{X}\mu )(Y,Z)-({\mathrm{\nabla}}_{Y}\mu )(X,Z)+({\mathrm{\nabla}}_{Z}\mu )(X,Y),$

which in index notation reads

$(d\mu {)}_{abc}=3{\mathrm{\nabla}}_{[a}{\mu}_{bc]}.$

Now plugging in $d\omega $ for $\mu $ we get

$0=({d}^{2}\omega {)}_{abc}=(d(d\omega ){)}_{abc}=3{\mathrm{\nabla}}_{[a}(d\omega {)}_{bc]}.$

I want to plug in the above expression (in index notation) for dω but I'm not really sure how to handle the indices. Do I just get

$3{\mathrm{\nabla}}_{[a}2{\mathrm{\nabla}}_{[b}{\omega}_{c]]}=6{\mathrm{\nabla}}_{[a}{\mathrm{\nabla}}_{b}{\omega}_{c]}?$
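Yes: antisymmetrizing over all of $a,b,c$ already antisymmetrizes over the inner pair, so the nested bracket can simply be dropped. For reference, a sketch of this step and of why the result vanishes (standard index manipulation; the sign of the Riemann term depends on one's curvature convention, so take it as a convention-dependent sketch):

```latex
(d^2\omega)_{abc}
  = 3\,\nabla_{[a}(d\omega)_{bc]}
  = 3 \cdot 2\,\nabla_{[a}\nabla_{[b}\omega_{c]]}
  = 6\,\nabla_{[a}\nabla_{b}\omega_{c]} .
% For a torsion-free connection the commutator of covariant derivatives
% acting on a 1-form produces the Riemann tensor (up to sign convention):
2\,\nabla_{[a}\nabla_{b]}\omega_c = -R_{abc}{}^{d}\,\omega_d ,
% so antisymmetrizing over a, b, c and using the first Bianchi identity
% R_{[abc]}{}^{d} = 0 gives
6\,\nabla_{[a}\nabla_{b}\omega_{c]} = -3\,R_{[abc]}{}^{d}\,\omega_d = 0 .
```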

Linear algebra · Answered question

Darius Nash 2022-09-04

Polar Coordinate Transformation - Motivation

I am trying to work out the reason why the integral

$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy$

is, in polar coordinates,

$\int_{-\infty}^{\infty} e^{-r^2}\,r\,dr\,d\theta$

As I understand it, a polar coordinate transformation involves the following substitution:

$(x,y) \to (r\cos\theta, r\sin\theta)$

This would imply that

$-(x^2+y^2) = -((r\cos\theta)^2 + (r\sin\theta)^2) = -r^2(\cos^2\theta + \sin^2\theta) = -r^2$

which gets us this far:

$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-r^2}\,dx\,dy$

To motivate why

$dx\,dy = r\,dr\,d\theta$

I thought of the following argument:

$dx = d(x(\theta,r)) = \frac{\partial x(\theta,r)}{\partial\theta}\,d\theta + \frac{\partial x(\theta,r)}{\partial r}\,dr + \frac{\partial^2 x(\theta,r)}{\partial r^2}\,(dr)^2 + \dots = -r\sin\theta\,d\theta + \cos\theta\,dr + \dots$

$dy = r\cos\theta\,d\theta + \sin\theta\,dr + \dots$

$\therefore\; dx\,dy = [-r\sin\theta\,d\theta + \cos\theta\,dr + \dots]\,[r\cos\theta\,d\theta + \sin\theta\,dr + \dots] = r\,dr\,d\theta\,(\cos^2\theta - \sin^2\theta) + \dots = r\,dr\,d\theta\,(\cos^2\theta - \sin^2\theta)$, ignoring $o((dr)^2)$ and $o((d\theta)^2)$ terms.

However, I am off by a minus sign; with the sign corrected I would be able to argue

$dx\,dy = r\,dr\,d\theta\,(\cos^2\theta + \sin^2\theta) = r\,dr\,d\theta$

If this were correct, I would find this line of argument much more analytically convincing than the typical argument involving

$dx\,dy = dA = r\,dr\,d\theta$

which I find less mechanistically obvious than the substitution-based argument above, which I have not been able to fully justify.

Could you please tell me whether my substitution-based argument can work, perhaps by correcting a mistake I have made? If not, do you have a similarly analytical or mechanistic justification for why $dx\,dy = r\,dr\,d\theta$?
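For what it's worth, the missing minus sign disappears if $dr$ and $d\theta$ are treated as anticommuting (oriented) area elements, $d\theta\,dr = -\,dr\,d\theta$: the cross terms then add instead of cancelling, reproducing the Jacobian determinant. A quick numerical sanity check of that determinant (a small Python sketch, not from the original question):

```python
import math

def polar_to_cart(r, theta):
    """The substitution (x, y) = (r*cos(theta), r*sin(theta))."""
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian_det(r, theta, h=1e-6):
    """Approximate det d(x, y)/d(r, theta) by central finite differences."""
    x_r = (polar_to_cart(r + h, theta)[0] - polar_to_cart(r - h, theta)[0]) / (2 * h)
    y_r = (polar_to_cart(r + h, theta)[1] - polar_to_cart(r - h, theta)[1]) / (2 * h)
    x_t = (polar_to_cart(r, theta + h)[0] - polar_to_cart(r, theta - h)[0]) / (2 * h)
    y_t = (polar_to_cart(r, theta + h)[1] - polar_to_cart(r, theta - h)[1]) / (2 * h)
    return x_r * y_t - x_t * y_r  # dx dy = |det| dr dtheta

# The determinant equals r at every point, so dx dy = r dr dtheta.
print(jacobian_det(2.5, 0.7))
print(jacobian_det(1.0, 3.0))
```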

Linear algebra · Answered question

spoofing44 2022-09-04

Surprisingly elementary and direct proofs

What are some examples of theorems whose first proof was quite hard and sophisticated, perhaps relying on deep theorems from some other theory, before, years later, a surprisingly elementary, direct, perhaps even short proof was found?

A related question is MO/24913, which deals with hard theorems whose proofs were simplified by the development of more sophisticated theories. But I would like to see examples where this wasn't necessary, where the heavy theory instead turned out to be superfluous for proving the theorem. I expected that this didn't happen very often. [OK, after reading all the answers: it obviously happened all the time!]

Linear algebra · Answered question

Paulkenyo 2022-09-03

The Determinant, Tensors, and Orientation

I am a bit confused about orientation and tensors as exemplified by the determinant.

If we have an inner product or a metric and we transform a vector from one coordinate system to another, the magnitude of that vector is unchanged, since the coordinate representation of the metric changes as well. It is therefore natural to think of a vector as a tensor, and its length does not change under orientation-reversing transformations.

In contrast, we can consider the determinant as an alternating tensor from the exterior algebra; for instance, in ${\mathbb{R}}^{3}$,

$det({v}_{1},{v}_{2},{v}_{3})={e}^{1}\wedge {e}^{2}\wedge {e}^{3}({v}_{1},{v}_{2},{v}_{3})$

But if we perform a reflection $A$, then we get

$det(A{v}_{1},A{v}_{2},A{v}_{3})=det(A)det({v}_{1},{v}_{2},{v}_{3})=-det({v}_{1},{v}_{2},{v}_{3})$

Does that mean that the 3-covector ${e}^{1}\wedge {e}^{2}\wedge {e}^{3}$, when contracted with 3 vectors, is not invariant? That would seem to violate the idea that contracting tensors down to a scalar produces a quantity invariant under transformations.

Or, by transforming both our 3-covector and the input vectors, do we instead get an invariant oriented volume measure?

${e}^{\prime 1}\wedge {e}^{\prime 2}\wedge {e}^{\prime 3}(A{v}_{1},A{v}_{2},A{v}_{3})={e}^{1}\wedge {e}^{2}\wedge {e}^{3}({v}_{1},{v}_{2},{v}_{3})?$

That would seem to mean that the determinant includes an arbitrary sign convention and that the definition of the determinant differs between left- and right-handed coordinate systems.
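The sign flip is easy to see concretely. Below is a minimal Python sketch (not from the original question) computing the signed volume of three vectors before and after an orientation-reversing reflection:

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows (equivalently, columns) a, b, c:
    the signed volume of the parallelepiped they span."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def reflect_x(v):
    """Reflection across the yz-plane: an orientation-reversing map."""
    return (-v[0], v[1], v[2])

v1, v2, v3 = (1, 0, 0), (0, 2, 0), (0, 0, 3)
print(det3(v1, v2, v3))                                    # 6
print(det3(reflect_x(v1), reflect_x(v2), reflect_x(v3)))   # -6
```

The magnitude (the unsigned volume) is invariant; only the sign, which encodes orientation, flips. This is consistent with viewing the determinant as a pseudoscalar rather than an ordinary scalar.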

Linear algebra · Answered question

inhiba5f 2022-09-03

Formula relating covariant derivative and exterior derivative

According to Gallot-Hulin-Lafontaine one has

$d\alpha ({X}_{0},\cdots ,{X}_{q})=\sum _{i=0}^{q}(-1{)}^{i}{D}_{{X}_{i}}\alpha ({X}_{1},\cdots ,{X}_{i-1},{X}_{0},{X}_{i+1},\cdots ,{X}_{q})$

It seems to me that it should be

$d\alpha ({X}_{0},\cdots ,{X}_{q})=\sum _{i=0}^{q}(-1{)}^{i}{D}_{{X}_{i}}\alpha ({X}_{0},\cdots ,\hat{{X}_{i}},\cdots ,{X}_{q})$

Is this right?

Linear algebra · Answered question

Jadon Stein 2022-09-03

Find the coordinates of the foot of the perpendicular from the point A with coordinates (3, 5) to the line $-5x + 2y - 5 = 0$.

1. What is the x-coordinate of the foot of the perpendicular? Give the answer as a number only, correct to at least 3 decimal places.

2. What is the y-coordinate of the foot of the perpendicular? Give the answer as a number only, correct to at least 3 decimal places.

3. Find the distance from the point A to the foot of the perpendicular. Give the answer as a number only, correct to at least 3 decimal places.
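One way to compute the requested numbers is the standard orthogonal-projection formula for a line $ax + by + c = 0$. A small Python sketch (the function name is my own):

```python
import math

def foot_of_perpendicular(a, b, c, x0, y0):
    """Foot of the perpendicular from (x0, y0) onto the line a*x + b*y + c = 0."""
    t = (a * x0 + b * y0 + c) / (a * a + b * b)
    return (x0 - a * t, y0 - b * t)

a, b, c = -5, 2, -5   # the line -5x + 2y - 5 = 0
x0, y0 = 3, 5         # the point A

fx, fy = foot_of_perpendicular(a, b, c, x0, y0)
dist = abs(a * x0 + b * y0 + c) / math.hypot(a, b)

# fx ~ 1.276, fy ~ 5.690, dist ~ 1.857
print(round(fx, 3), round(fy, 3), round(dist, 3))
```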

Linear algebra · Answered question

onthewevd 2022-09-03

Can vectors even be expressed unambiguously?

Vectors are abstract concepts. Let's take one of the simplest, most concrete vectors out there: a Euclidean vector in 2D. Now, I think that even such a vector is abstract: it cannot be written down, expressed, specified, or conveyed. At best you can give it a name, like $V$.

What one could try to do is express it as a linear combination of other vectors of that space, for example an orthonormal basis $\{B_1, B_2\}$.

For example, it may be that "$V = 28\cdot B_1 + 7\cdot B_2$", or "$V = (28, 7)$ in the basis $\{B_1, B_2\}$".

The problem is that I expressed the vector in terms of other vectors. $(28, 7)$ means nothing unless I can somehow describe $B_1$ and $B_2$. After all, if I chose another basis $\{C_1, C_2\}$, then $(28, 7)$ would represent a completely different vector.

And I can't describe $B_1$ or $B_2$, or express them, other than in terms of other vectors, just as I couldn't for $V$.

So I cannot specify which vector $V$ is, other than by adding two new vectors, which I also can't specify. It all seems completely circular to me.

Trying to frame this as a question: how can one write down a vector in a way that actually specifies which vector it is? How can someone even specify what basis they are using? Aren't all those expressions circular and meaningless?

Linear algebra · Answered question

bravere4g 2022-09-03

How to prove an expression to be a tensor?

How to prove that the expression ${\phi}_{,ij} := \frac{\partial^2\phi}{\partial x_i\,\partial x_j} = \nabla\nabla\phi$ is a tensor of second order, where $\phi$ is a scalar? Furthermore, how to prove that ${\phi}_{,j} := \frac{\partial\phi}{\partial x_j}$ is a vector?

We can either prove it by definition or use the so-called "tensor recognition theorem" (the quotient theorem), which claims that if $p_{i_1\cdots i_m j_1\cdots j_n}\, q_{j_1\cdots j_n} = r_{i_1\cdots i_m}$ holds for an arbitrary tensor $q_{j_1\cdots j_n}$ of order $n$, where $r_{i_1\cdots i_m}$ is a tensor of order $m$, then $p$ must be a tensor of order $m+n$.
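A proof by definition can be sketched for Cartesian coordinates related by a constant orthogonal matrix $A$, so that $x'_i = A_{ij} x_j$ and $\frac{\partial}{\partial x'_i} = A_{ij}\frac{\partial}{\partial x_j}$ (a standard computation, included here for reference):

```latex
\phi'_{,ij}
  = \frac{\partial^2 \phi}{\partial x'_i\,\partial x'_j}
  = A_{ik} A_{jl}\,\frac{\partial^2 \phi}{\partial x_k\,\partial x_l}
  = A_{ik} A_{jl}\,\phi_{,kl},
```

which is exactly the transformation law of a second-order Cartesian tensor; the same computation with a single derivative shows that $\phi_{,j}$ transforms as a vector. Note that the argument uses that $A$ is constant: in curvilinear coordinates the plain second partial derivative picks up Christoffel terms, and one must use $\nabla_i\nabla_j\phi$ instead.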

Coordinate system examples appear in college geometry and in the work of architects and 3D designers, who deal with Euclidean space and related objects. The solutions and answers presented here also draw on linear algebra for various calculations. Do look through the list of questions, as they contain coordinate-system equations that can help you work out how to solve your own problem. Start from the given coordinates, identify the positions of the existing points, and adapt the task to your problem by learning from the answers provided.