Recent questions in Matrix transformations

Linear algebra · Answered question

sebadillab0 2022-07-07

Let's say there is a transformation $T:{\mathbb{R}}^{n}\to {\mathbb{R}}^{m}$ mapping a vector in $V$ to a vector in $W$. Now the transformation matrix is

$A=\left[\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1n}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2n}\\ \vdots & \vdots & & \vdots \\ {a}_{m1}& {a}_{m2}& \dots & {a}_{mn}\end{array}\right]$

The basis vectors of $V$ are ${v}_{1},{v}_{2},\dots ,{v}_{n}$, none of which are standard basis vectors, and similarly the basis vectors of $W$ are ${w}_{1},{w}_{2},\dots ,{w}_{m}$.

My question is: when the basis vectors are not the standard vectors, what is the procedure for finding the matrix of $T$?
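One way to see the procedure concretely: column $j$ of $A$ holds the $W$-coordinates of $T(v_j)$, so $A = W^{-1} S V$ when $S$ is the standard-coordinate matrix of $T$ and the columns of $V$ and $W$ hold the basis vectors in standard coordinates. A minimal numerical sketch, with $S$, $V$, $W$ made up for illustration:

```python
import numpy as np

# Hypothetical example: T: R^2 -> R^2 given in standard coordinates
# by the matrix S, with non-standard bases V (domain) and W (codomain).
S = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # T in the standard basis (assumed)
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # columns are v1, v2
W = np.array([[1.0, 0.0],
              [1.0, 1.0]])           # columns are w1, w2

# Column j of A is the coordinate vector of T(v_j) in the basis W,
# i.e. the solution of W a_j = S v_j.  In matrix form: A = W^{-1} S V.
A = np.linalg.solve(W, S @ V)

# Sanity check: for any x given in V-coordinates, A x gives the
# W-coordinates of T(x).
x_V = np.array([2.0, -1.0])                 # coordinates in basis V
x_std = V @ x_V                             # same vector, standard coords
assert np.allclose(W @ (A @ x_V), S @ x_std)
```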

Linear algebra · Answered question

aangenaamyj 2022-07-07

If $T:{\mathbb{R}}^{n}\to {\mathbb{R}}^{m}$ is a matrix transformation, does $T$ depend on the dimensions of $\mathbb{R}$? I.e., is $T$ one-one if $m>n$, $m=n$, or $n>m$?

Also, say if $T$ is one-one, does this mean it is a matrix transformation and hence a linear transformation?
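A numerical sketch of the rank criterion behind this (the matrices below are made up for illustration): $T(x)=Ax$ with $A$ of shape $m\times n$ is one-one exactly when $\mathrm{rank}(A)=n$, which is only possible when $m\ge n$:

```python
import numpy as np

# T(x) = Ax is one-to-one exactly when rank(A) = n (independent
# columns), which requires m >= n.  Two illustrative matrices:
A_tall = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])      # 3x2, rank 2 -> injective
A_wide = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]]) # 2x3, rank 2 < 3 -> never injective

def is_one_to_one(A):
    return np.linalg.matrix_rank(A) == A.shape[1]

assert is_one_to_one(A_tall)
assert not is_one_to_one(A_wide)
```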

Linear algebra · Answered question

tripes3h 2022-07-07

Suppose $T:V\to V$ is the identity transformation.

If $B$ is a basis of $V$, then the matrix representation is $[T{]}_{B}^{B}=[{I}_{n}]$.

Let's say $C$ is also a basis of $V$; then it is clear that

$[T{]}_{C}^{B}\ne [{I}_{n}]$

However, I was taught that matrices representing the same linear transformation in different bases are similar, and the only matrix similar to ${I}_{n}$ is ${I}_{n}$. Thus, $[T{]}_{C}^{B}$ and $[T{]}_{B}^{B}$ are not similar.

Can anyone clear what seems to be a contradiction?
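The resolution can be checked numerically: for the identity map, $[T]_C^B$ is just the change-of-basis matrix, and the similarity theorem only applies to matrices with the same basis on both sides, i.e. $[T]_B^B$ and $[T]_C^C$. A sketch with two made-up bases of $\mathbb{R}^2$:

```python
import numpy as np

# For the identity map, [T]_B^B = I, but [T]_C^B (input basis B, output
# basis C) is the change-of-basis matrix C^{-1} B.  The similarity
# theorem only relates [T]_B^B and [T]_C^C -- same basis on both sides.
# Hypothetical bases of R^2 (columns are the basis vectors):
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])

I = np.eye(2)
T_BB = np.linalg.solve(B, I @ B)   # = I
T_CC = np.linalg.solve(C, I @ C)   # = I
T_CB = np.linalg.solve(C, I @ B)   # change-of-basis matrix, != I

assert np.allclose(T_BB, np.eye(2))
assert np.allclose(T_CC, np.eye(2))   # similar to T_BB, as the theorem says
assert not np.allclose(T_CB, np.eye(2))
```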

Linear algebra · Answered question

aggierabz2006zw 2022-07-07

Given a $3\times 1$ matrix

$\left[\begin{array}{c}a\\ b\\ c\end{array}\right]$

how can I obtain the $3\times 3$ matrix

$\left[\begin{array}{ccc}a-a& a-b& a-c\\ b-a& b-b& b-c\\ c-a& c-b& c-c\end{array}\right]$
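Entry $(i,j)$ of the target matrix is $v_i - v_j$, i.e. the outer difference $v\mathbf{1}^{T}-\mathbf{1}v^{T}$. A sketch with illustrative values for $a,b,c$:

```python
import numpy as np

# The target matrix has entries M[i, j] = v[i] - v[j], which is the
# outer difference v 1^T - 1 v^T.  NumPy broadcasting does it directly.
v = np.array([[1.0],    # a
              [2.0],    # b
              [4.0]])   # c   (illustrative values)

M = v - v.T             # (3,1) minus (1,3) broadcasts to (3,3)

assert np.allclose(M, [[0.0, -1.0, -3.0],
                       [1.0,  0.0, -2.0],
                       [3.0,  2.0,  0.0]])
```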

Linear algebra · Answered question

Blericker74 2022-07-06

Find the matrix that projects any point of the $xy$-plane onto the line

$y=4x$

The solution should be:

$T=\left(\begin{array}{cc}0.06& 0.235\\ 0.235& 0.94\end{array}\right)$

But somehow I don't know how to get this solution.
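One possible route (a sketch, not necessarily the intended method): take a direction vector $u=(1,4)$ of the line and form the orthogonal projection $P=uu^{T}/(u^{T}u)=\frac{1}{17}\begin{pmatrix}1&4\\4&16\end{pmatrix}$, which matches the quoted solution up to rounding:

```python
import numpy as np

# Orthogonal projection onto the line y = 4x: take a direction vector
# u = (1, 4) and form P = u u^T / (u^T u) = (1/17) [[1, 4], [4, 16]].
u = np.array([1.0, 4.0])
P = np.outer(u, u) / (u @ u)

# Matches the quoted solution up to rounding (0.0588.., 0.2352.., 0.9411..).
assert np.allclose(P, [[1/17, 4/17], [4/17, 16/17]])
# Projecting any point lands on the line (second coord = 4 * first).
p = P @ np.array([3.0, -1.0])
assert np.isclose(p[1], 4 * p[0])
# Projecting twice changes nothing (P is idempotent).
assert np.allclose(P @ P, P)
```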

Linear algebra · Answered question

pouzdrotf 2022-07-06

Is the result of a matrix transformation equivalent to that of the same matrix after orthonormalization?

Say we have a transformation matrix $A$ of full rank such that ${A}^{-1}\ne {A}^{t}$, i.e., the matrix $A$ consists of linearly independent vectors which aren't orthogonal to each other; a vector $v$; and the orthonormalized transformation matrix ${A}^{\prime}$. Is it true that $Av={A}^{\prime}v$? And if not, is this unimportant?
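A quick numerical check (with a made-up full-rank $A$) suggests the answer is no in general: orthonormalizing the columns, here via QR, changes the map:

```python
import numpy as np

# Orthonormalizing the columns of A (here via QR, which performs
# Gram-Schmidt) generally changes the transformation: Av != Qv.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # full rank, columns not orthogonal
Q, R = np.linalg.qr(A)            # Q has orthonormal columns, A = QR

v = np.array([1.0, 2.0])
assert not np.allclose(A @ v, Q @ v)    # the two maps disagree on v
# They agree only on vectors v with Rv = v, since Av = Q(Rv).
assert np.allclose(A @ v, Q @ (R @ v))
```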

Linear algebra · Answered question

Jonathan Miles 2022-07-06

A question asks me to describe a vector $x$ that satisfies:

$T(x)=\left[\begin{array}{c}-8\\ 9\\ 2\end{array}\right]$

Given the matrix:

$A=\left[\begin{array}{ccc}1& 3& 1\\ -2& 1& 5\\ 0& 2& 2\end{array}\right]$

I am also aware that $T(x)=Ax$. I would like to know the general process for finding $x$ when given the output vector and a matrix to be multiplied by the unknown input vector $x$.

$\left[\begin{array}{cccc}1& 0& -2& 0\\ 0& 1& 1& 0\\ 0& 0& 0& 1\end{array}\right]$

I have tried putting the augmented matrix in reduced row echelon form above, but I am not sure where to go from there.
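The row $[0\ 0\ 0\ |\ 1]$ in that RREF reads $0=1$, so this particular system is inconsistent: no $x$ maps to the given output. A rank comparison confirms this numerically:

```python
import numpy as np

# To find x with T(x) = Ax = b, row-reduce the augmented matrix [A | b].
# Here the RREF contains the row [0 0 0 | 1], i.e. 0 = 1, so the system
# has no solution.  Equivalently, rank(A) < rank([A | b]).
A = np.array([[1.0, 3.0, 1.0],
              [-2.0, 1.0, 5.0],
              [0.0, 2.0, 2.0]])
b = np.array([-8.0, 9.0, 2.0])

aug = np.column_stack([A, b])
assert np.linalg.matrix_rank(A) == 2     # A is singular (rank 2)
assert np.linalg.matrix_rank(aug) == 3   # augmenting raises the rank -> no x
```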

Linear algebra · Answered question

Aganippe76 2022-07-06

Take, for instance, the following matrix:

$\left[\begin{array}{cc}12& 5\\ 5& -12\end{array}\right]$

How can I find its eigenvalues/eigenvectors simply by knowing it's a reflection-dilation?
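Since $12^2+5^2=13^2$, this matrix is $13$ times a reflection, and a reflection has eigenvalues $\pm 1$, so the eigenvalues are $\pm 13$. A quick numerical confirmation:

```python
import numpy as np

# [[12, 5], [5, -12]] is 13 times a reflection (12^2 + 5^2 = 13^2).
# A reflection has eigenvalues +1 and -1, so scaling by 13 gives +-13,
# with eigenvectors along and perpendicular to the mirror line.
M = np.array([[12.0, 5.0],
              [5.0, -12.0]])

vals, vecs = np.linalg.eig(M)
assert np.allclose(sorted(vals), [-13.0, 13.0])

# The eigenvector for +13 spans the mirror line: M scales it by 13.
i = int(np.argmax(vals))
assert np.allclose(M @ vecs[:, i], 13.0 * vecs[:, i])
```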

Linear algebra · Answered question

gorgeousgen9487 2022-07-06

$A=\left(\begin{array}{cc}k& -2\\ 1-k& k\end{array}\right)$, where $k$ is a constant.

A transformation $T:{\mathbb{R}}^{2}\to {\mathbb{R}}^{2}$ is represented by the matrix $A$.

Find the value of $k$ for which the line $y=2x$ is mapped onto itself under $T$.
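A sketch of one way to solve this symbolically: the line $y=2x$ is spanned by $(1,2)$, so require the image $A(1,2)^{T}$ to lie on the line again:

```python
from sympy import Matrix, symbols, solve

# The line y = 2x is spanned by (1, 2).  It maps onto itself when the
# image A (1, 2)^T is again a (nonzero) multiple of (1, 2).
k = symbols('k')
A = Matrix([[k, -2],
            [1 - k, k]])
image = A * Matrix([1, 2])          # = (k - 4, 1 + k)

# "On the line y = 2x" means second component = 2 * first component.
sols = solve(image[1] - 2 * image[0], k)
assert sols == [9]

# Check: with k = 9 the image is (5, 10), a nonzero multiple of (1, 2).
assert image.subs(k, 9) == Matrix([5, 10])
```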

Linear algebra · Answered question

Logan Wyatt 2022-07-05

The transformation $A$ is defined on the space ${\mathcal{P}}_{2}$ of polynomials $p$ such that $\mathrm{deg}(p)\le 2$ by $Ap(t)={p}^{\prime}(t)$. Find the matrix of this transformation in the basis $\{1,t,{t}^{2}\}$. What is $\mathrm{Ker}(A)$?
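A sketch of the computation (assuming the standard convention that column $j$ holds the coordinates of the image of the $j$-th basis vector):

```python
import numpy as np

# Matrix of A p = p' on P_2 in the basis {1, t, t^2}: column j holds the
# coordinates of the derivative of the j-th basis polynomial.
#   (1)'   = 0   -> (0, 0, 0)
#   (t)'   = 1   -> (1, 0, 0)
#   (t^2)' = 2t  -> (0, 2, 0)
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

# p(t) = 3 + 5t + 4t^2  ->  p'(t) = 5 + 8t
assert np.allclose(D @ np.array([3.0, 5.0, 4.0]), [5.0, 8.0, 0.0])

# Ker(A) is the constant polynomials: D annihilates span{(1, 0, 0)},
# and rank 2 means dim Ker = 3 - 2 = 1.
assert np.allclose(D @ np.array([7.0, 0.0, 0.0]), 0.0)
assert np.linalg.matrix_rank(D) == 2
```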

Linear algebra · Answered question

vasorasy8 2022-07-05

$T({e}_{1})=T(1,0)=(\mathrm{cos}\theta ,\mathrm{sin}\theta )$

and

$T({e}_{2})=T(0,1)=(-\mathrm{sin}\theta ,\mathrm{cos}\theta )$

and

$A=[T({e}_{1})|T({e}_{2})]=\left[\begin{array}{cc}\mathrm{cos}\theta & -\mathrm{sin}\theta \\ \mathrm{sin}\theta & \mathrm{cos}\theta \end{array}\right]$

When I rotate a vector $\left[\begin{array}{c}x\\ y\end{array}\right]$ I get

$\left[\begin{array}{c}{x}^{\prime}\\ {y}^{\prime}\end{array}\right]=\left[\begin{array}{c}x\cdot \mathrm{cos}\theta \phantom{\rule{thinmathspace}{0ex}}-\phantom{\rule{thinmathspace}{0ex}}y\cdot \mathrm{sin}\theta \\ x\cdot \mathrm{sin}\theta \phantom{\rule{thinmathspace}{0ex}}+\phantom{\rule{thinmathspace}{0ex}}y\cdot \mathrm{cos}\theta \end{array}\right]$

Correct me if I'm wrong, but I thought that column 1 of A $\left[\begin{array}{c}\mathrm{cos}\theta \\ \mathrm{sin}\theta \end{array}\right]$, holds the 'x' values and column 2 holds the 'y' values. What I'm confused about is why does x' contain both an x component and a y component?
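The key point is that $Ax$ is the combination $x\cdot(\text{column 1})+y\cdot(\text{column 2})$, so each *row* of the result mixes both $x$ and $y$. A quick numerical check (with an arbitrary angle):

```python
import numpy as np

# A v is the linear combination x * (column 1) + y * (column 2): each
# column fixes where one basis vector lands, and each row of the result
# then mixes x and y -- which is why x' = x cos(t) - y sin(t) contains both.
theta = np.pi / 6                      # illustrative angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x, y = 2.0, 1.0
v = np.array([x, y])
# column-combination view equals the matrix-vector product
assert np.allclose(A @ v, x * A[:, 0] + y * A[:, 1])
# rotation preserves length
assert np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v))
```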

Linear algebra · Answered question

malalawak44 2022-07-05

Does a non-invertible matrix transformation "really" not have an inverse?
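One way to make the "really" concrete: a singular matrix sends distinct inputs to the same output, so no function can undo it. A small illustrative example:

```python
import numpy as np

# A singular matrix maps distinct inputs to the same output, so the
# information is genuinely lost and no inverse function can exist.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1, determinant 0
x1 = np.array([2.0, 0.0])
x2 = np.array([0.0, 1.0])

assert not np.allclose(x1, x2)       # different inputs ...
assert np.allclose(A @ x1, A @ x2)   # ... same output (2, 4)
```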

Linear algebra · Answered question

Shea Stuart 2022-07-05

Find the matrix representation of a transformation

Given two lines ${l}_{1}:y=x-3$ and ${l}_{2}:x=1$, find the matrix representation of the transformation $f$ (in the standard basis) that maps each line onto the other, and find all invariant lines of $f$.

Linear algebra · Answered question

Frank Day 2022-07-04

Suppose $T:{\mathbb{R}}^{n}\to {\mathbb{R}}^{m}$ is a matrix transformation. I want to show that

$T(\overrightarrow{0})=\overrightarrow{0}$

Since $T$ is a matrix transformation, there exists a unique $m\times n$ matrix $A$ such that for every $\overrightarrow{x}\in {\mathbb{R}}^{n}$,

$T(\overrightarrow{x})=A\cdot \overrightarrow{x}$

If we let $\overrightarrow{x}=\overrightarrow{0}$, where this zero vector has dimensions $n\times 1$, then

$T(\overrightarrow{0})=A\cdot \overrightarrow{0}=\overrightarrow{0}$

where the $\overrightarrow{0}$ on the right hand side of the equation is an $m\times 1$ vector.

Hence, every matrix transformation maps the zero vector in ${\mathbb{R}}^{n}$ to the zero vector in ${\mathbb{R}}^{m}$.

Are there any problems with the proof?

Linear algebra · Answered question

uplakanimkk 2022-07-03

Using matrix methods, how do you find the image of the point $(1,-2)$ under the following transformations?

1) a dilation of factor 3 from the x-axis

2) reflection in the x-axis
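A sketch with the standard matrices for these two transformations (a dilation of factor 3 from the $x$-axis scales the $y$-coordinate; reflection in the $x$-axis negates it):

```python
import numpy as np

# Standard matrices: dilation of factor 3 from the x-axis scales y;
# reflection in the x-axis negates y.
dilate  = np.array([[1.0, 0.0],
                    [0.0, 3.0]])
reflect = np.array([[1.0, 0.0],
                    [0.0, -1.0]])

p = np.array([1.0, -2.0])
assert np.allclose(dilate @ p, [1.0, -6.0])    # image under the dilation
assert np.allclose(reflect @ p, [1.0, 2.0])    # image under the reflection
```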

Linear algebra · Answered question

prirodnogbk 2022-07-03

Consider the $m\times m$ matrix $B$, which is symmetric and positive definite (full rank). Now this matrix is transformed using another matrix, say $A$, in the following manner: $AB{A}^{T}$. The matrix $A$ is $n\times m$ with $n<m$. Furthermore, the constraint $\mathrm{rank}(A)<n$ is imposed.

My intuition tells me that $AB{A}^{T}$ must be symmetric and positive semi-definite, but what is the mathematical proof for this? (why exactly does the transformation preserve symmetry and why is it that possibly negative eigenvalues in $A$ still result in the transformation to be PSD? Or is my intuition wrong)?
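The intuition is right: $(ABA^{T})^{T}=AB^{T}A^{T}=ABA^{T}$ since $B=B^{T}$, and $x^{T}(ABA^{T})x=y^{T}By\ge 0$ with $y=A^{T}x$, since $B$ is positive definite. A numerical spot check with made-up matrices:

```python
import numpy as np

# Symmetry: (A B A^T)^T = A B^T A^T = A B A^T because B = B^T.
# PSD: x^T (A B A^T) x = y^T B y >= 0 with y = A^T x.
# Random illustration (seeded for repeatability):
rng = np.random.default_rng(0)
m, n = 4, 3
G = rng.standard_normal((m, m))
B = G @ G.T + m * np.eye(m)          # symmetric positive definite
A = rng.standard_normal((n, m))

M = A @ B @ A.T
assert np.allclose(M, M.T)                        # symmetric
assert np.all(np.linalg.eigvalsh(M) >= -1e-10)    # PSD (up to roundoff)
```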

Linear algebra · Answered question

mistergoneo7 2022-07-02

By a conjugate-linear transformation, I mean that under scalar multiplication, instead of $C(af)=aC(f)$, I would have $C(af)=\overline{a}C(f)$, where $a$ is a constant complex number and $C$ is the transformation.
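A minimal concrete example of such a map is entrywise complex conjugation:

```python
import numpy as np

# A conjugate-linear map pulls scalars out conjugated: C(a f) = conj(a) C(f).
# The simplest example is entrywise complex conjugation of a vector.
def C(f):
    return np.conj(f)

a = 2.0 + 3.0j
f = np.array([1.0 + 1.0j, -2.0j])
assert np.allclose(C(a * f), np.conj(a) * C(f))   # conjugate-homogeneous
assert np.allclose(C(f + f), C(f) + C(f))         # still additive
```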

Linear algebra · Answered question

Nylah Hendrix 2022-07-02

An eigenvector is nothing more than a vector that points to some place. The direction of this vector is then invariant under the linear transformation.

Now my questions:

- OK, so this vector is invariant. So what? (In my case, an attitude determination algorithm, I understand even less what useful information this could give me.)

- How does a simple $4\times 4$ matrix actually represent a transformation?

Linear algebra · Answered question

uri2e4g 2022-07-02

Prove that pre-multiplying a matrix ${A}_{m}$ by the elementary matrix obtained from any elementary row transformation ${I}_{m}\underset{{l}_{1}\leftrightarrow {l}_{2}}{\u27f6}E$ is the same as applying said elementary row transformation to the matrix ${A}_{m}$.
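A numerical illustration of the claim (using the swap $l_1 \leftrightarrow l_2$ as the elementary row transformation):

```python
import numpy as np

# Swapping rows 1 and 2 of the identity gives an elementary matrix E;
# then E @ A performs exactly that row swap on A.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
E = np.eye(3)
E[[0, 1]] = E[[1, 0]]                 # I with rows l1 <-> l2 swapped

swapped = A.copy()
swapped[[0, 1]] = swapped[[1, 0]]     # the row operation applied directly
assert np.allclose(E @ A, swapped)
```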

Linear algebra · Answered question

kramberol 2022-07-02

I need help, please: when I have a matrix with complex eigenvalues, for example

$A=\left(\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ -24& -29& -18\end{array}\right)$

with eigenvalues $-16.3$, $-0.844+0.871j$ and $-0.844-0.871j$,

Matrix A can be diagonalized to the classical known form of

${A}_{\text{diag}}=\left(\begin{array}{ccc}-0.844+0.871j& 0& 0\\ 0& -0.844-0.871j& 0\\ 0& 0& -16.3\end{array}\right)$

via a Vandermonde transformation matrix.

The question is: I need a transformation matrix to transform matrix $A$ to the form of matrix ${A}_{d}$ (with no complex elements),

${A}_{d}=\left(\begin{array}{ccc}-0.844& 0.871& 0\\ -0.871& -0.844& 0\\ 0& 0& -16.3\end{array}\right)$

and this form is called a quasi-diagonal matrix.
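One standard construction (a sketch, not necessarily the only one): for a complex eigenpair $(a+bj,\,v)$, the identity $A[\mathrm{Re}\,v\ |\ \mathrm{Im}\,v]=[\mathrm{Re}\,v\ |\ \mathrm{Im}\,v]\begin{pmatrix}a&b\\-b&a\end{pmatrix}$ holds, so stacking $\mathrm{Re}\,v$, $\mathrm{Im}\,v$ and the real eigenvector gives a real transformation matrix $T$ with $T^{-1}AT=A_d$:

```python
import numpy as np

# For a complex eigenpair (a + bj, v) of a real matrix A,
#   A [Re v | Im v] = [Re v | Im v] [[a, b], [-b, a]],
# so a real T built from Re v, Im v and the real eigenvector gives the
# quasi-diagonal form A_d = T^{-1} A T with no complex elements.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-24.0, -29.0, -18.0]])

vals, vecs = np.linalg.eig(A)
i_c = int(np.argmax(vals.imag))          # eigenvalue a + bj with b > 0
i_r = int(np.argmin(np.abs(vals.imag)))  # the real eigenvalue (~ -16.3)
v = vecs[:, i_c]

T = np.column_stack([v.real, v.imag, vecs[:, i_r].real])
Ad = np.linalg.solve(T, A @ T)           # = T^{-1} A T, real 2x2-block form

a, b = vals[i_c].real, vals[i_c].imag
expected = np.array([[a, b, 0.0],
                     [-b, a, 0.0],
                     [0.0, 0.0, vals[i_r].real]])
assert np.allclose(Ad, expected, atol=1e-8)
```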

If you are dealing with linear algebra, the chances are high that you will encounter various questions related to matrix transformation. Turning to matrix transformation examples, you will also encounter various geometric transformations, yet these will always be based on algebraic analysis and calculation. The answers we have presented to various challenges will help you compare our solutions with your own matrix transformation example dealing with linear transformation and mapping. Visual assistance is also included and will be essential for seeing how these are built with the help of column vectors.