ngombangouh
2022-09-01
Showing that $\det A = \det B \cdot \det C$ when $B$, $C$ are the restrictions of $A$ to a subspace and the quotient
I am a bit unsure about one approach that is mentioned to prove this determinant result.
Here is the quote from Pages 100-101 of Finite-Dimensional Vector Spaces by Halmos:
Here is another useful fact about determinants. If $\mathcal{M}$ is a subspace invariant under $A$, if $B$ is the transformation $A$ considered on $\mathcal{M}$ only, and if $C$ is the quotient transformation $A/\mathcal{M}$, then
$$\det A = \det B \cdot \det C.$$
This multiplicative relation holds if, in particular, A is the direct sum of two transformations B and C. The proof can be based directly on the definition of determinants, or, alternatively, on the expansion obtained in the preceding paragraph.
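To fix a concrete picture (my own matrix-level gloss, not Halmos's wording): if a basis $x_1, \dots, x_m$ of $\mathcal{M}$ is extended to a basis $x_1, \dots, x_n$ of $V$, then the invariance of $\mathcal{M}$ makes the matrix of $A$ block upper triangular,
$$[A] = \begin{pmatrix} [B] & * \\ 0 & [C] \end{pmatrix},$$
where $[B]$ is the $m \times m$ matrix of $B$ on $\mathcal{M}$ and $[C]$ is the $(n-m) \times (n-m)$ matrix of $C$ on the quotient $V/\mathcal{M}$. In this picture the claim is just the familiar fact that the determinant of a block triangular matrix is the product of the determinants of its diagonal blocks.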
What I am confused about is how you can use the definition of determinants to conclude this result.
In this book, the determinant of a linear transformation $A$ on an $n$-dimensional vector space $V$ is defined as the scalar $\det A$ such that
$$w(Ax_1, \dots, Ax_n) = (\det A)\, w(x_1, \dots, x_n)$$
for all alternating $n$-linear forms $w$ on $V$ and all vectors $x_1, \dots, x_n$.
It is then shown that, after fixing a coordinate system (or basis) and letting $(\alpha_{ij})$ be the entries of the matrix of the linear transformation in that coordinate system, the determinant of the linear transformation $A$ is
$$\det A = \sum_{\pi} (\operatorname{sgn} \pi)\, \alpha_{1\pi(1)} \alpha_{2\pi(2)} \cdots \alpha_{n\pi(n)},$$
where the summation goes over all permutations $\pi$ in $S_n$.
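For concreteness (my own check, not from the book): for $n = 2$ the sum runs over the two permutations of $S_2$, the identity with sign $+1$ and the transposition with sign $-1$, giving
$$\det A = \alpha_{11}\alpha_{22} - \alpha_{12}\alpha_{21}.$$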
I have been able to prove the result using the coordinate expression, but I do not see how it would be done directly from the definition. I have tried defining other alternating forms and using their product, but I was not able to make much progress with that approach.
Are there any suggestions for proving this result directly from the definition?
Edit: I would like to add that part of my confusion may stem from the fact that $A$, $B$ and $C$ are linear transformations on different vector spaces ($A$ on $V$, $B$ on $\mathcal{M}$, and $C$ on $V/\mathcal{M}$), and I am not sure how the definition can be applied in this situation.