An example of squaring a matrix. Exponentiation of a matrix

Here we will continue the topic of operations on matrices started in the first part and analyze a couple of examples in which you will need to apply several operations at once.

Exponentiation of a matrix.

Let $k$ be a non-negative integer. For any square matrix $A_{n\times n}$ we have: $$A^k=\underbrace{A\cdot A\cdot\ldots\cdot A}_{k\ \text{times}}$$

In this case, we assume that $A^0=E$, where $E$ is the identity matrix of the corresponding order.

Example No. 4

The matrix $A=\left(\begin{array}{cc} 1 & 2 \\ -1 & -3 \end{array}\right)$ is given. Find the matrices $A^2$ and $A^6$.

According to the definition, $A^2=A\cdot A$, i.e. to find $A^2$ we simply need to multiply the matrix $A$ by itself. The operation of matrix multiplication was considered in the first part of the topic, so here we will simply write down the solution process without detailed explanations:

$$A^2=A\cdot A=\left(\begin{array}{cc} 1 & 2 \\ -1 & -3 \end{array}\right)\cdot\left(\begin{array}{cc} 1 & 2 \\ -1 & -3 \end{array}\right)=\left(\begin{array}{cc} 1\cdot 1+2\cdot(-1) & 1\cdot 2+2\cdot(-3) \\ -1\cdot 1+(-3)\cdot(-1) & -1\cdot 2+(-3)\cdot(-3) \end{array}\right)=\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right).$$

To find the matrix $A^6$ we have two options. The first option is simply to keep multiplying $A^2$ by the matrix $A$:

$$A^6=A^2\cdot A\cdot A\cdot A\cdot A.$$

However, there is a slightly simpler way that uses the associativity of matrix multiplication. Let's place brackets in the expression for $A^6$:

$$A^6=A^2\cdot A\cdot A\cdot A\cdot A=A^2\cdot(A\cdot A)\cdot(A\cdot A)=A^2\cdot A^2\cdot A^2.$$

While the first method would require four multiplication operations, the second needs only two. So let's take the second route:

$$A^6=A^2\cdot A^2\cdot A^2=\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right)\cdot\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right)\cdot\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right)=\\ =\left(\begin{array}{cc} -1\cdot(-1)+(-4)\cdot 2 & -1\cdot(-4)+(-4)\cdot 7 \\ 2\cdot(-1)+7\cdot 2 & 2\cdot(-4)+7\cdot 7 \end{array}\right)\cdot\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right)=\left(\begin{array}{cc} -7 & -24 \\ 12 & 41 \end{array}\right)\cdot\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right)=\\ =\left(\begin{array}{cc} -7\cdot(-1)+(-24)\cdot 2 & -7\cdot(-4)+(-24)\cdot 7 \\ 12\cdot(-1)+41\cdot 2 & 12\cdot(-4)+41\cdot 7 \end{array}\right)=\left(\begin{array}{cc} -41 & -140 \\ 70 & 239 \end{array}\right).$$

Answer: $A^2=\left(\begin{array}{cc} -1 & -4 \\ 2 & 7 \end{array}\right)$, $A^6=\left(\begin{array}{cc} -41 & -140 \\ 70 & 239 \end{array}\right)$.
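The result can also be checked numerically. Below is a minimal sketch in Python with NumPy (the library and the `matrix_power` call are my addition, not part of the solution above):

```python
import numpy as np

# The matrix from Example No. 4
A = np.array([[1, 2],
              [-1, -3]])

A2 = A @ A         # A^2 by direct multiplication
A6 = A2 @ A2 @ A2  # A^6 = A^2 * A^2 * A^2, using associativity

print(A2)                            # [[-1 -4] [ 2  7]]
print(A6)                            # [[ -41 -140] [  70  239]]
print(np.linalg.matrix_power(A, 6))  # same result via the library routine
```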

Example No. 5

Given matrices $A=\left(\begin{array}{cccc} 1 & 0 & -1 & 2 \\ 3 & -2 & 5 & 0 \\ -1 & 4 & -3 & 6 \end{array}\right)$, $B=\left(\begin{array}{ccc} -9 & 1 & 0 \\ 2 & -1 & 4 \\ 0 & -2 & 3 \\ 1 & 5 & 0 \end{array}\right)$, $C=\left(\begin{array}{ccc} -5 & -20 & 13 \\ 10 & 12 & 9 \\ 3 & -15 & 8 \end{array}\right)$. Find the matrix $D=2AB-3C^T+7E$.

We start calculating the matrix $D$ by finding the product $AB$. The matrices $A$ and $B$ can be multiplied, since the number of columns of the matrix $A$ equals the number of rows of the matrix $B$. Denote $F=AB$. In this case the matrix $F$ will have three rows and three columns, i.e. it will be square (if this conclusion does not seem obvious, see the description of matrix multiplication in the first part of this topic). Let's find the matrix $F$ by computing all of its elements:

$$F=A\cdot B=\left(\begin{array}{cccc} 1 & 0 & -1 & 2 \\ 3 & -2 & 5 & 0 \\ -1 & 4 & -3 & 6 \end{array}\right)\cdot\left(\begin{array}{ccc} -9 & 1 & 0 \\ 2 & -1 & 4 \\ 0 & -2 & 3 \\ 1 & 5 & 0 \end{array}\right)\\ \begin{aligned} & f_{11}=1\cdot(-9)+0\cdot 2+(-1)\cdot 0+2\cdot 1=-7;\\ & f_{12}=1\cdot 1+0\cdot(-1)+(-1)\cdot(-2)+2\cdot 5=13;\\ & f_{13}=1\cdot 0+0\cdot 4+(-1)\cdot 3+2\cdot 0=-3;\\ \\ & f_{21}=3\cdot(-9)+(-2)\cdot 2+5\cdot 0+0\cdot 1=-31;\\ & f_{22}=3\cdot 1+(-2)\cdot(-1)+5\cdot(-2)+0\cdot 5=-5;\\ & f_{23}=3\cdot 0+(-2)\cdot 4+5\cdot 3+0\cdot 0=7;\\ \\ & f_{31}=-1\cdot(-9)+4\cdot 2+(-3)\cdot 0+6\cdot 1=23;\\ & f_{32}=-1\cdot 1+4\cdot(-1)+(-3)\cdot(-2)+6\cdot 5=31;\\ & f_{33}=-1\cdot 0+4\cdot 4+(-3)\cdot 3+6\cdot 0=7. \end{aligned}$$

So $F=\left(\begin{array}{ccc} -7 & 13 & -3 \\ -31 & -5 & 7 \\ 23 & 31 & 7 \end{array}\right)$. Let's go further. The matrix $C^T$ is the transpose of the matrix $C$, i.e. $C^T=\left(\begin{array}{ccc} -5 & 10 & 3 \\ -20 & 12 & -15 \\ 13 & 9 & 8 \end{array}\right)$. As for the matrix $E$, it is the identity matrix; in this case its order is three, i.e. $E=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$.

In principle, we could continue step by step, but it is better to handle the remaining expression as a whole, without getting distracted by auxiliary actions. In fact, only the operations of multiplying matrices by a number, addition, and subtraction remain.

$$D=2AB-3C^T+7E=2\cdot\left(\begin{array}{ccc} -7 & 13 & -3 \\ -31 & -5 & 7 \\ 23 & 31 & 7 \end{array}\right)-3\cdot\left(\begin{array}{ccc} -5 & 10 & 3 \\ -20 & 12 & -15 \\ 13 & 9 & 8 \end{array}\right)+7\cdot\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$$

We multiply the matrices on the right-hand side of the equality by the corresponding numbers (i.e., 2, 3, and 7):

$$2\cdot\left(\begin{array}{ccc} -7 & 13 & -3 \\ -31 & -5 & 7 \\ 23 & 31 & 7 \end{array}\right)-3\cdot\left(\begin{array}{ccc} -5 & 10 & 3 \\ -20 & 12 & -15 \\ 13 & 9 & 8 \end{array}\right)+7\cdot\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)=\\ =\left(\begin{array}{ccc} -14 & 26 & -6 \\ -62 & -10 & 14 \\ 46 & 62 & 14 \end{array}\right)-\left(\begin{array}{ccc} -15 & 30 & 9 \\ -60 & 36 & -45 \\ 39 & 27 & 24 \end{array}\right)+\left(\begin{array}{ccc} 7 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 7 \end{array}\right)$$

Let's perform the remaining operations, subtraction and addition:

$$\left(\begin{array}{ccc} -14 & 26 & -6 \\ -62 & -10 & 14 \\ 46 & 62 & 14 \end{array}\right)-\left(\begin{array}{ccc} -15 & 30 & 9 \\ -60 & 36 & -45 \\ 39 & 27 & 24 \end{array}\right)+\left(\begin{array}{ccc} 7 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 7 \end{array}\right)=\\ =\left(\begin{array}{ccc} -14-(-15)+7 & 26-30+0 & -6-9+0 \\ -62-(-60)+0 & -10-36+7 & 14-(-45)+0 \\ 46-39+0 & 62-27+0 & 14-24+7 \end{array}\right)=\left(\begin{array}{ccc} 8 & -4 & -15 \\ -2 & -39 & 59 \\ 7 & 35 & -3 \end{array}\right).$$

The problem is solved: $D=\left(\begin{array}{ccc} 8 & -4 & -15 \\ -2 & -39 & 59 \\ 7 & 35 & -3 \end{array}\right)$.

Answer: $D=\left(\begin{array}{ccc} 8 & -4 & -15 \\ -2 & -39 & 59 \\ 7 & 35 & -3 \end{array}\right)$.
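As a cross-check, the whole expression $D=2AB-3C^T+7E$ can be evaluated in a couple of lines of NumPy; a sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1, 0, -1, 2],
              [3, -2, 5, 0],
              [-1, 4, -3, 6]])      # 3 x 4
B = np.array([[-9, 1, 0],
              [2, -1, 4],
              [0, -2, 3],
              [1, 5, 0]])           # 4 x 3
C = np.array([[-5, -20, 13],
              [10, 12, 9],
              [3, -15, 8]])         # 3 x 3

E = np.eye(3, dtype=int)            # identity matrix of order 3
D = 2 * (A @ B) - 3 * C.T + 7 * E   # D = 2AB - 3C^T + 7E

print(A @ B)  # F = AB, as computed above
print(D)      # [[  8  -4 -15] [ -2 -39  59] [  7  35  -3]]
```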

Example No. 6

Let $f(x)=2x^2+3x-9$ and the matrix $A=\left(\begin{array}{cc} -3 & 1 \\ 5 & 0 \end{array}\right)$ be given. Find the value of $f(A)$.

If $f(x)=2x^2+3x-9$, then by $f(A)$ we mean the matrix:

$$f(A)=2A^2+3A-9E.$$

This is how a polynomial of a matrix is defined. So we need to substitute the matrix $A$ into the expression for $f(A)$ and obtain the result. Since all the operations have already been discussed in detail, here I will simply give the solution. If the step $A^2=A\cdot A$ is not clear to you, I advise you to look at the description of matrix multiplication in the first part of this topic.

$$f(A)=2A^2+3A-9E=2A\cdot A+3A-9E=2\left(\begin{array}{cc} -3 & 1 \\ 5 & 0 \end{array}\right)\cdot\left(\begin{array}{cc} -3 & 1 \\ 5 & 0 \end{array}\right)+3\left(\begin{array}{cc} -3 & 1 \\ 5 & 0 \end{array}\right)-9\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)=\\ =2\left(\begin{array}{cc} (-3)\cdot(-3)+1\cdot 5 & (-3)\cdot 1+1\cdot 0 \\ 5\cdot(-3)+0\cdot 5 & 5\cdot 1+0\cdot 0 \end{array}\right)+3\left(\begin{array}{cc} -3 & 1 \\ 5 & 0 \end{array}\right)-9\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)=\\ =2\left(\begin{array}{cc} 14 & -3 \\ -15 & 5 \end{array}\right)+3\left(\begin{array}{cc} -3 & 1 \\ 5 & 0 \end{array}\right)-9\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)=\left(\begin{array}{cc} 28 & -6 \\ -30 & 10 \end{array}\right)+\left(\begin{array}{cc} -9 & 3 \\ 15 & 0 \end{array}\right)-\left(\begin{array}{cc} 9 & 0 \\ 0 & 9 \end{array}\right)=\left(\begin{array}{cc} 10 & -3 \\ -15 & 1 \end{array}\right).$$

Answer: $f(A)=\left(\begin{array}{cc} 10 & -3 \\ -15 & 1 \end{array}\right)$.
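The same polynomial evaluation, sketched in NumPy for verification (my own check, not part of the original solution):

```python
import numpy as np

A = np.array([[-3, 1],
              [5, 0]])
E = np.eye(2, dtype=int)

# f(x) = 2x^2 + 3x - 9 evaluated at the matrix A
f_A = 2 * (A @ A) + 3 * A - 9 * E

print(f_A)  # [[ 10  -3] [-15   1]]
```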

The matrix $A^{-1}$ is called the inverse of the matrix $A$ if $A\cdot A^{-1}=E$, where $E$ is the identity matrix of order $n$. An inverse matrix can exist only for square matrices.


See also Inverse matrix using the Jordan-Gauss method

Algorithm for finding the inverse matrix

  1. Find the transposed matrix $A^T$.
  2. Determine the algebraic complements: replace each element of the transposed matrix with its algebraic complement.
  3. Compose the inverse matrix from these algebraic complements: divide each element of the resulting matrix by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
The next algorithm for finding the inverse matrix is similar to the previous one, except for a few steps: first the algebraic complements are calculated, and then the adjoint matrix $C$ is composed.
  1. Determine whether the matrix is square. If not, there is no inverse matrix for it.
  2. Calculate the determinant of the matrix $A$. If it is not equal to zero, we continue the solution; otherwise the inverse matrix does not exist.
  3. Determine the algebraic complements.
  4. Fill in the adjoint (union, reciprocal) matrix $C$.
  5. Compose the inverse matrix from the algebraic complements: divide each element of the adjoint matrix $C$ by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix (a code sketch of this algorithm is given below).
  6. Check: multiply the original matrix by the resulting one. The result should be the identity matrix.
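Here is a rough Python sketch of the cofactor/adjoint algorithm just described. The function name `cofactor_inverse` and the sample matrix are my own illustration; in practice one would simply call `numpy.linalg.inv`:

```python
import numpy as np

def cofactor_inverse(a):
    """Inverse via algebraic complements (adjoint matrix), following the steps above."""
    a = np.asarray(a, dtype=float)
    n, m = a.shape
    if n != m:                                   # step 1: must be square
        raise ValueError("only a square matrix can have an inverse")
    det = np.linalg.det(a)                       # step 2: determinant
    if np.isclose(det, 0.0):
        raise ValueError("determinant is zero, the inverse does not exist")
    cof = np.empty_like(a)
    for i in range(n):                           # step 3: algebraic complements
        for j in range(n):
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    adj = cof.T                                  # step 4: adjoint matrix C
    return adj / det                             # step 5: divide by det(A)

A = np.array([[2.0, 1.0],
              [7.0, 4.0]])
A_inv = cofactor_inverse(A)
print(A_inv)      # [[ 4. -1.] [-7.  2.]]
print(A @ A_inv)  # step 6: check -- the identity matrix
```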

Example # 1. Let's write the matrix as follows:

Algebraic complements: $\Delta_{1,2}=-(2\cdot 4-(-2)\cdot(-2))=-4$; $\Delta_{2,1}=-(2\cdot 4-5\cdot 3)=7$; $\Delta_{2,3}=-(-1\cdot 5-(-2)\cdot 2)=1$; $\Delta_{3,2}=-(-1\cdot(-2)-2\cdot 3)=4$.

$$A^{-1}=\left(\begin{array}{ccc} 0.6 & -0.4 & 0.8 \\ 0.7 & 0.2 & 0.1 \\ -0.1 & 0.4 & -0.3 \end{array}\right)$$

Another algorithm for finding the inverse matrix

Let us give another scheme for finding the inverse matrix.
  1. Find the determinant of the given square matrix A.
  2. Find the algebraic complements to all elements of the matrix A.
  3. We write the algebraic complements of row elements into columns (transposition).
  4. We divide each element of the resulting matrix by the determinant of the matrix A.
As you can see, the transposition operation can be applied both at the beginning, over the original matrix, and at the end, over the obtained algebraic complements.

A special case: The inverse of the identity matrix E is the identity matrix E.

Some properties of operations on matrices.
Matrix expressions

Now the topic continues: we will not only consider new material but also practice operations with matrices.

Some properties of operations on matrices

There are quite a few properties concerning operations on matrices; on Wikipedia, for example, you can admire the orderly ranks of the corresponding rules. But in practice many of these properties are in a certain sense "dead", since only a few of them are actually used when solving real problems. My goal is to look at how the properties are applied in concrete examples; if you need rigorous theory, please use another source of information.

Let us consider some exceptions to the rules that will be needed for practical tasks.

If a square matrix has an inverse matrix, then their multiplication is commutative: $A\cdot A^{-1}=A^{-1}\cdot A=E$.

A square matrix is called an identity matrix if its main diagonal consists of ones and all other elements are zero. For example: $E=\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)$, $E=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$, etc.

The following property holds: if an arbitrary matrix is multiplied on the left or on the right by an identity matrix of suitable size, the result is the original matrix: $A\cdot E=E\cdot A=A$.

As you can see, matrix multiplication is commutative here as well.

Let's take some matrix, say, the matrix $A$ from the previous problem.

Those interested can check and make sure that $A\cdot E=E\cdot A=A$.

The identity matrix for matrices is an analogue of the numeric unit for numbers, which is especially clearly seen from the examples just considered.

Commutativity of a numeric factor with respect to matrix multiplication

For matrices and real numbers the following property holds: $\lambda\cdot(A\cdot B)=(\lambda\cdot A)\cdot B=A\cdot(\lambda\cdot B)$.

That is, the numerical factor can (and should) be moved forward so that it does not "interfere" with the multiplication of matrices.

Note: generally speaking, this formulation of the property is incomplete - the "lambda" can be placed anywhere among the factors, even at the end. The rule remains valid if three or more matrices are multiplied.
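A quick numerical illustration of this property (the matrices and the factor are chosen arbitrarily for the check):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
lam = 3

# The numeric factor may be placed before, between, or after the matrices
print(np.array_equal(lam * (A @ B), (lam * A) @ B))  # True
print(np.array_equal(lam * (A @ B), A @ (lam * B)))  # True
```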

Example 4

Calculate the product

Solution:

(1) According to the property above, we move the numerical factor forward. The matrices themselves cannot be swapped!

(2) - (3) Perform matrix multiplication.

(4) Here you could divide each number by 10, but then decimal fractions would appear among the matrix elements, which is not good. However, we notice that all the numbers in the matrix are divisible by 5, so we multiply each element by ...

Answer:

A little puzzle to solve on your own:

Example 5

Calculate if

Solution and answer at the end of the lesson.

What technique is important when solving such examples? We deal with the number last.

Let's attach one more car to the locomotive:

How do I multiply three matrices?

First of all, WHAT should the result of multiplying three matrices be? A cat will not give birth to a mouse. If the matrix multiplication is feasible, the result will also be a matrix. Hmm, it's a good thing my algebra teacher doesn't see how I explain the closedness of an algebraic structure with respect to its elements =)

The product of three matrices can be calculated in two ways:

1) first find $AB$, and then multiply by the matrix C: $(AB)\cdot C$;

2) or first find $BC$, and then multiply: $A\cdot(BC)$.

The results are guaranteed to coincide, and in theory this property is called the associativity of matrix multiplication: $(AB)\cdot C=A\cdot(BC)$.
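The property is easy to verify numerically; a small sketch with arbitrarily chosen matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[2, 0], [1, 2]])
C = np.array([[0, 1], [1, 1]])

left = (A @ B) @ C   # method 1: first AB, then multiply by C
right = A @ (B @ C)  # method 2: first BC, then multiply by A on the left
print(np.array_equal(left, right))  # True -- associativity of matrix multiplication
```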

Example 6

Multiply matrices in two ways

The solution algorithm is two-step: find the product of two matrices, then once more find the product of two matrices.

1) We use the formula $(AB)\cdot C$.

First action:

Second action:

2) We use the formula $A\cdot(BC)$.

First action:

Second action:

Answer:

The first way is, of course, more familiar and standard: there "everything is in order". By the way, about order. In the problem under consideration the illusion often arises that we are talking about some kind of permutation of matrices. There is no permutation here. I remind you once again that in the general case MATRICES CANNOT BE SWAPPED. Thus, in the second way, at the second step we perform the multiplication $A\cdot(BC)$, and by no means $(BC)\cdot A$. With ordinary numbers such a trick would work, but with matrices it does not.

The associativity property of multiplication holds not only for square matrices but also for arbitrary matrices, as long as the products are defined:

Example 7

Find the product of three matrices

This is an example for a do-it-yourself solution. In the sample solution, the calculations are carried out in two ways, analyze which way is more profitable and shorter.

The associativity property of matrix multiplication also holds for a larger number of factors.

Now it is time to return to powers of matrices. The square of a matrix was dealt with at the very beginning, and the question on the agenda is:

How to raise a matrix to the cube and to higher powers?

These operations are also defined only for square matrices. To raise a square matrix to the cube, you need to compute the product $A\cdot A\cdot A$:

In fact, this is a special case of multiplying three matrices: by the associativity property of matrix multiplication, $A\cdot A\cdot A=(A\cdot A)\cdot A$. And a matrix multiplied by itself is the square of the matrix: $A\cdot A=A^2$.

Thus, we get a working formula: $A^3=A^2\cdot A$.

That is, the task is done in two steps: first the matrix is squared, and then the resulting matrix $A^2$ is multiplied by the matrix $A$.

Example 8

Raise the matrix to the cube.

This is a small task for an independent solution.

The raising of the matrix to the fourth power is carried out in a natural way:

Using the associativity of matrix multiplication, we derive two working formulas. The first: $A^4=A^2\cdot A\cdot A$ is a product of three matrices.

1) $A^4=((A^2)\cdot A)\cdot A$. In other words, first we find $A^2$, then multiply it by $A$ to get the cube $A^3$, and finally perform one more multiplication to get the fourth power.

2) But there is a solution that is one step shorter: $A^4=A^2\cdot A^2$. That is, in the first step we find the square $A^2$ and then, bypassing the cube, perform the multiplication $A^2\cdot A^2$.

Additional activity for Example 8:

Raise the matrix to the fourth power.

As just noted, there are two ways to do this:

1) Since the cube $A^3$ is already known, we perform the multiplication $A^4=A^3\cdot A$.

2) However, if by the condition of the problem the matrix needs to be raised only to the fourth power, then it is advantageous to shorten the path: find the square of the matrix and use the formula $A^4=A^2\cdot A^2$.

Both solutions and the answer are at the end of the lesson.

A matrix is raised to the fifth and higher powers in a similar way. From practical experience I can say that I occasionally come across examples of raising to the 4th power, but I do not recall ever seeing the fifth power. Just in case, here is the optimal algorithm:

1) find $A^2$;
2) find $A^4=A^2\cdot A^2$;
3) raise the matrix to the fifth power: $A^5=A^4\cdot A$ (a code sketch of this scheme is given below).
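The same three-step scheme in NumPy, with an arbitrary matrix taken just for illustration:

```python
import numpy as np

A = np.array([[2, 1],
              [0, 1]])   # an arbitrary square matrix

A2 = A @ A    # step 1: A^2
A4 = A2 @ A2  # step 2: A^4 = A^2 * A^2
A5 = A4 @ A   # step 3: A^5 = A^4 * A

print(A5)
print(np.array_equal(A5, np.linalg.matrix_power(A, 5)))  # True
```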

These are, perhaps, all the main properties of matrix operations that can be useful in practical problems.

In the second section of the lesson, an equally colorful party is expected.

Matrix expressions

Let's recall ordinary school expressions with numbers. A numeric expression consists of numbers, mathematical symbols, and parentheses, for example: ... When evaluating it, the familiar algebraic priority applies: first the brackets, then exponentiation / root extraction, then multiplication / division, and last of all addition / subtraction.

If a numeric expression makes sense, then the result of its evaluation is a number, for example:

Matrix expressions are arranged in much the same way! The difference is that the main actors here are matrices, plus there are some operations specific to matrices, such as transposition and finding the inverse matrix.

Consider the matrix expression , where the letters denote some matrices. In this matrix expression there are three terms, and the addition / subtraction operations are performed last.

In the first term you first need to transpose the matrix B, then perform the multiplication, and multiply the resulting matrix by the "two". Note that the transposition operation has a higher priority than multiplication. Parentheses, as in numeric expressions, change the order of operations: here the multiplication is performed first, then the resulting matrix is transposed and multiplied by 2.

In the second term, the matrix multiplication is performed first, and the inverse is then taken of the product. If the brackets are removed, then you first need to find the inverse matrix and then multiply the matrices. Finding the inverse of a matrix also takes precedence over multiplication.

With the third term, everything is obvious: we raise the matrix to a cube and add a “five” to the resulting matrix.

If the matrix expression makes sense, then the result of its calculation is the matrix.
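To make the order of operations concrete, here is a sketch that evaluates a made-up matrix expression of the same shape, $2AB^T-(AB)^{-1}+A^3+5E$ (this particular expression and the matrices are my own illustration, not the one from the text above):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
E = np.eye(2)

term1 = 2 * (A @ B.T)                         # transpose first, then multiply, then the factor 2
term2 = np.linalg.inv(A @ B)                  # multiply first, then invert the product
term3 = np.linalg.matrix_power(A, 3) + 5 * E  # cube the matrix, then add 5E

D = term1 - term2 + term3
print(D)
```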

All tasks will be from real tests, and we will start with the simplest:

Example 9

Given matrices ... Find:

Solution: the order of operations is obvious: first the multiplication is performed, then the addition.


The addition is not possible because the matrices are of different sizes.

Do not be surprised, deliberately impossible actions are often suggested in tasks of this type.

Trying to evaluate the second expression:

Everything is fine here.

Answer: the action cannot be performed, .

Linear Algebra for Dummies

To study linear algebra, you can read and work through the book "Matrices and Determinants" by I. V. Belousov. However, it is written in a strict, dry mathematical language that is hard for people of average ability to take in. Therefore I have made a retelling of the parts of this book that are hardest to understand, trying to present the material as clearly as possible and using pictures as much as possible. I have omitted the proofs of the theorems. Frankly, I did not delve into them myself. I believe Mr. Belousov! Judging by his work, he is a competent and intelligent mathematician. You can download his book at http://eqworld.ipmnet.ru/ru/library/books/Belousov2006ru.pdf If you are going to delve into my work, you will need it, because I will often refer to Belousov.

Let's start with definitions. What is a matrix? It is a rectangular table of numbers, functions, or algebraic expressions. Why are matrices needed? They greatly simplify complex mathematical calculations. A matrix has rows and columns (Fig. 1).

Rows and columns are numbered starting from the left and from the top (Figure 1-1). When one says "a matrix of size m × n" (or "m by n"), one means m rows and n columns. For example, the matrix in Figure 1-1 is 4 by 3, not 3 by 4.

See Fig. 1-3 for the kinds of matrices. If a matrix consists of a single row, it is called a row matrix; if it consists of a single column, a column matrix. A matrix is called a square matrix of order n if its number of rows equals its number of columns and equals n. If all elements of a matrix are zero, it is a zero matrix. A square matrix is called diagonal if all its elements are zero except those on the main diagonal.

Let me explain right away what the main diagonal is: on it, the row and column numbers coincide; it runs from left to right, from top to bottom (Fig. 3). Elements are called diagonal if they lie on the main diagonal. If all diagonal elements are equal to one (and the rest are zero), the matrix is called the identity matrix. Two matrices A and B of the same size are called equal if all their corresponding elements are the same.

2 Operations on matrices and their properties

The product of a matrix and a number x is a matrix of the same size. To obtain it, multiply each element of the matrix by this number (Figure 4). To obtain the sum of two matrices of the same size, add their corresponding elements (Fig. 4). To obtain the difference A - B of two matrices of the same size, multiply the matrix B by -1 and add the resulting matrix to the matrix A (Fig. 4). For operations on matrices the following property holds: A + B = B + A (commutativity).

(A + B) + C = A + (B + C) (associativity). In simple terms, the sum does not change when the terms are rearranged. For operations on matrices and numbers the following property holds (we denote numbers by the letters x and y, and matrices by the letters A and B): x(yA) = (xy)A.

These properties are similar to the properties of operations on numbers. See the examples in Figure 5, as well as examples 2.4 - 2.6 in Belousov, page 9.

Matrix multiplication.

The product of two matrices is defined only when (in plain words: matrices can be multiplied only if) the number of columns of the first matrix in the product equals the number of rows of the second (Fig. 7, top, blue brackets). To remember this better: the digit 1 looks more like a column. As a result of the multiplication, a matrix is obtained whose number of rows equals that of the first factor and whose number of columns equals that of the second (see Figure 6). To make it easier to remember what is multiplied by what, I suggest the following algorithm (see Figure 7). Let us multiply the matrix A by the matrix B.

Matrix A has two columns and matrix B has two rows, so they can be multiplied.

1) Take the first column of matrix B (it has only one). Write this column as a row (that is, transpose the column; transposition is discussed a little below).

2) Copy this row so that we get a matrix the size of matrix A.

3) We multiply the elements of this matrix by the corresponding elements of the matrix A.

4) Add up the resulting products in each row, and we get the product matrix of two rows and one column (a code sketch of the general rule is given below).
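Here is a short sketch of the general row-by-column rule that these four steps implement (plain Python, my own illustration):

```python
def matmul(a, b):
    """Row-by-column multiplication: columns of a must equal rows of b."""
    if len(a[0]) != len(b):
        raise ValueError("number of columns of A must equal number of rows of B")
    rows, cols, inner = len(a), len(b[0]), len(b)
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# A has two columns, B has two rows -- the sizes match
A = [[1, 2],
     [3, 4]]
B = [[5],
     [6]]
print(matmul(A, B))  # [[17], [39]] -- two rows and one column
```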

Figure 7-1 gives examples of larger matrix multiplication.

1) Here the first matrix has three columns, so the second must have three rows. The algorithm is exactly the same as in the previous example, only now each row has three terms instead of two.

2) Here the second matrix has two columns. First, we perform the algorithm with the first column, then with the second, and we get a two-by-two matrix.

3) Here the second matrix has a column consisting of a single element; this column does not change under transposition. And there is nothing to add up, since the first matrix has only one column. We run the algorithm three times and get a three-by-three matrix.

The following properties take place:

1. If the sum B + C and the product AB exist, then A(B + C) = AB + AC.

2. If the product AB exists, then x(AB) = (xA)B = A(xB).

3. If the products AB and BC exist, then A(BC) = (AB)C.

If the matrix product AB exists, then the product BA may not exist. Even if the products AB and BA exist, they can turn out to be matrices of different sizes.

Both products AB and BA exist and are matrices of the same size only in the case of square matrices A and B of the same order. However, even in this case, AB may not equal BA.

Exponentiation

Raising a matrix to a power makes sense only for square matrices (think about why). Then the positive integer power m of a matrix A is the product of m matrices equal to A, just as for numbers. The zeroth power of a square matrix A is understood to be the identity matrix of the same order as A. If you have forgotten what an identity matrix is, look at Fig. 3.

Just like numbers, the following relations take place:

$$A^m A^k = A^{m+k}, \qquad (A^m)^k = A^{mk}$$

See Belousov's examples on page 20.
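These relations can also be checked numerically; a small sketch (matrix and exponents chosen arbitrarily):

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 1], [0, 2]])
m, k = 2, 3

print(np.array_equal(matrix_power(A, m) @ matrix_power(A, k),
                     matrix_power(A, m + k)))                 # A^m A^k = A^(m+k)
print(np.array_equal(matrix_power(matrix_power(A, m), k),
                     matrix_power(A, m * k)))                 # (A^m)^k = A^(mk)
```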

Transpose matrices

Transposition is the transformation of a matrix A into a matrix $A^T$ in which the rows of A are written as the columns of $A^T$, preserving their order (Fig. 8). One can also put it the other way around: the columns of A are written as the rows of $A^T$, preserving their order. Note how transposition changes the size of the matrix, that is, the number of rows and columns. Note also that the elements in the first row, first column and in the last row, last column stay in place.

The following properties hold:

$(A^T)^T = A$ (transpose a matrix twice and you get the original matrix back);

$(xA)^T = xA^T$ (x denotes a number, A, of course, a matrix: if you need to multiply a matrix by a number and transpose it, you can first multiply and then transpose, or vice versa);

$(A + B)^T = A^T + B^T$; $(AB)^T = B^T A^T$.
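A quick numerical check of these transposition properties (the matrices are arbitrary examples of mine):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])    # 2 x 3
B = np.array([[1, 0], [0, 1], [1, 1]])  # 3 x 2
x = 5

print(np.array_equal(A.T.T, A))              # (A^T)^T = A
print(np.array_equal((x * A).T, x * A.T))    # (xA)^T = x A^T
print(np.array_equal((A @ B).T, B.T @ A.T))  # (AB)^T = B^T A^T
```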

Symmetric and antisymmetric matrices

Figure 9 shows a symmetric matrix at the top left. Its elements that are symmetric about the main diagonal are equal. And now the definition: a square matrix A is called symmetric if $A^T = A$. That is, a symmetric matrix does not change under transposition. In particular, any diagonal matrix is symmetric (such a matrix is shown in Fig. 2).

Now look at the antisymmetric matrix (Figure 9, bottom). How does it differ from the symmetric one? Note that all of its diagonal elements are zero: for antisymmetric matrices all diagonal elements are equal to zero (think about why). Definition: a square matrix A is called antisymmetric if $A^T = -A$. Let us note some properties of operations on symmetric and antisymmetric matrices.

1. If A and B are symmetric (antisymmetric) matrices, then A + B is also a symmetric (antisymmetric) matrix.

2. If A is a symmetric (antisymmetric) matrix, then xA is also a symmetric (antisymmetric) matrix. (in fact, if you multiply the matrices from Figure 9 by some number, the symmetry will still be preserved)

3. The product AB of two symmetric or two antisymmetric matrices A and B is a symmetric matrix if AB = BA and an antisymmetric matrix if AB = -BA.

4. If A is a symmetric matrix, then $A^m$ (m = 1, 2, 3, ...) is a symmetric matrix. If A is an antisymmetric matrix, then $A^m$ (m = 1, 2, 3, ...) is a symmetric matrix for even m and an antisymmetric matrix for odd m.

5. An arbitrary square matrix A can be represented as the sum of a symmetric and an antisymmetric matrix (let us call these matrices, say, $A^{(s)}$ and $A^{(a)}$):

$$A = A^{(s)} + A^{(a)}$$
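The standard way to build such a decomposition is $A^{(s)}=(A+A^T)/2$ and $A^{(a)}=(A-A^T)/2$ (these formulas are not spelled out in the text above, but they are the usual construction); a sketch:

```python
import numpy as np

A = np.array([[1.0, 7.0, 3.0],
              [2.0, 4.0, 5.0],
              [0.0, 6.0, 8.0]])

A_s = (A + A.T) / 2  # symmetric part:      A_s^T ==  A_s
A_a = (A - A.T) / 2  # antisymmetric part:  A_a^T == -A_a

print(np.array_equal(A_s.T, A_s))   # True
print(np.array_equal(A_a.T, -A_a))  # True
print(np.allclose(A_s + A_a, A))    # True: A = A(s) + A(a)
```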