Another way in which matrix multiplication differs from multiplication of numbers, which we have already seen, is the following: It is possible for non-zero matrices to have $AB = AC$ but $B \neq C$, or $BA = CA$ but $B \neq C$. In other words, we can't ``cancel out'' $A$ in the equation $AB = AC$.

Also, it is possible for neither $A$ nor $B$ to be zero, but for the product $AB$ to be zero.
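Both failures are easy to exhibit with small matrices. The sketch below is a plain-Python illustration (the particular matrices and the `matmul` helper are invented for this example, not taken from the text): it shows nonzero $2 \times 2$ matrices whose product is zero, and a pair $B \neq C$ with $AB = AC$.

```python
# A small helper for the product of two 2x2 matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Neither A nor B is the zero matrix, yet AB is the zero matrix:
A = [[0, 1],
     [0, 0]]
B = [[1, 0],
     [0, 0]]
print(matmul(A, B))  # [[0, 0], [0, 0]]

# AB = AC even though B != C, so A cannot be "cancelled":
C = [[2, 3],
     [0, 0]]
print(matmul(A, B) == matmul(A, C))  # True
print(B == C)                        # False
```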

There's some good news as well. Many of the familiar laws of arithmetic for numbers do hold for matrix arithmetic. Before stating them, we first define two more operations on matrices, which we have not used so far but which will be useful in many contexts.

The **sum** of two matrices $A$ and $B$ is defined when $A$ and $B$ have the same dimensions. To add $A$ and $B$, add their corresponding entries:

$$(A + B)_{ij} = A_{ij} + B_{ij}.$$
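As a quick illustration of entrywise addition (a plain-Python sketch; the `mat_add` helper and the sample matrices are invented for this example):

```python
# Entrywise sum of two matrices with the same dimensions.
def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[1, 2],
     [3, 4]]
B = [[10, 20],
     [30, 40]]
print(mat_add(A, B))  # [[11, 22], [33, 44]]
```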

The **product** of a scalar (number) $c$ and a matrix $A$ is computed by multiplying each entry of $A$ by $c$:

$$(cA)_{ij} = c\,A_{ij}.$$
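In the same illustrative style (the `scalar_mul` helper is invented for this sketch), multiplying a matrix by a scalar scales every entry:

```python
# Multiply every entry of the matrix X by the scalar c.
def scalar_mul(c, X):
    return [[c * x for x in row] for row in X]

A = [[1, 2],
     [3, 4]]
print(scalar_mul(3, A))  # [[3, 6], [9, 12]]
```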

These operations on matrices are just like the corresponding operations on vectors and obey the same sorts of rules:

$$A + B = B + A, \qquad (A + B) + C = A + (B + C),$$

$$c(A + B) = cA + cB, \qquad (c + d)A = cA + dA, \qquad (cd)A = c(dA).$$
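These vector-style rules can be spot-checked numerically. The sketch below (plain Python; the helpers and sample matrices are invented for the example) verifies commutativity of addition and the two distributive laws for scalar multiplication on particular matrices:

```python
def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scalar_mul(c, X):
    return [[c * x for x in row] for row in X]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# A + B = B + A
print(mat_add(A, B) == mat_add(B, A))                                       # True
# c(A + B) = cA + cB
print(scalar_mul(2, mat_add(A, B)) ==
      mat_add(scalar_mul(2, A), scalar_mul(2, B)))                          # True
# (c + d)A = cA + dA
print(scalar_mul(2 + 3, A) == mat_add(scalar_mul(2, A), scalar_mul(3, A)))  # True
```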

Finally, an important property of matrix multiplication: matrix multiplication is associative:

$$(AB)C = A(BC).$$
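Associativity is easy to check on sample matrices. In this sketch (the `matmul` helper and the matrices are invented for the example), multiplying in either grouping gives the same result:

```python
# Product of two 2x2 matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[9, 10], [11, 12]]

# (AB)C equals A(BC):
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True
```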

In fact, the definition of matrix multiplication is designed precisely so that it is associative. The mathematicians who came up with this definition were interested in linear functions of the sort

$$f(\mathbf{x}) = A\mathbf{x},$$

and under this definition, composing two such functions corresponds to multiplying their matrices: if $g(\mathbf{x}) = B\mathbf{x}$, then $f(g(\mathbf{x})) = A(B\mathbf{x}) = (AB)\mathbf{x}$.
If you study multivariable calculus, you will see that this gives a nice ``chain rule'' for derivatives of functions from $n$-dimensional space to $m$-dimensional space.
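The correspondence between composition and the matrix product can also be seen numerically. In this sketch (plain Python; the helpers, matrices, and vector are invented for the example), applying $B$ and then $A$ to a vector agrees with applying the single matrix $AB$:

```python
# Product of two 2x2 matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Apply a matrix to a vector: X @ v.
def matvec(X, v):
    return [sum(x * w for x, w in zip(row, v)) for row in X]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
x = [5, 7]

lhs = matvec(A, matvec(B, x))   # f(g(x)) = A(Bx)
rhs = matvec(matmul(A, B), x)   # (AB)x
print(lhs == rhs)  # True
```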