After you perform the eliminations you will be left with rows that start with some 0s, then a leading 1, then other entries after it, and possibly rows that are all 0s. For example:
[tex]\left(\begin{array}{ccccc|c}
0 & 0 & 0 & 1 & 4 & 2\\
0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 7 & 0 & 8 & 9\\
0 & 1 & 5 & 0 & 6 & 3
\end{array}\right)[/tex]
(If you have a row that is all zeros except in the last column, then the equations are inconsistent and there is no solution.) Any rows that are entirely zero can simply be deleted (they tell you nothing useful). So we get:
[tex]\left(\begin{array}{ccccc|c}
0 & 0 & 0 & 1 & 4 & 2\\
1 & 0 & 7 & 0 & 8 & 9\\
0 & 1 & 5 & 0 & 6 & 3
\end{array}\right)[/tex]
If the "leading" 1s aren't in order (i.e. their column index doesn't increase as the row index increases), you can always swap rows to put them in order:
[tex]\left(\begin{array}{ccccc|c}
1 & 0 & 7 & 0 & 8 & 9\\
0 & 1 & 5 & 0 & 6 & 3\\
0 & 0 & 0 & 1 & 4 & 2
\end{array}\right)[/tex]
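As a sanity check, the reduction above can be reproduced numerically. This is a minimal sketch using SymPy (an assumed tool, not part of the original post): its `rref()` method sorts the rows, drops the zero row to the bottom, and reports which columns hold the leading 1s.

```python
# Sketch: verify the RREF of the example augmented matrix with SymPy.
from sympy import Matrix

# Augmented matrix [A | b] from the example (rows in scrambled order,
# with the all-zero row still included).
M = Matrix([
    [0, 0, 0, 1, 4, 2],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 7, 0, 8, 9],
    [0, 1, 5, 0, 6, 3],
])

R, pivots = M.rref()
print(R)       # rows sorted, zero row moved to the bottom
print(pivots)  # (0, 1, 3): columns 1, 2, 4 hold the leading 1s
```

Note that `pivots` uses 0-based column indices, so `(0, 1, 3)` means columns 1, 2, and 4 in the 1-based numbering used in the text.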
Note that every column containing a "leading 1" has zeros everywhere else; if this is not the case, you can do further row operations (subtracting a multiple of one row from another) to make it so. In our example no further operations are needed. The matrix is now said to be in "reduced row echelon form":
http://en.wikipedia.org/wiki/Row_echelo ... helon_form
Once the matrix is in this form it is easy to read off all the solutions to the equations. Because we have more variables than equations (constraints), the solution will not be unique; there will be infinitely many solutions. Let the variables we are solving for be [tex]a,b,c,d,e[/tex]. Because columns 1, 2, and 4 contain leading 1s, call [tex]a,b,[/tex] and [tex]d[/tex] the "dependent" variables and the others the "independent" variables. From the augmented matrix we know
[tex]a+7c+8e=9\\
b+5c+6e=3\\
d+4e=2[/tex]
Rearrange so the dependent variables are on the left and the constants and independent variables are on the right
[tex]a=9-7c-8e\\
b=3-5c-6e\\
d =2-4e[/tex]
Also add in the trivial identities "c=c" and "e=e" (i.e. "independent variable" = "independent variable") to get:
[tex]\begin{array}{ccccc}
a & = & 9 & -7c & -8e\\
b & = & 3 & -5c & -6e\\
c & = & & c & \\
d & = & 2 & & -4e\\
e & = & & & e
\end{array}[/tex]
Using vector notation we get:
[tex]\begin{pmatrix}
a\\
b\\
c\\
d\\
e
\end{pmatrix}
= \begin{pmatrix}
9\\
3\\
0\\
2\\
0
\end{pmatrix}
+ c \begin{pmatrix}
-7\\
-5\\
1\\
0\\
0
\end{pmatrix}
+e\begin{pmatrix}
-8\\
-6\\
0\\
-4\\
1
\end{pmatrix}[/tex]
These are all the solutions to the equations (we are free to choose [tex]c[/tex] and [tex]e[/tex] however we like).
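The parametric solution above can be checked numerically. This is a small sketch with NumPy (the variable names `particular`, `v_c`, `v_e` are my own labels, not from the post): for any choice of the free variables [tex]c[/tex] and [tex]e[/tex], the resulting vector should satisfy all three equations.

```python
# Sketch: check that particular + c*v_c + e*v_e solves the system
# a + 7c + 8e = 9,  b + 5c + 6e = 3,  d + 4e = 2
import numpy as np

A = np.array([[1, 0, 7, 0, 8],
              [0, 1, 5, 0, 6],
              [0, 0, 0, 1, 4]], dtype=float)
b = np.array([9, 3, 2], dtype=float)

particular = np.array([9, 3, 0, 2, 0], dtype=float)  # constant vector
v_c = np.array([-7, -5, 1, 0, 0], dtype=float)       # coefficient of c
v_e = np.array([-8, -6, 0, -4, 1], dtype=float)      # coefficient of e

# Try several arbitrary choices of the free variables.
for c, e in [(0, 0), (1, -2), (3.5, 0.25)]:
    x = particular + c * v_c + e * v_e   # x = (a, b, c, d, e)
    assert np.allclose(A @ x, b)
print("every choice of c and e gives a valid solution")
```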
To explain the trick your professor used, let me substitute [tex]c=-\lambda[/tex] and [tex]e=-\mu[/tex]. Now our solution becomes:
[tex]\begin{pmatrix}
9\\
3\\
0\\
2\\
0
\end{pmatrix}
+\lambda \begin{pmatrix}
7\\
5\\
-1\\
0\\
0
\end{pmatrix}
+\mu\begin{pmatrix}
8\\
6\\
0\\
4\\
-1
\end{pmatrix}[/tex]
This is equivalent to our previous solution.
Now take our augmented matrix in reduced row echelon form and insert extra rows, each consisting of zeros except for a single -1, so that the main diagonal consists entirely of 1s and -1s:
[tex]\left(\begin{array}{ccccc|c}
1 & 0 & 7 & 0 & 8 & 9\\
0 & 1 & 5 & 0 & 6 & 3\\
0 & 0 & -1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 4 & 2\\
0 & 0 & 0 & 0 & -1 & 0
\end{array}\right)[/tex]
Here I've inserted two rows (the 3rd and 5th). Now the rightmost column is a solution to our equations; it corresponds to the leftmost vector in our [tex]\lambda,\mu[/tex] vector equation above. The columns (of the augmented matrix) containing the -1s correspond to the vectors with coefficients [tex]\lambda[/tex] and [tex]\mu[/tex].
So adding these extra rows is just a quick way to read off the vector solution. Hopefully you can see that this trick will always give the same answer as if you did it the long way (using dependent/independent variables).
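The "-1 trick" can also be demonstrated numerically. In this sketch (my own setup, assuming NumPy) we build the padded matrix, read the solution vectors straight off its columns, and confirm they solve the original system:

```python
# Sketch: read the vector solution off the padded RREF matrix.
import numpy as np

padded = np.array([
    [1, 0,  7, 0,  8, 9],
    [0, 1,  5, 0,  6, 3],
    [0, 0, -1, 0,  0, 0],   # inserted row for the 3rd column
    [0, 0,  0, 1,  4, 2],
    [0, 0,  0, 0, -1, 0],   # inserted row for the 5th column
], dtype=float)

particular = padded[:, -1]   # last column: (9, 3, 0, 2, 0)
lam_vec = padded[:, 2]       # column with a -1: (7, 5, -1, 0, 0)
mu_vec = padded[:, 4]        # column with a -1: (8, 6, 0, 4, -1)

# Check against the system a+7c+8e=9, b+5c+6e=3, d+4e=2.
A = np.array([[1, 0, 7, 0, 8],
              [0, 1, 5, 0, 6],
              [0, 0, 0, 1, 4]], dtype=float)
b = np.array([9, 3, 2], dtype=float)
for lam, mu in [(0, 0), (1, 1), (-2, 0.5)]:
    x = particular + lam * lam_vec + mu * mu_vec
    assert np.allclose(A @ x, b)
print("the columns read off the padded matrix are valid solutions")
```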
Hope this helped,
R. Baber.