The math we learned for dealing with vectors can be applied more generally to vector-like things. We will see that several mathematical objects like matrices and polynomials behave similarly to vectors. For example, the addition of two polynomials $P$ and $Q$ is done by adding the coefficients for each power of $x$ component-wise, the same way the addition of vectors happens component-wise.
In this section, we'll learn how to use the terminology and concepts associated with regular vector spaces to study other mathematical objects. In particular we'll see that notions such as linear independence, basis, and dimension can be applied to pretty much all mathematical objects that have components.
To specify an abstract vector space $(V,F,+,\cdot)$, we must specify four things:
1. A set of vector-like objects $V$.
2. A field $F$ of scalars. In this section $F=\mathbb{R}$.
3. An addition operation “$+$” for the elements of $V$ that dictates how to add vectors $\mathbf{u} + \mathbf{v}$.
4. A scalar multiplication operation “$\cdot$” for scaling a vector by an element of the field. Scalar multiplication is usually denoted implicitly $\alpha \mathbf{u}$ (without the dot).
A vector space satisfies the following eight axioms, for all scalars $\alpha, \beta \in F$ and all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$:
1. $\mathbf{u} + (\mathbf{v}+\mathbf{w}) = (\mathbf{u}+\mathbf{v}) + \mathbf{w}$ (associativity of addition).
2. $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ (commutativity of addition).
3. There exists a zero vector $\mathbf{0} \in V$ such that $\mathbf{u} + \mathbf{0} = \mathbf{0} +\mathbf{u} = \mathbf{u}$ for all $\mathbf{u} \in V$.
4. For every $\mathbf{u} \in V$, there exists an additive inverse $-\mathbf{u}$ such that $\mathbf{u} + (-\mathbf{u}) = \mathbf{u} -\mathbf{u} = \mathbf{0}$.
5. $\alpha(\beta\mathbf{u}) = (\alpha\beta)\mathbf{u}$ (associativity of scalar multiplication).
6. $1\mathbf{u} = \mathbf{u}$ (identity element of scalar multiplication).
7. $\alpha(\mathbf{u} + \mathbf{v}) = \alpha\mathbf{u} + \alpha\mathbf{v}$ (distributivity of scalar multiplication over vector addition).
8. $(\alpha + \beta)\mathbf{u} = \alpha\mathbf{u} + \beta\mathbf{u}$ (distributivity of scalar addition).
If you know anything about vectors, then the above properties should be familiar to you. Indeed, these are standard properties for the vector space $\mathbb{R}^n$ (and its subspaces), where the field $F$ is $\mathbb{R}$ and we use the standard vector addition and scalar multiplication operations.
In this section, we'll see that many of the things we learned about vectors in $\mathbb{R}^n$ also apply to other vector-like mathematical objects.
Consider the vector space of $m\times n$ matrices over the real numbers, denoted $\mathbb{R}^{m \times n}$. The addition operation for two matrices $A,B \in \mathbb{R}^{m \times n}$ is the usual rule of entry-wise matrix addition: $(A+B)_{ij} = a_{ij}+b_{ij}$.
This vector space is $mn$-dimensional, which can be seen by explicitly constructing a basis for the space. The standard basis consists of the $mn$ matrices that have zero entries everywhere except for a single one in the $i$th row and the $j$th column. This set is a basis because any matrix $A \in \mathbb{R}^{m \times n}$ can be written as a linear combination of these matrices, and because each of them is manifestly linearly independent from the others.
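For concreteness, in the case $m=n=2$ the standard basis consists of the four matrices \[ \begin{bmatrix} 1 & 0 \nl 0 & 0 \end{bmatrix}, \ \ \begin{bmatrix} 0 & 1 \nl 0 & 0 \end{bmatrix}, \ \ \begin{bmatrix} 0 & 0 \nl 1 & 0 \end{bmatrix}, \ \ \begin{bmatrix} 0 & 0 \nl 0 & 1 \end{bmatrix}, \] and any matrix $A \in \mathbb{R}^{2 \times 2}$ decomposes as \[ A = \begin{bmatrix} a_{11} & a_{12} \nl a_{21} & a_{22} \end{bmatrix} = a_{11}\begin{bmatrix} 1 & 0 \nl 0 & 0 \end{bmatrix} + a_{12}\begin{bmatrix} 0 & 1 \nl 0 & 0 \end{bmatrix} + a_{21}\begin{bmatrix} 0 & 0 \nl 1 & 0 \end{bmatrix} + a_{22}\begin{bmatrix} 0 & 0 \nl 0 & 1 \end{bmatrix}, \] in agreement with the dimension count $2 \cdot 2 = 4$.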
Consider now the set of $2\times2$ symmetric matrices: \[ \mathbb{S}(2,2) \equiv \{ A \in \mathbb{R}^{2 \times 2} \ | \ A = A^T \}, \] in combination with the usual laws for matrix addition and scalar multiplication.
An explicit basis for this space is obtained as follows: \[ \mathbf{v}_1 = \begin{bmatrix} 1 & 0 \nl 0 & 0 \end{bmatrix}, \ \ \mathbf{v}_2 = \begin{bmatrix} 0 & 1 \nl 1 & 0 \end{bmatrix}, \ \ \mathbf{v}_3 = \begin{bmatrix} 0 & 0 \nl 0 & 1 \end{bmatrix}. \]
Observe how any symmetric matrix $\mathbf{s} \in \mathbb{S}(2,2)$ can be written as a linear combination: \[ \mathbf{s} = \begin{bmatrix} a & b \nl b & c \end{bmatrix} = a \begin{bmatrix} 1 & 0 \nl 0 & 0 \end{bmatrix} + b \begin{bmatrix} 0 & 1 \nl 1 & 0 \end{bmatrix} + c \begin{bmatrix} 0 & 0 \nl 0 & 1 \end{bmatrix}. \]
Since there are three vectors in the basis, the vector space of symmetric matrices $\mathbb{S}(2,2)$ is three-dimensional.
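To make this concrete, choosing the (arbitrary) values $a=2$, $b=5$, and $c=-1$ gives the decomposition \[ \begin{bmatrix} 2 & 5 \nl 5 & -1 \end{bmatrix} = 2\mathbf{v}_1 + 5\mathbf{v}_2 - \mathbf{v}_3, \] so the coordinates of this symmetric matrix with respect to the basis $\{\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3\}$ are $(2,5,-1)$.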
Define the vector space $P_n(x)$ of polynomials with real coefficients and degree less than or equal to $n$. The “vectors” in this space are polynomials of the form: \[ \mathbf{p} = a_0 + a_1x + a_2x^2 + \cdots + a_n x^n, \] where $a_0,a_1,\ldots,a_n$ are the coefficients of the polynomial $\mathbf{p}$.
The addition of vectors $\mathbf{p}, \mathbf{q} \in P_n(x)$ is performed component-wise: \[ \begin{align*} \mathbf{p} + \mathbf{q} & = (a_0+a_1x+\cdots+a_nx^n)+(b_0+b_1x+\cdots+b_nx^n) \nl & =(a_0+b_0)+(a_1+b_1)x+\cdots +(a_n+b_n)x^n. \end{align*} \] Similarly, scalar multiplication acts as you would expect: \[ \alpha \mathbf{p} = \alpha\cdot (a_0+a_1x+\cdots+a_nx^n)=(\alpha a_0)+(\alpha a_1)x+\cdots+(\alpha a_n)x^n. \]
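For example, taking the specific polynomials $\mathbf{p} = 1 + 3x + 2x^2$ and $\mathbf{q} = 4 - x$, viewed as vectors in $P_2(x)$ (with $b_2=0$ for $\mathbf{q}$), these rules give \[ \mathbf{p} + \mathbf{q} = (1+4) + (3-1)x + (2+0)x^2 = 5 + 2x + 2x^2, \qquad 2\mathbf{p} = 2 + 6x + 4x^2. \]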
The space $P_n(x)$ is $(n+1)$-dimensional, since each “vector” in that space is specified by $n+1$ coefficients.
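Indeed, the monomials \[ \{ 1, \ x, \ x^2, \ \ldots, \ x^n \} \] form a basis for $P_n(x)$: every polynomial $\mathbf{p} = a_0 + a_1x + \cdots + a_nx^n$ is a linear combination of the monomials, and no monomial can be written as a linear combination of the others. Since this basis contains $n+1$ elements, the dimension of $P_n(x)$ is $n+1$.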
Another interesting vector space is the set of all functions $f:\mathbb{R} \to \mathbb{R}$ in combination with the point-wise addition and scalar multiplication operations: \[ (\mathbf{f}+\mathbf{g})(x) = f(x) + g(x), \qquad (\alpha\mathbf{f})(x) = \alpha f(x). \]
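For instance, if $\mathbf{f}$ is the function $f(x)=x^2$ and $\mathbf{g}$ is the function $g(x)=\sin(x)$, then the vector $\mathbf{f}+\mathbf{g}$ is the function $x \mapsto x^2 + \sin(x)$, and the vector $3\mathbf{f}$ is the function $x \mapsto 3x^2$.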
The space of functions is infinite-dimensional: no finite set of functions can serve as a basis for the whole space. For example, the monomials $1, x, x^2, x^3, \ldots$ already form an infinite set of linearly independent functions.
In this section we saw that we can talk about linear independence and bases for more abstract vector spaces. Indeed, these notions are well defined for any vector-like object.
In the next section we will generalize the concept of orthogonality to abstract vector spaces. In order to do this, we have to define an abstract inner product operation.
[ More examples of vector spaces ]
http://en.wikibooks.org/wiki/Linear_Algebra/Definition_and_Examples_of_Vector_Spaces