Cross-validation is a statistical method for evaluating learning algorithms. The data is partitioned into a fixed number of folds (groups), which define two sets in each round: a training set and a testing (validation) set. The roles rotate across rounds so that every data point is used for validation exactly once.
The main purpose is to test the model's ability to predict independent data that was not used to build it. Cross-validation also helps mitigate problems such as overfitting and selection bias.
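The rotation of folds described above can be sketched in plain Python. This is a minimal illustration, not a production implementation: the "model" here is just the mean of the training targets, standing in for any fit/predict pair, and the helper names (`k_fold_indices`, `cross_validate`) are invented for this example.

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(ys, k=5):
    """Average the mean squared error over k train/validate rounds."""
    folds = k_fold_indices(len(ys), k)
    errors = []
    for test_idx in folds:
        # Training set: every fold except the held-out one.
        train_idx = [j for f in folds if f is not test_idx for j in f]
        # "Training": the toy model just memorizes the training mean.
        mean_y = sum(ys[j] for j in train_idx) / len(train_idx)
        # "Validation": score only on data the model never saw.
        mse = sum((ys[j] - mean_y) ** 2 for j in test_idx) / len(test_idx)
        errors.append(mse)
    return sum(errors) / k

if __name__ == "__main__":
    targets = [2.0 * x for x in range(10)]
    print(cross_validate(targets, k=5))
```

Because each point appears in the validation set of exactly one round, the averaged score reflects performance on data the model did not see during training, which is precisely what guards against an overly optimistic (overfit) estimate.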