**What’s covered in this lecture?**

- Types of model errors: training, test, in-sample
- Bias-variance decomposition
- Cross-validation, Bootstrap …

- In previous lectures we learned about model complexity and regularization. Regularization reduces effective model complexity in order to prevent overfitting.
- Such a regularized model tends to have better **generalization performance**, in the sense of a smaller prediction error on an independent test sample.
- That is, a model should be assessed by its **test error**, i.e. its prediction accuracy on new data.
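A minimal sketch of this point, under an assumed setup (noisy sine data, least-squares polynomial fits of increasing degree): training error keeps falling as the model grows more complex, while error on an independent test sample eventually rises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true curve is a sine; this is an assumption for illustration.
def make_data(n, noise=0.3):
    x = rng.uniform(0.0, 1.0, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, noise, n)
    return x, y

x_train, y_train = make_data(30)    # small training sample
x_test, y_test = make_data(200)     # independent test sample

def mse(x, y, coefs):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coefs, x) - y) ** 2))

def complexity_curve(degrees):
    """Training and test MSE for least-squares polynomial fits of each degree."""
    out = {}
    for d in degrees:
        coefs = np.polyfit(x_train, y_train, d)
        out[d] = (mse(x_train, y_train, coefs), mse(x_test, y_test, coefs))
    return out

for d, (tr, te) in complexity_curve((1, 3, 9, 15)).items():
    print(f"degree {d:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

Because the high-degree fit can chase the noise in its 30 training points, its training error is small but its test error is much larger, which is exactly why models are judged on test error.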

**Prediction accuracy**: a less complex model tends to have high bias and low variance, while a more complex model tends to have low bias and high variance. Since the expected test error combines both terms (plus irreducible noise), accuracy is typically best at some intermediate complexity.
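The bias and variance of a fitted model can be estimated by simulation. A sketch, under assumed settings (sine true function, 300 repeated training draws, predictions evaluated at a single point x0): refit the model on fresh training samples and decompose the spread of its predictions at x0.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Hypothetical true regression function (an assumption for the simulation).
    return np.sin(2 * np.pi * x)

def bias_variance(degree, x0=0.25, n=25, reps=300, noise=0.3):
    """Estimate squared bias and variance of the prediction at x0 made by a
    degree-`degree` least-squares polynomial fit, over `reps` fresh samples."""
    preds = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0.0, 1.0, n)
        y = f(x) + rng.normal(0.0, noise, n)
        preds[r] = np.polyval(np.polyfit(x, y, degree), x0)
    bias_sq = float((preds.mean() - f(x0)) ** 2)
    return bias_sq, float(preds.var())

for d in (1, 9):
    b2, v = bias_variance(d)
    print(f"degree {d}: bias^2 = {b2:.4f}, variance = {v:.4f}")
```

The simple (linear) model shows large squared bias and small variance; the flexible (degree-9) model shows the reverse, matching the trade-off stated above.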