Regularization in neural networks

Regularization is any modification made to a learning algorithm that is intended to reduce its generalization error — the error on unseen test data, which stands in for real-life data — but not necessarily its error on the training data we collect for analysis. There are many strategies for regularizing a model, and we will go through the code notebooks and the theoretical guidelines that motivate these strategies. In this lecture, we will cover the following topics:

a) L1 and L2 regularization.
b) Data Augmentation.
c) Semi-Supervised Learning.
d) Multitask Learning.
e) Parameter Tying and Sharing.
f) Bagging and other Ensemble Methods.
g) Dropout.
h) Noise Robustness.
i) Early Stopping.
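To make the idea concrete before we dive into the individual strategies, here is a minimal, illustrative sketch of the first topic, L2 regularization (weight decay), on a toy linear model. All names and the synthetic data below are assumptions for illustration, not code from the lecture notebooks; the point is only that the penalty term shrinks the learned weights.

```python
import numpy as np

# Toy data: 50 examples, 5 features (illustrative, not from the lecture).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=50)

def fit(lam, steps=2000, lr=0.01):
    """Gradient descent on MSE + lam * ||w||^2 (L2 penalty)."""
    w = np.zeros(5)
    for _ in range(steps):
        # Gradient of the data term plus the gradient of the penalty, 2*lam*w.
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)  # no regularization
w_reg = fit(lam=1.0)    # with L2 penalty

# The penalty pulls the weights toward zero, so the regularized
# solution has a smaller norm than the unregularized one.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))
```

A larger `lam` trades a worse fit on the training data for smaller weights, which is exactly the training-versus-generalization trade-off that motivates every method on the list above.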