
20 Bias and Variance: The two big sources of error




             Suppose your training, dev and test sets all come from the same distribution. Then you
             should always try to get more training data, since that can only improve performance, right?


             Even though having more data can’t hurt, unfortunately it doesn’t always help as much as
             you might hope. It could be a waste of time to work on getting more data. So, how do you
             decide when to add data, and when not to bother?

There are two major sources of error in machine learning: bias and variance. Understanding them will help you decide whether adding data, as well as other tactics to improve performance, is a good use of your time.

Suppose you hope to build a cat recognizer that has 5% error. Right now, your training set has an error rate of 15%, and your dev set has an error rate of 16%. In this case, adding training data probably won't help much. You should focus on other changes. Indeed, adding more examples to your training set only makes it harder for your algorithm to do well on the training set. (We explain why in a later chapter.)

If your error rate on the training set is 15% (or 85% accuracy), but your target is 5% error (95% accuracy), then the first problem to solve is to improve your algorithm's performance on your training set. Your dev/test set performance is usually worse than your training set performance. So if you are getting 85% accuracy on the examples your algorithm has seen, there's no way you're getting 95% accuracy on examples your algorithm hasn't even seen.

Suppose as above that your algorithm has 16% error (84% accuracy) on the dev set. We break the 16% error into two components:

• First, the algorithm's error rate on the training set. In this example, it is 15%. We think of this informally as the algorithm's bias.

• Second, how much worse the algorithm does on the dev (or test) set than the training set. In this example, it does 1% worse on the dev set than the training set. We think of this informally as the algorithm's variance.⁶
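
To make the arithmetic concrete, here is a minimal sketch in Python of this informal decomposition, using the hypothetical error rates from the example above. The variable names and the threshold check at the end are illustrative assumptions, not part of the book's text:

    # Informal bias/variance decomposition using the chapter's example numbers.
    train_error  = 0.15   # error rate on the training set (hypothetical)
    dev_error    = 0.16   # error rate on the dev set (hypothetical)
    target_error = 0.05   # desired error rate (hypothetical)

    bias = train_error                  # informal "bias": training set error
    variance = dev_error - train_error  # informal "variance": dev error minus training error

    print(f"Bias: {bias:.0%}, Variance: {variance:.0%}")
    # Prints: Bias: 15%, Variance: 1%

    # Illustrative heuristic: if training error is already far above the target,
    # reducing bias matters more than collecting additional training data.
    if bias > target_error:
        print("Training error exceeds the target: focus on reducing bias first.")

In this example the 16% dev error splits into 15% bias and 1% variance, which is why the chapter suggests focusing on improving training set performance rather than adding data.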



⁶ The field of statistics has more formal definitions of bias and variance that we won't worry about. Roughly, the bias is the error rate of your algorithm on your training set when you have a very large training set. The variance is how much worse you do on the test set compared to the training set in this setting.