patternMajor
Why is overfitting bad?
Problem
I've studied this a lot, and everyone says that overfitting actions in machine learning is bad. Yet our neurons do become very strong and settle on the best actions/senses to pursue or avoid, and they can be incremented or decremented by good or bad triggers, meaning the actions will level off and we end up with the best (right), highly confident actions. How does this fail? It uses positive and negative sense triggers to increment/decrement the actions, say from 44 pos. to 22 neg.
Solution
The best explanation I've heard is this:
When you're doing machine learning, you assume you're trying to learn from data that follows some probabilistic distribution.
This means that in any data set, because of randomness, there will be some noise: data will randomly vary.
When you overfit, you end up learning from your noise, and including it in your model.
Then, when the time comes to make predictions from other data, your accuracy goes down: the noise made its way into your model, but it was specific to your training data, so it hurts the accuracy of your model. Your model doesn't generalize: it is too specific to the data set you happened to choose to train.
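The effect described above is easy to reproduce. The sketch below (a hypothetical example, not from the original answer) draws noisy samples from a simple linear relationship, then fits both a simple model and a very flexible one. The flexible model achieves near-zero training error by fitting the noise, but its error on fresh data from the same distribution is worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: the true relationship is linear (y = 2x), but every
# observation carries random noise, as the answer describes.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)

x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(scale=0.2, size=x_test.size)

def fit_and_score(degree):
    # Fit a polynomial of the given degree to the noisy training data,
    # then measure mean squared error on held-out data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_score(1)   # matches the true linear trend
overfit_train, overfit_test = fit_and_score(9) # degree 9 can pass through all 10 points

# The overfit model "learns the noise": its training error is lower,
# but it generalizes worse than the simple model.
print("simple:  train=%.4f test=%.4f" % (simple_train, simple_test))
print("overfit: train=%.4f test=%.4f" % (overfit_train, overfit_test))
```

The degree-9 polynomial has enough freedom to thread through every noisy training point, so the noise "makes its way into the model" exactly as the answer says, and the model stops generalizing.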
Context
StackExchange Computer Science Q#51554, answer score: 47