Filter Based Feature Selection -> Build Counting Transform -> Test hypothesis with t-Test. This sequence first addresses multicollinearity, then adds useful categorical count features, and finally validates them statistically. A clear ordering for feature-engineering steps in regression; I've seen similar ones in practice tests. Nice question structure.
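As a rough illustration of the last step in that sequence (statistically validating a candidate feature), here is a minimal sketch of a two-sample t-test, computed by hand with the standard library. The sample data and the `welch_t` helper are hypothetical, not part of the exam question.

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (illustrative sketch).

    A large |t| suggests the feature's mean differs between the two groups,
    i.e. the feature may carry signal for the target."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical feature values for two target classes (made-up numbers):
class_0 = [1.0, 2.0, 3.0, 4.0]
class_1 = [5.0, 6.0, 7.0, 8.0]
print(welch_t(class_0, class_1))
```

In practice you would compare the statistic against a t-distribution (or just use `scipy.stats.ttest_ind(..., equal_var=False)`) to get a p-value before keeping or dropping the feature.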
Q: 19
You are building a recurrent neural network to perform a binary classification. You review the training
loss, validation loss, training accuracy, and validation accuracy for each training epoch.
You need to analyze model performance.
Which observation indicates that the classification model is overfitted?
Discussion
Probably C here. I've seen similar questions on practice sets, and when training loss goes down while validation loss goes up, that's a textbook sign of overfitting. Anyone else see this pattern come up?
C, since when training loss keeps dropping but validation loss starts to rise, that's classic overfitting: the model memorizes the training set and can't generalize. Practice exams highlight this pattern a lot for ML questions. Agree?
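The divergence both comments describe can be checked programmatically from per-epoch metrics. This is a minimal sketch with made-up loss values (not from the question); the `overfitting_epoch` helper is hypothetical.

```python
def overfitting_epoch(train_loss, val_loss):
    """Return the first epoch index where validation loss rises while
    training loss keeps falling (the classic overfitting signature),
    or None if the curves never diverge that way."""
    for i in range(1, len(train_loss)):
        if train_loss[i] < train_loss[i - 1] and val_loss[i] > val_loss[i - 1]:
            return i
    return None

# Illustrative per-epoch losses: training keeps improving, validation
# bottoms out around epoch 3 and then climbs.
train_loss = [0.90, 0.62, 0.45, 0.33, 0.25, 0.19]
val_loss   = [0.88, 0.65, 0.52, 0.50, 0.55, 0.63]

print(overfitting_epoch(train_loss, val_loss))  # → 4, first divergent epoch
```

In real training loops this is essentially what early stopping monitors: stop (or restore the best checkpoint) once validation loss stops improving even though training loss continues to fall.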