Machine Learning - Classifier Evaluation


In general, what steps should one follow when the accuracy of a supervised learning classifier, obtained after training, is not up to expectation? Example steps: feature re-engineering, removing noise, dimensionality reduction, addressing overfitting, and so on. What tests (carried out after obtaining the classifier's accuracy figure) let you arrive at a conclusion (say, that accuracy is low because there is a lot of noise), which in turn makes you perform an action (remove noisy words/features, etc.)? After performing the action you re-train the classifier, and the cycle goes on until you have achieved satisfactory results.
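
To make the cycle concrete, here is a minimal made-up sketch using scikit-learn. The toy corpus, the min_df "remove rare/noisy words" action, and the whole setup are placeholders I invented to illustrate the loop, not an established recipe:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Toy corpus and labels (1 = spam, 0 = ham); purely illustrative.
    docs = ["free money now", "meeting tomorrow at noon", "win free money",
            "lunch meeting tomorrow", "free offer win now",
            "project status update"]
    labels = [1, 0, 1, 0, 1, 0]

    # The "action" under test: drop rare, possibly noisy words via min_df,
    # then re-train and re-evaluate.
    for min_df in (1, 2):
        X = TfidfVectorizer(min_df=min_df).fit_transform(docs)
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.5, random_state=0)
        clf = LogisticRegression().fit(X_tr, y_tr)
        print(min_df, accuracy_score(y_te, clf.predict(X_te)))
        # The open question: what test would justify concluding "the
        # features are noisy" *before* choosing this particular action?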

I have read the question on feature selection and reduction for text classification, which has a great accepted answer, but it doesn't talk about the steps one follows to arrive at such a conclusion (as described above).

There are various metrics you can use, depending on the classifier you have. Is it a binary classifier? A multi-class classifier? Or a multi-label, multi-class classifier? The common metrics are precision, recall, F-score, and accuracy, but there is a host of other, more detailed metrics when it comes to multi-label classifiers.
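
For example, in scikit-learn (one such toolkit; the label vectors below are toy values, not output from a real classifier), each of the common metrics is a single call:

    from sklearn.metrics import (accuracy_score, f1_score,
                                 precision_score, recall_score)

    y_true = [0, 1, 2, 2, 1, 0, 2, 1]  # gold labels (toy data)
    y_pred = [0, 1, 2, 1, 1, 0, 2, 2]  # classifier output (toy data)

    print("accuracy :", accuracy_score(y_true, y_pred))
    # For multi-class problems you must pick an averaging scheme:
    # "macro" weights every class equally, "micro" pools all decisions.
    print("precision:", precision_score(y_true, y_pred, average="macro"))
    print("recall   :", recall_score(y_true, y_pred, average="macro"))
    print("f1-score :", f1_score(y_true, y_pred, average="macro"))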

Most machine learning toolkits implement the standard evaluation metrics (precision, recall, etc.), but I have found that metrics for multi-label classifiers aren't implemented in many of them.
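
The example-based multi-label metrics are simple enough to implement by hand if your toolkit lacks them. Here is a sketch; the label sets and label count are illustrative, and the formulas follow the usual example-based definitions from the multi-label literature:

    # Example-based multi-label metrics, computed directly.
    def multilabel_scores(true_sets, pred_sets, n_labels):
        n = len(true_sets)
        # Hamming loss: fraction of label slots predicted wrongly.
        hamming = sum(len(t ^ p) for t, p in
                      zip(true_sets, pred_sets)) / (n * n_labels)
        # Example-based precision/recall, averaged over instances
        # (instances with an empty set are skipped in the numerator).
        precision = sum(len(t & p) / len(p) for t, p in
                        zip(true_sets, pred_sets) if p) / n
        recall = sum(len(t & p) / len(t) for t, p in
                     zip(true_sets, pred_sets) if t) / n
        return hamming, precision, recall

    true_sets = [{"sports"}, {"politics", "economy"}, {"tech", "sports"}]
    pred_sets = [{"sports"}, {"politics"}, {"tech", "economy"}]
    print(multilabel_scores(true_sets, pred_sets, n_labels=4))
    # -> (0.25, ~0.83, ~0.67)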

The paper A Systematic Analysis of Performance Measures for Classification Tasks gives a comprehensive listing of metrics for classifiers.

A paper covering multi-label classifier metrics is A Literature Survey of Algorithms for Multi-label Learning.

Depending on the metrics, you may want to handle issues such as overfitting or underfitting, gather more data (or more accurate data), or, in extreme situations, switch machine learning algorithms or approaches altogether. See Domingos's A Few Useful Things to Know about Machine Learning.
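
As a sketch of how such a diagnosis might look in practice (my own illustration with scikit-learn and synthetic data, not something from the paper), comparing training and cross-validation scores over growing training sizes helps separate the cases:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import learning_curve
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic data stands in for a real problem.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    sizes, train_sc, val_sc = learning_curve(
        DecisionTreeClassifier(random_state=0), X, y, cv=5,
        train_sizes=np.linspace(0.1, 1.0, 5))

    for n, tr, va in zip(sizes, train_sc.mean(axis=1), val_sc.mean(axis=1)):
        print(f"n={n:4d}  train={tr:.2f}  val={va:.2f}")
    # Rough reading: train high but val low -> overfitting; both low and
    # flat -> underfitting; val still climbing -> more data may help.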

