Description
The metrics field specifies which metrics are used to evaluate your model's performance. The available metrics depend on the task type, and specifying an incompatible metric results in a validation error.
Supported Task Types
- All
Available Extra Metrics
| Task Type | Options |
|---|---|
| Binary Classification | acc, auroc, auprc, ap, f1, ndcg, ndcg@k, precision, precision@k, recall, recall@k; for k = 1, 10, and 100 |
| Multiclass Classification | acc, f1, precision, recall |
| Multilabel Classification | acc, f1, precision, recall; auroc, auprc, and ap are supported only with the suffixes _macro, _micro, and _per_label |
| Multilabel Ranking | f1@k, map@k, mrr@k, ndcg@k, precision@k, recall@k; for k = 1, 10, and 100 |
| Link Prediction | f1@k, map@k, mrr@k, ndcg@k, precision@k, recall@k; for k = 1, 10, and 100 |
| Regression | mae, mape, mse, rmse, smape |
| Forecasting | mae, mape, mse, rmse, smape, neg_binomial, normal, lognormal |
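For instance, requesting several binary classification metrics together might look like the sketch below. Only the metrics list syntax shown in the Example section is confirmed by this page; the metric names and the k = 10 cutoff are taken directly from the table above.

```yaml
# Minimal sketch: several binary classification metrics in one list.
# All names come from the table above; k = 10 is one of the
# supported cutoffs (k = 1, 10, and 100).
metrics: [auroc, ap, f1, precision@10, recall@10]
```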
Example
The metrics for link prediction include map@1, map@10, and map@100 by default. However, you can customize these metrics as shown below:
metrics: [map@12]
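A list with several entries should work the same way. The sketch below combines link prediction metrics at different cutoffs, assuming (as the map@12 example above suggests) that arbitrary k values are accepted alongside the defaults of 1, 10, and 100.

```yaml
# Sketch: multiple link prediction metrics in one list.
# mrr@10 and ndcg@100 use default cutoffs from the table;
# recall@50 assumes arbitrary k values are accepted, as with map@12.
metrics: [map@12, mrr@10, ndcg@100, recall@50]
```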