Description
This field specifies the metrics used to evaluate your model. The set of available metrics depends on the task type; specifying a metric that is incompatible with the task type results in a validation error.
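As a minimal sketch of where this field lives, assuming a YAML task configuration in which `metrics` is a top-level list; the `task_type` key and its value are illustrative assumptions, and only the metric names come from the table below:

```yaml
# Hypothetical task config; `task_type` is assumed for illustration.
task_type: binary_classification
# All three metrics are valid for binary classification (see table below).
metrics: [acc, auroc, f1]
```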
Supported Task Types
- All
Available Extra Metrics
Task Type | Options |
---|---|
Binary Classification | `acc`, `auroc`, `auprc`, `ap`, `f1`, `ndcg`, `ndcg@k`, `precision`, `precision@k`, `recall`, `recall@k`; for k = 1, 10, and 100 |
Multiclass Classification | `acc`, `f1`, `precision`, `recall` |
Multilabel Classification | `acc`, `f1`, `precision`, `recall`; `auroc`, `auprc`, and `ap` are supported only with the suffixes `_macro`, `_micro`, and `_per_label` (sketch below the table) |
Multilabel Ranking | `f1@k`, `map@k`, `mrr@k`, `ndcg@k`, `precision@k`, `recall@k`; for k = 1, 10, and 100 |
Link Prediction | `f1@k`, `map@k`, `mrr@k`, `ndcg@k`, `precision@k`, `recall@k`; for k = 1, 10, and 100 |
Regression | `mae`, `mape`, `mse`, `rmse`, `smape` |
Forecasting | `mae`, `mse`, `rmse`, `smape`, `mape`, `neg_binomial`, `normal`, `lognormal` |
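For multilabel classification, `auroc`, `auprc`, and `ap` must carry one of the averaging suffixes. A hedged sketch combining plain and suffixed metric names (the list shape follows the example below; the particular combination is illustrative, not prescribed):

```yaml
# Suffixed variants are required for auroc/auprc/ap on multilabel tasks.
metrics: [acc, f1, auroc_macro, auprc_micro, ap_per_label]
```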
Example
In the case of link prediction, the default metrics are `map@1`, `map@10`, and `map@100`, but you can set `metrics: [map@12]` to report `map@12` instead.
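Since `metrics` is a list, a custom `k` can also be reported alongside the defaults. A hedged sketch, assuming default and custom `k` values can be mixed freely in one list (extrapolated from the `map@12` example above, not stated explicitly):

```yaml
# Defaults plus one custom cutoff.
metrics: [map@1, map@10, map@100, map@12]
```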