turicreate.evaluation.precision — Turi Create API 6.4.1 documentation

turicreate.evaluation.precision(targets, predictions, average='macro')

Compute the precision score for classification tasks. The precision score quantifies the ability of a classifier not to label a negative example as positive. It can be interpreted as the probability that a positive prediction made by the classifier is actually positive. The score is in the range [0, 1], with 0 being the worst and 1 being perfect.

The precision score is defined as the ratio:

\[\frac{tp}{tp + fp}\]

where tp is the number of true positives and fp the number of false positives.
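
As a quick illustration of this ratio, here is a minimal plain-Python sketch for the binary case. The label lists below are made up for illustration and are not part of the Turi Create examples further down.

# Hedged sketch: compute tp / (tp + fp) by hand for a binary problem.
# The label lists are illustrative only.
targets     = [1, 0, 1, 1, 0, 1]
predictions = [1, 1, 1, 0, 0, 1]

tp = sum(1 for t, p in zip(targets, predictions) if p == 1 and t == 1)
fp = sum(1 for t, p in zip(targets, predictions) if p == 1 and t == 0)

print(tp / (tp + fp))  # 3 true positives / 4 positive predictions = 0.75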

Parameters:

targets : SArray
    Ground truth class labels.

predictions : SArray
    The prediction that corresponds to each target value. This SArray must
    have the same length as targets and must be of the same type as the
    targets SArray.

average : string, [None, 'macro' (default), 'micro']
    Metric averaging strategies for multiclass classification. Averaging
    strategies can be one of the following:

    None: No averaging is performed and a single metric is returned for
    each class.

    'micro': Calculate metrics globally by counting the total true
    positives and false positives.

    'macro': Calculate metrics for each label, and find their unweighted
    mean. This does not take label imbalance into account.
Returns:

out : float (for binary classification) or dict[float]
    Score for the positive class (for binary classification) or an average
    score for each class for multi-class classification. If average=None,
    then a dictionary is returned where the key is the class label and the
    value is the score for the corresponding class label.
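
To make the averaging strategies concrete, the following is a minimal sketch in plain Python (no Turi Create required) that reproduces all three strategies on the integer example used below. The variable names are illustrative and not part of the turicreate API.

from collections import Counter

# Same data as the integer example in the Examples section.
targets     = [0, 1, 2, 3, 0, 1, 2, 3]
predictions = [1, 0, 2, 1, 3, 1, 0, 1]

# tp[c]: predictions of class c that were correct;
# predicted[c]: all predictions of class c.
tp = Counter(p for t, p in zip(targets, predictions) if t == p)
predicted = Counter(predictions)

# average=None: one precision score per class.
per_class = {c: tp[c] / predicted[c] for c in sorted(predicted)}

# average='micro': pool the counts across classes before dividing.
micro = sum(tp.values()) / sum(predicted.values())

# average='macro': unweighted mean of the per-class scores.
macro = sum(per_class.values()) / len(per_class)

print(per_class)  # {0: 0.0, 1: 0.25, 2: 1.0, 3: 0.0}
print(micro)      # 0.25
print(macro)      # 0.3125

Note that this sketch only scores classes that actually appear in predictions; it is a didactic sketch of the averaging arithmetic, not a drop-in replacement for the library function.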

Examples

Targets and Predictions

>>> targets = turicreate.SArray([0, 1, 2, 3, 0, 1, 2, 3])
>>> predictions = turicreate.SArray([1, 0, 2, 1, 3, 1, 0, 1])

Micro average of the precision scores for each class.

>>> turicreate.evaluation.precision(targets, predictions,
...                                 average = 'micro')
0.25

Macro average of the precision scores for each class.

>>> turicreate.evaluation.precision(targets, predictions,
...                                 average = 'macro')
0.3125

Precision score for each class.

>>> turicreate.evaluation.precision(targets, predictions,
...                                 average = None)
{0: 0.0, 1: 0.25, 2: 1.0, 3: 0.0}

This metric also works for string classes.

Targets and Predictions

>>> targets = turicreate.SArray(
...     ["cat", "dog", "foosa", "snake", "cat", "dog", "foosa", "snake"])
>>> predictions = turicreate.SArray(
...     ["dog", "cat", "foosa", "dog", "snake", "dog", "cat", "dog"])

Micro average of the precision scores for each class.

>>> turicreate.evaluation.precision(targets, predictions,
...                                 average = 'micro')
0.25

Macro average of the precision scores for each class.

>>> turicreate.evaluation.precision(targets, predictions,
...                                 average = 'macro')
0.3125

Precision score for each class.

>>> turicreate.evaluation.precision(targets, predictions,
...                                 average = None)
{'cat': 0.0, 'dog': 0.25, 'foosa': 1.0, 'snake': 0.0}