Module bitfount.metrics
Defines functions which compute metrics.
Submodules
- bitfount.metrics.etdrs - Metrics regarding OCTs.
- bitfount.metrics.types - Types regarding OCT metrics.
Classes
ClassificationMetric
class ClassificationMetric(func: Callable[[np.ndarray, np.ndarray], float], probabilities: bool):
A classification metric used for assessing ML model performance.
Arguments
probabilities
: Whether y_pred should contain class probabilities rather than predicted classes.
Attributes
func
: A function which computes the metric. It must take two arguments, y_true and y_pred, and return the metric as a float.
Variables
- static probabilities: bool
Metric
class Metric(func: Callable[[np.ndarray, np.ndarray], float]):
A metric used for assessing ML model performance.
Attributes
func
: A function which computes the metric. It must take two arguments, y_true and y_pred, and return the metric as a float.
Subclasses
- ClassificationMetric
Variables
- static func: collections.abc.Callable
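The Metric/ClassificationMetric pattern above can be sketched with plain dataclasses. These are illustrative stand-ins (not the bitfount implementations), and the accuracy function and sample arrays are invented for the example:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Metric:
    # func takes (y_true, y_pred) and returns the metric as a float
    func: Callable[[List[int], List[int]], float]

@dataclass
class ClassificationMetric(Metric):
    # True if func expects y_pred to be probabilities rather than classes
    probabilities: bool = False

def accuracy(y_true: List[int], y_pred: List[int]) -> float:
    # Fraction of predictions that match the targets
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

acc = ClassificationMetric(func=accuracy, probabilities=False)
print(acc.func([0, 1, 1, 0], [0, 1, 0, 0]))  # → 0.75
```

Wrapping the callable this way lets a collection treat thresholded (probability-based) and direct (class-based) metrics uniformly.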
MetricCollection
class MetricCollection(metrics: Optional[MutableMapping[str, Metric]] = None, problem: Optional[MetricsProblem] = None):
Container class for metrics to calculate.
Arguments
metrics
: A mapping of metric names to metrics to calculate.
problem
: The problem type. If metrics are not specified, the problem type is used to determine which metrics to calculate.
Attributes
metrics
: A mapping of metric names to metrics to calculate.
problem
: The problem type.
optimal_threshold
: The optimal threshold to separate classes (only used for classification problems).
thresholds
: The thresholds to separate classes (only used for classification problems).
threshold_metrics
: The metrics for each threshold (only used for classification problems).
Raises
ValueError
: If neither problem nor metrics is specified.
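The constructor contract described above (at least one of metrics or problem must be given) can be sketched as follows; this is a minimal stand-in, not the bitfount code, and the error message text is an assumption:

```python
from typing import Callable, Dict, List, Optional

# Hypothetical metric signature: (y_true, y_pred) -> float
MetricFunc = Callable[[List[float], List[float]], float]

class MetricCollection:
    """Sketch of the documented constructor contract (illustrative only)."""

    def __init__(
        self,
        metrics: Optional[Dict[str, MetricFunc]] = None,
        problem: Optional[str] = None,
    ) -> None:
        if metrics is None and problem is None:
            # Documented behaviour: ValueError if neither argument is given
            raise ValueError("One of `metrics` or `problem` must be specified.")
        self.metrics = metrics
        self.problem = problem
        self.results: Dict[str, float] = {}

# Supplying an explicit metrics mapping is sufficient on its own
mae = lambda t, p: sum(abs(a - b) for a, b in zip(t, p)) / len(t)
mc = MetricCollection(metrics={"MAE": mae})
```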
Static methods
create_from_model
def create_from_model(model: Union[_BaseModel, DistributedModelProtocol], metrics: Optional[MutableMapping[str, Metric]] = None) -> MetricCollection:
Creates a MetricCollection object from a _BaseModel.
Arguments
model
: A _BaseModel.
metrics
: The metrics dictionary. Defaults to None.
Returns
An instance of MetricCollection.
Methods
compute
def compute(self, test_target: np.ndarray, test_preds: np.ndarray, metric_to_optimise: str = 'F1', threshold: Optional[float] = None) -> dict:
Compute list of metrics and save results in self.results.
note
Thresholds do not apply to multiclass problems.
Arguments
test_target
: A list of targets.
test_preds
: A list of predictions.
metric_to_optimise
: The metric to optimise when computing the optimal threshold. This has no effect if no metrics have a threshold applied. Must be present in self.metrics.
threshold
: If provided, this threshold is used instead of optimising it via metric_to_optimise.
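The threshold optimisation that compute performs for binary classification can be sketched as a simple grid search (the F1 implementation, the 0.01 grid step, and the function names below are illustrative assumptions, not bitfount's internals):

```python
from typing import List

def f1(y_true: List[int], y_pred: List[int]) -> float:
    # Binary F1 score from true positives, false positives, false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def optimal_threshold(y_true: List[int], probs: List[float]) -> float:
    # Try a grid of candidate thresholds and keep the one maximising F1
    best_t, best_score = 0.5, -1.0
    for i in range(1, 100):
        t = i / 100
        preds = [1 if p >= t else 0 for p in probs]
        score = f1(y_true, preds)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

t = optimal_threshold([0, 0, 1, 1], [0.1, 0.4, 0.6, 0.9])
```

When a caller passes an explicit threshold, this search is skipped and the given value is used directly, which matches the documented behaviour of the threshold argument.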
MetricsProblem
class MetricsProblem(value, names=None, *, module=None, qualname=None, type=None, start=1):
Simple wrapper for different problem types for MetricCollection.
Variables
- static BINARY_CLASSIFICATION
- static MULTICLASS_CLASSIFICATION
- static MULTILABEL_CLASSIFICATION
- static REGRESSION
- static SEGMENTATION
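The members listed above can be mirrored with a standard-library Enum; this sketch only assumes the member names (the bitfount class may assign its values differently):

```python
from enum import Enum, auto

class MetricsProblem(Enum):
    # Problem types used to select default metrics for a MetricCollection
    BINARY_CLASSIFICATION = auto()
    MULTICLASS_CLASSIFICATION = auto()
    MULTILABEL_CLASSIFICATION = auto()
    REGRESSION = auto()
    SEGMENTATION = auto()

print(MetricsProblem.REGRESSION.name)  # → REGRESSION
```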