onnx

ONNX backend for Bitfount.

Module

Submodules

Classes

ONNXModel

class ONNXModel(
    *,
    datastructure: DataStructure,
    schema: BitfountSchema,
    batch_size: int = 32,
    session_config: Optional[ONNXSessionConfig] = None,
    **kwargs: Any,
):

ONNX inference model using onnxruntime.

This implementation is inference-only. It creates a Bitfount BitfountDataBunch and test dataloader for the provided datasource, converts any backend tensors to numpy arrays, and feeds them to an ONNX Runtime session.

The entrypoint for execution is predict, which returns a PredictReturnType. Training and evaluation are not supported for this backend.
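The backend-tensor-to-numpy conversion described above can be sketched as follows. This is a minimal illustration only: `to_numpy` is a hypothetical helper, not part of the Bitfount API, and it duck-types on the torch-style `.detach().cpu().numpy()` chain so no framework import is required.

```python
import numpy as np


def to_numpy(value):
    """Convert a backend tensor or array-like to an np.ndarray.

    Hypothetical helper illustrating the conversion step before feeding
    inputs to an ONNX Runtime session. Arrays pass through unchanged;
    objects exposing a torch-like .detach() chain are unwrapped; anything
    else goes through np.asarray.
    """
    if isinstance(value, np.ndarray):
        return value
    if hasattr(value, "detach"):  # torch-like tensor
        return value.detach().cpu().numpy()
    return np.asarray(value)
```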

Initialise an ONNXModel.

Arguments

  • onnx_path: Path to the .onnx model file.
  • batch_size: Batch size to use for test dataloader. Defaults to 32.
  • session_config: Optional onnxruntime session configuration.
  • input_names: Optional explicit ONNX input name ordering.
  • output_names: Optional explicit ONNX output name ordering.
  • input_build_fn: Optional callable to map a dataloader batch into a {input_name: np.ndarray} feed dictionary for onnxruntime. If omitted, a default heuristic is used.
  • datastructure: Bitfount DataStructure describing inputs/targets.
  • schema: Bitfount BitfountSchema associated with the datasource.
  • **kwargs: Forwarded to _BaseModel base class.
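A default input-building heuristic of the kind described for input_build_fn might simply pair the session's input names with the batch's arrays in order. The sketch below is an assumption about what such a heuristic could look like, not the actual Bitfount implementation; a custom input_build_fn would replace it when the mapping is less direct.

```python
import numpy as np


def default_build_inputs(input_names, batch):
    """Hypothetical default heuristic: zip ONNX input names with batch arrays.

    `batch` is assumed to be a sequence of array-likes in input order.
    Raises if the batch does not match the model's expected input count.
    """
    arrays = [np.asarray(item) for item in batch]
    if len(arrays) != len(input_names):
        raise ValueError(
            f"Batch has {len(arrays)} arrays but model expects "
            f"{len(input_names)} inputs"
        )
    return dict(zip(input_names, arrays))
```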

Variables

  • initialised : bool - True if the model has been initialised.

Methods


deserialize

def deserialize(self, content: Union[str, os.PathLike, bytes], **_: Any) -> None:

Deserialise ONNX model from a path or bytes content.

Arguments

  • content: Path to the ONNX file or a bytes object containing the model.
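The Union[str, os.PathLike, bytes] contract can be illustrated with a small normalisation helper. This is hypothetical, for illustration only; it shows how both accepted content forms reduce to raw model bytes.

```python
import os
from typing import Union


def read_model_bytes(content: Union[str, os.PathLike, bytes]) -> bytes:
    """Normalise the accepted content types to raw model bytes.

    Bytes pass through unchanged; str or PathLike values are treated as a
    filesystem path and read in binary mode.
    """
    if isinstance(content, bytes):
        return content
    with open(os.fspath(content), "rb") as f:
        return f.read()
```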

initialise_model

def initialise_model(
    self,
    data: Optional[BaseSource] = None,
    data_splitter: Optional[DatasetSplitter] = None,
    context: Optional[TaskContext] = None,
) -> None:

Initialise ORT session and prepare dataloaders for inference.

Arguments

  • data: Optional datasource for inference. If provided, a test dataloader is created using an inference-only splitter.
  • data_splitter: Optional splitter to use instead of _InferenceSplitter.
  • context: Optional execution context (unused).

predict

def predict(self, data: Optional[BaseSource] = None, **_: Any) -> PredictReturnType:

Run inference and return predictions.

Arguments

  • data: Optional datasource to run inference on. If provided, the model may be (re-)initialised to use this datasource.

Returns

  • PredictReturnType: Predictions and optional data keys. Data keys must be present if the datasource is file-based.

Raises

  • ValueError: If no test dataloader is available.
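Internally, a predict loop of this shape accumulates per-batch outputs and concatenates them along the batch axis. The sketch below is an assumption about the overall flow, with `run_batch` standing in for the onnxruntime session call; the real method also handles data keys and PredictReturnType construction. Note it raises ValueError when there are no batches, mirroring the documented error for a missing test dataloader.

```python
import numpy as np


def collect_predictions(run_batch, batches):
    """Run an inference callable over batches and stack the outputs.

    `run_batch` is any callable returning a per-batch prediction array;
    the names here are illustrative only.
    """
    outputs = [np.asarray(run_batch(batch)) for batch in batches]
    if not outputs:
        raise ValueError("No test dataloader batches available for prediction")
    return np.concatenate(outputs, axis=0)
```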