Referencing a model
Bitfount currently supports referencing models from the following providers:
- Bitfount-hosted models
- Hugging Face models
- TIMM models
Bitfount-hosted models
Models hosted in Bitfount are referenced inside an algorithm that requires a model (such as bitfount.ModelInference) via the model.bitfount_model block:
- model_ref: the identifier of the model in the Bitfount Hub (excluding the username)
- model_version: the integer version of the model to pin to. If omitted, the latest version is used. Pinning the version is recommended to avoid unexpected changes to the model as well as potential access issues.
- username: the username of the model's owner (its namespace on the Hub)
Hyperparameters for the model can be set separately within the model block:
- hyperparameters: arguments to pass to the model constructor. For instance, batch_size is a commonly set hyperparameter. The specific hyperparameters accepted differ between models; if you have access to view the model's code, you can check its constructor arguments to determine which hyperparameters it accepts.
task:
  algorithm:
    - name: bitfount.ModelInference
      model:
        bitfount_model:
          model_ref: CatDogImageClassifier
          model_version: 3
          username: research-user
        hyperparameters:
          batch_size: 8
Bitfount models can be used for inference, evaluation, and fine-tuning tasks, and can be toggled between public and private. Models from the Hugging Face model hub, however, must be public in order to be used in Bitfount. More information about uploading your models to the Bitfount Hub can be found here.
Hugging Face models
Hugging Face models are invoked via dedicated Bitfount algorithms that are specific to the model type. You pass a model_id (e.g., google/vit-base-patch16-224) in the algorithm arguments.
Do not use the model block for Hugging Face models. Confusingly, Hugging Face refers to these model types as tasks, which are unrelated to Bitfount tasks.
Available task types
The task types defined by Hugging Face and supported by Bitfount can be found below with their corresponding Bitfount algorithm:
- Image classification: bitfount.HuggingFaceImageClassificationInference
- Image segmentation: bitfount.HuggingFaceImageSegmentationInference
- Text classification: bitfount.HuggingFaceTextClassificationInference
- Text generation: bitfount.HuggingFaceTextGenerationInference
Hugging Face models are currently only supported for inference tasks within Bitfount.
Make sure to choose a model that is compatible with the algorithm you are using. The links above will take you to the Hugging Face model hub filtered for that specific task type.
Image segmentation example
task:
  algorithm:
    - name: bitfount.HuggingFaceImageSegmentationInference
      arguments:
        model_id: CIDAS/clipseg-rd64-refined
        dataframe_output: true
        batch_size: 1
  data_structure:
    select:
      include:
        - image_path
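Image classification example
The same pattern applies to the other Hugging Face algorithms listed above. As a minimal sketch, an image classification task using the google/vit-base-patch16-224 model mentioned earlier could look like the following; the batch_size argument and the image_path column are carried over from the segmentation example as assumptions and may need adjusting for your model and dataset.
task:
  algorithm:
    - name: bitfount.HuggingFaceImageClassificationInference
      arguments:
        model_id: google/vit-base-patch16-224
        # batch_size is assumed to behave as in the segmentation example above
        batch_size: 1
  data_structure:
    select:
      include:
        # image_path is a placeholder; use the image column from your own dataset
        - image_path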
TIMM models
TIMM (PyTorch Image Models) is a popular library that provides a large collection of state-of-the-art pretrained image models. Originally developed independently by Ross Wightman, it is now maintained under the Hugging Face umbrella.
TIMM models are supported by Bitfount for both inference and fine-tuning tasks via the bitfount.TIMMInference and bitfount.TIMMFineTuning algorithms respectively. The model is specified in the same way as for Hugging Face models, via the model_id argument.
TIMM fine-tuning example
Hyperparameters for the model can be set separately within the args block. The full list of hyperparameters can be found here. As mentioned in the timm documentation, the variety of training args is large and not all combinations of options (or even options) have been fully tested.
task:
  protocol:
    name: bitfount.ResultsOnly
  algorithm:
    - name: bitfount.TIMMFineTuning
      arguments:
        model_id: bitfount/RETFound_MAE
        labels:
          - "0"
          - "1"
          - "2"
          - "3"
          - "4"
        args:
          epochs: 1
          batch_size: 32
          num_classes: 5
TIMM inference example
task:
  protocol:
    name: bitfount.InferenceAndCSVReport
  algorithm:
    - name: bitfount.TIMMInference
      arguments:
        model_id: bitfount/RETFound_MAE
        num_classes: 5
    - name: bitfount.CSVReportAlgorithm