
Using Custom Models

Welcome to the Bitfount federated learning tutorials! In this sequence of tutorials, you will learn how federated learning works on the Bitfount platform.

In this tutorial you will learn how to train a custom model by extending one of the base models in the Bitfount framework. We will use the Pod you set up in the "Running a Pod" tutorial, so double-check that it is online in the Bitfount Hub before executing this notebook. If it is offline, you can bring it back online by repeating the "Running a Pod" tutorial.


!pip install bitfount


In this tutorial we will first show you how to test your custom model using local training on your machine, and then move on to training on a Pod. Note that to run a custom model on a Pod, you must either have Super Modeller permissions or be a General Modeller with permission to run specified models on that Pod. For the purposes of this tutorial you already have the correct permissions, because Pod owners have Super Modeller permissions on their own Pods by default.

import logging  # isort: split

from pathlib import Path

import nest_asyncio
import torch
from torch import nn as nn
from torch.nn import functional as F
from torchmetrics.functional import accuracy

from bitfount import (
    BitfountModelReference,
    BitfountSchema,
    CSVSource,
    DataStructure,
    ModelTrainingAndEvaluation,
    PyTorchBitfountModel,
    PyTorchClassifierMixIn,
    ResultsOnly,
    get_pod_schema,
    setup_loggers,
)

nest_asyncio.apply()  # Needed because Jupyter also has an asyncio loop

Let's set up the loggers, which allow you to monitor the progress of your executed commands and raise errors if something goes wrong.

loggers = setup_loggers([logging.getLogger("bitfount")])

Creating a custom model

For this tutorial we will create a custom model by extending and overriding the built-in BitfountModel class (specifically, we will use the PyTorchBitfountModel class). Details can be found in the documentation for the bitfount.backends.pytorch.models.bitfount_model module. Note that Bitfount does not vet custom models, nor are custom models private by default: custom models saved to the Hub are retrievable by any user who knows the URL for the custom model.

The PyTorchBitfountModel uses the PyTorch Lightning library to provide high-level implementation options for a model in the PyTorch framework. This enables you to only implement the methods you need to dictate how the model training should be performed.

For our custom model we need to implement the following methods:

  • __init__(): how to set up the model
  • configure_optimizers(): how optimizers should be configured in the model
  • forward(): how to perform a forward pass in the model
  • training_step(): what one training step in the model looks like
  • validation_step(): what one validation step in the model looks like
  • test_step(): what one test step in the model looks like

Now we implement the custom model; feel free to try out your own model here:

# Update the class name for your custom model
class MyCustomModel(PyTorchClassifierMixIn, PyTorchBitfountModel):
    # A custom model built using PyTorch Lightning.

    def __init__(self, **kwargs):
        # Initializes the model and sets hyperparameters.
        # We need to call the parent __init__ first to ensure the base model is set up.
        # Then we can set our custom model parameters.
        super().__init__(**kwargs)
        self.learning_rate = 0.001

    def create_model(self):
        self.input_size = self.datastructure.input_size
        return nn.Sequential(
            nn.Linear(self.input_size, 500),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(500, self.n_classes),
        )

    def forward(self, x):
        # Defines the operations we want to use for prediction.
        x, sup = x
        x = self._model(x.float())
        return x

    def training_step(self, batch, batch_idx):
        # Computes and returns the training loss for a batch of data.
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        return loss

    def validation_step(self, batch, batch_idx):
        # Operates on a single batch of data from the validation set.
        x, y = batch
        preds = self(x)
        loss = F.cross_entropy(preds, y)
        preds = F.softmax(preds, dim=1)
        acc = accuracy(preds, y)
        # We can log some useful stats so we can see progress
        self.log("val_loss", loss, prog_bar=True)
        self.log("val_acc", acc, prog_bar=True)
        return {
            "val_loss": loss,
            "val_acc": acc,
        }

    def test_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x)
        preds = F.softmax(preds, dim=1)
        # Output targets and predictions for later
        return {"predictions": preds, "targets": y}

    def configure_optimizers(self):
        # Configure the optimizer we wish to use whilst training.
        optimizer = torch.optim.AdamW(self.parameters(), lr=self.learning_rate)
        return optimizer

Training locally with a custom model

With the above model we can now change our config to use this custom model. The configuration is for the most part the same as before.

First, let's import and test the model on a local dataset.

datasource = CSVSource(
    path="",
    ignore_cols=["fnlwgt"],
)
schema = BitfountSchema(
    datasource,
    table_name="census-income-demo-dataset",
    force_stypes={
        "census-income-demo-dataset": {
            "categorical": [
                "TARGET",
                "workclass",
                "marital-status",
                "occupation",
                "relationship",
                "race",
                "native-country",
                "gender",
                "education",
            ],
        },
    },
)
datastructure = DataStructure(target="TARGET", table="census-income-demo-dataset")
model = MyCustomModel(datastructure=datastructure, schema=schema, epochs=2)
local_results = model.fit(data=datasource)

Training on a Pod with your own custom model

If you have your model defined locally, you can train it on a remote Pod. The example below uses the same hosted Pod as in "Running a Pod", so make sure this is online before running. You can use any Pod to which you have access by providing the Pod identifier as an argument to the .fit method.

NOTE: Your model will be uploaded to the Bitfount Hub during this process, where you can view your uploaded models.

pod_identifier = "census-income-demo-dataset"
schema = get_pod_schema(pod_identifier)
results = model.fit(
    pod_identifiers=[pod_identifier],
    model_out=Path(""),
    extra_imports=["from torchmetrics.functional import accuracy"],
)
pretrained_model_file = Path("")
model.serialize(pretrained_model_file)

If you want to train on a Pod with a custom model you have developed previously and which is available on the Hub, you can reference it using BitfountModelReference. Within BitfountModelReference you can specify username and model_ref. In this case the username is our own, so we don't need to specify it. If you wanted to use a model uploaded by someone else, you would need to specify their username and the name of their model.

NOTE: To run a custom model on a Pod you do not own, the Pod owner must have granted you the Super Modeller role. To learn more about our Pod policies, see the Bitfount documentation.

reference_model = BitfountModelReference(
    # username=<INSERT USERNAME>,  # NOTE: if the custom model was developed by another user you must provide their username
    model_ref="MyCustomModel",
    datastructure=datastructure,
    schema=schema,
    # model_version=1,  # NOTE: use a specific version of the model (default is latest uploaded)
    hyperparameters={"epochs": 2},
)
protocol = ResultsOnly(algorithm=ModelTrainingAndEvaluation(model=reference_model))
protocol_results = pod_identifiers=[pod_identifier])

model_ref is either the name of an existing custom model (one that has already been uploaded to the Hub) or, for a new custom model, the path to the model file. The code handles uploading the model to the Hub the first time it is used; after that you can refer to it by name.
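To make the name-or-path distinction concrete, here is a small hypothetical helper (not part of the Bitfount API) that mirrors how the two forms of model_ref are interpreted:

```python
from pathlib import Path


def describe_model_ref(model_ref):
    # Hypothetical helper, NOT a Bitfount function: it only illustrates
    # the rule above. A Path points at a local model file that will be
    # uploaded on first use; a plain string names a model that already
    # exists on the Hub.
    if isinstance(model_ref, Path):
        return f"upload local file:"
    return f"reference existing Hub model: {model_ref}"
```

For example, `describe_model_ref(Path(""))` describes an upload, while `describe_model_ref("MyCustomModel")` describes a by-name reference.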

Uploading private models

Custom models are uploaded with public access by default. That means that if you upload your model, another user of the Bitfount platform could reference it using BitfountModelReference by specifying your username, the model name, and the version number. If you want to control the usage of a custom model you upload, you must set private=True:

reference_model = BitfountModelReference(
    model_ref=Path(""),
    datastructure=datastructure,
    schema=schema,
    hyperparameters={"epochs": 2},
    private=True,
)
protocol = ResultsOnly(algorithm=ModelTrainingAndEvaluation(model=reference_model))
protocol_results = pod_identifiers=[pod_identifier])

Uploading a pretrained model weights file

Perhaps you would like to enable a collaborator to run inference tasks on their own data with a custom model you have trained. Bitfount supports this by allowing you to upload your custom model along with a pretrained weights file. You can only upload pretrained model files for models that you own. Once a pretrained file is associated with a model, it will by default be applied to the model each time the model is referenced. If a different pretrained file is provided via .run, that file takes precedence.
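The precedence rule just described can be sketched as a one-line decision (resolve_pretrained_file is a hypothetical name for illustration, not a Bitfount function):

```python
def resolve_pretrained_file(run_pretrained_file, uploaded_pretrained_file):
    # Hypothetical illustration of the precedence described above:
    # a pretrained file passed explicitly to .run overrides the one
    # uploaded alongside the model; otherwise the uploaded file is used.
    if run_pretrained_file is not None:
        return run_pretrained_file
    return uploaded_pretrained_file
```

So passing nothing to .run falls back to the owner's uploaded weights, while an explicit file wins.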

uploaded_reference_model = BitfountModelReference(
    model_ref="MyCustomModel",
    datastructure=datastructure,
    schema=schema,
    model_version=1,  # NOTE: model_version must be populated in order to upload a pretrained model file
    hyperparameters={"epochs": 2},
)
uploaded_reference_model.send_weights(pretrained_model_file)
protocol = ResultsOnly(
    algorithm=ModelTrainingAndEvaluation(model=uploaded_reference_model)
)
# NOTE: if you want to run the model with a pretrained file other than the one uploaded by the owner,
#       the pretrained_file parameter must be provided
protocol_results =
    pod_identifiers=[pod_identifier],
    # pretrained_file=different_pretrained_model_file,
)

You've now successfully learned how to run a custom model!

Contact our support team if you have any questions.