FAQs
What does it mean when the “Run task” button is greyed out?
The Run task button is disabled when the selected dataset cannot accept a new task. This can occur for the following reasons:
- The dataset is not connected to the EHR: this project requires an active EHR connection for every dataset. If the site has not logged into their EHR through the Bitfount app, the task cannot be started.
  → Please contact the site and ask them to log in to their EHR instance within Bitfount.
- The dataset is currently at capacity: the dataset is already processing another task and cannot accept a second one.
  → You will need to wait until the current task completes before trying again.
- The dataset is offline: the dataset is not online and therefore cannot receive a task.
  → Contact the dataset owner and request that they bring the dataset back online.
The dataset I want to run a task on is marked in red as offline. What should I do?
If a dataset is marked offline, it means Bitfount cannot connect to the dataset at the site. You will need to contact the site that owns the dataset. Common reasons for an offline status include:
- The dataset owner has logged out of the Bitfount application.
- The Bitfount laptop is powered off, or the Bitfount OS user is logged out.
- The imaging drive is unmounted locally at the site and can no longer be accessed.
  → The site's IT Support team will need to remount the drive.
- The Bitfount laptop has lost the read permissions required to access the imaging drive.
  → The site's IT Support team will need to restore these credentials (a quick check for both drive issues is sketched after this list).
- The site's internet connection is unstable, preventing Bitfount from retrieving the model from the Bitfount Hub and preparing it for local analysis.
  → The site should check and restore their internet connection.
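If the drive-related causes above need to be ruled out, the site's IT team could adapt a small check like the sketch below. This is a hedged, standard-library Python example; the imaging drive path is a placeholder and must be replaced with the path configured at the site.

```python
from pathlib import Path

# Placeholder: replace with the imaging drive path configured at your site.
IMAGING_DRIVE = Path(r"Z:\imaging")

if not IMAGING_DRIVE.exists():
    print(f"{IMAGING_DRIVE} is not mounted or is not visible to this OS user.")
else:
    try:
        next(IMAGING_DRIVE.iterdir(), None)  # attempt a directory listing
        print(f"{IMAGING_DRIVE} is mounted and readable.")
    except PermissionError:
        print(f"{IMAGING_DRIVE} is mounted but this OS user does not have read access.")
```

Run it on the Bitfount laptop as the same OS user that runs Bitfount, since that is the account whose permissions matter.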
Can I run multiple task runs at a time?
At present, Bitfount does not support running multiple task runs in parallel from a single modeller instance. This limitation applies whether you are connecting to remote datasets or running a local dataset on your own machine. In both cases, one modeller can initiate a task for only one dataset at a time, so task runs must be started sequentially.
The same limitation applies on the dataset side: a single remote machine hosting Bitfount can process only one dataset task run at a time, even if multiple datasets are configured on that machine.
If you need to run multiple task runs simultaneously across different sites, the current workaround is to use separate Bitfount instances on separate devices. Each device can initiate a task toward a different remote dataset, allowing those runs to proceed in parallel.
In the accompanying diagram, the dotted line indicates that multiple task runs are not supported concurrently.
It’s been a long time since I kicked off the task and it is still going - when will it finish?
Task run times in Bitfount can vary widely depending on several factors, including:
- The dataset owner’s local network speed
- The dataset owner’s internet connection quality
- The hardware specifications of the machine hosting the dataset
- The size and number of files being processed
- The complexity of the task or analysis being performed
Because these conditions vary so widely, Bitfount cannot predict the exact duration of a task. Once the task completes, the Collaborator who initiated it will automatically receive an email notification confirming that the task has finished and is ready for review.
How will I know when the task is complete?
The Collaborator who initiates a task run will be notified by an email alert when the task has either completed or aborted.
How do I pause or quit a task run?
Bitfount does not currently support pausing an active task. If a task needs to be stopped, it must be terminated by the dataset owner, as Bitfount’s execution flow is designed so that control originates from the machine hosting the data.
To end a task run, the dataset owner should use the Windows Task Manager on the machine where the dataset is connected. From there, they must manually select “End task” for both the Bitfount application and the Bitfount Orchestrator processes. Ending both ensures that the environment restarts cleanly the next time a task is run.
Once terminated, the task initiator will see a notification indicating that the task has aborted. There is no risk to data or system stability when ending these processes.
If the dataset owner is unavailable or the task does not terminate as expected, please contact support@bitfount.com for assistance.
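For dataset owners who prefer a scripted alternative to the Task Manager steps above, the sketch below uses the third-party psutil library to end the same processes. The process names are placeholders and may not match your installation; confirm the exact names shown in Task Manager before adapting this.

```python
import psutil  # third-party: pip install psutil

# Placeholder process names: confirm the exact names shown in Task Manager on your
# machine, as they may differ between Bitfount versions and installations.
TARGET_NAMES = {"Bitfount.exe", "Bitfount Orchestrator.exe"}

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] in TARGET_NAMES:
        print(f"Ending {proc.info['name']} (pid {proc.pid})")
        proc.terminate()  # equivalent to selecting "End task"
```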
How do I know that the dataset owner has applied the correct filters to the dataset?
Bitfount is designed to give dataset owners full control over their data and to support strong Information Governance practices. As part of this, only the dataset owner can view or modify the dataset-level filters applied at connection time. These filters are not visible to collaborators, and any changes made by the dataset owner will apply only to future task runs, not tasks that have already been completed.
If you believe the current filters need to be updated—for example, to include additional data or adjust the criteria being queried—you will need to contact the dataset owner. They can reconnect the dataset using the same data source but with updated filters applied. This ensures that any modifications are governed and explicitly approved by the data custodian.
Filters that may be applied include:
- Modality - derived from the image headers
- Date of birth - derived from the image headers
- Date created - derived from the file metadata
- Date modified - derived from the file metadata
- B-scan Min and Max - derived from the image headers
- File size Min and Max - derived from the file metadata
- Filter files missing required fields for calculations - this filter skips files whose image headers lack the fields needed to derive a calculated value.
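For context on where these filter fields come from, the sketch below is purely illustrative and is not Bitfount's implementation. It reads the header-derived values with the third-party pydicom library and the metadata-derived values from the filesystem; the file path is a placeholder, and using NumberOfFrames as the B-scan count is an assumption.

```python
from pathlib import Path

import pydicom  # third-party: pip install pydicom

path = Path("example.dcm")  # placeholder DICOM file
ds = pydicom.dcmread(path, stop_before_pixels=True)

# Fields derived from the image headers
header_fields = {
    "Modality": ds.get("Modality"),
    "Date of birth": ds.get("PatientBirthDate"),
    "B-scan count": ds.get("NumberOfFrames"),  # assumption: frame count as B-scan count
}

# Fields derived from the file metadata
stat = path.stat()
metadata_fields = {
    "File size (bytes)": stat.st_size,
    "Date modified": stat.st_mtime,
}

print(header_fields)
print(metadata_fields)
```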
As a dataset owner, I see that a dataset I previously connected with my Bitfount account is offline and listed as remote. How do I reconnect it?
Bitfount marks a dataset as remote/offline when it cannot find the local configuration file—called the pod-config file—that stores the details required to connect to your dataset.
The pod-config file is created on the specific machine and OS user account that originally connected the dataset. It is not shared across devices or operating system accounts. Bitfount uses this local file by design to ensure that your data remains on the custodian’s system, and that no connection or configuration information leaves your secure environment. This is part of Bitfount’s privacy-preserving model, which ensures that datasets are never transferred or centrally stored.
Your dataset may appear as remote or offline if:
- You are logged into a different OS user account on the same machine
- You are logged into Bitfount on another computer
- The original pod-config file is not accessible, was deleted, or has been moved
Because these environments do not have a copy of the pod-config file, they cannot establish the secure connection required, and Bitfount marks the dataset as remote.
To reconnect the dataset:
- Log in to Bitfount on the same machine where the dataset was originally connected
- Log in as the same OS user who performed the initial connection
- Then reopen Bitfount and navigate to your dataset to bring it back online
If you no longer have access to the original machine or OS account, you will need to reconnect the dataset as new by repeating the initial dataset connection steps.
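A quick way to confirm you are on the same machine and OS user account as the original connection is the small standard-library check below; compare its output with the machine name and account used when the dataset was first connected.

```python
import getpass
import socket

# Print the current machine name and OS user so they can be compared with the
# machine/account that originally connected the dataset.
print(f"Machine name: {socket.gethostname()}")
print(f"OS user:      {getpass.getuser()}")
```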
Can changes be made to the parameters of the demo projects available in Bitfount?
No. Demo projects are fixed and cannot be customised.
If your use case requires different settings or functionality, we’d be happy to discuss options. Please contact us at support@bitfount.com.
How should my data be structured for model fine-tuning projects?
Your dataset must follow a standard machine-learning folder structure with three top-level folders:
- train/
- validation/
- test/
Inside each folder, images must be placed into separate subfolders representing the labels you want the model to learn (e.g., train/DRUSEN/…, train/NORMAL/…).
Each subfolder should contain the relevant images.
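As a quick sanity check before connecting, a minimal sketch like the one below (the dataset_root path is a placeholder) can confirm the train/validation/test layout and list the label subfolders it finds.

```python
from pathlib import Path

# Placeholder: point this at the top-level folder of your dataset.
dataset_root = Path("my_dataset")

for split in ("train", "validation", "test"):
    split_dir = dataset_root / split
    if not split_dir.is_dir():
        print(f"Missing top-level folder: {split}/")
        continue
    label_dirs = [d for d in split_dir.iterdir() if d.is_dir()]
    if not label_dirs:
        print(f"{split}/ has no label subfolders (e.g. {split}/DRUSEN/)")
    for label_dir in sorted(label_dirs):
        n_files = sum(1 for f in label_dir.iterdir() if f.is_file())
        print(f"{split}/{label_dir.name}: {n_files} files")
```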
When connecting a dataset, what variables are mandatory?
The following fields must be provided when creating a dataset connection:
- Dataset name
- DICOM folder location — the file path to the directory containing your imaging data
- Use folder names and structure for training tasks — this option must be enabled so Bitfount can correctly infer labels and structure
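Before entering these values, it can help to confirm that the folder path resolves on the machine that will host the dataset. The sketch below is a hedged example; the path is a placeholder, and not every imaging export uses a .dcm extension, so treat the counts as indicative.

```python
from pathlib import Path

# Placeholder: the folder you intend to enter as the "DICOM folder location".
dicom_folder = Path(r"D:\imaging\exports")

if not dicom_folder.is_dir():
    print("The folder does not exist or is not accessible from this machine.")
else:
    all_files = [f for f in dicom_folder.rglob("*") if f.is_file()]
    dcm_files = [f for f in all_files if f.suffix.lower() == ".dcm"]
    print(f"{len(all_files)} files in total, {len(dcm_files)} with a .dcm extension.")
```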
I want to try your demo projects but I'm unsure about connecting my own data first. Where can I find data to trial a project?
All demo projects include a small, built-in sample dataset so you can explore Bitfount without connecting your own data.
For peace of mind, remember that no imaging data ever leaves your institution, and all analysis occurs locally on the device connected to Bitfount.
When setting up a fine-tuning task run, what are the configurable task parameters for?
These parameters allow you to control how the model trains. A quick overview:
- Learning rate: how quickly the model updates during training. Lower values give slower but more stable training; higher values are faster but riskier.
- Epochs: the number of full passes through the dataset. Increasing epochs can improve learning but may cause overfitting.
- Labels: the set of classes the model should learn from your folder structure.
- Batch size: how many images are processed at once. Larger batches improve training speed but require more memory.
- Image column: the dataset column containing image references (for dataset-driven training workflows).
- Target column: the column containing labels. If your folders follow a structure like test/class_1/image_1.jpg, selecting BITFOUNT_INFERRED_LABEL will automatically extract labels from folder names.
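To make these settings concrete, the sketch below shows an illustrative set of values (the dictionary keys are descriptive stand-ins, not Bitfount's actual parameter names) and how a label is inferred from the folder name when BITFOUNT_INFERRED_LABEL is selected.

```python
from pathlib import Path

# Illustrative values only; choose settings appropriate to your data and hardware.
task_parameters = {
    "learning_rate": 1e-4,                        # lower = more stable, higher = riskier
    "epochs": 10,                                 # more passes, but watch for overfitting
    "batch_size": 16,                             # larger = faster, needs more memory
    "labels": ["CNV", "DME", "DRUSEN", "NORMAL"],
}

# How a label is inferred from a structure like test/class_1/image_1.jpg:
image_path = Path("test/class_1/image_1.jpg")
inferred_label = image_path.parent.name  # -> "class_1"
print(task_parameters["labels"], inferred_label)
```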
What is the suggested Bitfount workflow for using the fine-tuning and classification demo projects?
The best workflow depends on whether you want to try classification, fine-tuning, or a full end-to-end pipeline:
1. If you want to try classification only:
Start with:
- Classify retinal images using RETFound fine tuned on Kermany
2. If you want to try fine-tuning a retinal model:
Choose a demo based on your imaging modality:
- Colour fundus → Fine tune the RETFound retinal colour fundus foundation model
- OCT → Fine tune the RETFound retinal OCT foundation model
3. If you want the end-to-end workflow (fine-tune → classify):
- Fine-tune using one of the RETFound fine-tuning demos
- Then classify using:
  - Classify retinal images using RETFound and a local checkpoint file
Your fine-tuning task must generate a checkpoint named model_best to be used in the classification project.
Available demo projects:
- Fine tune the RETFound retinal colour fundus foundation model
  - Fine-tune Moorfields Eye Hospital's RETFound model on colour fundus images.
- Fine tune the RETFound retinal OCT foundation model
  - Fine-tune the RETFound model on OCT images.
- Classify retinal images using RETFound fine tuned on Kermany
  - Classify OCT images into CNV, DME, DRUSEN, or NORMAL.
- Classify retinal images using RETFound and a local checkpoint file
  - Classify fundus or OCT images using your own fine-tuned model checkpoint (model_best).