differential

Contains classes for configuring differential privacy on models.

Classes

DPModellerConfig

class DPModellerConfig(
    epsilon: float,
    max_grad_norm: Union[float, List[float]] = 1.0,
    noise_multiplier: float = 0.4,
    alphas: List[float] = [
        1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0,
        2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0,
        3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0,
        4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0,
        5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0,
        6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0,
        7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0,
        8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0,
        9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, 10.0,
        10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7, 10.8, 10.9,
        12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
        26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
        40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
        54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
    ],
    delta: float = 1e-06,
    loss_reduction: Literal['mean', 'sum'] = 'mean',
    auto_fix: bool = True,
):

Modeller configuration options for Differential Privacy.

info

epsilon and delta are also set by the Pods involved in the task and take precedence over the values supplied here.

Arguments

  • epsilon: The maximum epsilon value to use.
  • max_grad_norm: The maximum gradient norm to use. Defaults to 1.0.
  • noise_multiplier: The noise multiplier to control how much noise to add. Defaults to 0.4.
  • alphas: The list of alpha orders to use for Rényi differential privacy accounting. Defaults to floats from 1.1 to 10.9 in increments of 0.1, followed by integers from 12 to 63. Note that none of the alphas should be equal to 1.
  • delta: The target delta to use. Defaults to 1e-6.
  • loss_reduction: The loss reduction to use. Available options are "mean" and "sum". Defaults to "mean".
  • auto_fix: Whether to automatically fix the model if it is not DP-compliant. Currently, this just converts all BatchNorm layers to GroupNorm, as illustrated in the sketch after this list. Defaults to True.
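
To make the auto_fix conversion concrete, the sketch below shows one way BatchNorm layers can be swapped for GroupNorm so that a model becomes compatible with the per-sample gradient computation DP-SGD requires. This is an illustrative helper only, not Bitfount's internal implementation; the function name and the choice of group count are assumptions.

```python
import math

import torch.nn as nn


def batchnorm_to_groupnorm(module: nn.Module, max_groups: int = 32) -> nn.Module:
    """Recursively replace BatchNorm layers with GroupNorm.

    Hypothetical helper illustrating the kind of fix auto_fix applies;
    not Bitfount's actual implementation.
    """
    for name, child in module.named_children():
        if isinstance(child, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            # BatchNorm mixes statistics across samples in a batch, which
            # breaks per-sample gradient clipping. GroupNorm normalises
            # within each sample, so it is DP-compatible.
            num_groups = math.gcd(max_groups, child.num_features)  # ensures divisibility
            setattr(
                module,
                name,
                nn.GroupNorm(num_groups, child.num_features, affine=child.affine),
            )
        else:
            batchnorm_to_groupnorm(child, max_groups)
    return module
```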

Raises

  • ValueError: If loss_reduction is not one of "mean" or "sum".

Variables

  • static alphas : List[float]
  • static auto_fix : bool
  • static delta : float
  • static epsilon : float
  • static fields_dict : ClassVar[Dict[str, Any]]
  • static loss_reduction : Literal['mean', 'sum']
  • static max_grad_norm : Union[float, List[float]]
  • static nested_fields : ClassVar[Dict[str, Dict[str, Any]]]
  • static noise_multiplier : float
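
A minimal usage sketch follows. The import path mirrors this module's location, and epsilon=3.0 is an arbitrary illustrative value, not a recommendation.

```python
from bitfount.federated.privacy.differential import DPModellerConfig

# Request a privacy budget of epsilon=3.0, keeping the defaults for
# delta (1e-6), gradient clipping norm (1.0) and noise multiplier (0.4).
dp_config = DPModellerConfig(epsilon=3.0)

# loss_reduction must be "mean" or "sum"; any other value raises ValueError.
dp_config = DPModellerConfig(epsilon=3.0, loss_reduction="sum")
```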

DPPodConfig

class DPPodConfig(epsilon: float, delta: float = 1e-06):

Pod configuration options for Differential Privacy.

Primarily used to cap and bound the options that may be set by the modeller.

Arguments

  • epsilon: The maximum epsilon value to use.
  • delta: The maximum target delta to use. Defaults to 1e-6.

Ancestors

  • bitfount.federated.privacy.differential._BaseDPConfig

Variables

  • static delta : float
  • static epsilon : float
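
A minimal sketch of the corresponding pod-side configuration (the values are illustrative):

```python
from bitfount.federated.privacy.differential import DPPodConfig

# Cap what a modeller may request against this pod. Per the note above,
# the pod's epsilon and delta take precedence over the modeller's values.
pod_dp = DPPodConfig(epsilon=5.0, delta=1e-5)
```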