hugging_face_text_generation

Hugging Face Text Generation Algorithm.

Classes

HuggingFaceTextGenerationInference

class HuggingFaceTextGenerationInference(
    model_id: str,
    text_column_name: Optional[str] = None,
    prompt_format: Optional[str] = None,
    max_length: int = 50,
    num_return_sequences: int = 1,
    seed: int = 42,
    min_new_tokens: int = 1,
    repetition_penalty: float = 1.0,
    num_beams: int = 1,
    early_stopping: bool = True,
    pad_token_id: Optional[int] = None,
    eos_token_id: Optional[int] = None,
    device: Optional[str] = None,
    torch_dtype: Literal['bfloat16', 'float16', 'float32', 'float64'] = 'float32',
):

Hugging Face Text Generation Algorithm.

Arguments

  • device: The device to use for the model. Defaults to None. On the worker side, this will be set to the value of the environment variable BITFOUNT_DEFAULT_TORCH_DEVICE if specified, otherwise to "cpu".
  • early_stopping: Whether to stop the generation as soon as there are num_beams complete candidates. Defaults to True.
  • eos_token_id: The id of the token to use as the last token for each sequence. If None (the default), the tokenizer's eos_token_id is used.
  • max_length: The maximum length of the sequence to be generated. Defaults to 50.
  • min_new_tokens: The minimum number of new tokens to add to the prompt. Defaults to 1.
  • model_id: The model id to use for text generation, i.e. the id of a pretrained model hosted in a model repo on huggingface.co. Accepts models with a causal language modeling head.
  • num_beams: Number of beams for beam search. 1 means no beam search. Defaults to 1.
  • num_return_sequences: The number of sequence candidates to return for each input. Defaults to 1.
  • pad_token_id: The id of the token to use as the padding token. If None (the default), the tokenizer's pad_token_id is used.
  • prompt_format: The format of the prompt as a string with a single {context} placeholder, which is where the pod's input will be inserted. For example: "You are a Language Model. This is the context: {context}. Please summarize it." This only applies if text_column_name is provided; it is not used for dynamic prompting. Defaults to None. See the instantiation sketch after this list.
  • repetition_penalty: The parameter for repetition penalty. 1.0 means no penalty. Defaults to 1.0.
  • seed: Sets the random seed of the algorithm. Defaults to 42 for reproducible behaviour.
  • text_column_name: The single column to query against. Should contain text for generation. If not provided, the algorithm must be used with a protocol which dynamically provides the text to be used for prompting.
  • torch_dtype: The torch dtype to use for the model. Defaults to "float32".
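
For illustration, a minimal sketch of how the algorithm might be instantiated. The import path follows the module path shown in the method signatures below; the model id "gpt2" is only a placeholder:

    # Assumed import path, taken from the module path in the return
    # annotations of modeller()/worker() below; the class may also be
    # re-exported from a higher-level package.
    from bitfount.federated.algorithms.hugging_face_algorithms.hugging_face_text_generation import (
        HuggingFaceTextGenerationInference,
    )

    # "gpt2" is a placeholder; any huggingface.co model with a causal
    # language modeling head should be accepted.
    algorithm = HuggingFaceTextGenerationInference(
        model_id="gpt2",
        text_column_name="text",
        prompt_format=(
            "You are a Language Model. This is the context: {context}. "
            "Please summarize it."
        ),
        max_length=100,
        seed=42,
    )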

Attributes

  • class_name: The name of the algorithm class.
  • device: The device to use for the model. Defaults to None. On the worker side, this will be set to the value of the environment variable BITFOUNT_DEFAULT_TORCH_DEVICE if specified, otherwise to "cpu".
  • early_stopping: Whether to stop the generation as soon as there are num_beams complete candidates. Defaults to True.
  • eos_token_id: The id of the token to use as the last token for each sequence. If None (the default), the tokenizer's eos_token_id is used.
  • fields_dict: A dictionary mapping all attributes that will be serialized in the class to their marshmallow field type. (e.g. fields_dict = {"class_name": fields.Str()}).
  • max_length: The maximum length of the sequence to be generated. Defaults to 50.
  • min_new_tokens: The minimum number of new tokens to add to the prompt. Defaults to 1.
  • model_id: The model id to use for text generation, i.e. the id of a pretrained model hosted in a model repo on huggingface.co. Accepts models with a causal language modeling head.
  • nested_fields: A dictionary mapping all nested attributes to a registry that contains class names mapped to the respective classes. (e.g. nested_fields = {"datastructure": datastructure.registry})
  • num_beams: Number of beams for beam search. 1 means no beam search. Defaults to 1.
  • num_return_sequences: The number of sequence candidates to return for each input. Defaults to 1.
  • pad_token_id: The id of the token to use as the padding token. If None (the default), the tokenizer's pad_token_id is used.
  • prompt_format: The format of the prompt as a string with a single {context} placeholder, which is where the pod's input will be inserted. For example: "You are a Language Model. This is the context: {context}. Please summarize it." This only applies if text_column_name is provided; it is not used for dynamic prompting. Defaults to None.
  • repetition_penalty: The parameter for repetition penalty. 1.0 means no penalty. Defaults to 1.0.
  • seed: Sets the random seed of the algorithm. Defaults to 42 for reproducible behaviour.
  • text_column_name: The single column to query against. Should contain text for generation. If not provided, the algorithm must be used with a protocol which dynamically provides the text to be used for prompting.
  • torch_dtype: The torch dtype to use for the model. Defaults to "float32".

Raises

  • ValueError: If prompt_format is provided without text_column_name.
  • ValueError: If prompt_format does not contain a single {context} placeholder.
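
To make the validation concrete, a hypothetical sketch of the two configurations that raise ValueError at construction time (the model id and column name are placeholders):

    # prompt_format supplied without text_column_name -> ValueError.
    HuggingFaceTextGenerationInference(
        model_id="gpt2",
        prompt_format="Summarize this: {context}",
    )

    # prompt_format missing its single {context} placeholder -> ValueError.
    HuggingFaceTextGenerationInference(
        model_id="gpt2",
        text_column_name="text",
        prompt_format="Summarize the input.",
    )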

Methods

create

def create(self, role: Union[str, Role], **kwargs: Any) -> Any:

Create an instance representing the role specified.
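
A hypothetical usage sketch, assuming the string forms "modeller" and "worker" are accepted for the role argument per the Union[str, Role] hint:

    # Build the side of the algorithm matching the requested role.
    modeller_side = algorithm.create(role="modeller")
    worker_side = algorithm.create(role="worker")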

modeller

def modeller(
    self,
    **kwargs: Any,
) -> bitfount.federated.algorithms.hugging_face_algorithms.hugging_face_text_generation._ModellerSide:

Returns the modeller side of the HuggingFaceTextGenerationInference algorithm.

worker

def worker(
    self,
    **kwargs: Any,
) -> bitfount.federated.algorithms.hugging_face_algorithms.hugging_face_text_generation._WorkerSide:

Returns the worker side of the HuggingFaceTextGenerationInference algorithm.
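
For completeness, a sketch of obtaining the two sides directly; the assumption that this mirrors create() with the corresponding role is ours, not stated by the source:

    # Modeller side: coordinates the task and receives the generated text.
    modeller_side = algorithm.modeller()

    # Worker side: runs the Hugging Face model against the pod's data.
    worker_side = algorithm.worker()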