nip.language_model_server.types.LmTrainingConfig

class nip.language_model_server.types.LmTrainingConfig(*, model_name: str, method: typing.Literal['dpo'], dpo_config: nip.language_model_server.types.LmDpoTrainingConfig = <factory>, training_lora_config: nip.language_model_server.types.LmLoraAdapterConfig | None = None, seed: int = 6198, per_device_train_batch_size: int = 2, model_already_lora_strategy: typing.Literal['reuse', 'stack'] = 'reuse', mixed_precision: typing.Literal['fp16', 'bf16', 'no'] = 'fp16', gradient_checkpointing: bool = True, use_liger_kernel: bool = True, logging_steps: int = 1)

Configuration for training a language model with the language model server.
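A minimal usage sketch (assuming the nip package is importable; the model name below is a hypothetical Hugging Face identifier, and only the required fields are set, so every other field keeps the default shown in the signature):

from nip.language_model_server.types import LmTrainingConfig

# Minimal configuration: only the required fields are set explicitly;
# dpo_config, seed, batch size and so on fall back to their defaults.
config = LmTrainingConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model identifier
    method="dpo",
)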

Attributes

__fields_set__

model_computed_fields

model_config

Configuration for the model; should be a dictionary conforming to pydantic.ConfigDict.

model_extra

Get extra fields set during validation.

model_fields

model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

model_name

The name of the model to be trained, typically a Hugging Face identifier.

method

The training method to use; currently only 'dpo' (Direct Preference Optimization) is supported.

dpo_config

Configuration specific to DPO training.

training_lora_config

Configuration for the LoRA adapter to use when training.

seed

The random seed to use for training, for reproducibility.

per_device_train_batch_size

The batch size per device (GPU) for training.

model_already_lora_strategy

Strategy ('reuse' or 'stack') for handling models that are already LoRA-adapted.

mixed_precision

The mixed-precision mode to use during training: 'fp16', 'bf16', or 'no'.

gradient_checkpointing

Whether to use gradient checkpointing to save memory during training.

use_liger_kernel

Whether to use the Liger kernel during training.

logging_steps

The period (in steps) at which to log training metrics.
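The following sketch pulls several of the attributes above together (same assumptions as before: the nip package is importable and the model identifier is hypothetical; the overridden values are illustrative only). It also shows the standard Pydantic v2 helpers model_fields_set and model_dump listed in the attribute table:

from nip.language_model_server.types import LmTrainingConfig

# Override a handful of training fields; everything else keeps its default.
config = LmTrainingConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model identifier
    method="dpo",
    per_device_train_batch_size=4,
    mixed_precision="bf16",
    gradient_checkpointing=False,
    seed=0,
)

# Only fields that were set explicitly appear in model_fields_set (Pydantic v2).
print(config.model_fields_set)
# e.g. {'model_name', 'method', 'per_device_train_batch_size',
#       'mixed_precision', 'gradient_checkpointing', 'seed'}

# Serialize the configuration to a plain dictionary, e.g. to send in a request
# to the language model server.
payload = config.model_dump()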

Methods