Training Client CLI
nupic_train.main()
Parse the command line and execute the fine-tuning job.
nupic_train.run_finetuning(url: str, train_path: str, test_path: str, task_type: str, epochs: int, learning_rate: float, model: str, batch_size: int, seed: int = -1, cache_dir: str = None, overwrite_cache: bool = False, wandb_api_key: str = '', wandb_project_name: str = '', wandb_upload_files: bool = False, hf_access_token: str = '', use_bf16: bool = False, use_lora: bool = False, lora_r: int = 16, lora_alpha: int = 32, lora_dropout: float = 0.05, download_dataset: bool = False, dataset_name: str = '', input_prompt: str = '', response_prompt: str = '', max_steps: int = 100)
Provide a CLI to run a fine-tuning job using the NuPIC Training Server.
Example:
```
python -m nupic.client.nupic_train \
    --url http://localhost:8321 \
    --train_path <path_to_train_set> \
    --test_path <path_to_test_set> \
    --task_type classification \
    --epochs 5 \
    --learning_rate 1e-5 \
    --model "nupic-sbert.base-v3" \
    --batch_size 32
```
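The same job can also be started programmatically by calling run_finetuning directly. Below is a minimal sketch mirroring the classification example above; it assumes the module is importable as nupic.client.nupic_train (as suggested by the CLI invocation), and the data paths are placeholders:

```python
# Minimal sketch: invoke the fine-tuning client from Python rather than the CLI.
# The import path is assumed from the `python -m nupic.client.nupic_train` example.
from nupic.client import nupic_train

nupic_train.run_finetuning(
    url="http://localhost:8321",   # NuPIC Training Server
    train_path="train.csv",        # placeholder path to the training set
    test_path="test.csv",          # placeholder path to the test set
    task_type="classification",
    epochs=5,
    learning_rate=1e-5,
    model="nupic-sbert.base-v3",
    batch_size=32,
)
```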
- Parameters:
- url (str) – URL of the API server.
- train_path (str) – path to the training data file.
- test_path (str) – path to the test data file.
- task_type (str) – task type. Options are: classification, qa.
- epochs (int) – number of epochs.
- learning_rate (float) – learning rate.
- model (str) – model name.
- batch_size (int) – batch size.
- seed (int, optional) – random seed, defaults to -1.
- cache_dir (str, optional) – cache directory for QA; None for no cache. Defaults to None.
- overwrite_cache (bool, optional) – overwrite the existing cache for QA, defaults to False.
- wandb_api_key (str, optional) – WandB API key, defaults to "". Required to use WandB.
- wandb_project_name (str, optional) – WandB project name, defaults to "". Required to use WandB.
- wandb_upload_files (bool, optional) – whether to upload the model and results to WandB, defaults to False.
- hf_access_token (str, optional) – Hugging Face access token for models that require login.
- use_bf16 (bool, optional) – whether the LLM should be loaded in bf16 for training.
- use_lora (bool, optional) – whether the LLM should be fine-tuned with PEFT using LoRA.
- lora_r (int, optional) – LoRA attention dimension (the "rank").
- lora_alpha (int, optional) – the alpha parameter for LoRA scaling.
- lora_dropout (float, optional) – the dropout probability for LoRA layers.
- download_dataset (bool) – if True, the dataset is downloaded instead of uploaded.
- dataset_name (str) – name of the dataset to download from the Hugging Face datasets repo.
- input_prompt (str) – custom input prompt presented before the "text" column.
- response_prompt (str) – custom response prompt presented before the "label" column.
- max_steps (int) – number of training steps for LLMs. See the LoRA sketch after this list for how these LLM options combine.
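For LLM fine-tuning, the bf16 and LoRA options above can be combined in a single call. The following is a hedged sketch using only the parameters documented here; the model name, paths, and hyperparameter values are illustrative placeholders, not values confirmed by this page:

```python
# Hedged sketch of an LLM fine-tuning run with LoRA adapters and bf16 loading.
# Only the parameter names come from the documented signature; all values are
# placeholders chosen for illustration.
from nupic.client import nupic_train

nupic_train.run_finetuning(
    url="http://localhost:8321",
    train_path="train.jsonl",    # placeholder; required by the signature
    test_path="test.jsonl",      # placeholder
    task_type="classification",
    epochs=1,
    learning_rate=2e-4,
    model="<llm_model_name>",    # placeholder model name
    batch_size=4,
    use_bf16=True,               # load the model in bf16 for training
    use_lora=True,               # fine-tune with PEFT using LoRA
    lora_r=16,                   # LoRA attention dimension ("rank")
    lora_alpha=32,               # alpha parameter for LoRA scaling
    lora_dropout=0.05,           # dropout probability for LoRA layers
    max_steps=100,               # number of training steps for the LLM
)
```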