These docs are for v1.1.2; see the latest docs for v2.2.0.

Installation: Additional GPT Models

The NuPIC Model Library comes pre-installed with our own optimized GPT model, but Gemma and Llama-2 are supported too.

Follow the optional instructions below to install these models. You will need a Hugging Face account to download them.

Downloading Gemma

  1. Create an access token: Follow the steps on this Hugging Face page. This will generate a token associated with your account that allows you to download models programmatically.

  2. Request access to Gemma: Go to the model's card. Accept the terms and conditions to download the model.

  3. Download the model: Run the following commands in a terminal from the nupic/inference/scripts/download_gemma directory:

python -m venv ./env
source ./env/bin/activate
pip install -r requirements.txt
python download_gemma.py
deactivate
rm -rf ./env

Note that you will be prompted to enter your Hugging Face token. This is the one created in step 1.
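If you prefer not to enter the token at the interactive prompt, Hugging Face tooling conventionally reads it from the HF_TOKEN environment variable. A minimal sketch of that pattern (the helper name and the fallback prompt are illustrative; whether the download script itself honors HF_TOKEN depends on its implementation):

```python
import os
from getpass import getpass

def resolve_hf_token() -> str:
    """Return the Hugging Face token from HF_TOKEN, prompting only as a fallback."""
    token = os.environ.get("HF_TOKEN")
    if token:
        return token
    # No environment variable set: fall back to an interactive prompt.
    return getpass("Enter your Hugging Face token: ")
```

With the variable exported (e.g. `export HF_TOKEN=hf_...`), the helper returns immediately without prompting.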

  4. Verify download: Check that the necessary model files have been added to the Model Library at nupic/inference/models/gemma2.it-2b-v0-wtokenizer:
inference/models/gemma2.it-2b-v0-wtokenizer
├── 1
│   ├── config.json
│   ├── generation_config.json
│   ├── model-00001-of-00002.safetensors
│   ├── model-00002-of-00002.safetensors
│   ├── model.py
│   ├── model.safetensors.index.json
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.json
└── config.pbtxt
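The verification in step 4 can also be scripted. A small sketch that checks for the files listed in the tree above (the relative paths come straight from that listing; the function name is illustrative):

```python
from pathlib import Path

# Files expected under nupic/inference/models/gemma2.it-2b-v0-wtokenizer,
# per the directory tree shown above.
EXPECTED = [
    "config.pbtxt",
    "1/config.json",
    "1/generation_config.json",
    "1/model-00001-of-00002.safetensors",
    "1/model-00002-of-00002.safetensors",
    "1/model.py",
    "1/model.safetensors.index.json",
    "1/special_tokens_map.json",
    "1/tokenizer_config.json",
    "1/tokenizer.json",
]

def missing_files(model_dir: str) -> list[str]:
    """Return the expected model files that are absent from model_dir."""
    root = Path(model_dir)
    return [f for f in EXPECTED if not (root / f).exists()]
```

Run it against the model directory; an empty list means the download is complete.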

Downloading Llama-2

To add Llama-2 to the Model Library, please follow these instructions:

  1. Request access: Go to Meta’s Request Page. Fill out the form, select the model by checking the Llama-2 & Llama Chat checkbox, read the terms and conditions, check the I accept the terms and conditions checkbox, and click the Accept and Continue button. Note that the email you enter on the request form must match the email of your Hugging Face account.

  2. Access Llama-2 on Hugging Face: Go to Llama-2 on Hugging Face, log in with your Hugging Face account, and click the Submit button. To proceed, you must wait until you receive an email confirming the approval of your request.

  3. Download the model: Run the following commands in a terminal from the nupic/inference/scripts/download_llama/ directory:

python -m venv ./env
source ./env/bin/activate
pip install -r requirements.txt
python download_llama.py
deactivate
rm -rf ./env

  4. Verify download: Check that the necessary model files have been added to the Model Library at nupic/inference/models/llama-7b-v0-wtokenizer/:
inference/models/llama-7b-v0-wtokenizer/
├── 1
│   ├── config.json
│   ├── generation_config.json
│   ├── model-00001-of-00003.safetensors
│   ├── model-00002-of-00003.safetensors
│   ├── model-00003-of-00003.safetensors
│   ├── model.py
│   ├── model.safetensors.index.json
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.json
└── config.pbtxt
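Beyond checking that files exist, you can confirm the three Llama weight shards are mutually consistent: in the standard safetensors sharding format, model.safetensors.index.json maps each weight name to the shard file that holds it. A hedged sketch of that cross-check (the function name is illustrative; the index layout is the standard Hugging Face format):

```python
import json
from pathlib import Path

def shards_present(model_version_dir: str) -> bool:
    """Check that every shard referenced by the safetensors index exists on disk."""
    root = Path(model_version_dir)
    index = json.loads((root / "model.safetensors.index.json").read_text())
    # weight_map maps each tensor name to the shard file that contains it.
    shards = set(index["weight_map"].values())
    return all((root / shard).exists() for shard in shards)
```

Point it at the versioned subdirectory (the one named 1 in the tree above); a False result means at least one referenced shard is missing.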