
Installation: Additional GPT Models

The NuPIC Model Library comes pre-installed with our own optimized NuPIC-GPT model, but Gemma, Gemma 2, Llama 2, and Llama 3 are supported as well.

Please follow the optional instructions below to install these models. Note that you will need a Hugging Face account to download them.

Downloading Gemma and Gemma 2

  1. Create an access token to download models programmatically:
    Follow the steps written in this Hugging Face page. This will generate a token associated with your account that allows you to download models programmatically.

  2. Request access to Gemma: Go to the Gemma or Gemma 2 model card. Accept the terms and conditions, and wait for the email indicating that you've been granted access to the model.

  3. Download the model: Run the following commands in a terminal from the nupic/inference/scripts/download_gemma directory:

python -m venv ./env
source ./env/bin/activate
pip install -r requirements.txt
python download_gemma.py              # OR download_gemma2.py
deactivate
rm -rf ./env

Note that you will be prompted to enter your Hugging Face token; this is the token created in step 1.
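For unattended installs, the interactive prompt can be worked around. Tools built on huggingface_hub commonly honor the HF_TOKEN environment variable, but whether download_gemma.py does is an assumption to verify; the sketch below, with the hypothetical helper resolve_token, shows the fallback logic one might wrap around the script, not the script's actual behavior.

```python
# Sketch only: resolve_token is a hypothetical helper, and whether the
# NuPIC download scripts honor HF_TOKEN is an assumption to verify.
import os

def resolve_token(prompt=input) -> str:
    """Prefer HF_TOKEN from the environment, else fall back to a prompt."""
    token = os.environ.get("HF_TOKEN", "").strip()
    if not token:
        token = prompt("Hugging Face token: ").strip()
    if not token.startswith("hf_"):
        # User access tokens issued by Hugging Face start with "hf_".
        raise ValueError("that does not look like a Hugging Face token")
    return token
```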

  4. Verify the download: Check that the necessary model files have been added to the Model Library at nupic/inference/models/.

For Gemma:

inference/models/gemma2.it.2b
├── 1
│   ├── config.json
│   ├── generation_config.json
│   ├── model-00001-of-00002.safetensors
│   ├── model-00002-of-00002.safetensors
│   ├── model.py
│   ├── model.safetensors.index.json
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.json
└── config.pbtxt

For Gemma 2:

inference/models/gemma2.it.9b
├── 1
│   ├── config.json
│   ├── generation_config.json
│   ├── model-00001-of-00004.safetensors
│   ├── model-00002-of-00004.safetensors
│   ├── model-00003-of-00004.safetensors
│   ├── model-00004-of-00004.safetensors
│   ├── model.py
│   ├── model.safetensors.index.json
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.json
└── config.pbtxt
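The two layouts above differ only in the number of safetensors shards, so the verification step can be scripted. The helper below is an illustrative sketch based solely on the trees shown, not part of NuPIC, and the shard count must be supplied per model.

```python
# Check a downloaded model directory against the layout shown above.
from pathlib import Path

COMMON_FILES = [
    "config.json",
    "generation_config.json",
    "model.py",
    "model.safetensors.index.json",
    "special_tokens_map.json",
    "tokenizer_config.json",
    "tokenizer.json",
]

def missing_files(model_dir: str, shards: int) -> list[str]:
    """Return the expected files that are absent under model_dir."""
    root = Path(model_dir)
    expected = [root / "config.pbtxt"]
    expected += [root / "1" / name for name in COMMON_FILES]
    # Shard names follow the model-XXXXX-of-YYYYY.safetensors pattern.
    expected += [
        root / "1" / f"model-{i:05d}-of-{shards:05d}.safetensors"
        for i in range(1, shards + 1)
    ]
    return [str(p) for p in expected if not p.exists()]
```

For example, `missing_files("inference/models/gemma2.it.9b", shards=4)` should return an empty list once the Gemma 2 download is complete.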

Downloading Llama 2 and Llama 3

To add Llama 2 or Llama 3 to the Model Library, please follow these instructions:

  1. Create an access token to download models programmatically:
    Follow the steps written in this Hugging Face page. This will generate a token associated with your account that allows you to download models programmatically.

  2. Request access: Go to Meta’s Request Page. Fill out the form, select the models by checking Llama 2 and/or Llama 3, read the terms and conditions, check I accept the terms and conditions, and click Accept and Continue. Note that the email you enter on the request form must match the email on your Hugging Face account.

  3. Access Llama 2/Llama 3 on Hugging Face: Go to the Llama 2 or Llama 3 Hugging Face page, log in with your Hugging Face account, and click the Submit button. Before proceeding, wait for the email confirming that your request has been approved.

  4. Download the model: Run the following commands in a terminal from the nupic/inference/scripts/download_llama/ directory:

python -m venv ./env
source ./env/bin/activate
pip install -r requirements.txt
python download_llama2.py          # OR download_llama3.py
deactivate
rm -rf ./env

You will be prompted to enter the Hugging Face token created in step 1.

  5. Verify the download: Check that the necessary model files have been added to the Model Library in the respective folders under nupic/inference/models/.

For Llama 2:

inference/models/llama2.chat.7b
├── 1
│   ├── config.json
│   ├── generation_config.json
│   ├── model-00001-of-00003.safetensors
│   ├── model-00002-of-00003.safetensors
│   ├── model-00003-of-00003.safetensors
│   ├── model.py
│   ├── model.safetensors.index.json
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.json
└── config.pbtxt

For Llama 3:

inference/models/llama3.chat.8b
├── 1
│   ├── config.json
│   ├── generation_config.json
│   ├── model-00001-of-00004.safetensors
│   ├── model-00002-of-00004.safetensors
│   ├── model-00003-of-00004.safetensors
│   ├── model-00004-of-00004.safetensors
│   ├── model.py
│   ├── model.safetensors.index.json
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.json
└── config.pbtxt
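Beyond checking that the files listed above are present, model.safetensors.index.json can be cross-checked against the shards on disk: it maps each tensor name to the shard file that stores it (the standard Transformers weight_map layout, assumed here). The helper below is an illustrative sketch, not part of NuPIC.

```python
# Cross-check the safetensors index against the shard files in a model's
# versioned directory (e.g. inference/models/llama3.chat.8b/1).
# Assumes the standard {"weight_map": {tensor: shard_file}} index layout.
import json
from pathlib import Path

def index_mismatches(version_dir: str) -> tuple[set, set]:
    """Return (shards referenced by the index but absent on disk,
    shards on disk but never referenced by the index)."""
    root = Path(version_dir)
    index = json.loads((root / "model.safetensors.index.json").read_text())
    referenced = set(index["weight_map"].values())
    on_disk = {p.name for p in root.glob("model-*.safetensors")}
    return referenced - on_disk, on_disk - referenced
```

A complete, consistent download should yield two empty sets; anything else points at a shard that failed to download or a stray file from an earlier attempt.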