
System Requirements

Welcome to the Getting Started guide for installing NuPIC. Our goal is to get you up and running in no time. Before you can start using NuPIC, you must have suitable hardware and set up your software environment to include the required Python tools and the NuPIC components.

Minimum System Requirements

These are the minimum requirements for NuPIC that will allow our pre-worked examples to run. Specific use cases may require larger systems. For example, fine-tuning on large datasets may be more practical on GPUs with greater memory. Similarly, scaling a deployment may require a larger machine or a cluster of machines to support multiple models.

| Component | Requirements |
| --- | --- |
| Processor | A CPU with AMX, AVX512, or AVX2 instruction sets for optimized CPU inference |
| Memory & Storage | 16GB RAM is sufficient for non-generative models. 32GB RAM is recommended for generative models. 200GB of storage. |
| Operating System | Ubuntu 22.04, or other recent Linux versions with kernel support for AMX |
| GPU | Inference workloads do not require a GPU. While fine-tuning is possible on CPU, we recommend a GPU with at least 12GB of RAM; larger datasets or models may require a GPU with more memory. |
| Software | C++ compilers (e.g., build-essential); Docker Engine, including post-installation configuration; Docker Compose plugin; Python 3.10 or later; venv or Miniconda; pip; unzip. If using a GPU: GPU drivers compatible with at least CUDA 11.7, and the NVIDIA Container Toolkit. |
| Miscellaneous | Stable internet connection for downloading containers and assets |
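As a quick sanity check before installing, a shell snippet along these lines can report which of the software prerequisites above are already present. This is a sketch, not part of the NuPIC installer; the command names (`g++`, `docker`, `python3`, `pip3`, `unzip`) are the conventional ones on Ubuntu.

```shell
#!/bin/sh
# Sketch: report which of the software prerequisites listed above are
# already installed on this machine.
for cmd in g++ docker python3 pip3 unzip; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found:   $cmd"
  else
    echo "missing: $cmd"
  fi
done

# NuPIC requires Python 3.10 or later.
if command -v python3 >/dev/null 2>&1; then
  python3 -c 'import sys; print("python version OK" if sys.version_info >= (3, 10) else "python 3.10+ required")'
fi
```

Anything reported as missing can be installed through your distribution's package manager before proceeding.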

📘 Some recommendations

If you are using AWS, we recommend installing both the NuPIC Inference Server and Training Module on an AWS g4dn.2xlarge instance to quickly test both components. The CPU on this instance type supports AVX512 for optimized CPU inference. The instance also has a Tesla T4 GPU to aid in fine-tuning.

CPU instruction sets for optimized inference

Using Intel's Advanced Matrix Extensions (AMX) instructions can significantly enhance the performance of NuPIC for large document processing. These instructions accelerate matrix operations, which are central to natural language processing machine learning tasks. With NuPIC on AMX, you can achieve higher throughput and lower latency, enabling rapid and efficient processing of large document volumes. Additionally, AMX is designed for energy efficiency, reducing operational costs and promoting better resource utilization, allowing for scalable processing without substantial hardware additions.

NuPIC also supports the older Advanced Vector Extensions (AVX) instructions. NuPIC has the best inference performance on AMX, followed by AVX512 and then AVX2.
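To see which of these instruction sets a given machine supports, you can inspect the CPU feature flags the kernel reports in `/proc/cpuinfo`. The sketch below mirrors the AMX > AVX512 > AVX2 preference order described above; `amx_tile`, `avx512f`, and `avx2` are the standard Linux feature-flag names for these extensions.

```shell
#!/bin/sh
# Sketch: detect the best instruction set available for NuPIC CPU
# inference by inspecting the kernel's CPU feature flags.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)

if echo "$flags" | grep -qw amx_tile; then
  echo "AMX available (best NuPIC CPU inference performance)"
elif echo "$flags" | grep -qw avx512f; then
  echo "AVX512 available"
elif echo "$flags" | grep -qw avx2; then
  echo "AVX2 available"
else
  echo "no supported instruction set found; CPU inference will not be optimized"
fi
```

On a machine without any of these extensions, the final branch is reached and NuPIC's optimized CPU inference paths will not be used.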

If you are using AWS, AMX is available on c7i, m7i, m7i-flex, r7i, and r7iz instance types. Alternatively, AVX512 is available on a wide range of instance types from multiple families. Please see this page for an up-to-date list of features for each instance type.