These docs are for v1.1.2.

System Requirements

Welcome to the Getting Started guide for installing NuPIC. Our goal is to get you up and running in no time. Before you can start using NuPIC, you must have suitable hardware and a software environment that includes Python tools and the NuPIC components.

Minimum System Requirements

These are the minimum requirements for NuPIC that will allow all of our examples to run. Specific use cases may require larger systems. For example, fine-tuning on large datasets may require GPUs with more memory. Similarly, scaling a deployment may require a larger machine or a cluster of machines to support multiple models.

Processor
- A CPU with the AMX, AVX512, or AVX2 instruction sets for optimized CPU inference

Memory & Storage
- 16GB RAM (larger models may require more RAM)
- 200GB of storage

Operating System
- Ubuntu 22.04, or other recent Linux versions with kernel support for AMX

GPU
- Inference workloads do not require a GPU.
- While fine-tuning is possible on CPU, we recommend a GPU with at least 12GB of RAM. Larger datasets or models may require a GPU with more memory.

Software
- C++ compilers (e.g., build-essential)
- Docker Engine, including post-installation configurations
- Python 3.8 or later
- venv or Miniconda
- pip
- unzip

If using a GPU:
- GPU drivers that are compatible with CUDA 11.7
- NVIDIA Container Toolkit

Miscellaneous
- Stable internet connection for downloading containers and assets
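As a minimal sketch of the Python-side setup listed above, you can create an isolated environment with venv. This assumes python3 (3.8 or later, with the venv module) and pip are already installed; the environment name nupic-env is an arbitrary choice for illustration:

```shell
# Create and activate an isolated Python environment for NuPIC.
python3 -m venv nupic-env
. nupic-env/bin/activate

# Upgrade pip inside the environment and confirm the Python version is 3.8+.
python -m pip install --upgrade pip
python --version
```

If you prefer Miniconda, `conda create -n nupic-env python=3.10` plays the same role as the venv step.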

📘 Some recommendations

If you are using AWS, we recommend installing both the NuPIC Inference Server and Training Module on an AWS g4dn.2xlarge instance to quickly test both components. The CPU on this instance type supports AVX512 for optimized CPU inference. The instance also has a Tesla T4 GPU to aid in fine-tuning.

CPU instruction sets for optimized inference

Using Intel's Advanced Matrix Extensions (AMX) instructions can significantly enhance the performance of NuPIC for large document processing. These instructions accelerate matrix operations, which are central to natural language processing machine learning tasks. With NuPIC on AMX, you can achieve higher throughput and lower latency, enabling rapid and efficient processing of large document volumes. Additionally, AMX is designed for energy efficiency, reducing operational costs and promoting better resource utilization, allowing for scalable processing without substantial hardware additions.

NuPIC also supports the older Advanced Vector Extensions (AVX) instructions. NuPIC has the best inference performance on AMX, followed by AVX512 and then AVX2.
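To check which of these instruction sets a Linux machine actually reports, you can inspect the CPU feature flags in /proc/cpuinfo. The flag names below (amx_tile, avx512f, avx2) are the kernel's identifiers for AMX, AVX512, and AVX2; this is a quick sketch, not part of NuPIC itself:

```shell
# List which NuPIC-relevant instruction sets this CPU reports (Linux only).
# Prints any of: amx_tile, avx512f, avx2 (one per line); prints nothing
# if none are present.
grep -o -E 'amx_tile|avx512f|avx2' /proc/cpuinfo | sort -u
```

If the output includes amx_tile, NuPIC can use its fastest CPU inference path; avx512f or avx2 alone indicate the slower fallback paths described above.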

If you are using AWS, AMX is available on c7i, m7i, and r7iz instance types. Alternatively, AVX512 is available on c6i and g4dn instance types.