Frequently Asked Questions
Here are answers to some frequently asked questions about Numenta and NuPIC. If your question is not answered in our documentation, please contact us at [email protected].
General FAQs
What is Numenta?
Numenta is a leader in deploying large AI models on CPUs. Our AI platform, the Numenta Platform for Intelligent Computing (NuPIC), is designed to help businesses build and scale efficient, scalable, and secure AI applications on CPUs. By mapping advances rooted in two decades of neuroscience research to modern CPU architectures, we are redefining what's possible in AI.
What use cases are best suited for NuPIC?
NuPIC is suited for a variety of natural language processing (NLP) use cases. The NuPIC Model Library includes a selection of optimized, pre-trained embedding and generative AI models that you can use off the shelf to power your AI applications, or fine-tune to your specific needs. Learn more about what you can use NuPIC for here.
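As a concrete illustration, here is a minimal sketch of calling a served embedding model over HTTP. The endpoint URL, model name, and payload shape are illustrative assumptions for this example, not the documented NuPIC API:

```python
# Hypothetical sketch: requesting embeddings from a NuPIC-hosted model.
# The endpoint URL, model name, and response fields below are assumptions
# made for illustration, not the documented NuPIC interface.
import requests

NUPIC_ENDPOINT = "http://localhost:8000/v1/embeddings"  # assumed local deployment

def embed(texts):
    """Request embeddings for a list of strings from an assumed inference server."""
    response = requests.post(
        NUPIC_ENDPOINT,
        json={"model": "nupic-embedding-example", "inputs": texts},  # hypothetical names
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["embeddings"]  # assumed response field

if __name__ == "__main__":
    vectors = embed(["What is NuPIC?", "How do I fine-tune a model?"])
    print(len(vectors), "embeddings returned")
```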
Do I need prior experience with machine learning or AI to use NuPIC?
No, you do not need any prior experience with AI/ML. NuPIC is designed to be user-friendly and accessible to developers without specialized expertise in these fields. Our example code serves as a starting point for you to build upon and customize to your specific needs. Get started with NuPIC by requesting a demo.
How can I test NuPIC?
To test NuPIC, you can contact our sales team here. If our solutions seem like a good fit for your use case, we offer a technical evaluation that lets you test the platform internally and validate our benchmarks firsthand.
Model FAQs
Can I load my own model into NuPIC, and what performance gains can I expect?
Yes, you can import your own models into NuPIC. The performance gains will vary depending on the capability and nature of your model. For optimal performance, we recommend using one of our pre-trained NuPIC models, as they are specifically optimized for our infrastructure. You can find our list of optimized models here.
In addition, we are actively working to automatically optimize any model you bring. We expect this feature to be integrated in a future release later this year.
How are you planning to keep your Model Library up to date?
We are constantly expanding our set of optimized models, ensuring that our users have access to the most cutting-edge models available for their needs. If you can’t find what you need from our Model Library, feel free to reach out at [email protected].
Can NuPIC process images and videos?
We don’t support images or videos at this time, but this is on our roadmap.
Infrastructure FAQs
Is NuPIC exclusive to Intel hardware?
No, NuPIC is not limited to Intel hardware. It is compatible with any x86 architecture that supports the AMX or AVX instruction sets. While NuPIC supports older AVX instructions, it delivers the best inference performance on AMX (which requires 4th Gen Intel Xeon processors or later), followed by AVX-512, and then AVX2. For reference, running NuPIC with AMX is approximately three times faster than running it on AVX-512.
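If you are unsure which of these instruction sets your hardware supports, a generic check on Linux (this is ordinary kernel information, not NuPIC tooling) is to read the CPU flags from /proc/cpuinfo:

```python
# Quick Linux-only check for the instruction sets NuPIC can exploit.
# Flag names are the standard spellings reported by the kernel.
def cpu_flags():
    """Return the set of CPU feature flags from /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature, names in [
    ("AMX", {"amx_tile", "amx_bf16", "amx_int8"}),
    ("AVX-512", {"avx512f"}),
    ("AVX2", {"avx2"}),
]:
    print(f"{feature:8} {'yes' if names & flags else 'no'}")
```

The same flags also appear in the output of lscpu on most distributions.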
Performance FAQs
Are there any accuracy trade-offs with NuPIC models?
We haven’t seen significant accuracy trade-offs with our models when deployed in real-world applications. Our optimization techniques enable us to run larger models at speeds that were previously achievable only with smaller models. In practice, this means NuPIC-optimized models deliver higher accuracy at any given speed: you get faster processing times with no compromise on accuracy.
How do NuPIC models impact latency?
You can smoothly trade off throughput and latency in NuPIC by allocating CPU cores to specific models. For instance, allocating more cores to a particular model can significantly reduce its latency, making it ideal for time-sensitive applications. Conversely, you can distribute cores among models to optimize overall throughput. This level of control and customization is something you cannot do on GPUs. Check out this page for more details on how to optimize your model for throughput or latency, and see the sketch below for the general mechanism.
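To illustrate the general mechanism (this is ordinary Linux process affinity, not the NuPIC configuration interface), here is a minimal sketch of dedicating cores to a latency-critical worker; the core counts are assumptions chosen for the example:

```python
# Illustration of the general mechanism (not a NuPIC API): pinning a worker
# process to a fixed set of CPU cores on Linux. Dedicating more cores to a
# latency-sensitive model and fewer to batch models is the same idea NuPIC
# exposes at the platform level.
import os

LATENCY_CRITICAL_CORES = {0, 1, 2, 3, 4, 5, 6, 7}  # assumed 8-core allocation
BATCH_CORES = {8, 9}                                # assumed 2-core allocation

def pin_current_process(cores):
    """Restrict the calling process to the given CPU cores (Linux only)."""
    os.sched_setaffinity(0, cores)  # pid 0 means the current process

# e.g. in the serving process for a time-sensitive model:
pin_current_process(LATENCY_CRITICAL_CORES)
print("running on cores:", sorted(os.sched_getaffinity(0)))
```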
The performance of your models seems too good to be true. How are you getting these results?
Our AI technology is rooted in two decades of proprietary neuroscience research. What you may not realize is that our brains are incredibly efficient, using only about 20 watts of power, roughly enough to power a light bulb. Based on the brain’s efficient and sustainable mechanisms, we have defined novel neuroscience-based algorithms and data structures and mapped these advances to modern CPU architectures. We encourage you to validate our results firsthand during your evaluation period. You can request a demo here.