
GPT with LangChain

While the NuPIC Python inference client can interact with GPT chat models directly, you may prefer the LangChain framework for easier integration with other components such as vector databases. Our Python client includes LangChain integrations to facilitate this. We'll begin with a simple GPT chat use case in LangChain.

Quick Start

Before you start, make sure the NuPIC Inference Server is up and running, and the Python environment is set up.
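
If you want a quick sanity check first, the sketch below simply verifies that something is listening at the Inference Server's address (localhost:8000 here, matching the rest of this guide); it does not confirm that the server is fully initialized.

import socket

# Minimal connectivity sketch: adjust the host/port if your Inference
# Server runs elsewhere. This only checks that the port accepts
# connections, not that the models are loaded.
try:
    with socket.create_connection(("localhost", 8000), timeout=5):
        print("Inference Server port is reachable.")
except OSError as err:
    print(f"Could not reach the Inference Server: {err}")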

Now navigate to the directory containing the LangChain examples:

cd nupic.examples/examples/langchain

Open llm_example.py in a text editor, and ensure that the Inference Server URL and GPT model name are correctly specified. The code below assumes the Inference Server and Python client are running on the same machine, and that we are using the default NuPIC-GPT model.

model = "nupic-gpt"
long_model_name = model_naming_mapping[model]
prompt_formatter = get_prompt_formatter(long_model_name)
...
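
Here, model_naming_mapping resolves the short model name into the full identifier that the Inference Server expects. As a rough, self-contained illustration of that lookup (the long name below is a hypothetical placeholder; the real mapping ships with nupic.examples):

# Illustrative stand-in for the example script's model_naming_mapping;
# "nupic-gpt-long-name" is a hypothetical placeholder, not a real identifier.
model_naming_mapping = {
    "nupic-gpt": "nupic-gpt-long-name",
}
long_model_name = model_naming_mapping["nupic-gpt"]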

Let's run the Python script and see what it returns.

python llm_example.py

Expected output:

What is the answer to the life, the universe and everything?

LLM Answer:
According to the fictional character, Deep Thought, the answer to the ultimate question of life, the universe, and everything is 42. However, in the novel "The Hitchhiker's Guide to the Galaxy" by Douglas Adams, the real purpose of this answer is left unclear, as Deep Thought suggests that the real answer is actually something more fundamental that humans are not yet ready to understand.

In More Detail

The NuPIC Python inference client interfaces with LangChain by implementing a custom subclass of LangChain's base LLM class. With this, you can interact with instances of the NuPICLLM class as though they were regular LangChain LLMs, but with the performance and flexibility of NuPIC.
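
As a rough sketch of the pattern (not the actual NuPICLLM implementation, which ships with the NuPIC Python client), a custom LangChain LLM subclass only needs to provide _llm_type and _call:

from typing import Any, List, Optional

from langchain.llms.base import LLM

# Generic sketch of a custom LangChain LLM subclass; NuPICLLM follows this
# pattern, but its real _call forwards the prompt to the Inference Server.
class SketchLLM(LLM):
    url: str
    model: str

    @property
    def _llm_type(self) -> str:
        return "sketch-llm"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # Placeholder logic; a real implementation would send the prompt
        # to the serving endpoint at self.url and return the completion.
        return f"[{self.model}] would answer: {prompt}"

In the example script, the ready-made class is simply imported and instantiated: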

from nupic.client.langchain import NuPICLLM
...

# Load the NuPIC LLM model
llm = NuPICLLM(
    url="localhost:8000",  # address of the Inference Server
    model=long_model_name,
)
...

# Invoke the model as you would any LangChain LLM
answer = llm(full_prompt)
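
Because NuPICLLM behaves like any other LangChain LLM, it composes directly with standard LangChain components. Here is a minimal sketch (the prompt template is illustrative):

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Wire the NuPIC LLM into a standard LangChain prompt template and chain.
prompt = PromptTemplate.from_template("Answer the question: {question}")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(question="What is the answer to the life, the universe and everything?"))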

Non-Generative Models

Non-generative NuPIC models work well with LangChain too. See the dedicated guide for details.