Embeddings with LangChain
LangChain works with non-generative models as well as generative ones. In this section, we see how the NuPIC Python client makes it easy to bring optimized non-generative models into your LangChain workflow. Specifically, we use an optimized BERT-style model to populate a local vector database and then query that database. In practice, this is useful for document retrieval and simple question-answering use cases.
Quick Start
Before you start, make sure the NuPIC Inference Server is up and running, and the Python environment is set up.
Now navigate to the directory containing the LangChain examples:
cd nupic.examples/examples/langchain
Open embedding_example.py in a text editor, and check that the Inference Server URL and model name are correctly specified. In the example below, we assume the inference server is running on the same machine as the Python client, and indicate that NuPIC's optimized SBERT model is to be used for generating text embeddings.
embeddings = NuPICEmbedding(
    url="localhost:8000",
    model="nupic-sbert-2-v1-wtokenizer",
)
This Python script will populate a vector database with embeddings of the following strings:
sentences = [
    "A man with a hard hat is dancing.",
    "A man wearing a hard hat is dancing.",
    "My father has a banana pie.",
    "My mother has cooked an apple pie.",
    "The answer to the life, the universe and everything.",
]
Let's see what we get when we run the Python script to perform a query against the vector database:
python embedding_example.py
In the expected output below, the query is a simple question, and it returns the most similar and relevant sentence from the database.
Question: Who cooked the apple pie?
Answer: My mother has cooked an apple pie.
In more detail
Let's break down what just happened. First, each of the original five sentences is passed through the SBERT model, which produces an embedding vector for each sentence. These embeddings are produced by NuPIC's custom implementation of LangChain's Embeddings class.
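Because NuPICEmbedding implements the standard LangChain Embeddings interface, you can also call it directly. Here is a minimal sketch, continuing from the snippets above (the printed dimensionality depends on the SBERT model, so we don't assume a specific number):

# One embedding vector per sentence, via LangChain's Embeddings interface.
vectors = embeddings.embed_documents(sentences)
print(len(vectors))     # 5 -- one vector per input sentence
print(len(vectors[0]))  # dimensionality of the SBERT embedding space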
The sentences and their respective embeddings are then used to populate the locally-hosted vector database. In the database, the embedding vectors are assigned as keys, and their corresponding sentence strings are set as values.
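As a rough sketch of this step, the same key-value pairing can be built with one of LangChain's vector store integrations. The example below uses FAISS purely for illustration; the example script may use a different store, and the faiss-cpu and langchain-community packages are assumed to be installed:

from langchain_community.vectorstores import FAISS

# Embed each sentence and store (embedding vector, sentence) pairs
# in a locally hosted FAISS index.
db = FAISS.from_texts(sentences, embeddings)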
At query time, the query string is converted to an embedding vector. This vector is then matched against the keys of the vector database (which are also embedding vectors), retrieving the most similar entry and returning its value.
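Continuing the hypothetical FAISS sketch above, the query step looks like this:

# Embed the query and return the single closest sentence from the database.
query = "Who cooked the apple pie?"
results = db.similarity_search(query, k=1)
print(results[0].page_content)  # "My mother has cooked an apple pie."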
Generative Models
Generative models in NuPIC work with LangChain too. Find out more here.
Generative models can also be combined with non-generative models and vector databases to build more advanced retrieval/QA systems with chatbot-like virtual assistant capabilities. Check it out here.