Question Answering
Question-answering (QA) allows you to obtain specific information or insights from large volumes of data or text. It’s particularly useful in customer support and information retrieval tasks, where quick, accurate answers are needed for specific user queries.
Questions that require short, simple answers can be handled by non-generative models; instructions for this approach are provided on this page.
On the other hand, questions that require longer, more complex answers, or that reference longer documents, may call for a generative model, potentially as part of a Retrieval-Augmented Generation (RAG) system. Please see the linked pages for the respective instructions.
Getting Started
Before you start, make sure the NuPIC Inference Server and Training Module are up and running, and the Python environment is set up.
Data Preparation
Navigate to the directory containing the QA fine-tuning example:
cd nupic.examples/examples/qa
The directory should look like this:
qa/
├── datasets/
├── down_sample_data.py
├── print_results.py
├── qa.sh
├── README.md
└── requirements.txt
Check if the datasets/ directory already contains train_cuad_example.json and test_cuad_example.json. If not, please download them from the Contract Understanding Atticus Dataset (CUAD), after which you may consider running python down_sample_data.py to subset the data to a more manageable volume.
The CUAD dataset used in this example follows the SQuAD format. CUAD contains a collection of legal documents, and each document is associated with multiple questions and their corresponding answers, which forms the basis for tuning a general-purpose BERT model into a QA model.
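The SQuAD layout is easier to follow with a concrete record in hand. The sketch below builds a minimal CUAD-style entry and checks its nesting; the document title, context, and Q&A pair are invented for illustration, but the field names (data, paragraphs, context, qas, answers, answer_start) follow the standard SQuAD schema.

```python
# A minimal SQuAD-format record; the contract text and Q&A pair
# here are invented for illustration.
record = {
    "data": [
        {
            "title": "EXAMPLE-DISTRIBUTOR-AGREEMENT",  # document identifier
            "paragraphs": [
                {
                    # "context" holds the document excerpt the model reads
                    "context": "This Agreement is made between Acme Corp. and Widget LLC.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": 'Highlight the parts (if any) of this contract related to "Parties".',
                            # Each answer records its text plus the character
                            # offset where it starts inside the context.
                            "answers": [
                                {"text": "Acme Corp. and Widget LLC", "answer_start": 31}
                            ],
                        }
                    ],
                }
            ],
        }
    ]
}

# Sanity-check that each answer_start offset actually points at its answer text.
for doc in record["data"]:
    for para in doc["paragraphs"]:
        ctx = para["context"]
        for qa in para["qas"]:
            for ans in qa["answers"]:
                start = ans["answer_start"]
                assert ctx[start : start + len(ans["text"])] == ans["text"]
```

A check like this is worth running on your own JSON files before fine-tuning, since misaligned answer_start offsets silently degrade training quality.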
Fine-Tuning
We want to fine-tune a BERT model such that given inputs of a document excerpt and a relevant question, it will return an answer based on the provided document.
Fine-tuning required
A fine-tuned model is required for the non-generative question-answering approach. By default, NuPIC BERT models simply return an embedding vector, so the Training Module adds a model head that generates tokens to answer the question.
Open qa.sh in a text editor, and look for the following block:
python -m nupic.client.nupic_train \
--train_path $train_dataset \
--test_path $test_dataset \
--model nupic-sbert.base-v3 \
--task_type $task_type \
--seed $seed \
--url http://127.0.0.1:8321 \ <------------------ Training Module URL
--batch_size $batch_size \ <--------------------- Batch size
--epochs 20 \
--learning_rate 0.0002 \
--cache_dir ./cache3
Ensure that the URL points correctly to the Training Module. In the arguments shown above, we assume that the Training Module resides on the same machine as the Python fine-tuning client. You may also consider adjusting the batch size to match your GPU memory (we found that batches of 8 work well with 16GB of GPU memory).
Now we can start the fine-tuning process:
chmod +x qa.sh
./qa.sh
Once fine-tuning is complete, the expected output looks like this:
Question 5:
Question: Highlight the parts (if any) of this contract related to "Parties" that should be reviewed by a lawyer. Details: The two or more parties who signed the contract
Correct answer: Electric City of Illinois L.L.C.
Model's answer 1: If Distributor does not exercise its option as her...
Model's answer 2: Should Company introduce other products or devices...
Question 9:
Question: Highlight the parts (if any) of this contract related to "Agreement Date" that should be reviewed by a lawyer. Details: The date of the contract
Correct answer: 7th day of September, 1999.
Model's answer 1: 7th day of September, 1999.
Model's answer 2: Unless earlier terminated otherwise provided there...
Note that the two most relevant answer strings are returned. The full training results are saved to qa/results.csv and can be used for subsequent analysis. The fine-tuned model is also saved to the same folder (model_xxx.tar.gz).
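For the subsequent analysis, something like the sketch below is a reasonable starting point. Note that the column names used here (question, correct_answer, model_answer_1, model_answer_2) are illustrative guesses based on the output shown above, not the exact schema; check the header row of your own qa/results.csv before adapting it.

```python
import csv
import io

# Stand-in for qa/results.csv, using the answers shown in the example
# output above. In practice you would open the real file instead:
#   rows = list(csv.DictReader(open("qa/results.csv")))
sample = '''question,correct_answer,model_answer_1,model_answer_2
"Agreement Date","7th day of September, 1999.","7th day of September, 1999.","Unless earlier terminated otherwise provided there..."
"Parties","Electric City of Illinois L.L.C.","If Distributor does not exercise its option as her...","Should Company introduce other products or devices..."
'''

rows = list(csv.DictReader(io.StringIO(sample)))

# A crude accuracy proxy: how often the top-ranked answer matches exactly.
exact = sum(r["model_answer_1"] == r["correct_answer"] for r in rows)
print(f"exact match on top answer: {exact}/{len(rows)}")
```

Exact string match is a harsh metric for extractive QA; token-overlap scores such as F1 (as used in the SQuAD benchmark) are usually more informative.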
Inference
To perform inference using the newly tuned model, we need to move it into the model library and extract it:
mv model_xxx.tar.gz <path_to_nupic>/inference/models
cd <path_to_nupic>/inference/models
tar -xzvf model_xxx.tar.gz
Now navigate to the directory containing the QA inference example:
cd nupic/nupic.examples/qa_inference
Open qa_inference.py in a text editor. Let's check some configurations:
URL = "localhost:8000" <-------------------------- Inference Server URL
PROTOCOL = "http"
CERTIFICATES = {}
MODEL = "model_xxx" <----------------------------- Tuned model
MODEL_TOKENIZER = f"{MODEL}-wtokenizer"
CONCURRENT_REQUESTS = 1
MAX_BATCH_SIZE = 64
TOKENIZER_CONFIG = {"max_tokens_per_string": MAX_NUMBER_TOKENS, "skip_imports": True}
Ensure the URL points to the Inference Server (the example above indicates a locally-hosted setup). Also update the configurations with the name of your fine-tuned QA model.
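Before running the full script, it can be useful to confirm that the Inference Server is actually reachable at the configured URL. The helper below is a minimal sketch using only the standard library; the /v2/health/ready path is an assumption based on the Triton-style "localhost:8000" endpoint above, so adjust it if your server exposes a different health route.

```python
import urllib.error
import urllib.request


def server_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at base_url.

    NOTE: the /v2/health/ready route is an assumption (a common
    Triton-style convention), not a documented NuPIC endpoint.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/v2/health/ready", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, or HTTP error: treat as unreachable.
        return False


if __name__ == "__main__":
    # With no server listening on the port, this returns False.
    print(server_reachable("http://localhost:8000"))
```

Running this check first gives a clearer failure message than a mid-inference connection error.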
Now let's run inference!
python qa_inference.py
You should get an output that looks like this:
Question 9:
Question: Highlight the parts (if any) of this contract related to "Expiration Date" that should be reviewed by a lawyer. Details: On what date will the contract's initial term expire?
Correct answer: The term of this Agreement shall be ten (10) years (the "Term") which shall commence on the date upon which the Company delivers to Distributor the last Sample, as defined hereinafter.
Model's answer: The term of this Agreement shall be ten ( 10 ) years ( the " Term " ) which shall commence on the date upon which the Company delivers to Distributor the last Sample, as defined hereinafter.