Building a DevOps AI Assistant with LangChain, Ollama, and PostgreSQL
Introduction
Vector databases have emerged as a powerful tool for storing and searching high-dimensional data such as document embeddings, offering fast similarity queries. This article delves into leveraging PostgreSQL, a popular relational database, as a vector database with the pgvector extension.
We'll explore how to integrate it into a LangChain workflow for building a robust question-answering (QA) system.
What are Vector Databases?
Imagine a vast library holding countless documents. Traditional relational databases might classify them by subject or keyword. But what if you want to find documents most similar to a specific concept or question, even if keywords don't perfectly align? Vector databases excel in this scenario. They store data as numerical vectors in a high-dimensional space, where closeness in the space reflects semantic similarity. This enables efficient retrieval of similar documents based on their meaning, not just exact keyword matches.
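To make "closeness in the space" concrete, here is a dependency-free sketch of cosine similarity, the metric most vector stores (including pgvector) support for comparing embeddings. The three-dimensional "embeddings" below are toy values chosen purely for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (illustrative values only).
doc_monitoring = [0.9, 0.1, 0.2]
doc_alerting = [0.6, 0.4, 0.5]
doc_cooking = [0.1, 0.9, 0.7]

query = [0.85, 0.15, 0.25]
scores = {
    name: cosine_similarity(query, vec)
    for name, vec in [("monitoring", doc_monitoring),
                      ("alerting", doc_alerting),
                      ("cooking", doc_cooking)]
}
```

The document about monitoring scores highest because its vector points in nearly the same direction as the query vector, even though no keywords were compared at all.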
PostgreSQL as Vector Database
PostgreSQL, a widely adopted and versatile relational database system, can be empowered with vector search capabilities using the pgvector
extension.
You can maintain your database in any database management system. For a convenient deployment option, consider a cloud-based solution such as Rapidapp, which offers managed PostgreSQL databases and simplifies setup and maintenance.
Create a free database in Rapidapp in seconds here
If you maintain your PostgreSQL database yourself, you can enable the pgvector extension by executing the following command in each database:
CREATE EXTENSION vector;
LangChain: Building Flexible AI Pipelines
LangChain is a powerful framework that facilitates the construction of modular AI pipelines. It allows you to chain together various AI components seamlessly, enabling the creation of complex and customizable workflows.
Our Use Case: Embedding Data for AI-Powered QA
In this scenario, we leverage vector search to enhance a question-answering system. Here's how the components fit together:
- Data Preprocessing: Process your documents (e.g., web pages) using Natural Language Processing (NLP) techniques to extract relevant text content, then generate vector representations of the documents with an appropriate AI library (e.g., OllamaEmbeddings in our code).
- Embedding Storage with pgvector: Store the document vectors and their corresponding metadata (e.g., titles, URLs) in your PostgreSQL database table using pgvector.
- Building the LangChain Workflow: Construct a LangChain pipeline that incorporates the following elements:
  - Retriever: Retrieves relevant documents from your PostgreSQL database using vector similarity search powered by pgvector. When a user poses a question, the retriever searches for documents whose vector representations are closest to the query's vector.
  - Question Passage Transformer (optional): Further processes the retrieved documents to extract the snippets most relevant to the user's query.
  - Language Model (LLM): Uses the retrieved context (potentially augmented with question-specific passages) to formulate a comprehensive response to the user's question.
DevOps AI Assistant: Step-by-step Implementation
We will implement the application using Python, and we will use Poetry for dependency management.
Project Creation
Create a directory and initiate a project by running the following command:
poetry init
This will create a pyproject.toml file in the current directory.
Dependencies
You can install dependencies by running the following command:
poetry add langchain-cohere \
langchain-postgres \
langchain-community \
html2text \
tiktoken
Once you have installed the dependencies, create an empty main.py file to hold the business logic.
Preparing the PostgreSQL Connection URL
Once you have created your database on Rapidapp, or if you use your own database, you can construct the PostgreSQL connection URL as follows.
postgresql+psycopg://<user>:<pass>@<host>:<port>/<db>
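If you prefer to assemble this URL in Python, percent-encoding the password guards against special characters breaking the URL. The credentials below are placeholders for illustration:

```python
from urllib.parse import quote_plus

def build_connection_url(user, password, host, port, db):
    # SQLAlchemy-style URL with the psycopg 3 driver, as expected by langchain-postgres.
    return f"postgresql+psycopg://{user}:{quote_plus(password)}@{host}:{port}/{db}"

# Placeholder credentials for illustration only.
connection = build_connection_url("app_user", "p@ss:word", "localhost", 5432, "vectordb")
```

Here `p@ss:word` is encoded to `p%40ss%3Aword`, so the `@` and `:` in the password cannot be confused with the URL's own delimiters.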
Defining the Vector Store
from langchain_community.embeddings import OllamaEmbeddings
from langchain_postgres import PGVector

connection = "<connection_string>"
collection_name = "prometheus_docs"

embeddings = OllamaEmbeddings()

vectorstore = PGVector(
    embeddings=embeddings,
    collection_name=collection_name,
    connection=connection,
    use_jsonb=True,
)
As you can see, we use an embeddings object in the codebase. Your implementation can interact with different AI providers such as OpenAI, HuggingFace, or Ollama; the Embeddings interface provides a standard abstraction over all of them. In our case we use OllamaEmbeddings, since we will be using Ollama as the AI provider.
collection_name is the name of the collection in the PostgreSQL database where the vector documents are stored. In our case it will hold a couple of Prometheus documentation pages that help the AI provider answer users' questions.
PGVector is one of LangChain's many vector store implementations; it lets us run vector similarity search against the PostgreSQL database.
Indexing Documents
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.document_transformers import Html2TextTransformer
from langchain_text_splitters import RecursiveCharacterTextSplitter

urls = [
    "https://prometheus.io/docs/prometheus/latest/getting_started/",
    "https://prometheus.io/docs/prometheus/latest/federation/",
]
loader = AsyncHtmlLoader(urls)
docs = loader.load()

html_to_text = Html2TextTransformer()
docs_transformed = html_to_text.transform_documents(docs)

splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=1000, chunk_overlap=0
)
docs = splitter.split_documents(docs_transformed)

vectorstore.add_documents(docs)
With AsyncHtmlLoader we load two Prometheus documentation pages. Since we cannot work with raw HTML, Html2TextTransformer converts the pages to plain text. RecursiveCharacterTextSplitter then chunks the large text documents into manageable pieces that comply with vector store limitations, improve embedding efficiency, and can enhance retrieval accuracy. Finally, add_documents stores the processed chunks in the vector store.
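To see what the splitter does conceptually, here is a simplified, dependency-free sketch of fixed-size chunking. The real RecursiveCharacterTextSplitter is smarter: it prefers to break on separators such as paragraphs and sentences, and the tiktoken variant counts tokens rather than characters as this sketch does:

```python
def split_text(text, chunk_size=1000, chunk_overlap=0):
    # Naive fixed-window splitter: advance by chunk_size - chunk_overlap each step,
    # so consecutive chunks share chunk_overlap characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("x" * 2500, chunk_size=1000, chunk_overlap=0)
# 2500 characters -> pieces of 1000, 1000, and 500 characters
```

Overlap (unused above, since our splitter passes chunk_overlap=0) helps when a relevant sentence would otherwise be cut in half at a chunk boundary.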
Building the LangChain Workflow
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

retriever = vectorstore.as_retriever()
llm = Ollama()

message = """
Answer this question using the provided context only.

{question}

Context:
{context}
"""
prompt = ChatPromptTemplate.from_messages([("human", message)])

rag_chain = {"context": retriever, "question": RunnablePassthrough()} | prompt | llm

response = rag_chain.invoke("how to federate on prometheus")
print(response)
The code snippet above demonstrates how to use LangChain to retrieve information from a vector store and generate a response with a large language model (LLM) based on the retrieved information. Let's break it down step by step:
vectorstore.as_retriever() wraps the vector store as a retriever within LangChain; the retriever is responsible for fetching relevant documents for a query. Ollama() initializes the LLM instance that will generate the response to the question.
The multi-line string message is a prompt template with two placeholders: {question} holds the specific question you want to answer, and {context} holds the relevant background information. ChatPromptTemplate.from_messages turns this string into a chat prompt template.
In rag_chain, the question and the retrieved context are piped into the template to build the prompt, which is then passed to the LLM to generate the response; the whole expression forms a runnable chain. Finally, we invoke the chain with a question and print the response.
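One refinement worth noting: the retriever returns Document objects, so the raw list (metadata included) gets stringified into the prompt. A small helper of our own, say format_docs (a hypothetical name, not part of LangChain), can keep only the page text; it would be piped in front of the prompt as `{"context": retriever | format_docs, ...}`:

```python
def format_docs(docs):
    # Join only the text of each retrieved Document, separated by blank lines,
    # so the prompt is not cluttered with metadata.
    return "\n\n".join(doc.page_content for doc in docs)

# In the chain, the retriever's output would flow through the formatter:
# rag_chain = {"context": retriever | format_docs, "question": RunnablePassthrough()} | prompt | llm
```

This keeps the context the LLM sees clean and slightly shorter, which matters when the model's context window is limited.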
Conclusion
In this practical guide, we've delved into using PostgreSQL as a vector database, leveraging the pgvector extension. We explored how this approach can be used to build a context-aware AI assistant, focusing on Prometheus documentation as an example. By storing document embeddings alongside their metadata, we enabled the assistant to retrieve relevant information based on semantic similarity, going beyond simple keyword matching.
LangChain played a crucial role in this process. Its modular framework allowed us to effortlessly connect various AI components, like PGVector for vector retrieval and OllamaEmbeddings for interacting with our chosen AI provider. Furthermore, LangChain's ability to incorporate context within user questions significantly enhances the relevance and accuracy of the assistant's responses.
You can find the complete source code for this project on GitHub.