LangChain integration

Learn how to integrate Rememberizer as a LangChain retriever to provide your LangChain application with access to powerful vector database search.

Rememberizer integrates with LangChain through the RememberizerRetriever class, allowing you to easily incorporate Rememberizer's semantic search capabilities into your LangChain-powered applications. This guide explains how to set up and use this integration to build advanced LLM applications with access to your knowledge base.

Introduction

LangChain is a popular framework for building applications with large language models (LLMs). By integrating Rememberizer with LangChain, you can:

  • Use your Rememberizer knowledge base in RAG (Retrieval Augmented Generation) applications

  • Create chatbots with access to your documents and data

  • Build question-answering systems that leverage your knowledge

  • Develop agents that can search and reason over your information

The integration is available in the langchain_community.retrievers module.

Getting Started

Prerequisites

Before you begin, you need:

  1. A Rememberizer account with Common Knowledge created

  2. An API key for accessing your Common Knowledge

  3. Python environment with LangChain installed

For detailed instructions on creating Common Knowledge and generating an API key, see Registering and Using API Keys.

Installation

Install the required packages:
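For example, with pip (the RememberizerRetriever class ships in the langchain-community package):

```shell
pip install langchain langchain-community
```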

If you plan to use OpenAI models (as shown in examples below):
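For example:

```shell
pip install langchain-openai
```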

Authentication Setup

There are two ways to authenticate the RememberizerRetriever:

  1. Environment Variable: Set the REMEMBERIZER_API_KEY environment variable

  2. Direct Parameter: Pass the API key directly when initializing the retriever

Configuration Options

The RememberizerRetriever class accepts these parameters:

  • top_k_results (int, default: 10): Number of documents to return from search

  • rememberizer_api_key (str, default: None): API key for authentication (optional if set as the REMEMBERIZER_API_KEY environment variable)

Behind the scenes, the retriever makes API calls to Rememberizer's search endpoint with additional configurable parameters:

  • prev_chunks: Number of chunks before the matched chunk to include (default: 2)

  • next_chunks: Number of chunks after the matched chunk to include (default: 2)

  • return_full_content: Whether to return full document content (default: true)

Basic Usage

Here's a simple example of retrieving documents from Rememberizer using LangChain:

Understanding Document Structure

Each document returned by the retriever has:

  • page_content: The text content of the matched document chunk

  • metadata: Additional information about the document

Example of metadata structure:
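The exact fields vary with the source document and its integration; an illustrative payload (all values hypothetical) might look like this:

```python
# All values below are hypothetical; actual fields depend on the
# source document and its integration (Google Drive, Slack, etc.).
example_metadata = {
    "id": 13646493,                        # chunk identifier
    "document_id": "doc-abc123",           # parent document identifier
    "name": "product-faq.pdf",             # document name
    "type": "application/pdf",             # MIME type of the source
    "path": "/My Drive/product-faq.pdf",   # location within the integration
    "size": 337089,                        # size in bytes
    "created_time": "2024-01-15T10:30:00Z",
    "modified_time": "2024-02-01T08:00:00Z",
}
print(sorted(example_metadata))
```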

Advanced Examples

Building a RAG Question-Answering System

This example creates a question-answering system that retrieves information from Rememberizer and uses GPT-3.5 to formulate answers:

Building a Conversational Agent with Memory

This example creates a conversational agent that can maintain conversation history:

Best Practices

Optimizing Retrieval Performance

  1. Be specific with queries: More specific queries usually yield better results

  2. Adjust top_k_results: Start with 3-5 results and adjust based on application needs

  3. Use context windows: The retriever automatically includes context around matched chunks

Security Considerations

  1. Protect your API key: Store it securely using environment variables or secret management tools

  2. Create dedicated keys: Create separate API keys for different applications

  3. Rotate keys regularly: Periodically generate new keys and phase out old ones

Integration Patterns

  1. Pre-retrieval processing: Consider preprocessing user queries to improve search relevance

  2. Post-retrieval filtering: Filter or rank retrieved documents before passing to the LLM

  3. Hybrid search: Combine Rememberizer with other retrievers using EnsembleRetriever

Troubleshooting

Common Issues

  1. Authentication errors: Verify your API key is correct and properly configured

  2. No results returned: Ensure your Common Knowledge contains relevant information

  3. Rate limiting: Be mindful of API rate limits for high-volume applications

Debug Tips

  • Set the LangChain debug mode to see detailed API calls:

  • Examine raw search results before passing to LLM to identify retrieval issues
