Search for documents by semantic similarity

Semantic search endpoint with batch processing capabilities

Initiate a search operation with a query text of up to 400 words and receive the most semantically similar chunks from your stored knowledge. For question answering, convert your question into an ideal answer and submit it to retrieve real answers that are semantically similar.

Query parameters
q · string · Optional

Query text (up to 400 words) for which you wish to find semantically similar chunks of knowledge.

n · integer · Optional

Number of semantically similar chunks of text to return. Default: 3; use a larger value such as n=10 for more comprehensive results. If you do not receive enough information, try again with a larger n.

from · string (date-time) · Optional

Start of the time range for documents to be searched, in ISO 8601 format.

to · string (date-time) · Optional

End of the time range for documents to be searched, in ISO 8601 format.

Responses
200
Successful retrieval of documents
application/json
GET /api/v1/documents/search/ HTTP/1.1
Host: api.rememberizer.ai
Accept: */*
{
  "data_sources": [
    {
      "name": "text",
      "documents": 1
    }
  ],
  "matched_chunks": [
    {
      "document": {
        "id": 18,
        "document_id": "text",
        "name": "text",
        "type": "text",
        "path": "text",
        "url": "text",
        "size": 1,
        "created_time": "2025-06-25T19:51:37.177Z",
        "modified_time": "2025-06-25T19:51:37.177Z",
        "indexed_on": "2025-06-25T19:51:37.177Z",
        "integration": {
          "id": 1,
          "integration_type": "text"
        }
      },
      "matched_content": "text",
      "distance": 1
    }
  ]
}

Example Requests

curl -X GET \
  "https://api.rememberizer.ai/api/v1/documents/search/?q=How%20to%20integrate%20Rememberizer%20with%20custom%20applications&n=5&from=2023-01-01T00:00:00Z&to=2023-12-31T23:59:59Z" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"

Replace YOUR_JWT_TOKEN with your actual JWT token.

Query Parameters

Parameter     Type      Description
q             string    Required. The search query text (up to 400 words).
n             integer   Number of results to return. Default: 3. Use higher values (e.g., 10) for more comprehensive results.
from          string    Start of the time range for documents to be searched, in ISO 8601 format.
to            string    End of the time range for documents to be searched, in ISO 8601 format.
prev_chunks   integer   Number of preceding chunks to include for context. Default: 2.
next_chunks   integer   Number of following chunks to include for context. Default: 2.
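Putting these parameters together, here is a minimal Python sketch of the same request. The helper functions are our own illustration (not part of any SDK), and YOUR_JWT_TOKEN is a placeholder:

```python
import requests

API_URL = "https://api.rememberizer.ai/api/v1/documents/search/"

def build_search_params(q, n=3, from_=None, to=None, prev_chunks=2, next_chunks=2):
    """Assemble the documented query parameters, omitting unset time bounds."""
    params = {"q": q, "n": n, "prev_chunks": prev_chunks, "next_chunks": next_chunks}
    if from_:
        params["from"] = from_
    if to:
        params["to"] = to
    return params

def search(q, token, **kwargs):
    """Send the GET request with a bearer token and return the parsed JSON."""
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        params=build_search_params(q, **kwargs),
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```

Splitting parameter construction from the request keeps the query-building logic testable without network access.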

Response Format

{
  "data_sources": [
    {
      "name": "Google Drive",
      "documents": 3
    },
    {
      "name": "Slack",
      "documents": 2
    }
  ],
  "matched_chunks": [
    {
      "document": {
        "id": 12345,
        "document_id": "1aBcD2efGhIjK3lMnOpQrStUvWxYz",
        "name": "Rememberizer API Documentation.pdf",
        "type": "application/pdf",
        "path": "/Documents/Rememberizer/API Documentation.pdf",
        "url": "https://drive.google.com/file/d/1aBcD2efGhIjK3lMnOpQrStUvWxYz/view",
        "size": 250000,
        "created_time": "2023-05-10T14:30:00Z",
        "modified_time": "2023-06-15T09:45:00Z",
        "indexed_on": "2023-06-15T10:30:00Z",
        "integration": {
          "id": 101,
          "integration_type": "google_drive"
        }
      },
      "matched_content": "To integrate Rememberizer with custom applications, you can use the OAuth2 authentication flow to authorize your application to access a user's Rememberizer data. Once authorized, your application can use the Rememberizer APIs to search for documents, retrieve content, and more.",
      "distance": 0.123
    },
    // ... more matched chunks
  ],
  "message": "Search completed successfully",
  "code": "success"
}

Search Optimization Tips

For Question Answering

When searching for an answer to a question, try formulating your query as if it were an ideal answer. For example:

Instead of: "What is vector embedding?"
Try: "Vector embedding is a technique that converts text into numerical vectors in a high-dimensional space."

For a deeper understanding of how vector embeddings work and why this search approach is effective, see What are Vector Embeddings and Vector Databases?

Adjusting Result Count

  • Start with n=3 for quick, high-relevance results

  • Increase to n=10 or higher for more comprehensive information

  • If search returns insufficient information, try increasing the n parameter
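The retry advice above can be sketched as a simple escalation loop. Here `search_fn` stands in for whatever function issues the request, and the thresholds are illustrative defaults:

```python
def search_with_escalation(search_fn, query, n_values=(3, 10, 20), min_results=5):
    """Retry the search with increasing n until enough chunks come back.

    search_fn(query, n) is expected to return the parsed JSON response.
    """
    result = {}
    for n in n_values:
        result = search_fn(query, n)
        if len(result.get("matched_chunks", [])) >= min_results:
            return result
    return result  # best effort: results from the largest n tried
```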

Time-Based Filtering

Use the from and to parameters to focus on documents from specific time periods:

  • Recent documents: Set from to a recent date

  • Historical analysis: Specify a specific date range

  • Excluding outdated information: Set an appropriate to date
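For example, restricting a search to recent documents means computing the from bound in ISO 8601; the 30-day window below is an arbitrary choice:

```python
from datetime import datetime, timedelta, timezone

def recent_window(days=30):
    """Return (from, to) ISO 8601 timestamps covering the last `days` days."""
    now = datetime.now(timezone.utc)
    start = now - timedelta(days=days)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), now.strftime(fmt)

# Example: from_ts, to_ts = recent_window(7)
# params = {"q": "release notes", "from": from_ts, "to": to_ts}
```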

Batch Operations

For efficiently handling large volumes of search queries, Rememberizer supports batch operations to optimize performance and reduce API call overhead.

import requests
import time
import json
from concurrent.futures import ThreadPoolExecutor

def batch_search_documents(queries, num_results=5, batch_size=10):
    """
    Perform batch searches with multiple queries
    
    Args:
        queries: List of search query strings
        num_results: Number of results to return per query
        batch_size: Number of queries to process in parallel
    
    Returns:
        List of search results for each query
    """
    headers = {
        "Authorization": "Bearer YOUR_JWT_TOKEN",
        "Content-Type": "application/json"
    }
    
    results = []
    
    # Process queries in batches
    for i in range(0, len(queries), batch_size):
        batch = queries[i:i+batch_size]
        
        # Create a thread pool to send requests in parallel
        with ThreadPoolExecutor(max_workers=batch_size) as executor:
            futures = []
            
            for query in batch:
                params = {
                    "q": query,
                    "n": num_results
                }
                
                future = executor.submit(
                    requests.get,
                    "https://api.rememberizer.ai/api/v1/documents/search/",
                    headers=headers,
                    params=params
                )
                futures.append(future)
            
            # Collect results as they complete
            for future in futures:
                response = future.result()
                results.append(response.json())
        
        # Rate limiting - pause between batches to avoid API throttling
        if i + batch_size < len(queries):
            time.sleep(1)
    
    return results

# Example usage
queries = [
    "How to use OAuth with Rememberizer",
    "Vector database configuration options",
    "Best practices for semantic search",
    # Add more queries as needed
]

results = batch_search_documents(queries, num_results=3, batch_size=5)

Performance Considerations

When implementing batch operations, consider these best practices:

  1. Optimal Batch Size: Start with batch sizes of 5-10 queries and adjust based on your application's performance characteristics.

  2. Rate Limiting: Include delays between batches to prevent API throttling. A good starting point is 1 second between batches.

  3. Error Handling: Implement robust error handling to manage failed requests within batches.

  4. Resource Management: Monitor client-side resource usage, particularly with large batch sizes, to prevent excessive memory consumption.

  5. Response Processing: Process batch results asynchronously when possible to improve user experience.

For high-volume applications, consider implementing a queue system to manage large numbers of search requests efficiently.
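One minimal sketch of such a queue uses only the standard library; the worker calls a user-supplied search function so the example stays self-contained and network-free:

```python
import queue
import threading

def run_search_queue(queries, search_fn, num_workers=4):
    """Drain a queue of queries with a fixed pool of worker threads.

    search_fn(query) performs one search; results preserve input order.
    """
    work = queue.Queue()
    results = [None] * len(queries)
    for i, q in enumerate(queries):
        work.put((i, q))

    def worker():
        while True:
            try:
                i, q = work.get_nowait()
            except queue.Empty:
                return
            try:
                results[i] = search_fn(q)
            finally:
                work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Indexing results by position keeps output order stable even though workers finish out of order; a production version would also cap the queue size and retry failed requests.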

This endpoint provides powerful semantic search capabilities across your entire knowledge base. It uses vector embeddings to find content based on meaning rather than exact keyword matches.
