Search for documents by semantic similarity
Semantic search endpoint with batch processing capabilities
Initiate a search operation with a query text of up to 400 words and receive the most semantically similar chunks from your stored knowledge. For question answering, convert your question into an ideal answer and submit it to receive similar real answers.
The query text (up to 400 words) for which you wish to find semantically similar chunks of knowledge.
Number of semantically similar chunks of text to return. The default is 3; use a larger value such as n=10 for more comprehensive results. If you do not receive enough information, try again with a larger n value.
Start of the time range for documents to be searched, in ISO 8601 format.
End of the time range for documents to be searched, in ISO 8601 format.
Successful retrieval of documents
Bad request
Unauthorized
Not found
Internal server error
Example Requests
```bash
curl -X GET \
  "https://api.rememberizer.ai/api/v1/documents/search/?q=How%20to%20integrate%20Rememberizer%20with%20custom%20applications&n=5&from=2023-01-01T00:00:00Z&to=2023-12-31T23:59:59Z" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```

```javascript
const searchDocuments = async (query, numResults = 5, from = null, to = null) => {
  const url = new URL('https://api.rememberizer.ai/api/v1/documents/search/');
  url.searchParams.append('q', query);
  url.searchParams.append('n', numResults);
  if (from) {
    url.searchParams.append('from', from);
  }
  if (to) {
    url.searchParams.append('to', to);
  }
  const response = await fetch(url.toString(), {
    method: 'GET',
    headers: {
      'Authorization': 'Bearer YOUR_JWT_TOKEN'
    }
  });
  const data = await response.json();
  console.log(data);
};

searchDocuments('How to integrate Rememberizer with custom applications', 5);
```

Query Parameters
q
string
Required. The search query text (up to 400 words).
n
integer
Number of results to return. Default: 3. Use higher values (e.g., 10) for more comprehensive results.
from
string
Start of the time range for documents to be searched, in ISO 8601 format.
to
string
End of the time range for documents to be searched, in ISO 8601 format.
prev_chunks
integer
Number of preceding chunks to include for context. Default: 2.
next_chunks
integer
Number of following chunks to include for context. Default: 2.
Response Format
Search Optimization Tips
For Question Answering
When searching for an answer to a question, try formulating your query as if it were an ideal answer. For example:
Instead of: "What is vector embedding?"
Try: "Vector embedding is a technique that converts text into numerical vectors in a high-dimensional space."
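The tip above can be sketched as a small helper that builds the search URL from an answer-phrased query. The endpoint and parameter names come from the examples earlier on this page; the helper name itself is illustrative:

```javascript
// Build a search URL for the documents/search endpoint.
// Phrasing the query as an ideal answer tends to land closer,
// in embedding space, to the stored answers you want back.
const buildSearchUrl = (answerStyleQuery, numResults = 3) => {
  const url = new URL('https://api.rememberizer.ai/api/v1/documents/search/');
  url.searchParams.append('q', answerStyleQuery);
  url.searchParams.append('n', String(numResults));
  return url.toString();
};

// Instead of the question "What is vector embedding?",
// submit an answer-shaped query:
const searchUrl = buildSearchUrl(
  'Vector embedding is a technique that converts text into numerical vectors.',
  5
);
```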
Adjusting Result Count
Start with n=3 for quick, high-relevance results.
Increase to n=10 or higher for more comprehensive information.
If search returns insufficient information, try increasing the n parameter.
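The escalation strategy above can be sketched as a retry loop. Here `searchFn` stands in for a call to the search endpoint (for example, a wrapper around the fetch example earlier) and is assumed to return an array of result chunks; the function name and n ladder are illustrative:

```javascript
// Start with a small n, and retry with a larger n when too few
// useful chunks come back.
const searchWithEscalation = (searchFn, query, minChunks = 3) => {
  let results = [];
  for (const n of [3, 10, 20]) {   // escalation ladder from the tips above
    results = searchFn(query, n);
    if (results.length >= minChunks) break;
  }
  return results;
};
```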
Time-Based Filtering
Use the from and to parameters to focus on documents from specific time periods:
Recent documents: set from to a recent date.
Historical analysis: specify a specific date range.
Excluding outdated information: set an appropriate to date.
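As a sketch of the "recent documents" case, a small helper can compute the ISO 8601 timestamp for the from parameter (the helper name is illustrative; `Date.prototype.toISOString` emits a valid ISO 8601 string, including milliseconds):

```javascript
// Compute an ISO 8601 timestamp N days in the past, suitable
// for the `from` query parameter.
const isoDaysAgo = (days, now = new Date()) => {
  const d = new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
  return d.toISOString();
};

// Restrict a search to roughly the last 30 days:
const url = new URL('https://api.rememberizer.ai/api/v1/documents/search/');
url.searchParams.append('q', 'recent product updates');
url.searchParams.append('from', isoDaysAgo(30));
url.searchParams.append('to', new Date().toISOString());
```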
Batch Operations
For efficiently handling large volumes of search queries, Rememberizer supports batch operations to optimize performance and reduce API call overhead.
Batch Search
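One possible client-side batch-search sketch, assuming each query is sent as its own GET request to the search endpoint (the "batching" here is client-side orchestration, not a dedicated batch API; `searchFn` stands in for an async call like the searchDocuments example above):

```javascript
// Split a list of queries into fixed-size chunks.
const chunkQueries = (queries, size) => {
  const chunks = [];
  for (let i = 0; i < queries.length; i += size) {
    chunks.push(queries.slice(i, i + size));
  }
  return chunks;
};

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Fire each batch concurrently, then pause between batches to
// avoid throttling (see the rate-limiting tip below).
const batchSearch = async (searchFn, queries, batchSize = 5, delayMs = 1000) => {
  const results = [];
  for (const batch of chunkQueries(queries, batchSize)) {
    const settled = await Promise.allSettled(batch.map((q) => searchFn(q)));
    results.push(...settled);
    await sleep(delayMs);
  }
  return results;
};
```

Using Promise.allSettled rather than Promise.all means one failed query does not discard the rest of its batch, which matches the error-handling advice below.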
Performance Considerations
When implementing batch operations, consider these best practices:
Optimal Batch Size: Start with batch sizes of 5-10 queries and adjust based on your application's performance characteristics.
Rate Limiting: Include delays between batches to prevent API throttling. A good starting point is 1 second between batches.
Error Handling: Implement robust error handling to manage failed requests within batches.
Resource Management: Monitor client-side resource usage, particularly with large batch sizes, to prevent excessive memory consumption.
Response Processing: Process batch results asynchronously when possible to improve user experience.
For high-volume applications, consider implementing a queue system to manage large numbers of search requests efficiently.
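A minimal sketch of that queue idea, assuming a single worker drains requests in order so bursty callers never have more than one search in flight at a time (class and method names are illustrative, not part of the API):

```javascript
// FIFO queue with a single drain loop. `searchFn` is an async
// function that performs one search request.
class SearchQueue {
  constructor(searchFn) {
    this.searchFn = searchFn;
    this.pending = [];
    this.draining = false;
  }

  // Returns a promise that resolves with this query's results.
  enqueue(query) {
    return new Promise((resolve, reject) => {
      this.pending.push({ query, resolve, reject });
      this.drain();
    });
  }

  async drain() {
    if (this.draining) return;   // only one worker at a time
    this.draining = true;
    while (this.pending.length > 0) {
      const job = this.pending.shift();
      try {
        job.resolve(await this.searchFn(job.query));
      } catch (err) {
        job.reject(err);
      }
    }
    this.draining = false;
  }
}
```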
This endpoint provides powerful semantic search capabilities across your entire knowledge base. It uses vector embeddings to find content based on meaning rather than exact keyword matches.