A comprehensive open-source framework for building production-ready Retrieval-Augmented Generation (RAG) systems. This blueprint simplifies the development of RAG applications while providing full control over performance, resource usage, and evaluation capabilities.
While building or buying RAG systems has become increasingly accessible, deploying them as production-ready data products remains challenging. Our framework bridges this gap by providing a streamlined development experience with easy configuration and customization options, while maintaining complete oversight of performance and resource usage.
It comes with built-in monitoring and observability tools for better troubleshooting, integrated LLM-based metrics for evaluation, and human feedback collection capabilities. Whether you're building a lightweight knowledge base or an enterprise-grade application, this blueprint offers the flexibility and scalability needed for production deployments.
- Multiple Knowledge Base Integration: Seamless extraction from several data sources (Confluence, Notion, PDF)
- Wide Model Support: Works with a broad range of embedding and language models
- Vector Search: Efficient similarity search using vector stores
- Interactive Chat: User-friendly Chainlit interface for querying the knowledge base
- Performance Monitoring: Query and response tracking with Langfuse
- Evaluation: Comprehensive evaluation metrics using RAGAS
- Flexible Setup: Straightforward, configurable pipeline setup
- Core stack: Python • LlamaIndex • Chainlit • Langfuse • RAGAS
- Knowledge sources: Notion • Confluence • PDF files
- Embedding models: VoyageAI • OpenAI • Hugging Face
- Language models: OpenAI • Any OpenAI-compatible API models
Check the detailed Quickstart Setup
- Extraction:
  - Fetches content from the data source pages through their respective APIs
  - Handles rate limiting and retries
  - Extracts metadata (title, creation time, URLs, etc.)
- Processing (see the code sketch after this list):
  - Markdown-aware chunking using LlamaIndex's MarkdownNodeParser
  - Embedding generation using the selected embedding model
  - Vector storage in Qdrant
- Retrieval & Generation:
  - Context-aware retrieval with configurable filters
  - LLM-powered response generation
  - Human feedback collection
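As a rough illustration of the Processing and Retrieval & Generation stages, the minimal sketch below chunks already-extracted Markdown, embeds it into Qdrant, and answers a query with LlamaIndex. It skips the extraction step and hard-codes values that the blueprint itself drives from the files under configurations/; the Qdrant URL, collection name, embedding model, and sample document are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of the chunk -> embed -> store -> query flow (assumed names/values).
import qdrant_client
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.core.node_parser import MarkdownNodeParser
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Connect to Qdrant and expose it as a LlamaIndex vector store (assumed local instance).
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="knowledge_base")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Markdown-aware chunking: split extracted pages along their Markdown structure.
parser = MarkdownNodeParser()
documents = [
    Document(
        text="# Onboarding\n\n## Access\nRequest access via the IT portal.",
        metadata={"source": "confluence", "title": "Onboarding"},
    )
]
nodes = parser.get_nodes_from_documents(documents)

# Embed the chunks and persist them in Qdrant.
index = VectorStoreIndex(
    nodes,
    storage_context=storage_context,
    embed_model=OpenAIEmbedding(model="text-embedding-3-small"),
)

# Retrieval & generation: fetch the most similar chunks and let the LLM answer
# (uses the default OpenAI LLM, so an OPENAI_API_KEY must be set).
query_engine = index.as_query_engine(similarity_top_k=4)
print(query_engine.query("How do I request access?"))
```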
The system includes comprehensive evaluation capabilities:
- Automated Metrics (via RAGAS; see the evaluation sketch after this list):
  - Faithfulness • Answer Relevancy • Context Precision • Context Recall • Harmfulness
- Human Feedback:
  - Integrated feedback collection through Chainlit
  - Automatic dataset creation from positive feedback
  - Manual expert feedback support
- Observability (see the tracing sketch after this list):
  - Full tracing and monitoring with Langfuse
  - Separate traces for chat completion and deployment evaluation
  - Integration between Chainlit and Langfuse for comprehensive tracking
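To make the automated metrics concrete, the sketch below scores a single question/answer/context sample with RAGAS. It is a hedged example rather than the blueprint's evaluation code: the sample rows are invented, the metrics run LLM-based judgments (so an OpenAI-compatible key is needed), and the import path for the harmfulness metric varies across RAGAS releases (it sits under ragas.metrics.critique in the 0.1.x line assumed here).

```python
# Minimal sketch of offline evaluation with RAGAS (0.1.x-style API assumed).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, context_recall, faithfulness
from ragas.metrics.critique import harmfulness  # location of this metric varies by version

# Invented sample; in practice rows come from collected chats or a curated test set.
samples = Dataset.from_dict(
    {
        "question": ["How do I request access?"],
        "answer": ["Request access via the IT portal."],
        "contexts": [["## Access\nRequest access via the IT portal."]],
        "ground_truth": ["Access is requested through the IT portal."],
    }
)

# Each metric issues LLM-based judgments over the dataset and returns aggregate scores.
result = evaluate(
    samples,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall, harmfulness],
)
print(result)
```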
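For the observability side, a hand-instrumented trace with an attached feedback score might look roughly like the sketch below. It assumes the Langfuse Python SDK v2-style API and invents the trace, generation, score names, and model label; the blueprint's own Chainlit/Langfuse instrumentation lives in its source code and is not reproduced here.

```python
# Minimal sketch of tracing a chat turn and recording user feedback in Langfuse
# (SDK v2-style API assumed; all names and values below are illustrative).
from langfuse import Langfuse

# Credentials are read from LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST.
langfuse = Langfuse()

question = "How do I request access?"
answer = "Request access via the IT portal."

# One trace per chat turn, with the LLM call recorded as a generation inside it.
trace = langfuse.trace(name="chat-completion", input={"question": question})
generation = trace.generation(name="generate-answer", model="gpt-4o-mini", input=question)
generation.end(output=answer)
trace.update(output={"answer": answer})

# A thumbs-up from the chat UI can be attached to the same trace as a score.
langfuse.score(trace_id=trace.id, name="user-feedback", value=1.0, comment="thumbs up")
langfuse.flush()  # ensure buffered events are sent before the process exits
```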
```
.
├── build/               # Build and deployment scripts
│   └── workstation/     # Build scripts for workstation setup
├── configurations/      # Configuration and secrets files
├── res/                 # Assets
└── src/                 # Source code
    ├── augmentation/    # Retrieval and UI components
    ├── common/          # Shared utilities
    ├── embedding/       # Data extraction and embedding
    ├── evaluate/        # Evaluation system
    └── tests/           # Unit tests
```
For detailed documentation on setup, configuration, and development: