⚡ Legible is a GenBI (Generative BI) tool: query any database in natural language and get accurate SQL (Text-to-SQL), charts (Text-to-Chart), and AI-powered business intelligence in seconds.
| Feature | What you get | Why it matters |
|---|---|---|
| Talk to Your Data | Ask in any language → precise SQL & answers | Slash the SQL learning curve |
| GenBI Insights | AI-written summaries, charts & reports | Decision-ready context in one click |
| Semantic Layer | MDL models encode schema, metrics, joins | Keeps LLM outputs accurate & governed |
| Embed via API | Generate queries & charts inside your apps (API Docs) | Build custom agents, SaaS features, chatbots |
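To make the embed-via-API row concrete, here is a minimal sketch of generating SQL from inside your own app. Everything in it (the endpoint path, field names, and helper functions) is a hypothetical illustration, not Legible's actual API; consult the API Docs for the real contract.

```python
import json
from urllib import request

# Hypothetical endpoint and payload shape, for illustration only;
# see the API Docs for Legible's actual contract.
API_URL = "http://localhost:3000/api/v1/generate_sql"

def build_payload(question: str, project_id: str) -> dict:
    """Assemble a Text-to-SQL request for a natural-language question."""
    return {
        "projectId": project_id,   # which semantic-layer project to query
        "question": question,      # the user's natural-language question
        "language": "en",          # language for the generated answer
    }

def ask(question: str, project_id: str = "default") -> dict:
    """POST the question and return the parsed JSON response."""
    body = json.dumps(build_payload(question, project_id)).encode()
    req = request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Show the request body without hitting a live server.
    print(json.dumps(build_payload("Top 5 customers by revenue", "default"), indent=2))
```

The same pattern extends to chart generation: swap the endpoint and add the fields your use case needs.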
Legible is simple to use: you can set it up in about three minutes and start interacting with your data.
If your data source is not listed here, vote for it in our GitHub discussion thread; your votes help us decide which data sources to support next.
- Athena (Trino)
- Redshift
- BigQuery
- DuckDB
- Databricks
- PostgreSQL
- MySQL
- Microsoft SQL Server
- ClickHouse
- Oracle
- Trino
- Snowflake
Legible supports integration with various Large Language Models (LLMs), including but not limited to:
- OpenAI Models
- Azure OpenAI Models
- DeepSeek Models
- Google AI Studio – Gemini Models
- Vertex AI Models (Gemini + Anthropic)
- Bedrock Models
- Anthropic API Models
- Groq Models
- Ollama Models
- Databricks Models
Check configuration examples here!
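Configuration details vary between releases, so the fragment below is only a sketch of what an LLM provider entry might look like; the field names are assumptions, not the real schema. Use the linked configuration examples as the source of truth.

```yaml
# Hypothetical provider entry; field names are illustrative only.
llm:
  provider: openai
  model: gpt-4o
  api_key: ${OPENAI_API_KEY}   # read from the environment, never hard-coded
  timeout: 120                 # seconds; generous for large schemas
```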
> **Caution:** The performance of Legible depends significantly on the capabilities of the LLM you choose. We strongly recommend using the most capable model available; less capable models may produce slower responses or inaccurate outputs.
See the Legible documentation for full details.
Legible includes a full agent sandbox system powered by NVIDIA OpenShell and NemoClaw. Agents run in isolated containers with policy-enforced access to your semantic layer via MCP.
OpenShell is NVIDIA's open-source sandbox runtime. It provisions lightweight containers on your local machine, each with its own network policy, credential injection, and resource limits. Legible uses OpenShell to run AI coding agents (Claude Code, Codex, OpenCode, Copilot) that can query your data through the Legible MCP server.
```shell
# Create an agent sandbox
legible agent create my-analyst --type claude

# From a community sandbox image
legible agent create my-analyst --from ollama

# From a blueprint with an inference profile
legible agent create my-analyst --blueprint legible-default --profile nvidia
```

NemoClaw provides inference routing and network policy enforcement for agent sandboxes. It controls which endpoints an agent can reach and routes LLM inference requests through configurable provider profiles (NVIDIA, OpenAI, Anthropic, local Ollama, etc.).
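As a rough illustration of that endpoint control, a network policy might look like the sketch below. The schema here is an assumption for illustration, not NemoClaw's actual policy format.

```yaml
# Hypothetical network policy; the real NemoClaw schema may differ.
egress:
  allow:
    - host: api.openai.com        # OpenAI inference
    - host: api.anthropic.com     # Anthropic inference
    - host: localhost
      port: 11434                 # local Ollama
  default: deny                   # block everything else
```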
Blueprints are declarative YAML specs that define everything an agent needs: sandbox image, inference profiles, network policies, and resource limits. Legible ships with built-in blueprints and supports custom ones.
```yaml
# Example blueprint structure
agent:
  type: claude
  components:
    sandbox:
      image: ghcr.io/nvidia/openshell/sandbox-base:latest
    inference:
      profiles:
        nvidia:
          model: meta/llama-3.3-70b-instruct
        anthropic:
          model: claude-sonnet-4-20250514
    policies:
      network: policy.yaml
```

You can also use Community Sandboxes: pre-built environments from the OpenShell Community catalog, including base, Ollama, OpenClaw, and SDG images.
For more details, see the Agents documentation and the CLI guide.
Subscribe to our blog and follow us on LinkedIn.
- Star ⭐ the repo to show support (it really helps).
- Open an issue for bugs, ideas, or discussions.
- Read Contribution Guidelines for setup & PR guidelines.
- Join everyone in our Discord for real-time help and previews.
- Explore our public roadmap to stay updated on upcoming features and improvements!
Please note that our Code of Conduct applies to all Legible community channels. We encourage everyone to read and follow it.


