Hi everyone,
I built Intelligence Studio, an open-source workbench that lets you browse, test, analyse, and integrate with 640+ Databricks REST APIs -- all from one interface. No more juggling docs, curl commands, Postman collections, and multiple browser tabs. I'd love your feedback.
The Problem
Working with Databricks APIs today means:
- 640+ endpoints scattered across documentation -- hard to find the right one
- Manual testing with curl, Postman, or custom scripts for every request
- Tedious, error-prone token management across multiple workspaces
- Cryptic error messages with no guidance on root cause or fix
- No built-in SQL editor for quick queries without switching to another tool
- Writing boilerplate code every time you want to call an API from your app
What Intelligence Studio Does
A single app that combines API explorer, SQL editor, AI assistant, and code generator -- all connected to your Databricks workspace.
Core Features
| Feature | What It Does |
|---|---|
| API Catalog | 640+ Databricks endpoints (500+ workspace + account APIs), searchable and organised |
| Request Playground | Auto-generated request bodies, method/path builder, response formatting, pagination |
| SQL Query Editor | Full editor with syntax highlighting, Unity Catalog browser (Catalog > Schema > Table > Column) |
| Azure Multi-Workspace Login | OAuth-based progressive selection (Tenant > Subscription > Workspace) |
| Request History | Full replay capability with favourites and version tracking |
| Data Visualisation | 12 chart types with auto-detection from API responses |
| Code Export | Python, cURL, JavaScript, TypeScript, Go, PowerShell -- one click |
| Integration Export | Postman, Insomnia, OpenAPI 3.0, GitHub Actions |
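Under the hood, every playground call is just an authenticated HTTPS request to your workspace, with the token travelling only inside that one request. A minimal stdlib sketch of that pattern (the helper name is mine, not the project's code; the host and token are placeholders):

```python
from urllib.request import Request

def build_databricks_request(host: str, token: str, path: str,
                             method: str = "GET") -> Request:
    """Build a per-request-authenticated call to a Databricks REST endpoint.

    Nothing is stored server-side: the bearer token lives only on this
    one request object, mirroring the tool's stateless-proxy approach.
    """
    return Request(
        f"https://{host}{path}",
        method=method,
        headers={"Authorization": f"Bearer {token}"},
    )

# Example against the Jobs API (a real Databricks endpoint path);
# the workspace host and PAT below are made up.
req = build_databricks_request(
    "adb-1234567890.0.azuredatabricks.net",
    "dapi-example-token",
    "/api/2.1/jobs/list",
)
print(req.full_url)
```

Sending it (e.g. with `urllib.request.urlopen` or httpx, which the backend uses) is the only remaining step; the point is that credentials never outlive the request.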
13 AI Assistant Features
This is where it gets interesting. Intelligence Studio ships with 13 AI-powered features built in throughout the app (not bolted on), powered by Databricks Foundation Models:
- Data Q&A -- Ask questions in plain English, get SQL generated and executed ("Show me all tables in catalog X with row counts")
- Find Endpoint -- Describe what you want in natural language ("list all running jobs") and get the right API endpoint
- Analyse Response -- Pattern detection and insights from any API response
- Error Analysis -- Root cause diagnosis with step-by-step fix suggestions for cryptic API errors
- Code Generation -- Export tested requests as production-ready code in 6 languages
- Workflow Builder -- Chain multi-step API calls where one step's output feeds into the next
- Agent Chat -- AI agent that calls Databricks APIs on your behalf
- Test Data Generator -- Schema-aware synthetic test payloads
- API Docs -- Browsable, searchable endpoint documentation
- Query Builder -- Natural language to SQL conversion with execution
- Data Visualisation -- 12 chart types with auto-detection, dashboards, lineage graphs
- AI Scripting -- Python script generation with sandboxed execution
- Prompt Manager -- Customise 14 system prompts for your organisation
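The Workflow Builder idea -- chaining calls so one step's output feeds into the next -- reduces to a fold over step functions. A minimal, self-contained sketch (names are illustrative, not the project's API), with plain functions standing in for real API calls:

```python
from functools import reduce
from typing import Any, Callable

Step = Callable[[Any], Any]

def run_workflow(steps: list[Step], initial: Any = None) -> Any:
    """Run steps in order, feeding each step's output into the next."""
    return reduce(lambda result, step: step(result), steps, initial)

# Stand-ins for two chained Databricks calls:
#   1) list jobs   2) fetch details for the first job returned
list_jobs = lambda _: {"jobs": [{"job_id": 42}, {"job_id": 7}]}
get_first_job = lambda resp: {"job_id": resp["jobs"][0]["job_id"],
                              "state": "RUNNING"}

result = run_workflow([list_jobs, get_first_job])
print(result)  # {'job_id': 42, 'state': 'RUNNING'}
```

A real workflow engine adds error handling, retries, and mapping between one response's fields and the next request's parameters, but the data flow is exactly this pipeline shape.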
Tech Stack
Frontend: React 18 + TypeScript 5 + Vite 6 + Zustand + Tailwind CSS 4 + Recharts
Backend: FastAPI + Uvicorn + httpx + Pydantic v2 + azure-identity
Desktop: Electron 28 (macOS arm64 + Windows)
CLI: Python Click + Rich
AI: Databricks Foundation Models (Llama, Claude, Gemma)
Security First
- No server-side token storage -- each request carries its own credentials; tokens are never persisted
- Sandboxed script execution -- AI-generated Python scripts validated against blocklist before running
- Pydantic v2 validation -- all inputs validated before processing
- Per-request auth -- backend acts as an authenticated proxy, nothing stored
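The "validated against a blocklist" step can be sketched with Python's `ast` module: parse the generated script and reject it if it imports or references anything on a deny-list. This is my own minimal illustration (blocklist contents included), not the project's actual sandbox:

```python
import ast

# Illustrative deny-list; a real sandbox would be broader.
BLOCKLIST = {"os", "subprocess", "shutil", "socket", "eval", "exec"}

def is_script_allowed(source: str) -> bool:
    """Return False if the script imports or names anything blocklisted."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(a.name.split(".")[0] in BLOCKLIST for a in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in BLOCKLIST:
                return False
        elif isinstance(node, ast.Name) and node.id in BLOCKLIST:
            return False
    return True

print(is_script_allowed("import subprocess"))      # False
print(is_script_allowed("print(sum([1, 2, 3]))"))  # True
```

Worth noting: blocklists alone are easy to bypass (string-built imports, `getattr` tricks), so they are best paired with process-level isolation, which is presumably what the sandboxed execution provides.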
Run It Your Way
| Mode | How |
|---|---|
| Web App | make dev -- React + FastAPI on localhost |
| macOS Desktop | Download .zip, double-click to launch |
| Windows Desktop | NSIS installer or portable .exe |
| CLI | make cli-install for terminal workflows |
Quick Start (5 minutes)
git clone https://github.com/viral0216/Intelligence-Studio.git
cd Intelligence-Studio
make install
make dev
# Open http://localhost:5173
# Configure Databricks host + token in Settings
Requirements: Python 3.11+, Node.js 18+, a Databricks workspace and personal access token (PAT)
Enterprise Features
- 14 feature flags for team policies and governance
- Cost tracking per AI model call with token usage metadata
- 14 customisable AI prompts -- tailor the AI to your organisation's context
- Full export suite -- PDF, Word, Markdown, Excel, CSV, JSON
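Per-call cost tracking with token metadata boils down to accumulating usage records per model. A minimal sketch with illustrative names and made-up per-1K-token prices (real rates depend on the model and provider, and this is not the project's implementation):

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical prices per 1K tokens -- placeholders, not real rates.
PRICE_PER_1K = {"llama-3": 0.0005, "claude": 0.0030}

@dataclass
class Usage:
    calls: int = 0
    tokens: int = 0
    cost: float = 0.0

class CostTracker:
    def __init__(self) -> None:
        self.by_model: dict[str, Usage] = defaultdict(Usage)

    def record(self, model: str, prompt_tokens: int,
               completion_tokens: int) -> None:
        """Accumulate one model call's token usage and estimated cost."""
        total = prompt_tokens + completion_tokens
        u = self.by_model[model]
        u.calls += 1
        u.tokens += total
        u.cost += total / 1000 * PRICE_PER_1K.get(model, 0.0)

tracker = CostTracker()
tracker.record("llama-3", prompt_tokens=800, completion_tokens=200)
tracker.record("llama-3", prompt_tokens=1500, completion_tokens=500)
print(tracker.by_model["llama-3"])
```

Keeping prompt/completion counts separately per call (as the feature's "token usage metadata" suggests) is a small extension of the same record shape.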
By the Numbers
| Metric | Value |
|---|---|
| API endpoints cataloged | 640+ |
| AI assistant features | 13 |
| Code generation languages | 6 |
| Export formats | 10+ |
| Chart types | 12 |
| Feature flags | 14 |
| Customisable prompts | 14 |
| Platforms | Web, macOS, Windows, CLI |
What I'm Looking For
- Feedback -- Does this solve real pain points in your Databricks workflow?
- Feature requests -- What's missing? New chart types? More export formats?
- Bug reports -- It's v1.0, so expect some rough edges
- Contributors -- PRs welcome! Areas: new endpoint presets, AI prompt templates, chart types, tests, docs
Links
- GitHub: https://github.com/viral0216/Intelligence-Studio
If you spend any time working with Databricks APIs, I'd love to hear how you're doing it today and what would make your life easier. Thanks!