Many Databricks engineers have asked whether it's possible to use the Claude Code CLI directly against Databricks-hosted Claude models instead of Anthropic's cloud API. Doing so enables repo-aware AI workflows: navigation, diffs, testing, and MCP tools, right inside their Databricks projects.
I recently built an open-source tool called Lynkr, which acts as a Claude Code-compatible backend that runs locally or inside a Databricks environment. The proxy forwards /v1/messages requests to Databricks Serving Endpoints, while maintaining Claude Code's structure for chat, tools, context, and workspace actions.
GitHub Repo:
https://github.com/vishalveerareddy123/Lynkr
The goal is to make Databricks a first-class environment for LLM-driven development workflows, while keeping everything transparent and configurable.
What Lynkr Enables for Databricks Users
With Lynkr running locally or in a VM, you can:
Use the Claude Code CLI with Databricks models
No need for Anthropic cloud access. Just point the CLI at:
export ANTHROPIC_BASE_URL=http://localhost:8080
Connect to Databricks Serving Endpoints
The proxy normalizes requests into the Databricks format and returns a Claude-compatible response.
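To make the translation concrete, here is a minimal sketch of what that normalization could look like. This is illustrative only: the Databricks-side field names and response shape are assumptions, not Lynkr's actual implementation.

```python
# Hypothetical sketch of the request/response translation a proxy like
# Lynkr performs. Field names on the Databricks side are assumptions.

def anthropic_to_databricks(payload: dict) -> dict:
    """Map a Claude /v1/messages request onto a chat-style serving payload."""
    return {
        "messages": [
            {"role": m["role"], "content": m["content"]}
            for m in payload.get("messages", [])
        ],
        "max_tokens": payload.get("max_tokens", 1024),
        "temperature": payload.get("temperature", 0.7),
    }

def databricks_to_anthropic(resp: dict, model: str) -> dict:
    """Wrap a serving response back into a Claude-compatible envelope."""
    text = resp["choices"][0]["message"]["content"]
    return {
        "type": "message",
        "role": "assistant",
        "model": model,
        "content": [{"type": "text", "text": text}],
    }
```

The key point is that the CLI never knows the difference: it sends and receives Anthropic-shaped JSON, and the proxy owns the mapping in both directions.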
Enable repo-aware intelligence
Lynkr maintains a lightweight SQLite index of your repo, including:
symbol search
cross-file references
framework/language detection
auto-generated CLAUDE.md project summary
This feeds richer context into the model.
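For intuition, a symbol index like this can be surprisingly small. The sketch below builds an in-memory SQLite table of definitions; the schema and the regex are illustrative assumptions, not Lynkr's actual format.

```python
# Toy version of a SQLite symbol index for a repo. The schema and the
# definition-matching regex are assumptions for illustration only.
import re
import sqlite3

DEF_RE = re.compile(r"^\s*(?:def|class|function)\s+(\w+)", re.M)

def build_index(files: dict) -> sqlite3.Connection:
    """files maps path -> source text; returns an in-memory symbol table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE symbols (name TEXT, path TEXT)")
    for path, text in files.items():
        for m in DEF_RE.finditer(text):
            db.execute("INSERT INTO symbols VALUES (?, ?)", (m.group(1), path))
    return db

def find_symbol(db: sqlite3.Connection, name: str) -> list:
    """Return every file path that defines the given symbol."""
    rows = db.execute("SELECT path FROM symbols WHERE name = ?", (name,))
    return [row[0] for row in rows]
```

Even this toy version supports cross-file symbol lookup, which is the kind of structured context a proxy can inject into prompts alongside the raw source.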
Use Git + workspace tools
The proxy implements many of the Git + tooling features you get with Claude Code:
status, diff, stage, commit, push
automated diff summaries
test-gating & policies
release-note generation
Integrate Model Context Protocol (MCP) servers
Lynkr automatically discovers MCP manifests (e.g., GitHub, Jira, internal tools) and exposes them as Claude Code tools.
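Conceptually, discovery can be as simple as walking the workspace for manifest files and registering each declared tool. The filename pattern (mcp.json) and manifest shape below are assumptions for illustration, not Lynkr's actual discovery logic.

```python
# Illustrative sketch of MCP manifest discovery. The mcp.json filename
# and the {"tools": [...]} manifest shape are assumptions.
import json
from pathlib import Path

def discover_mcp_tools(root: str) -> list:
    """Collect tool entries from any mcp.json manifests under root."""
    tools = []
    for manifest in sorted(Path(root).rglob("mcp.json")):
        data = json.loads(manifest.read_text())
        for tool in data.get("tools", []):
            tools.append({"name": tool["name"], "source": str(manifest)})
    return tools
```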
Use prompt caching
You can cache repeated prompts (configurable TTL + LRU size), drastically reducing Databricks compute calls for iterative work.
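The "TTL + LRU" combination is worth spelling out: entries expire after a fixed lifetime, and when the cache is full the least recently used entry is evicted first. Here is a minimal sketch of that policy; the class and its details are assumptions for illustration, not Lynkr's code.

```python
# Sketch of a TTL + LRU prompt cache, as described above. Eviction
# details here are illustrative assumptions, not Lynkr's implementation.
import time
from collections import OrderedDict

class PromptCache:
    def __init__(self, max_size: int = 128, ttl: float = 300.0):
        self.max_size = max_size
        self.ttl = ttl
        self._data = OrderedDict()  # prompt -> (timestamp, response)

    def get(self, prompt: str):
        entry = self._data.get(prompt)
        if entry is None or time.monotonic() - entry[0] > self.ttl:
            self._data.pop(prompt, None)  # drop expired entry, if any
            return None
        self._data.move_to_end(prompt)    # mark as recently used
        return entry[1]

    def put(self, prompt: str, response: str):
        self._data[prompt] = (time.monotonic(), response)
        self._data.move_to_end(prompt)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used
```

Every cache hit is a model call that never reaches the serving endpoint, which is where the compute savings for iterative, repetitive work come from.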
Architecture (High Level)
Claude Code CLI
        ↓
   Lynkr Proxy
        ↓
Databricks Model Serving
   + Repo Indexing
   + MCP Tools
   + Git / Diff Tools
Everything is visible and tweakable; no hidden backend logic.
Getting Started (Databricks Setup)
1. Install Lynkr
npm install -g lynkr
lynkr start
2. Configure environment
Create a .env file:
MODEL_PROVIDER=databricks
DATABRICKS_API_BASE=https://<your-workspace>.cloud.databricks.com
DATABRICKS_API_KEY=<your-databricks-pat>
WORKSPACE_ROOT=/path/to/your/repo
PORT=8080
PROMPT_CACHE_ENABLED=true
3. Point Claude Code CLI to Lynkr
export ANTHROPIC_BASE_URL=http://localhost:8080/
export ANTHROPIC_API_KEY="#dummy"
4. Use Claude Code normally
Commands like:
claude explain file.js
claude diff
claude review
claude apply
will now run against Databricks models.
Example: Rebuilding the Repo Index
This is one tool exposed by the proxy:
curl http://localhost:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-session-id: test" \
  ...
This refreshes CLAUDE.md, symbol search tables, and all metadata.
Why This Matters for Databricks
Databricks is becoming a powerful environment for:
LLM-assisted development
agent workflows
code automation
data/ETL debugging with AI
internal tooling built on Claude models
Lynkr helps bridge the gap between:
"I have a Databricks model endpoint"
and
"I want Claude Code-style interactions with my repo"
without relying on a closed backend.
Roadmap
Upcoming features:
deeper LSP integration (for even smarter repo context)
richer diff-thread reviews
expanded MCP tooling
fine-grained Git risk scoring
historical test dashboards
Links
Dev.to
DeepWiki
Closing
If you're exploring AI-assisted development inside Databricks, or want to experiment with Claude tools locally, I'd love feedback. Feel free to reply here or open issues/PRs on GitHub.