Hi everyone,
I'm trying to set up a smooth local-development workflow for Databricks and would love to hear how others are doing it.
My Current Setup
I do most of my development in Cursor (VS Code-based editor) because the AI agents make coding much faster.
After development I push the code to Git, then open Databricks, pull the repo, and only then can I run and test the code in a Databricks notebook or job.
This back-and-forth is slow; I'd like to run and test directly from my local IDE if possible.
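As a stopgap I've scripted the "pull the repo in Databricks" step with the databricks-sdk so I at least don't have to click through the UI. This is just a minimal sketch; it assumes `pip install databricks-sdk`, that auth comes from `DATABRICKS_HOST`/`DATABRICKS_TOKEN` env vars or a configured profile, and the repo ID and branch name are placeholders:

```python
# Stopgap sketch: fast-forward a Databricks Git folder (repo) to the
# latest commit on a branch after a local `git push`.
# Assumes databricks-sdk is installed and workspace auth is configured
# via environment variables or ~/.databrickscfg.
def pull_repo(repo_id: int, branch: str = "main") -> None:
    from databricks.sdk import WorkspaceClient  # imported lazily on purpose

    w = WorkspaceClient()
    # Updates the workspace repo checkout to the head of `branch`,
    # equivalent to clicking "Pull" in the Databricks Repos UI.
    w.repos.update(repo_id=repo_id, branch=branch)
```

It still means testing happens in Databricks, which is why I'm asking about better options below.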
What I Tried
1. Databricks VS Code Extension
I saw that the Databricks docs mention a VS Code extension, but I'm not sure it works outside stock VS Code.
Has anyone successfully used the Databricks VS Code extension inside Cursor?
2. Databricks Connect
I also tried Databricks Connect. Tutorials show connecting to a Personal Compute cluster, but:
In my organization, compute is owned by a principal / service account.
I'm added as a user, but when I list clusters through Databricks Connect, I don't see any clusters, so the connect step fails.
I'm not sure whether this is a permissions issue, or whether Databricks Connect only works with Personal Compute.
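For reference, this is roughly what my connection attempt looks like. It's a sketch with placeholder values, and the assumption (which I'd like someone to confirm) is that pointing `.remote()` at an explicit `cluster_id` should also work for a shared or service-principal-owned cluster, provided I have Can Attach To permission on it:

```python
# Sketch of a Databricks Connect (v2 API, DBR 13+) session builder.
# Host, token, and cluster_id are placeholders, not real values.
def get_spark(cluster_id: str):
    # Requires `pip install databricks-connect` matching the cluster's
    # Databricks Runtime version.
    from databricks.connect import DatabricksSession  # imported lazily

    return (
        DatabricksSession.builder
        .remote(
            host="https://<workspace-url>",   # placeholder
            token="<personal-access-token>",  # placeholder
            cluster_id=cluster_id,            # explicit ID instead of listing clusters
        )
        .getOrCreate()
    )
```

If naming the cluster ID directly works even when the cluster doesn't show up in my cluster list, that would solve my problem.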
My Questions
How are you all developing locally and executing code in Databricks?
Do you run code locally against DBFS / clusters, or do you push to repos and test in notebooks?
Does Databricks Connect work with shared or service-principal-owned clusters, or only with Personal Compute?
Is there any known workaround to make the VS Code extension work in Cursor?
Is there any other method I'm missing for local development + remote execution?
Any advice, examples, or even your workflow setups would be super helpful.
Thanks!