To do that, Databricks needs network access to your on-prem LAN.
This means configuring network security groups or firewall rules, and ideally setting up a private endpoint as well.
You also have to make sure that your Databricks cluster can actually reach your on-prem database.
Now, assuming all of this is working, here is what actually happens:
Databricks connects to your database, fetches the data you defined in your Spark script, and moves it to the Databricks workers, which run in the cloud.
Databricks performs the transformations there and finally moves the result back to your on-prem system.
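In PySpark, that round trip looks roughly like the sketch below. Everything in it is a placeholder assumption, not something from your setup: the JDBC URL, driver, table names, and credentials are invented for illustration, and you would need the matching JDBC driver installed on the cluster.

```python
# Sketch of the read -> transform -> write-back round trip.
# All connection details are placeholders, not real endpoints.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("onprem-roundtrip").getOrCreate()

jdbc_url = "jdbc:postgresql://onprem-db.internal:5432/sales"  # hypothetical on-prem host
props = {
    "user": "etl_user",          # hypothetical credentials
    "password": "...",
    "driver": "org.postgresql.Driver",
}

# 1. Databricks pulls the data you defined over the network into the cloud workers.
df = spark.read.jdbc(url=jdbc_url, table="public.orders", properties=props)

# 2. Transformations run on the cloud workers, not on-prem.
agg = df.groupBy("customer_id").agg(F.sum("amount").alias("total_amount"))

# 3. The result travels back over the same network path to the on-prem database.
agg.write.jdbc(url=jdbc_url, table="public.order_totals",
               mode="overwrite", properties=props)
```

Note that both the read and the write cross the network boundary, which is exactly where the latency mentioned below comes from.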
Depending on the sizing of the underlying on-prem database (and your network bandwidth), this can take a while.
So your data will be sent to the cloud no matter what; in the best case it only ever exists in the workers' RAM.
I am not sure if that is what you want.
Most of the time, on-prem data is copied to cloud storage first and processed from there.
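That pattern is usually a one-time (or scheduled) copy into cloud storage, with every subsequent job reading the copy instead of hitting the on-prem database again. A minimal sketch, assuming an Azure storage path and table names that are purely illustrative:

```python
# Sketch: land on-prem data in cloud storage once, then process from there.
# The storage path, endpoint, and credentials are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("staged-processing").getOrCreate()

landing_path = "abfss://landing@mystorageacct.dfs.core.windows.net/sales/orders"

# Scheduled ingest job: pull from the on-prem database and persist to cloud storage.
df = spark.read.jdbc(
    url="jdbc:postgresql://onprem-db.internal:5432/sales",  # placeholder endpoint
    table="public.orders",
    properties={"user": "etl_user", "password": "...",
                "driver": "org.postgresql.Driver"},
)
df.write.format("parquet").mode("overwrite").save(landing_path)

# All later jobs read the cloud copy; the on-prem database is no longer in the hot path.
orders = spark.read.parquet(landing_path)
orders.createOrReplaceTempView("orders")
```

The design advantage is that the slow network hop happens once per ingest, and the cluster then reads from storage that sits next to the workers.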