Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

adrianhernandez
by New Contributor III
  • 290 Views
  • 3 replies
  • 2 kudos

Convert notebook to Python library

Looking for ways to convert a Databricks notebook to a Python library. Some context: we don't want to give execute permissions to shared notebooks, as we want to hide code from users. The proposed solution is to have our shared notebook converted into a Python ...

Latest Reply
mark_ott
Databricks Employee
  • 2 kudos

The best way to share code from a Databricks notebook as a reusable module while hiding implementation details from users—without using wheels or granting direct notebook execution permissions—is to convert your notebook into a Python module, store i...
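For illustration, a minimal sketch of that pattern, assuming the shared logic ends up as a plain .py file; the path, module name, and function below are hypothetical, since the reply is truncated:

import sys

# Assumed location of the shared code, e.g. a workspace folder or UC volume.
sys.path.append("/Workspace/Shared/libs")

import shared_etl  # hypothetical module produced from the notebook

result_df = shared_etl.build_report(spark)  # hypothetical function exposed by the module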

2 More Replies
rabbitturtles
by New Contributor II
  • 202 Views
  • 2 replies
  • 2 kudos

Best Practice: Data Modeling for Customer 360 with Refined/Gold Source Data

Hi community, I'm looking for advice on the best data modeling approach for a Customer 360 (C360) project where our source data is already highly refined. I understand the standard Medallion architecture guidelines, which often recommend using Data Vau...

Latest Reply
rabbitturtles
New Contributor II
  • 2 kudos

@BS_THE_ANALYST Thank you so much for your response. The goal is to keep it flexible as a platform rather than a data product mindset. Keeping this in mind, essentially the customer data platform should enable contribution from different teams prevent...

1 More Replies
pinikrisher
by New Contributor II
  • 189 Views
  • 3 replies
  • 0 kudos

SQL Editor autocomplete

Hi, from time to time the SQL Editor autocomplete works and from time to time it doesn't. Sometimes it knows the table columns and sometimes it doesn't - what is the rule for this?

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @pinikrisher, to be honest I haven't noticed this behaviour. Are you using SQL Editor v2 or the legacy one?

2 More Replies
Akshay_Petkar
by Valued Contributor
  • 155 Views
  • 4 replies
  • 4 kudos

How to Read Shared Drive Data in Databricks

Hi everyone, I am working on a project where the data is stored on a Shared Drive. How can I read an Excel file from the Shared Drive into a Databricks notebook? Thanks,

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 4 kudos

Hi @Akshay_Petkar, could you provide more information? Shared drive is a pretty broad term. It could be a Windows SMB/CIFS share, AWS FSx, Google Shared Drive, etc.
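For reference, once the file is reachable from the workspace (for example, copied into a Unity Catalog volume, whatever kind of share it comes from), a pandas-based sketch might look like this; the volume path and sheet name are assumptions:

import pandas as pd

# Hypothetical volume path and sheet name; reading .xlsx needs openpyxl available on the cluster.
pdf = pd.read_excel("/Volumes/main/raw/shared_drive/report.xlsx", sheet_name="Sheet1")
df = spark.createDataFrame(pdf)  # convert to a Spark DataFrame if needed
display(df)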

3 More Replies
NehaR
by New Contributor III
  • 4228 Views
  • 5 replies
  • 3 kudos

Set time out or Auto termination for long running query

Hi, we want to set auto termination for long-running queries on a Databricks ad hoc cluster. I attempted the two approaches below in my notebook. Despite my understanding that queries should automatically terminate after one hour, with both approaches q...

Latest Reply
vinaypvsn
New Contributor
  • 3 kudos

Hi @NehaR, are the configurations (spark.sql.broadcastTimeout or spark.sql.execution.timeout) working when set at the cluster level? I am currently trying a similar configuration for compute clusters but it doesn't work.
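For reference, setting those values at the session level from a notebook looks like the sketch below; whether they actually cancel long-running interactive queries is exactly what this thread questions, so treat it as untested:

# spark.sql.broadcastTimeout is a standard Spark setting (in seconds);
# spark.sql.execution.timeout is quoted from the thread and may not be honored everywhere.
spark.conf.set("spark.sql.broadcastTimeout", "3600")
spark.conf.set("spark.sql.execution.timeout", "3600")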

4 More Replies
turagittech
by Contributor
  • 115 Views
  • 1 reply
  • 0 kudos

split parse_url output for the information

Hi All, I have data in blobs which I am loading from blob store into Databricks Delta tables. One of the blob types contains URLs. From the URLs I want to extract information from the path and query parts; I can get those out easily with parse_url. The pro...

Latest Reply
Isi
Honored Contributor III
  • 0 kudos

Hello @turagittech, honestly, it all depends on how complex your URLs can get. UDFs will always be more flexible but less performant than native SQL functions. That said, if your team mainly works with SQL, trying to solve it natively in Databricks SQL...
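As a rough sketch of the native-SQL route mentioned above (table and column names are made up for illustration):

# parse_url and split are built-in Spark SQL functions; bronze.url_blobs is a placeholder table.
df = spark.sql("""
    SELECT
        url,
        parse_url(url, 'PATH')                AS url_path,
        split(parse_url(url, 'PATH'), '/')    AS path_segments,
        parse_url(url, 'QUERY')               AS query_string,
        parse_url(url, 'QUERY', 'utm_source') AS utm_source  -- one example query key
    FROM bronze.url_blobs
""")
display(df)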

fellipeao
by New Contributor II
  • 1929 Views
  • 7 replies
  • 1 kudos

How to create parameters that work in Power BI Report Builder (SSRS)

Hello! I'm trying to create an item in Power BI Report Server (SSRS) connected to Databricks. I can connect normally, but I'm having trouble using a parameter that Databricks recognizes. First, I'll illustrate what I do when I connect to SQL Server and...

Latest Reply
J-Usef
New Contributor II
  • 1 kudos

@fellipeao This is the only way I found that works well with Databricks, since positional arguments (?) were a fail for me. This is the latest version of paginated report builder: https://learn.microsoft.com/en-us/power-bi/paginated-reports/report-build...

6 More Replies
adrianhernandez
by New Contributor III
  • 118 Views
  • 1 reply
  • 0 kudos

Create wheels and install/configure automation

Can a notebook be created that pushes new versions of code without having to go through the manual process of creating a whl and other configuration files? In other words, can I create a notebook that will set up, configure, and install the wheel? So far all t...

Latest Reply
Isi
Honored Contributor III
  • 0 kudos

Hey @adrianhernandez, technically yes, but it's not recommended. You could build everything needed to compile the wheel directly from a Databricks notebook using a setup.py, and store it in a volume, CodeArtifact, or any supported cloud st...
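Purely to illustrate what that would involve (again, not the recommended path per the reply), a sketch that builds a wheel from a notebook; the project folder and volume path are assumptions:

import subprocess

# Hypothetical project folder containing setup.py and the package source.
subprocess.run(
    ["python", "-m", "pip", "wheel", ".", "-w", "/Volumes/main/libs/wheels"],
    cwd="/Workspace/Shared/mylib",
    check=True,
)
# The resulting .whl could then be installed with %pip install /Volumes/main/libs/wheels/<file>.whl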

shanisolomonron
by New Contributor III
  • 360 Views
  • 5 replies
  • 1 kudos

Table ID not preserved using CREATE OR REPLACE TABLE

The "When to replace a table" documentation states that using CREATE OR REPLACE TABLE should preserve the table's identity: "Table contents are replaced, but the table identity is maintained." However, in my recent test the table ID changed after running t...
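One way to reproduce the check, sketched here with placeholder catalog/schema/table names, is to compare the Delta table id reported by DESCRIBE DETAIL before and after the replace:

before = spark.sql("DESCRIBE DETAIL main.demo.events").select("id").first()[0]

spark.sql("CREATE OR REPLACE TABLE main.demo.events (event_id STRING, ts TIMESTAMP)")

after = spark.sql("DESCRIBE DETAIL main.demo.events").select("id").first()[0]
print(before, after, before == after)  # the docs suggest the id should be unchanged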

Latest Reply
shanisolomonron
New Contributor III
  • 1 kudos

@Krishna_S thanks for your reply. For a non-UC-managed table, is it valid to see the table ID change throughout the lifetime of the table? (Also, what value do I get from using UC to manage my tables?)

4 More Replies
skd217
by New Contributor
  • 1532 Views
  • 4 replies
  • 0 kudos

Is there any way to connect Polaris Catalog from Unity Catalog?

Hi Databricks community, I'd like to access data managed by Polaris Catalog through Unity Catalog so I can manage all data in one place. Is there any way to connect? (I could access the data with an all-purpose cluster without Unity Catalog.)

Latest Reply
banderson272
New Contributor II
  • 0 kudos

Hey @chandu402240, we're looking at a very similar problem. Were you able to access the Polaris catalog from a Databricks cluster? Was the External Location documentation linked by @Alberto_Umana relevant?

3 More Replies
daan_dw
by New Contributor III
  • 127 Views
  • 1 reply
  • 1 kudos

Resolved! Injecting Databricks secrets into Databricks Asset Bundles.

Hey, I want to inject Databricks secrets into my Databricks Asset Bundles in order to avoid exposing secrets. I tried it as shown in the code block below, but it gives the error below the code block. When I hardcode my instance_profile_arn it does work. H...

Latest Reply
HariSankar
Contributor III
  • 1 kudos

Hey @daan_dw, a possible reason for your problem: Databricks Asset Bundles use Terraform under the hood, and Terraform cannot resolve Databricks secret references (like ${secrets.aws_secrets.cluster_profile_arn}) at deployment time. Secrets are only acce...
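Since the reply is truncated, the following is only an assumed workaround sketch: reading the secret at run time inside the task code rather than interpolating it into the bundle configuration. The scope and key names are hypothetical:

# Runtime read of a secret from task code. Note this only helps for values the code itself
# consumes; cluster-level settings such as instance_profile_arn still have to be supplied
# at deployment time (e.g. via bundle variables) rather than via a secret reference.
instance_profile_arn = dbutils.secrets.get(scope="aws_secrets", key="cluster_profile_arn")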

DBU100725
by New Contributor II
  • 153 Views
  • 1 reply
  • 0 kudos

URGENT: Delta writes to S3 fail after workspace migrated to Premium

Delta writes to S3 fail after workspace migrated to Premium (401 “Credential was not sent or unsupported type”). Summary: After our Databricks workspace migrated from Standard to Premium, all Delta writes to S3 started failing with: com.databricks.s3commi...

Latest Reply
DBU100725
New Contributor II
  • 0 kudos

The update/append to Delta on S3 fails with both Databricks Runtime 13.3 and 15.4.

databricks1111
by New Contributor II
  • 372 Views
  • 4 replies
  • 0 kudos

Databricks unable to read ADLS external location

Hey Databricks forum, we are seeing an issue in our Azure Databricks environment since this Sunday: we are unable to list the files inside the containers. We have our Unity Catalogs and everything configured in our external location, while we m...

Latest Reply
HariSankar
Contributor III
  • 0 kudos

Hey @databricks1111, thanks for the extra details. The behavior you're seeing (works fine on personal compute but fails on shared compute) usually comes down to which identity Databricks uses to access Azure Storage. When you use personal compute, opera...

3 More Replies
Hritik_Moon
by New Contributor II
  • 254 Views
  • 6 replies
  • 3 kudos

Resolved! create delta table in free edition

table_name = f"project.bronze.{file_name}"
spark.sql(
    f"""
    CREATE TABLE IF NOT EXISTS {table_name}
    USING DELTA
    """
)

What am I getting wrong?

Latest Reply
Hritik_Moon
New Contributor II
  • 3 kudos

Yes, multiline solved it. Is there any better approach to this scenario?
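One possible alternative, sketched here under the assumption that the data already sits in a DataFrame, is to let the DataFrame writer create the managed Delta table and its schema in one step (file_name and df are placeholders from the original post):

# Creates project.bronze.<file_name> as a managed Delta table and writes the data in one step.
table_name = f"project.bronze.{file_name}"
(
    df.write
      .format("delta")
      .mode("overwrite")   # or "append"
      .saveAsTable(table_name)
)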

5 More Replies
ravimaranganti
by New Contributor
  • 1982 Views
  • 1 reply
  • 1 kudos

Resolved! How can I execute a Spark SQL query inside a Unity Catalog Python UDF so I can run downstream ML?

I want to build an LLM-driven chatbot using an Agentic AI framework within Databricks. The idea is for the LLM to generate a SQL text string which is then passed to a Unity Catalog-registered Python UDF tool. Within this tool, I need the SQL to be execute...

Latest Reply
mark_ott
Databricks Employee
  • 1 kudos

There is currently no supported method for SQL-defined Python UDFs in Unity Catalog to invoke Spark SQL or access a SparkSession directly from within the SafeSpark sandbox. This limitation is by design: the SafeSpark/Restricted Python Execution Envir...

