Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

ThiagoRosetti
by New Contributor
  • 93 Views
  • 1 reply
  • 0 kudos

Serverless Compute connectivity issues with .com.br domains vs. Classic Clusters Spark hangs

Hi everyone, I'm facing two specific issues in my Databricks Premium workspace (AWS - sa-east-1). Serverless Connectivity Issue: When using Serverless compute, I can successfully call APIs ending in .com, but calls to .com.br domains fail with connecti...

Latest Reply
GaneshI
New Contributor
  • 0 kudos

Hi there, great breakdown of the symptoms — these are actually two distinct issues likely sharing a common root cause in your VPC/network configuration. Let me address both. Issue 1: Serverless Compute — .com.br DNS Resolution Failure. Root cause: Serverle...

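A quick way to confirm whether the failure is DNS-level, sketched as a notebook check you could run on both Serverless and a classic cluster to compare (the hostnames you pass in are placeholders):

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if this compute environment can resolve the hostname via DNS."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Compare e.g. can_resolve("api.example.com") vs. can_resolve("api.example.com.br")
# from a Serverless notebook and from a classic cluster notebook.
print(can_resolve("localhost"))
```

If resolution succeeds but the connection still hangs, the problem is more likely egress routing or firewall rules than DNS.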
andytate
by New Contributor
  • 120 Views
  • 2 replies
  • 0 kudos

Lakebase not showing up

I am fairly new to Databricks and am learning it because a company I am working with is going to use it. One of the things they are going to use is Lakebase Postgres, so I thought I'd set it up on my personal account. First, I don't see the app switcher; sec...

Latest Reply
rdokala
New Contributor
  • 0 kudos

If it is available, you would see it at Compute -> Lakebase, with tabs for Provisioned and Autoscaling. The Lakebase option is next to Apps. There is also a dotted-grid option in the top right corner, just before your profile name; if you expan...

1 More Reply
GaneshI
by New Contributor
  • 103 Views
  • 1 reply
  • 0 kudos

What is the recommended approach to enforce row-level security in Unity Catalog for external BI tool

We connect Tableau and Power BI to our Databricks SQL warehouse via OAuth tokens. Do Unity Catalog row filters apply at the SQL layer regardless of the BI tool, or do we need additional enforcement at the warehouse level?

Latest Reply
Lu_Wang_ENB_DBX
Databricks Employee
  • 0 kudos

Unity Catalog row filters apply at the SQL/query layer, so if Tableau or Power BI is querying a Databricks SQL warehouse, the filters are enforced there — you do not need a separate warehouse-level row-filter feature. Row filters and column masks are...

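For reference, a minimal sketch of what a row filter looks like at the SQL layer, shown here as SQL strings you would run from a notebook (the catalog, schema, table, function, and column names are hypothetical):

```python
# A row filter is a SQL UDF bound to the table; the SQL warehouse enforces it
# for every client that queries the table, including Tableau and Power BI.
create_filter = """
CREATE OR REPLACE FUNCTION main.security.region_filter(region STRING)
RETURN is_account_group_member('admins') OR region = 'US'
"""

apply_filter = """
ALTER TABLE main.sales.orders
SET ROW FILTER main.security.region_filter ON (region)
"""

# On a Unity Catalog workspace these would be executed as:
# spark.sql(create_filter)
# spark.sql(apply_filter)
```

Because the filter lives on the table itself, no per-tool configuration is needed on the BI side.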
DazzaiDe
by New Contributor III
  • 133 Views
  • 2 replies
  • 1 kudos

Best Practices: 1 job per 1 target table

We’re currently designing our Medallion Architecture pipelines using Lakeflow Jobs, and I wanted to get some opinions on orchestration best practices. Right now, our approach is essentially 1 job per target table (for example, each Bronze/Silver/Gold ...

Latest Reply
LBoydston
New Contributor II
  • 1 kudos

We typically organize our workloads with one job per catalog, and then use one or more pipelines to load tables into the appropriate schemas. As our data engineers ingest raw data, this structure is primarily applied in the Silver and Gold layers of ...

1 More Reply
Garybary
by New Contributor III
  • 1566 Views
  • 3 replies
  • 2 kudos

Resolved! Scheduling jobs with table update triggers

Hi all, Lately I've been experimenting with the newish feature of scheduling jobs on a table update trigger. There's one thing that's blocking me from implementing it, however, and I was hoping someone had found a solution to it. We occasionally perform a vac...

Latest Reply
SteveOstrowski
Databricks Employee
  • 2 kudos

Hi @Garybary, quick clarification on how table update triggers actually behave, because this changes the answer significantly. Table update triggers fire on data-changing operations only (writes, merges, updates, deletes). A standalone VACUUM does NO...

2 More Replies
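The distinction in the reply above can be sketched as a small helper. The operation names mirror what `DESCRIBE HISTORY` reports for a Delta table; treat the exact set as an assumption to verify against your own table history:

```python
# Operations that change table data and therefore fire a table update trigger.
# Maintenance operations like VACUUM and OPTIMIZE only add/remove files
# without changing the data, so they are ignored by the trigger.
DATA_CHANGING_OPS = {
    "WRITE",
    "MERGE",
    "UPDATE",
    "DELETE",
    "STREAMING UPDATE",
    "CREATE OR REPLACE TABLE AS SELECT",
}

def fires_table_update_trigger(operation: str) -> bool:
    """Rough classification of a Delta history operation name."""
    return operation.upper() in DATA_CHANGING_OPS

print(fires_table_update_trigger("MERGE"))       # a data change
print(fires_table_update_trigger("VACUUM END"))  # maintenance only
```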
TalessRocha
by New Contributor II
  • 6053 Views
  • 11 replies
  • 8 kudos

Resolved! Connect to azure data lake storage using databricks free edition

Hello guys, I'm using Databricks Free Edition (serverless) and I am trying to connect to an Azure Data Lake Storage account. The problem I'm having is that in the Free Edition we can't configure the cluster, so I tried to make the connection via notebook using ...

Latest Reply
pjvi
New Contributor II
  • 8 kudos

If you want to read from your Azure storage account using Databricks Free Edition, you can add a specific option when reading: spark.read.option("fs.azure.account.key.<storage-account-name>.dfs.core.windows.net", "your_storage_account...

10 More Replies
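The same idea from the accepted answer, sketched as a small helper. The storage account, container, and secret scope names are placeholders, and passing the key through a secret rather than a hard-coded string is strongly preferred:

```python
def adls_account_key_option(storage_account: str, account_key: str) -> dict:
    """Build the per-read Spark option that authenticates to ADLS Gen2
    with a storage account key (no cluster-level config needed)."""
    return {
        f"fs.azure.account.key.{storage_account}.dfs.core.windows.net": account_key
    }

# On Free Edition serverless compute this would be used roughly as:
# key = dbutils.secrets.get("my-scope", "adls-key")
# df = (spark.read
#         .options(**adls_account_key_option("mystorageacct", key))
#         .format("parquet")
#         .load("abfss://mycontainer@mystorageacct.dfs.core.windows.net/data/"))
```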
maikel
by Contributor II
  • 450 Views
  • 4 replies
  • 1 kudos

Resolved! Uploading file to volume and start ingestion job

Hello Community! I am writing to you with my idea about a data ingestion job which we have to implement in our project. The data we have is in CSV file format and, depending on the case, it differs a little bit. Before uploading, we pivot the CSV file...

Latest Reply
maikel
Contributor II
  • 1 kudos

Yeah, understood. Thank you very much once again! 

3 More Replies
maikel
by Contributor II
  • 91 Views
  • 0 replies
  • 0 kudos

Job tasks monitoring

Hello Community, We have a case in our project that we would like to solve in an elegant and scalable manner. As always, I would really appreciate your suggestions and experience. In short: We have a multi-step job consisting of 4 stages. In one of the ...

Danish11052000
by Contributor
  • 1200 Views
  • 7 replies
  • 1 kudos

Resolved! How should I correctly extract the full table name from request_params in audit logs?

I’m trying to build a UC usage/refresh tracking table for every workspace. For each workspace, I want to know how many times a UC table was refreshed or accessed each month. To do this, I’m reading the Databricks audit logs and I need to extract only ...

Latest Reply
SteveOstrowski
Databricks Employee
  • 1 kudos

Hi @Danish11052000, You are on the right track with the COALESCE approach. The reason for the inconsistency is that different Unity Catalog action types populate different keys in request_params. Here is a breakdown of the key fields and which action...

6 More Replies
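The COALESCE idea from the accepted answer, sketched in Python over the parsed request_params map. The candidate key names and their priority order are assumptions to verify against your own audit logs:

```python
# Different Unity Catalog action types put the table name under different
# keys in request_params, so fall back through candidates in priority
# order — the same effect as SQL's COALESCE over several columns.
CANDIDATE_KEYS = ("full_name_arg", "table_full_name", "name")

def extract_table_name(request_params: dict):
    """Return the first non-empty table-name candidate, or None."""
    for key in CANDIDATE_KEYS:
        value = request_params.get(key)
        if value:
            return value
    return None

print(extract_table_name({"full_name_arg": "main.sales.orders"}))
```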
mnissen1337
by New Contributor II
  • 117 Views
  • 1 reply
  • 0 kudos

Managing Unity Catalog Permissions for Databricks Apps via DABs

I’m currently developing a Databricks App, and the app’s service principal needs access to Unity Catalog tables. From what I can tell, it doesn’t seem possible to grant Unity Catalog permissions through DABs yet — only through the UI, based on the cu...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @mnissen1337, there actually is a way to do this in DABs. Look at the following section in the documentation: Manage Databricks apps using Declarative Automation Bundles | Databricks on AWS. If my answer was helpful, please consider marking it as accepted solutio...

sminamioka
by New Contributor III
  • 288 Views
  • 5 replies
  • 1 kudos

Compute tab doesn't show and doesn't give the option to create a cluster

I've just created an Azure Databricks workspace (Premium tier), and when trying to create a cluster, clicking on Compute automatically opens the SQL Warehouse menu. I'm not sure if it's a glitch, as shown below. Someone said "Ask the admin to ...

Data Engineering
cluster
clusters
Latest Reply
gcj0310
Databricks Partner
  • 1 kudos

Hi @sminamioka, this does not look like a UI glitch. In newer Azure Databricks workspaces, access to classic compute/clusters depends on workspace entitlements and compute policy permissions. If clicking Compute takes you directly to SQL Warehouses, ...

4 More Replies
Guillermo-HR
by New Contributor
  • 98 Views
  • 1 reply
  • 0 kudos

Streaming read and writing with aggregation

Hi, I have the following problem: in a medallion architecture, on a bronze volume, I get files every month containing the data for each sensor reading during the period from the 1st of the month at 00:00 to the last day at 23:00. I have a manual job that calls the Python files ...

Latest Reply
Saritha_S
Databricks Employee
  • 0 kudos

Hi @Guillermo-HR, yes — batch is usually the right fix here. What’s happening is that your query is using event-time window aggregation in Structured Streaming with append output mode. In that mode, Spark only emits a window after it is sure the wind...

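Why append mode holds results back can be sketched numerically (times are in hours since midnight; the 2-hour watermark delay is illustrative):

```python
def window_closed(window_end: float, max_event_time: float,
                  watermark_delay: float) -> bool:
    """In append output mode, a window's aggregate is only emitted once the
    watermark (max event time seen minus the allowed delay) passes the
    window's end — i.e. Spark is sure no more late data can arrive for it."""
    return (max_event_time - watermark_delay) > window_end

# A 00:00-01:00 window with a 2h watermark delay is still held back
# when the latest event seen is at 02:30 ...
print(window_closed(window_end=1.0, max_event_time=2.5, watermark_delay=2.0))
# ... and is emitted once events past 03:00 have arrived.
print(window_closed(window_end=1.0, max_event_time=3.5, watermark_delay=2.0))
```

For monthly files that arrive all at once, a plain batch aggregation avoids this waiting entirely, which is why batch is the simpler fit.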
Yannick_B
by New Contributor
  • 35 Views
  • 0 replies
  • 0 kudos

[DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG]

We are testing the Delta writer in our environment to create bronze tables. Recently, I just needed to add one table to the notebook code and rerun the whole notebook, which failed because of this error: [DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG] Y...
