Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Siddu07
by New Contributor II
  • 6469 Views
  • 3 replies
  • 1 kudos

How to change the audit log delivery Service Account?

Hi Team, I'm trying to set up audit log delivery based on the documentation "https://docs.gcp.databricks.com/administration-guide/account-settings-gcp/log-delivery.html". As per the document, I've created a multi-region storage bucket; however, I'm not ...

Latest Reply
Priyag1
Honored Contributor II
  • 1 kudos

Documentation helps in many tasks

2 More Replies
Mike_016978
by New Contributor II
  • 16551 Views
  • 3 replies
  • 4 kudos

Resolved! What are differences between Materialized view and Streaming table in delta live table?

Hi, I was wondering what the differences are between a Materialized view and a Streaming table. Which one should I use when I extract data from a bronze table to a silver table, since I found that both CREATE LIVE TABLE and CREATE STREAMING LIVE TABLE could a...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Mike Chen Thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your feedback wi...

2 More Replies
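For reference, a minimal Python sketch (assuming the dlt module inside a Delta Live Tables pipeline; table names are illustrative) contrasting the two: a live/materialized table is recomputed from its full input on each update, while a streaming table reads its source incrementally and appends new rows.

```python
import dlt
from pyspark.sql import functions as F

# Materialized (live) table: recomputed from the bronze table on each pipeline update.
@dlt.table(name="silver_orders_mv", comment="Materialized view over bronze orders")
def silver_orders_mv():
    return dlt.read("bronze_orders").where(F.col("status") == "COMPLETE")

# Streaming table: processes only new rows arriving in the bronze source.
@dlt.table(name="silver_orders_st", comment="Streaming table over bronze orders")
def silver_orders_st():
    return dlt.read_stream("bronze_orders").where(F.col("status") == "COMPLETE")
```

The SQL forms mentioned in the question map onto the same split: CREATE LIVE TABLE selects from LIVE.bronze_orders, while CREATE STREAMING LIVE TABLE selects from STREAM(LIVE.bronze_orders).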
J_M_W
by Contributor
  • 6148 Views
  • 2 replies
  • 4 kudos

Resolved! Can you use %run or dbutils.notebook.run in a Delta Live Table pipeline?

Hi there, can you use %run or dbutils.notebook.run() in a Delta Live Table (DLT) pipeline? When I try, I get the following error: "IllegalArgumentException: requirement failed: To enable notebook workflows, please upgrade your Databricks subscriptio...

Latest Reply
J_M_W
Contributor
  • 4 kudos

Hi all. @Kaniz Fatma thanks for your answer. I am on the premium pricing tier in Azure. After digging around the logs it would seem that you cannot run magic commands in a Delta Live Table pipeline. Therefore, you cannot use %run in a DLT pipeline - w...

1 More Replies
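Since magic commands reportedly cannot run inside a DLT pipeline, one common workaround is to keep shared logic in a plain Python module and import it from the pipeline source. A hedged sketch, assuming a hypothetical transforms.py helper module exposing clean(df); the repo path is illustrative:

```python
import sys
import dlt

# Illustrative path to a repo folder containing the hypothetical transforms.py module.
sys.path.append("/Workspace/Repos/my_user/my_repo/shared")
from transforms import clean  # hypothetical helper: clean(df) -> DataFrame

@dlt.table(name="silver_cleaned")
def silver_cleaned():
    # Reuse the shared logic instead of %run / dbutils.notebook.run().
    return clean(dlt.read("bronze_raw"))
```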
logan0015
by Contributor
  • 6580 Views
  • 6 replies
  • 4 kudos

Resolved! Getting a key mismatch error with Delta Live Tables.

I am attempting to create a streaming Delta Live Table. The main issue I am experiencing is the error below: com.databricks.sql.cloudfiles.errors.CloudFilesIllegalStateException: Found mismatched event: key. I have an AWS AppFlow that is creating a fold...

Latest Reply
VijaC_97468
New Contributor II
  • 4 kudos

Hi, I am also facing the same issue, but I found nothing in the documentation to fix it.

5 More Replies
Mikki007
by New Contributor II
  • 9097 Views
  • 3 replies
  • 0 kudos

How to extract the start and end time of each command cell of a notebook using the REST API in Azure Databricks?

Hi, I have a notebook with many command cells in it. I want to extract the execution time of each cell using the Databricks REST API. How can I do that? Please note - I managed to get the start & end time of the job using the REST API (/2.1/jobs/runs/get) f...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Milind Keer Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

2 More Replies
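The /2.1/jobs/runs/get endpoint mentioned above returns run- and task-level timestamps; whether cell-level timings are exposed is a separate question, but the fields look like this. A minimal sketch with the requests library (host, token, and run_id are placeholders):

```python
import requests

HOST = "https://adb-<workspace-id>.azuredatabricks.net"   # placeholder workspace URL
TOKEN = "<personal-access-token>"                          # placeholder token

resp = requests.get(
    f"{HOST}/api/2.1/jobs/runs/get",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"run_id": 123456},                             # placeholder run id
)
resp.raise_for_status()
run = resp.json()

# Run-level timestamps are epoch milliseconds.
print(run.get("start_time"), run.get("end_time"))

# Task-level timestamps, if the run contains multiple tasks.
for task in run.get("tasks", []):
    print(task.get("task_key"), task.get("start_time"), task.get("end_time"))
```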
g96g
by New Contributor III
  • 2933 Views
  • 3 replies
  • 0 kudos

Data is not written back to the data lake

I have this strange case where data is not written back to the data lake. I have 3 containers: Bronze, Silver and Gold. I have done the mounting and have no problem reading the source data and writing it to the Bronze layer (using the hive metastore catalog). T...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Givi Salu Hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we ...

2 More Replies
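For comparison, a minimal sketch of explicitly writing a DataFrame from the Bronze mount to the Silver mount as Delta and then listing the target path; the mount points and dataset name are illustrative, and spark, dbutils, and display() are assumed to be the objects a Databricks notebook provides:

```python
# Illustrative mount points; replace with the actual container mounts.
bronze_path = "/mnt/bronze/sales"
silver_path = "/mnt/silver/sales"

df = spark.read.format("delta").load(bronze_path)

# Write back to the Silver container explicitly, then verify files landed there.
(df.write
   .format("delta")
   .mode("overwrite")
   .save(silver_path))

display(dbutils.fs.ls(silver_path))  # dbutils/display are notebook-provided
```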
AmineHY
by Contributor
  • 8008 Views
  • 4 replies
  • 6 kudos

Resolved! Error When Starting the Cluster

I am having this error when running my cluster, any idea why?

Latest Reply
NandiniN
Databricks Employee
  • 6 kudos

@Werner Stinckens, I checked again: you cannot change them after your workspace is deployed. The only way right now is to recreate the workspace and migrate. It is not possible to update the CIDR range right now without migration.

3 More Replies
DK03
by Contributor
  • 2979 Views
  • 2 replies
  • 0 kudos
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Deepak Kini Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

1 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 1897 Views
  • 0 replies
  • 4 kudos

Databricks Marketplace is here. With Databricks Marketplace, you can easily browse and discover data that suits your needs and get instant access with...

Databricks Marketplace is here. With Databricks Marketplace, you can easily browse and discover data that suits your needs and get instant access with Delta Sharing. Once you find a dataset you like, you can add it to your Databricks catalog.

Zara
by New Contributor II
  • 2524 Views
  • 2 replies
  • 3 kudos

Loading incremental data

I want to load incremental data into a Delta Live Table. I wrote a function to load data for 10 tables; every time I run the pipeline, some tables are empty and have a schema, and when I run again, the other tables are empty and the previous tabl...

Latest Reply
Annapurna_Hiriy
Databricks Employee
  • 3 kudos

@zahra Jalilpour How the DLT tables and views are updated depends on the update type: Refresh all: All live tables are updated to reflect the current state of their input data sources. For all streaming tables, new rows are appended to the table. Full...

1 More Replies
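One pattern that may be relevant here (a hedged sketch; paths, table names, and source format are illustrative): generate the incremental DLT tables from a factory function so each table definition captures its own source name, and read the sources with Auto Loader so only new files are processed on each run.

```python
import dlt
from pyspark.sql import functions as F

SOURCE_ROOT = "/mnt/landing"                        # illustrative landing zone
TABLE_NAMES = ["customers", "orders", "payments"]   # subset of the 10 tables

def define_table(name: str):
    # Defining the table inside a function binds `name` per table,
    # avoiding the late-binding pitfall of closures created in a loop.
    @dlt.table(name=f"bronze_{name}", comment=f"Incremental load of {name}")
    def _load():
        return (
            spark.readStream.format("cloudFiles")   # Auto Loader
            .option("cloudFiles.format", "json")    # assumed source format
            .load(f"{SOURCE_ROOT}/{name}")
            .withColumn("_ingested_at", F.current_timestamp())
        )

for t in TABLE_NAMES:
    define_table(t)
```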
Krish1
by New Contributor II
  • 10179 Views
  • 2 replies
  • 2 kudos

Delta Lake vs Delta table

Can somebody give me a good definition of Delta Lake vs Delta table? What are the use cases of each, and their similarities and differences? Sorry, I'm new to Databricks and trying to learn.

Latest Reply
Annapurna_Hiriy
Databricks Employee
  • 2 kudos

Delta Lake and Delta table are related concepts in the Delta Lake project, which extends Apache Spark with ACID (Atomicity, Consistency, Isolation, Durability) capabilities for data lakes. Delta Lake provides a storage layer that enables trans...

1 More Replies
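A small illustration of the distinction (table name is illustrative; spark is the session a Databricks notebook provides): Delta Lake is the storage format and layer, while a Delta table is one table stored in that format.

```python
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Writing with the Delta Lake format creates a Delta table.
df.write.format("delta").mode("overwrite").saveAsTable("users_delta")

# The table can then be queried like any other, with Delta Lake features
# such as transaction history available on top of it.
spark.table("users_delta").show()
spark.sql("DESCRIBE HISTORY users_delta").show(truncate=False)
```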
Vishal09k
by New Contributor II
  • 3387 Views
  • 1 replies
  • 3 kudos

Display command not showing the result, rather giving the DataFrame schema

Display command not showing the result, rather giving the DataFrame schema.

Latest Reply
Rishabh-Pandey
Esteemed Contributor
  • 3 kudos

Hey, can you try your SQL query with this method: select * from (your sql query)

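A small sketch of the suggestion above in a notebook cell (table name illustrative; spark and display() are provided by the Databricks notebook environment):

```python
# Wrap the original query in an outer select and pass the DataFrame to display().
df = spark.sql("SELECT * FROM (SELECT id, name FROM my_table)")
display(df)   # renders the rows rather than just the DataFrame schema
```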
YSF
by New Contributor III
  • 1724 Views
  • 1 replies
  • 1 kudos

Delta Live Table & Autoloader adding a non-existent column

I'm trying to set up Auto Loader to read some CSV files. I tried both Auto Loader with the DLT decorator as well as just Auto Loader by itself. The first column of the data is called "run_id"; when I do a spark.read.csv() directly on the file it com...

Latest Reply
Rishabh-Pandey
Esteemed Contributor
  • 1 kudos

Can you attach the exact output so that I can have a look at it?

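For comparison, a hedged sketch of reading the same folder both ways, with an explicit header option for the CSVs (paths are illustrative). Note that Auto Loader adds a _rescued_data column by default, which can look like an extra, non-existent column next to a plain spark.read.csv():

```python
# Plain batch read for comparison.
batch_df = spark.read.csv("/mnt/landing/runs/", header=True, inferSchema=True)
batch_df.printSchema()

# Auto Loader (cloudFiles) read of the same folder.
stream_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("header", "true")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/runs")  # illustrative
    .load("/mnt/landing/runs/")
)
stream_df.printSchema()
```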
YSF
by New Contributor III
  • 1166 Views
  • 1 replies
  • 1 kudos

Any elegant pattern for Autoloader/DLT development?

Does anyone have a workflow or pattern that works for developing with Auto Loader/DLT? I'm still new to it, but the fact that while testing it creates checkpoints and schema locations makes it really tricky to develop with and hammer out a working ve...

Latest Reply
Rishabh-Pandey
Esteemed Contributor
  • 1 kudos

What exactly are you referring to here as a pattern?

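One pattern that may help (a sketch under stated assumptions: paths, table names, and the PIPELINE_ENV variable are all illustrative): key the schema and checkpoint locations off an environment name, so development state lives in a disposable folder that can simply be deleted to reset Auto Loader and start over.

```python
import os

ENV = os.environ.get("PIPELINE_ENV", "dev")   # illustrative environment switch
STATE_ROOT = f"/mnt/stream-state/{ENV}"       # wipe this folder to reset dev state

raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", f"{STATE_ROOT}/schemas/raw_events")
    .load("/mnt/landing/raw_events")
)

(raw.writeStream
    .option("checkpointLocation", f"{STATE_ROOT}/checkpoints/raw_events")
    .trigger(availableNow=True)               # process available files, then stop
    .toTable("raw_events_dev" if ENV == "dev" else "raw_events"))
```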
Nishin
by New Contributor
  • 4264 Views
  • 2 replies
  • 0 kudos

Error in SQL statement: SparkException: Table Access Control is not enabled on this cluster.

Hi Experts, I am using the 'Data Science and Engineering' workspace in Azure Databricks and want to test 'table access control' on the legacy Hive metastore on a cluster. I did everything mentioned in the link 'https://learn.microsoft.com/en-us/azure/databricks/da...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @nishin kumar Hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so ...

1 More Replies
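Once table access control is enabled on both the workspace and the cluster (per the linked doc), permissions on the legacy Hive metastore are managed with SQL GRANT statements; a hedged sketch (table and group names are illustrative) of what would otherwise raise the same SparkException:

```python
# Assumes a cluster with table access control enabled; otherwise these calls
# fail with "Table Access Control is not enabled on this cluster."
spark.sql("GRANT SELECT ON TABLE default.sales TO `data-readers`")
spark.sql("SHOW GRANT ON TABLE default.sales").show(truncate=False)
```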
