Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Data + AI Summit 2024 - Data Engineering & Streaming

Forum Posts

akisugi
by New Contributor III
  • 4827 Views
  • 5 replies
  • 0 kudos

Resolved! Is it possible to control the ordering of the array values created by array_agg()?

Hi! I would be glad to ask you some questions. I have the following data and would like to get this kind of result: I want `move` to correspond to the order of `hist`. Therefore, I considered the following query. ```with tmp as (select * from (values(1, ...

(Attachments: スクリーンショット 2024-04-06 23.08.15.png, スクリーンショット 2024-04-06 23.07.34.png)
Latest Reply
akisugi
New Contributor III
  • 0 kudos

Hi @ThomazRossito, this is a great idea. It can solve my problem. Thank you.

  • 0 kudos
4 More Replies
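A minimal sketch of the kind of approach accepted above, assuming a toy table with columns `id`, `hist`, and `move` (the real schema is only shown in the attached screenshots): collect (hist, move) pairs, sort the array by `hist`, then project out `move`.

```
# Sketch only: enforce array ordering by sorting collected structs on hist.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, 2, "b"), (1, 1, "a"), (1, 3, "c")],
    ["id", "hist", "move"],
)

result = (
    df.groupBy("id")
    .agg(F.sort_array(F.collect_list(F.struct("hist", "move"))).alias("pairs"))
    .withColumn("move", F.expr("transform(pairs, x -> x.move)"))
    .drop("pairs")
)
result.show(truncate=False)   # move is ordered by hist: [a, b, c]
```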
cool_cool_cool
by New Contributor II
  • 1864 Views
  • 2 replies
  • 2 kudos

Resolved! Trigger Dashboard Update At The End of a Workflow

Heya! I have a workflow that computes some data and writes to a Delta table, and I have a dashboard that is based on the table. How can I trigger a refresh of the dashboard once the workflow is finished? Thanks!

Latest Reply
DanWertheimer
New Contributor
  • 2 kudos

How does one do this with the new dashboards? I only see the ability to do this with legacy dashboards.

  • 2 kudos
1 More Reply
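For reference, a sketch of one way to wire this up with the Jobs API 2.1 `sql_task` of type dashboard, which per the latest reply above supported only legacy SQL dashboards at the time of this thread. All IDs and paths are placeholders, and the full job settings (name, cluster, schedule) are omitted.

```
# Sketch: task list for a job that refreshes a legacy SQL dashboard after the
# table-writing task succeeds. IDs and paths are placeholders, not thread values.
tasks = [
    {
        "task_key": "compute_table",
        "notebook_task": {"notebook_path": "/Workspace/etl/compute_table"},  # placeholder
    },
    {
        "task_key": "refresh_dashboard",
        "depends_on": [{"task_key": "compute_table"}],
        "sql_task": {
            "dashboard": {"dashboard_id": "<legacy-dashboard-id>"},  # placeholder
            "warehouse_id": "<sql-warehouse-id>",                    # placeholder
        },
    },
]
```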
939772
by New Contributor III
  • 1369 Views
  • 1 reply
  • 0 kudos

Resolved! DLT refresh unexpectedly failing

We're hitting an error with a delta live table refresh since yesterday; nothing has changed in our system yet there appears to be a configuration error: { ... "timestamp": "2024-04-08T23:00:10.630Z", "message": "Update b60485 is FAILED.",...

Latest Reply
939772
New Contributor III
  • 0 kudos

Apparently the `ResourceClass` entry under `custom_tags` is now extraneous; removing it from the config corrected our problem.

  • 0 kudos
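For anyone hitting the same failure, a sketch of what that fix looks like in the pipeline's cluster settings. The values are placeholders; the thread does not show the actual pipeline JSON.

```
# Before: the custom_tags block that triggered the configuration error.
cluster_settings_before = {
    "label": "default",
    "custom_tags": {"ResourceClass": "Serverless", "team": "data-eng"},  # placeholder tags
}

# After: drop the now-extraneous ResourceClass entry, then update the pipeline.
cluster_settings_after = {
    "label": "default",
    "custom_tags": {"team": "data-eng"},
}
```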
brian_zavareh
by New Contributor III
  • 5485 Views
  • 4 replies
  • 3 kudos

Optimizing Delta Live Table Ingestion Performance for Large JSON Datasets

I'm currently facing challenges with optimizing the performance of a Delta Live Table pipeline in Azure Databricks. The task involves ingesting over 10 TB of raw JSON log files from an Azure Data Lake Storage account into a bronze Delta Live Table la...

Data Engineering
autoloader
bigdata
delta-live-tables
json
Latest Reply
standup1
Contributor
  • 3 kudos

Hey @brian_zavareh, see this document, I hope it can help: https://learn.microsoft.com/en-us/azure/databricks/compute/cluster-config-best-practices. Just keep in mind that there's some extra cost on the Azure VM side; check your Azure Cost Analysis for...

  • 3 kudos
3 More Replies
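A minimal sketch of a bronze DLT table that ingests raw JSON with Auto Loader, along the lines discussed above. The storage path, throttle, and schema inference options are assumptions and would need tuning for a 10 TB backfill; the code runs inside a DLT pipeline, where `spark` is provided by the runtime.

```
# Sketch: bronze Delta Live Table fed by Auto Loader (cloudFiles) from ADLS.
import dlt
from pyspark.sql import functions as F

SOURCE_PATH = "abfss://logs@<storage-account>.dfs.core.windows.net/raw/"  # placeholder

@dlt.table(name="bronze_logs", comment="Raw JSON logs ingested with Auto Loader")
def bronze_logs():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.inferColumnTypes", "true")
        .option("cloudFiles.maxFilesPerTrigger", 1000)  # throttle each micro-batch
        .load(SOURCE_PATH)
        .withColumn("_ingested_at", F.current_timestamp())
    )
```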
standup1
by Contributor
  • 2163 Views
  • 1 reply
  • 0 kudos

Recover a deleted DLT pipeline

Hello, does anyone know how to recover a deleted DLT pipeline, or at least recover deleted tables that were managed by the DLT pipeline? We have a pipeline that stopped working and was throwing all kinds of errors, so we decided to create a new one and de...

Latest Reply
standup1
Contributor
  • 0 kudos

Thank you, Kanzi. Just to confirm that I understood you correctly: if the pipeline is deleted (like in our case) without version control, backup configuration, etc. already implemented, there's no way to recover those tables, nor the pipeline. ...

  • 0 kudos
Shas_DataE
by New Contributor II
  • 1927 Views
  • 2 replies
  • 0 kudos

Alerts and Dashboard

Hi Team, in my Databricks workspace I have created an alert using the query in such a way that the schedule runs on a daily basis and the results get populated to a dashboard. The results from the dashboard are notified via email, but I am seeing re...

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @Shas_DataE, good day! Could you please check and confirm whether there are any special characters in the table columns? At the moment, special characters are not compatible with Excel. If so, then please drop the column that has that special character a...

  • 0 kudos
1 More Reply
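A sketch of how the suggested cleanup might look in PySpark, renaming or dropping columns whose names contain special characters before the results are exported. The table name and allowed character set are assumptions.

```
# Sketch: sanitize or drop column names with special characters before export.
import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.table("main.reporting.alert_results")  # hypothetical table

def sanitize(name: str) -> str:
    # Replace anything that is not alphanumeric or underscore.
    return re.sub(r"[^0-9A-Za-z_]", "_", name)

renamed = df.toDF(*[sanitize(c) for c in df.columns])

# Or, as the reply suggests, drop the offending columns entirely.
clean = df.drop(*[c for c in df.columns if re.search(r"[^0-9A-Za-z_]", c)])
```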
Kibour
by Contributor
  • 2185 Views
  • 2 replies
  • 1 kudos

Resolved! date_format 'LLLL' returns '1'

Hi all, in my notebook, when I run my cell with the following code: %sql select date_format(date '1970-01-01', "LLL"); I get '1', while I expect 'Jan' according to the doc https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html. I would also expect t...

Latest Reply
Kibour
Contributor
  • 1 kudos

Hi @Retired_mod, turns out it was actually a Java 8 bug: IllegalArgumentException: Java 8 has a bug to support stand-alone form (3 or more 'L' or 'q' in the pattern string). Please use 'M' or 'Q' instead, or upgrade your Java version. For more details...

  • 1 kudos
1 More Reply
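A quick illustration of the workaround from the reply: use the 'M'-based pattern instead of the stand-alone 'L' form, which hits the Java 8 bug on affected runtimes.

```
# Sketch: 'MMM' returns the abbreviated month name where 'LLL' returned '1'.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("SELECT date_format(date '1970-01-01', 'MMM') AS month_abbrev").show()
# month_abbrev = 'Jan'
```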
Kibour
by Contributor
  • 2806 Views
  • 1 reply
  • 0 kudos

Resolved! Trigger one workflow after completion of another workflow

Hi there, is it possible to trigger one workflow conditionally on the completion of another workflow? Typically, I would like my workflow W2 to start automatically once workflow W1 has successfully completed. Thanks in advance for your ins...

Latest Reply
Kibour
Contributor
  • 0 kudos

Found it: you build a new workflow where you connect W1 and W2 (each as a Run Job).

  • 0 kudos
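A sketch of the wrapper-workflow idea from the reply, expressed as a Jobs API 2.1 payload with two Run Job tasks where W2 depends on W1. Host, token, and job IDs are placeholders.

```
# Sketch: create a wrapper job that runs W1, then W2 on success.
import requests

HOST = "https://<workspace-host>"    # placeholder
TOKEN = "<personal-access-token>"    # placeholder

payload = {
    "name": "W1_then_W2",
    "tasks": [
        {"task_key": "run_W1", "run_job_task": {"job_id": 111}},   # hypothetical W1 job ID
        {
            "task_key": "run_W2",
            "depends_on": [{"task_key": "run_W1"}],
            "run_job_task": {"job_id": 222},                        # hypothetical W2 job ID
        },
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
```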
Braxx
by Contributor II
  • 9507 Views
  • 6 replies
  • 2 kudos

Resolved! issue with group by

I am trying to group a data frame by "PRODUCT" and "MARKET" and aggregate the remaining columns specified in col_list. There are many more columns in the list, but for simplification let's take the example below. Unfortunately I am getting the error: "TypeError:...

Latest Reply
Ralphma
New Contributor II
  • 2 kudos

The error you're encountering, "TypeError: unhashable type: 'Column'," is likely due to the way you're defining exprs. In Python, sets use curly braces {}, but they require their items to be hashable. Since the result of sum(x).alias(x) is not hashab...

  • 2 kudos
5 More Replies
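A minimal sketch of the fix described in the reply: build `exprs` as a list comprehension rather than a set literal, since Column objects are not hashable. Column names are illustrative.

```
# Sketch: aggregate every column in col_list after grouping by PRODUCT and MARKET.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("p1", "EU", 10, 5), ("p1", "EU", 20, 7)],
    ["PRODUCT", "MARKET", "SALES", "UNITS"],
)

col_list = ["SALES", "UNITS"]
exprs = [F.sum(c).alias(c) for c in col_list]   # a list, not a set literal built with {}
df.groupBy("PRODUCT", "MARKET").agg(*exprs).show()
```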
ADBQueries
by New Contributor
  • 2330 Views
  • 1 reply
  • 0 kudos

DBeaver Connection to SQL Warehouse in Databricks

I'm trying to connect to a SQL warehouse in Azure Databricks with the DBeaver application. I'm creating a JDBC connection string as mentioned here: https://docs.databricks.com/en/integrations/jdbc/authentication.html. Here is a sample connection link I have c...

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @ADBQueries, good day! Could you please try running the code again to generate another access token and, once generated, check it on this page, https://jwt.ms, to confirm that the token has not expired? Also, if not done yet, please review the f...

  • 0 kudos
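For reference, a sketch of the JDBC URL shape for personal-access-token authentication against a SQL warehouse, following the linked authentication docs; hostname, HTTP path, and token are placeholders. The assembled string goes into DBeaver's JDBC URL field, and the token can be checked for expiry as suggested in the reply.

```
# Sketch: build the Databricks JDBC URL with PAT auth (AuthMech=3, UID=token).
server_hostname = "adb-1234567890123456.7.azuredatabricks.net"  # placeholder
http_path = "/sql/1.0/warehouses/abcdef1234567890"               # placeholder
access_token = "<personal-access-token>"                         # placeholder

jdbc_url = (
    f"jdbc:databricks://{server_hostname}:443;"
    f"httpPath={http_path};AuthMech=3;SSL=1;"
    f"UID=token;PWD={access_token}"
)
print(jdbc_url)
```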
acagatayyilmaz
by New Contributor
  • 2246 Views
  • 1 reply
  • 0 kudos

How to find consumed DBU

Hi All, I'm trying to understand my Databricks consumption to purchase a reservation. However, I couldn't find the consumed DBUs in either the Azure Portal or the Databricks workspace. I'm also exporting and processing Azure Cost data daily. When I check the reso...

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @acagatayyilmaz, hope you are doing well! You can refer to the billable usage system table to find the records of consumed DBUs. You can go through the document below to understand more about the system tables: https://learn.microsoft.com/en-us/...

  • 0 kudos
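A sketch of querying the billable usage system table mentioned in the reply to see consumed DBUs per day and SKU. System tables must be enabled, and the column names below follow the documented `system.billing.usage` schema but should be verified in your workspace.

```
# Sketch: daily DBU consumption by SKU from the billable usage system table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("""
    SELECT usage_date,
           sku_name,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_unit = 'DBU'
    GROUP BY usage_date, sku_name
    ORDER BY usage_date DESC
""").show(truncate=False)
```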
vanepet
by New Contributor II
  • 16486 Views
  • 5 replies
  • 2 kudos

Is it possible to use multiprocessing or threads to submit multiple queries to a database from Databricks in parallel?

We are trying to improve our overall runtime by running queries in parallel using either multiprocessing or threads. What I am seeing, though, is that when the function that runs this code is run on a separate process, it doesn't return a DataFrame with...

Latest Reply
BapsDBS
New Contributor II
  • 2 kudos

Thanks for the links mentioned above, but both of them use raw Python to achieve parallelism. Does this mean Spark (read: PySpark) doesn't exactly provide for parallel execution of functions or even notebooks? We used a wrapper notebook with ThreadP...

  • 2 kudos
4 More Replies
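A minimal sketch of the thread-based approach discussed above: threads share the driver's SparkSession, so each call returns a usable DataFrame, whereas separate processes cannot share the session. Table names are placeholders.

```
# Sketch: run several Spark SQL queries concurrently with a thread pool.
from concurrent.futures import ThreadPoolExecutor
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

queries = {
    "orders": "SELECT COUNT(*) AS n FROM main.sales.orders",  # placeholder
    "events": "SELECT COUNT(*) AS n FROM main.web.events",    # placeholder
}

def run(sql: str):
    return spark.sql(sql).collect()

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {name: pool.submit(run, q) for name, q in queries.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)
```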
RIDBX
by New Contributor II
  • 2059 Views
  • 1 reply
  • 0 kudos

What is the best way to handle a huge gzipped file dropped to S3?

I find some interesting suggestions for posted questions. Thanks for reviewing my threads. Here is the situation we have. We are getting dat...

Data Engineering
bulkload
S3
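The accepted reply is not recoverable from this page, so here is only a hedged sketch of a common starting point: Spark reads gzipped files directly, but a single .gz file is not splittable, so the initial read runs as one task and it helps to repartition before heavy downstream work. Paths and table names are placeholders.

```
# Sketch: load a large gzipped CSV from S3, then repartition before writing to Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://my-bucket/landing/huge_file.csv.gz")  # placeholder path
)

bronze = raw.repartition(200)  # spread work after the single-task gzip read
bronze.write.format("delta").mode("overwrite").saveAsTable("bronze.huge_file")  # placeholder table
```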
zerodarkzone
by New Contributor III
  • 1610 Views
  • 1 reply
  • 1 kudos

Cannot create vnet peering on Azure Databricks

Hi, I'm trying to create a VNET peering to SAP HANA using the default VNET created by Databricks, but it is not possible. I'm getting the following error: Could not add virtual network peering "PeeringSAP" to "workers-vnet". Error: The c...

Data Engineering
Azure Databricks
peering
vnet
