Data Engineering

Forum Posts

Bin
by New Contributor
  • 682 Views
  • 0 replies
  • 0 kudos

How to do an "overwrite" output mode using spark structured streaming without deleting all the data and the checkpoint

I have a Delta lake in ADLS that I sink data into through Spark Structured Streaming. We usually append new data from our data source to our Delta lake, but in some cases we find errors in the data and need to reprocess everything. So what ...

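The thread has no replies; one commonly cited pattern for this scenario (a sketch under assumptions, not an answer from the thread) keeps the checkpoint intact by routing the stream through foreachBatch and overwriting the Delta target per micro-batch. All paths below are hypothetical, and `spark` is the SparkSession a Databricks notebook provides.

    from pyspark.sql import DataFrame

    TARGET = "/mnt/silver/events"  # hypothetical target path

    def overwrite_target(batch_df: DataFrame, batch_id: int) -> None:
        # Replace the target's contents with this micro-batch; the stream's
        # checkpoint still tracks source progress, so it is never deleted.
        batch_df.write.format("delta").mode("overwrite").save(TARGET)

    (spark.readStream
        .format("delta")
        .load("/mnt/bronze/events")  # hypothetical source
        .writeStream
        .foreachBatch(overwrite_target)
        .option("checkpointLocation", "/mnt/checkpoints/events")  # hypothetical
        .start())
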
mp
by New Contributor II
  • 1403 Views
  • 4 replies
  • 6 kudos

Resolved! How can I convert a Parquet table into a Delta table?

I am looking to migrate my legacy warehouse data. How can I convert a Parquet table into a Delta table?

Latest Reply
Kaniz
Community Manager

Hi @Manish P, you have three options for converting a Parquet table to a Delta table. Convert the files to Delta Lake format and then create a Delta table: CONVERT TO DELTA parquet.`/data-pipeline/` CREATE TABLE events USING DELTA LOCATION '/data-pipelin...

3 More Replies
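The truncated reply corresponds to the documented conversion commands; a minimal sketch of two of the three options, using the path and `events` table name shown in the reply (the `/delta/events` output path is hypothetical):

    # Option 1 (from the reply): convert the Parquet files in place,
    # then register a Delta table over the same location.
    spark.sql("CONVERT TO DELTA parquet.`/data-pipeline/`")
    spark.sql("CREATE TABLE events USING DELTA LOCATION '/data-pipeline/'")

    # Another option: read the Parquet data and rewrite it in Delta format.
    spark.read.parquet("/data-pipeline/").write.format("delta").save("/delta/events")
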
ilarsen
by Contributor
  • 501 Views
  • 0 replies
  • 1 kudos

Trouble referencing a column that has been added by schema evolution (Auto Loader with Delta Live Tables)

Hi, I have a Delta Live Tables pipeline, using Auto Loader, to ingest from JSON files. I need to do some transformations, in this case converting timestamps, except one of the timestamp columns does not exist in every file. This is causing the DLT p...

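The post is unanswered; the failure it describes (a transformation referencing a column that only some input files contain) is often worked around by adding the column as a typed null when it is absent. A sketch, with a hypothetical column name:

    import pyspark.sql.functions as F

    def normalize_ts(df, col_name="event_ts"):  # column name hypothetical
        # If schema evolution hasn't surfaced the column yet, add it as a
        # typed null so the cast below doesn't fail on a missing column.
        if col_name not in df.columns:
            df = df.withColumn(col_name, F.lit(None).cast("timestamp"))
        return df.withColumn(col_name, F.col(col_name).cast("timestamp"))
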
serg-v
by New Contributor III
  • 1089 Views
  • 3 replies
  • 0 kudos

Running large-window Spark Structured Streaming aggregations with a small slide duration

I want to run aggregations over large windows (90 days) with a small slide duration (5 minutes). The straightforward solution leads to giant state, around hundreds of gigabytes, which doesn't look acceptable. Are there any best practices for doing this? Now I conside...

Latest Reply
Kaniz
Community Manager

Hi @Sergey Volkov, thanks for your question. Here are some fantastic articles on EWMA and event-time aggregation in Apache Spark™'s Structured Streaming. Please have a look and let us know if that helps: https://towardsdatascience.com/time-series-from-s...

2 More Replies
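For context, the "straightforward solution" the poster rules out looks like the sketch below (column names hypothetical): with a 90-day window sliding every 5 minutes, each event falls into 90 days / 5 minutes = 25,920 overlapping windows, all of which Spark must keep in the state store, hence the hundreds of gigabytes.

    import pyspark.sql.functions as F

    # Naive sliding-window aggregation; `events` is a hypothetical
    # streaming DataFrame with event_time and key columns.
    agg = (events
        .withWatermark("event_time", "90 days")
        .groupBy(F.window("event_time", "90 days", "5 minutes"), "key")
        .count())
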
SailajaB
by Valued Contributor III
  • 1062 Views
  • 2 replies
  • 8 kudos

Resolved! How to prevent Azure users with the Owner or Contributor role from using Launch Workspace to log in to the ADB workspace as admin

Hi, is there any way to disable the Launch Workspace option in the Azure portal for ADB? We grant user access at the resource group level, so we need to restrict users who have the Owner or Contributor role from launching the ADB workspace as admin. Thank you.

Latest Reply
none_ranjeet
New Contributor III

Deny assignments don't block a subscription Contributor from launching the workspace and becoming admin. I haven't found any way to block that, despite trying several different methods.

1 More Replies
Malcoln_Dandaro
by New Contributor
  • 1156 Views
  • 0 replies
  • 0 kudos

Is there any way to navigate/access cloud files using the direct abfss URI (no mount) with default Python functions/libs like open() or os.listdir()?

Hello, today on our workspace we access everything via mount points. We plan to change to "abfss://" for security, governance, and performance reasons. The problem is that sometimes we interact with files using "Python only" code, and apparently ...

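The post is unanswered; the underlying distinction is that Python's built-in file APIs only see the driver's local filesystem (where mounts surface under /dbfs), while a direct abfss:// URI must go through a Hadoop-aware API such as dbutils.fs. A sketch of the contrast, with hypothetical paths:

    import os

    # Mount-based access: the mount appears on the driver's local
    # filesystem, so plain Python functions work.
    print(os.listdir("/dbfs/mnt/lake/raw"))

    # A direct abfss:// URI is not a local path; open()/os.listdir()
    # cannot see it, but dbutils.fs (available in notebooks) can.
    display(dbutils.fs.ls("abfss://container@account.dfs.core.windows.net/raw"))
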
danny_edm
by New Contributor
  • 352 Views
  • 0 replies
  • 0 kudos

collect_set weird result when Photon enabled

Cluster: DBR 10.4 LTS with Photon. Sample schema: seq_no (decimal), type (string). Sample data (seq_no, type): 1 A, 1 A, 2 A, 2 B, 2 B. Command: F.size(F.collect_set(F.col("type")).over(Window.partitionBy("seq_no"))...

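The truncated post reconstructs into a small repro; a runnable version of the setup it describes (the expected distinct count is 1 for seq_no=1 and 2 for seq_no=2):

    import pyspark.sql.functions as F
    from pyspark.sql.window import Window

    df = spark.createDataFrame(
        [(1, "A"), (1, "A"), (2, "A"), (2, "B"), (2, "B")],
        ["seq_no", "type"],
    )

    # The expression from the post: size of the distinct set of `type`
    # values within each seq_no partition.
    df.withColumn(
        "distinct_types",
        F.size(F.collect_set(F.col("type")).over(Window.partitionBy("seq_no"))),
    ).show()
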
Mamdouh_Dabjan
by New Contributor III
  • 2269 Views
  • 6 replies
  • 2 kudos

Importing a large CSV file into Databricks free

Basically, I have a large CSV file that does not fit in a single worksheet; I can only use it in Power Query. I am trying to import this file into my Databricks notebook. I imported it and created a table using that file. But when I saw the table, i...

Latest Reply
weldermartins
Honored Contributor

Hello, if you manually open one of the parts of the CSV file, does the view look different?

5 More Replies
yannickmo
by New Contributor III
  • 3765 Views
  • 8 replies
  • 14 kudos

Resolved! Adding JAR from Azure DevOps Artifacts feed to Databricks job

Hello, we have some Scala code which is compiled and published to an Azure DevOps Artifacts feed. The issue is that we're now trying to add this JAR to a Databricks job (through Terraform) to automate the creation. To do this I'm trying to authenticate using...

Latest Reply
alexott
Valued Contributor II

As of right now, Databricks can't use non-public Maven repositories, as resolution of the Maven coordinates happens in the control plane. That's different from the R & Python libraries. As a workaround you may try to install libraries via an init script or ...

7 More Replies
User16752245312
by New Contributor III
  • 3663 Views
  • 2 replies
  • 2 kudos

How can I automatically capture the heap dump on the driver and executors in the event of an OOM error?

If you have a job that repeatedly runs into out-of-memory (OOM) errors on either the driver or the executors, automatically capturing a heap dump on the OOM event will help you debug the memory issue and identify the cause of the error. Spark config: spark.execu...

Latest Reply
John_360
New Contributor II

Is it necessary to use exactly that HeapDumpPath? I find I'm unable to get driver heap dumps with a different path but otherwise the same configuration. I'm using spark_version 10.4.x-cpu-ml-scala2.12.

1 More Replies
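The Spark config in the post is truncated; the follow-up's mention of HeapDumpPath suggests the standard JVM heap-dump flags, so a plausible reconstruction (dump directory hypothetical; set under the cluster's Spark config, not in notebook code) is:

    spark.driver.extraJavaOptions -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dbfs/heap_dumps
    spark.executor.extraJavaOptions -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dbfs/heap_dumps
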
Serhii
by Contributor
  • 1923 Views
  • 1 replies
  • 1 kudos

Resolved! Behaviour of cluster launches in multi-task jobs

We are adapting the multi-task workflow example from the dbx documentation for our pipelines: https://dbx.readthedocs.io/en/latest/examples/python_multitask_deployment_example.html. As part of the configuration we specify the cluster configuration and provide ...

Latest Reply
User16873043099
Contributor

Tasks within the same multi-task job can reuse clusters. A shared job cluster allows multiple tasks in the same job to use it. The cluster is created and started when the first task using it starts, and terminates after the last ...

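To illustrate the reply's point, a shared job cluster is declared once at the job level and referenced per task by its key; a sketch of the relevant Jobs API 2.1 payload fragment, with hypothetical names and node type:

    job_settings = {
        "name": "multitask-pipeline",
        "job_clusters": [{
            "job_cluster_key": "shared",
            "new_cluster": {
                "spark_version": "10.4.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",  # hypothetical
                "num_workers": 2,
            },
        }],
        "tasks": [
            {"task_key": "ingest",
             "job_cluster_key": "shared",  # both tasks reuse one cluster
             "notebook_task": {"notebook_path": "/Repos/pipeline/ingest"}},
            {"task_key": "transform",
             "depends_on": [{"task_key": "ingest"}],
             "job_cluster_key": "shared",
             "notebook_task": {"notebook_path": "/Repos/pipeline/transform"}},
        ],
    }
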
Ashok1
by New Contributor II
  • 755 Views
  • 2 replies
  • 1 kudos
Latest Reply
Anonymous
Not applicable

Hey there @Ashok ch, hope everything is going great. Does @Ivan Tang's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? Otherwise, please let us know if you need more hel...

1 More Replies
shubhamb
by New Contributor III
  • 2553 Views
  • 3 replies
  • 3 kudos

How to fetch environment variables saved in one notebook from another notebook in Databricks Repos and Notebooks

I have this config.py file which is used to store environment variables: PUSH_API_ACCOUNT_ID = '*******' PUSH_API_PASSCODE = '***********************'. I am fetching the variables and using them in my file.py: import sys sys.path.append("..") ...

Latest Reply
Anonymous
Not applicable

Hey there @Shubham Biswas, hope all is well! Just wanted to check in to see if you were able to resolve your issue; if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from ...

2 More Replies
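The question's truncated snippet reconstructs into the usual sys.path import pattern; a sketch of both files, with values masked as in the post:

    # config.py (checked into the repo next to the notebooks)
    PUSH_API_ACCOUNT_ID = '*******'
    PUSH_API_PASSCODE = '***********************'

    # file.py / the consuming notebook
    import sys
    sys.path.append("..")  # make the directory containing config.py importable
    import config

    account_id = config.PUSH_API_ACCOUNT_ID
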
BradSheridan
by Valued Contributor
  • 1958 Views
  • 9 replies
  • 4 kudos

Resolved! How to use cloudFiles to completely overwrite the target

Hey there, Community! I have a client that will produce a CSV file daily that needs to be moved from Bronze -> Silver. Unfortunately, this source file will always be a full set of data, not incremental. I was thinking of using Auto Loader/cloudFil...

Latest Reply
BradSheridan
Valued Contributor

I "up voted'" all of @werners suggestions b/c they are all very valid ways of addressing my need (the true power/flexibility of the Databricks UDAP!!!). However, turns out I'm going to end up getting incremental data afterall :). So now the flow wi...

8 More Replies
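For the original full-overwrite question (before the data turned incremental), a commonly cited pattern, sketched here with hypothetical paths, pairs Auto Loader with a per-batch overwrite via foreachBatch:

    def overwrite_silver(batch_df, batch_id):
        # Each daily file is a full snapshot, so replace the Silver table.
        batch_df.write.format("delta").mode("overwrite").save("/mnt/silver/daily")

    (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.schemaLocation", "/mnt/schemas/daily")
        .load("/mnt/bronze/daily")
        .writeStream
        .foreachBatch(overwrite_silver)
        .option("checkpointLocation", "/mnt/checkpoints/daily")
        .trigger(once=True)
        .start())
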