Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Taha_Hussain
by Databricks Employee
  • 1798 Views
  • 1 reply
  • 1 kudos

Databricks Office Hours

Our next Office Hours session is scheduled for April 27, 2022 - 8:00 am PT. Do you have questions about how to set up or use Databricks? Do you want to learn more about the best practices for deploying your use case or tips on da...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 1 kudos

Just registered. Thank you and happy weekend.

StephanieAlba
by Databricks Employee
  • 3588 Views
  • 1 reply
  • 6 kudos

Resolved! Is it possible to use Autoloader with a daily update file structure?

We get new files from a third-party each day. The files could be the same or different. However, each day all csv files arrive in the same dated folder. Is it possible to use Autoloader on this structure? We want each csv file to be a table that gets ...

[Attached screenshots: "The folders", "In the folders"]
Latest Reply
Hubert-Dudek
Databricks MVP
  • 6 kudos

@Stephanie Rivera, You can use pathGlobFilter, but you will need a separate Autoloader for each type of file.
df_alert = spark.readStream.format("cloudFiles") \
  .option("cloudFiles.format", "binaryFile") \
  .option("pathGlobFilter", "alert.csv") \
  .load...
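
[Editor's note] A minimal sketch of that pattern; the landing path, schema location, and csv format (rather than the binaryFile shown above) are assumptions, not from the thread:

    # One Autoloader stream per file-name pattern, as the reply suggests.
    df_alert = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.schemaLocation", "/mnt/landing/_schemas/alert")  # for schema inference
        .option("pathGlobFilter", "alert.csv")   # pick up only files named alert.csv
        .load("/mnt/landing/"))                  # hypothetical root above the dated folders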

User16835756816
by Databricks Employee
  • 2832 Views
  • 1 reply
  • 5 kudos

Announcing: Delta Live Tables!

Databricks is excited to announce the general availability of Delta Live Tables to you, our community. Anxiously awaited, Delta Live Tables (DLT) is the first ETL framework that uses a simple, declarative approach to building reliable streaming or ...

Latest Reply
User16725394280
Databricks Employee
  • 5 kudos

Informative content, thanks for sharing.

Kush22
by New Contributor
  • 2463 Views
  • 0 replies
  • 0 kudos

Delete the file

While exporting data from Databricks to Azure Blob Storage, how can I delete the _committed_, _started_ and _SUCCESS files?

sgannavaram
by New Contributor III
  • 4748 Views
  • 1 reply
  • 2 kudos

Resolved! How to pass variables into query string?

I have two variables, StartTimeStmp and EndTimeStmp. I am going to assign the start timestamp based on the last successful job runtime, and EndTimeStmp would be the current system time. SET StartTimeStmp = '2022-03-24 15:40:00.000'; SET EndTimeStmp = '...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 2 kudos

@Srinivas Gannavaram, in Python:
spark.sql(f""" SELECT CI.CORPORATE_ITEM_INTEGRATION_ID, CI.CORPORATE_ITEM_CD WHERE CI.DW_CREATE_TS < '{my_timestamp_variable}' ; """)
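
[Editor's note] A slightly fuller sketch of that approach; the table name and the timestamp values are placeholders, not from the thread:

    from datetime import datetime

    start_ts = "2022-03-24 15:40:00.000"   # e.g. last successful job run time
    end_ts = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]  # current system time, ms precision

    df = spark.sql(f"""
        SELECT CI.CORPORATE_ITEM_INTEGRATION_ID, CI.CORPORATE_ITEM_CD
        FROM corporate_item AS CI            -- hypothetical table name
        WHERE CI.DW_CREATE_TS >= '{start_ts}'
          AND CI.DW_CREATE_TS <  '{end_ts}'
    """)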

Direo
by Contributor II
  • 15731 Views
  • 2 replies
  • 3 kudos
Latest Reply
User16873043212
Databricks Employee
  • 3 kudos

@Direo Direo, Yeah, this is a location inside your DBFS. You have full control over it; Databricks does not delete anything you keep in this location.

1 More Replies
Direo
by Contributor II
  • 2539 Views
  • 1 reply
  • 5 kudos
Latest Reply
Hubert-Dudek
Databricks MVP
  • 5 kudos

@Direo Direo, Yes, you use MERGE syntax for that: https://docs.delta.io/latest/delta-update.html. It is more efficient than overwriting if you want to update only part of the data, but you need to think about the logic of what to update, so overwriti...
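
[Editor's note] A minimal sketch of such a MERGE in Python; the table path, the stand-in updates DataFrame, and the join key are assumptions:

    from delta.tables import DeltaTable

    updates_df = spark.createDataFrame([(1, "new")], ["id", "value"])  # stand-in source data
    target = DeltaTable.forPath(spark, "/mnt/delta/target")            # hypothetical path

    (target.alias("t")
        .merge(updates_df.alias("s"), "t.id = s.id")   # assumed join key
        .whenMatchedUpdateAll()       # overwrite matching rows only
        .whenNotMatchedInsertAll()    # append rows that are new
        .execute())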

Constantine
by Contributor III
  • 2270 Views
  • 1 reply
  • 4 kudos

Resolved! What's the best architecture for Structured Streaming and why?

I am building an ETL pipeline which reads data from a Kafka topic (data is serialized in Thrift format) and writes it to a Delta table in Databricks. I want to have two layers: Bronze Layer -> which has raw Kafka data; Silver Layer -> which has deserializ...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 4 kudos

@John Constantine, "Bronze Layer -> which has raw Kafka data": if you use confluent.io, you can also utilize a direct sink to Data Lake Storage for the bronze layer. "Silver Layer -> which has deserialized data": then use Delta Live Tables to process it to del...
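
[Editor's note] A minimal sketch of the bronze leg of that pipeline in Python; the broker, topic, and paths are placeholders, and the Thrift payload stays raw in the value column:

    raw = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
        .option("subscribe", "events")                      # placeholder topic
        .load())

    (raw.writeStream
        .format("delta")
        .option("checkpointLocation", "/mnt/bronze/_chk")   # placeholder checkpoint
        .start("/mnt/bronze/events"))                       # placeholder bronze path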

cal
by New Contributor
  • 776 Views
  • 0 replies
  • 0 kudos

G.I.S., Inc.

G.I.S., Inc. is a distributor and fabricator of thermal and acoustical insulation systems for industrial, commercial, power, process, original equipment manufacturers, plumbing and HVAC industries. In today's fast-paced market, consumers have a multi...

Anonymous
by Not applicable
  • 2427 Views
  • 1 reply
  • 1 kudos

Resolved! "policy_id" parameter in JOB API

I can't find information about that parameter in https://docs.databricks.com/dev-tools/api/latest/jobs.html. Where is it documented?

Latest Reply
Ryan_Chynoweth
Databricks Employee
  • 1 kudos

I believe it is just "policy_id". As an incomplete example, the specification via API would be something like:
{
  "cluster_id": "1234-567890-abd35gh",
  "spark_context_id": 1234567890,
  "cluster_name": "my_cluster",
  "spark_version": "9.1.x-scala2....
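
[Editor's note] For context, a sketch of where policy_id sits when creating a job from Python; the workspace URL, token, and cluster values are placeholders:

    import requests

    payload = {
        "name": "my_job",                                        # placeholder job name
        "tasks": [{
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Repos/demo/nb"},  # placeholder notebook
            "new_cluster": {
                "spark_version": "9.1.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",               # placeholder node type
                "num_workers": 2,
                "policy_id": "ABC123DEF456",                     # the cluster policy to apply
            },
        }],
    }
    resp = requests.post(
        "https://<workspace>/api/2.1/jobs/create",               # placeholder workspace URL
        headers={"Authorization": "Bearer <token>"},             # placeholder PAT
        json=payload,
    )
    resp.raise_for_status()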

sgannavaram
by New Contributor III
  • 4369 Views
  • 3 replies
  • 4 kudos

Resolved! Write output of DataFrame to a file with tilde (~) separator in Databricks Mount or Storage Mount with VM.

I need to write output of a DataFrame to a file with a tilde (~) separator in Databricks Mount or Storage Mount with VM. Could you please help with some sample code if you have any?

Latest Reply
Hubert-Dudek
Databricks MVP
  • 4 kudos

@Srinivas Gannavaram, Does it have to be CSV with fields separated by ~? If yes, it is enough to add .option("sep", "~"):
(df
  .write
  .option("sep", "~")
  .csv(mount_path))
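
[Editor's note] A slightly fuller sketch, assuming a hypothetical mount path and that a header row is wanted:

    (df.write
       .option("sep", "~")
       .option("header", "true")       # write column names as the first row
       .mode("overwrite")
       .csv("/mnt/mycontainer/out"))   # hypothetical mount path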

2 More Replies
Braxx
by Contributor II
  • 3673 Views
  • 1 reply
  • 2 kudos

Resolved! list users having access to scope credentials

Hello! How do I list all the users or groups having access to the Key Vault-backed scope credentials? Let's say I have a scope called MyScope for which all the secrets are stored in MyKeyVault. I would like to see what users have access there, and ideal...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 2 kudos

@Bartosz Wachocki, As secrets use ACLs at the scope level, you need to make an API call (it can also be done via the CLI) to list the ACLs for the given scope >> 2.0/secrets/acls/list. More info here: https://docs.databricks.com/dev-tools/api/latest/secrets.html#list-secre...
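
[Editor's note] A sketch of both routes, using the scope name from the question; the workspace URL and token are placeholders:

    # Via the Databricks CLI:
    #   databricks secrets list-acls --scope MyScope
    import requests

    resp = requests.get(
        "https://<workspace>/api/2.0/secrets/acls/list",   # placeholder workspace URL
        headers={"Authorization": "Bearer <token>"},       # placeholder PAT
        params={"scope": "MyScope"},
    )
    print(resp.json())   # e.g. {"items": [{"principal": "...", "permission": "READ"}]}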

BeginnerBob
by New Contributor III
  • 6906 Views
  • 2 replies
  • 2 kudos

Bronze silver gold layers

Is there a best practice guide on setting up the delta lake for these 3 layers? I'm looking for documents or scripts to run that will assist me.

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

Hi @Lloyd Vickery, I would highly recommend using Databricks Delta Live Tables (DLT); docs here: https://databricks.com/product/delta-live-tables
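
[Editor's note] A minimal sketch of the three layers as a DLT pipeline in Python; the source path, file format, and column names are assumptions:

    import dlt
    from pyspark.sql.functions import col, sum as fsum

    @dlt.table(comment="Raw files as ingested (bronze)")
    def bronze_orders():
        return (spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "csv")
                .load("/mnt/raw/orders"))                 # hypothetical landing path

    @dlt.table(comment="Cleaned and typed (silver)")
    def silver_orders():
        return dlt.read_stream("bronze_orders").where(col("order_id").isNotNull())

    @dlt.table(comment="Business aggregates (gold)")
    def gold_daily_revenue():
        return (dlt.read("silver_orders")
                .groupBy("order_date")
                .agg(fsum("amount").alias("revenue")))    # assumed columns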

1 More Replies
AdamRink
by New Contributor III
  • 5793 Views
  • 3 replies
  • 0 kudos

Try catch multiple write streams on a job

We are having issues with checkpoints and schema versions getting out of date (no idea why), but it causes jobs to fail. We have jobs that are running 15-30 streaming queries, so if one fails, that creates an issue. I would like to trap the checkpo...

Latest Reply
AdamRink
New Contributor III
  • 0 kudos

The problem is that on startup, if a stream fails, it would never hit the awaitAnyTermination. I almost want to take that while loop and put it on a background thread to start that at the beginning and then fire all the streams afterward... not sure ...
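
[Editor's note] A sketch of one way to supervise many streams from a single job, assuming a hypothetical start_streams() that launches every query:

    import time

    def start_streams(spark):
        """Placeholder: start each writeStream the job owns (assumption)."""
        ...  # e.g. spark.readStream...writeStream.start() per source

    while True:
        try:
            start_streams(spark)
            # Blocks until ANY query stops; a failed query re-raises its exception here.
            spark.streams.awaitAnyTermination()
        except Exception as exc:
            print(f"A stream failed, restarting all: {exc}")
            for q in spark.streams.active:   # stop the survivors for a clean restart
                q.stop()
            spark.streams.resetTerminated()  # forget terminated queries before re-waiting
            time.sleep(30)                   # simple backoff; tune as needed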

2 More Replies
TS
by New Contributor III
  • 5634 Views
  • 3 replies
  • 3 kudos

Resolved! Turn spark.sql query into scala function

Hello, I'm learning Scala / Spark and try to understand what's wrong with my function. I have a spark.sql query stored in a variable:
val uViewName = spark.sql(""" SELECT v.Data_View_Name FROM apoHierarchy AS h INNER JOIN apoView AS v ON h.View_N...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 3 kudos

Try adding .first()(0); it will return only the value from the first row/column, as currently you are returning a Dataset:
var uViewName = spark.sql(s""" SELECT v.Data_View_Name FROM apoHierarchy AS h INNER JOIN apoView AS v ON h.View_Name = v.Context_View_N...
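
[Editor's note] For comparison, the same fix sketched in Python (the thread itself is Scala, where .first()(0) plays the role of .first()[0]); the two tables are assumed to exist:

    view_name = spark.sql("""
        SELECT v.Data_View_Name
        FROM apoHierarchy AS h
        INNER JOIN apoView AS v ON h.View_Name = v.Context_View_Name
    """).first()[0]   # take the scalar out of the first Row instead of returning a DataFrame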

2 More Replies