Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Forum Posts

KVNARK
by Honored Contributor II
  • 2085 Views
  • 4 replies
  • 2 kudos

Azure SQL date function conversion to Databricks SQL.

I need to convert the Azure SQL DATE_ADD function below to Databricks SQL, but I am not getting the expected output. Can anyone suggest what can be done for this? DATE_ADD(Hour,(SELECT t1.SLA FROM SLA t1 WHERE t1.Stage_Id = 2 AND t1.RNK = 1)
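For reference, Spark and Databricks SQL interpret date_add(start, n) as adding days, which is a common reason a literal port of T-SQL's DATE_ADD(Hour, ...) misbehaves; timestampadd (or dateadd with an explicit unit, in Databricks Runtime 10.4+) takes the unit as its first argument. Below is a minimal PySpark sketch of one possible conversion, not a confirmed fix from this thread: the SLA table comes from the post, while the events table and its event_ts timestamp column are hypothetical placeholders.

    # A minimal sketch, assuming the SLA column holds a number of hours and that a
    # hypothetical events table has a timestamp column event_ts to shift by the SLA.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

    converted = spark.sql("""
        SELECT timestampadd(
                 HOUR,
                 (SELECT t1.SLA FROM SLA t1 WHERE t1.Stage_Id = 2 AND t1.RNK = 1),
                 event_ts) AS sla_deadline
        FROM events
    """)
    converted.show()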

Latest Reply
Vartika
Databricks Employee
  • 2 kudos

Hi @KVNARK, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

3 More Replies
Smitha1
by Valued Contributor II
  • 1410 Views
  • 1 reply
  • 2 kudos

#00244807 and #00245872 Ticket Status - HIGH Priority

Dear @Vidula Khanna, Databricks team, @Nadia Elsayed @Jose Gonzalez @Aden Jaxson, what is the SLA/ETA for a normal-priority ticket and a HIGH-priority ticket? I created tickets #00244807 on 7th Dec and #00245872 but haven't received any update ...

(attachment: image.png)
Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 2 kudos

You can only create high-priority tickets if you have an enterprise plan; as a standard user you can only create normal-priority tickets. If you have an enterprise plan, you can escalate the case, and the Databricks team will get back to you there.

AndriusVitkausk
by New Contributor III
  • 1444 Views
  • 1 reply
  • 1 kudos

Autoloader event vs directory ingestion

For a production workload containing around 15k gzip-compressed JSON files per hour, all in a YYYY/MM/DD/HH/id/timestamp.json.gz directory: what would be the better approach for ingesting this into a Delta table in terms of not only the incremental load...

Latest Reply
AndriusVitkausk
New Contributor III
  • 1 kudos

@Kaniz Fatma So I've not found a fix for the small-file problem using Auto Loader; it seems to struggle really badly against large directories. I had a cluster running for 8h stuck on the "listing directory" part with no end; the cluster seemed completely idle to...
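In directory-listing mode Auto Loader re-lists the input path on every trigger, which is what stalls for hours on deep YYYY/MM/DD/HH trees full of small files; file notification mode subscribes to storage events and skips the listing entirely. A minimal sketch of that switch, assuming hypothetical paths and table names and that the workspace has permission to set up the notification infrastructure:

    # A minimal sketch of Auto Loader in file notification mode; paths, schema, and
    # the target table are hypothetical placeholders, not taken from this thread.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

    schema = StructType([
        StructField("id", StringType()),
        StructField("event_time", TimestampType()),
    ])

    stream = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")  # .json.gz files are decompressed automatically
        # Subscribe to storage events instead of listing YYYY/MM/DD/HH/... each trigger.
        .option("cloudFiles.useNotifications", "true")
        .schema(schema)
        .load("/mnt/landing/")  # hypothetical source path
    )

    (stream.writeStream
        .option("checkpointLocation", "/mnt/checkpoints/landing")  # hypothetical
        .trigger(availableNow=True)  # process the backlog, then stop
        .toTable("bronze.landing_events"))  # hypothetical target Delta table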

brickster_2018
by Databricks Employee
  • 1343 Views
  • 2 replies
  • 0 kudos

Why should I move to Auto-loader?

I have a streaming workload using the S3-SQS connector. The streaming job is running fine within the SLA. Should I migrate my job to use Auto Loader? If yes, what are the benefits? Who should migrate and who should not?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

That makes sense, @Anand Ladda! One major improvement that will have a direct impact on performance is the architectural difference: S3-SQS uses an internal implementation of the Delta table to store the checkpoint details about the source files...
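To make the comparison concrete, here is a minimal sketch of what the Auto Loader replacement for an S3-SQS read might look like; the bucket, schema location, and option values are hypothetical placeholders rather than a drop-in migration recipe. Unlike the S3-SQS source, Auto Loader keeps its file-discovery state in a RocksDB store under the checkpoint location and can provision or reuse the notification queue itself.

    # A minimal sketch of replacing an S3-SQS read with Auto Loader; every path and
    # option value below is a hypothetical placeholder.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

    # Legacy source, roughly: spark.readStream.format("s3-sqs")
    #     .option("fileFormat", "json").option("sqsUrl", "<queue-url>")...

    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.useNotifications", "true")  # event-driven discovery, like S3-SQS
        # Infer and track the schema instead of hard-coding it (hypothetical path).
        .option("cloudFiles.schemaLocation", "s3://my-bucket/checkpoints/events/schema")
        .load("s3://my-bucket/events/")  # hypothetical source bucket
    )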

1 More Reply