by KVNARK • Honored Contributor II
- 2490 Views
- 4 replies
- 2 kudos
I need to convert the Azure SQL DATE_ADD function below to Databricks SQL, but I'm not getting the expected output. Can anyone suggest what can be done for this?

DATE_ADD(Hour, (SELECT t1.SLA FROM SLA t1 WHERE t1.Stage_Id = 2 AND t1.RNK = 1)
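One common translation, sketched below: Spark SQL's date_add() only adds whole days, so hour offsets need timestampadd() (Spark 3.3+ / DBR 10.4+) or an INTERVAL expression instead. The original expression is truncated above, so the `events` table and `event_ts` column here are hypothetical placeholders for its missing timestamp argument.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sketch: timestampadd() plays the role of Azure SQL's DATE_ADD(Hour, ...);
# the scalar subquery supplies the SLA hour offset exactly as in the original.
# `events` and `event_ts` are hypothetical stand-ins for the truncated part.
result = spark.sql("""
    SELECT timestampadd(
             HOUR,
             (SELECT t1.SLA FROM SLA t1 WHERE t1.Stage_Id = 2 AND t1.RNK = 1),
             event_ts
           ) AS sla_deadline
    FROM events
""")
result.show()
```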
Latest Reply
Hi @KVNARK. Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
3 More Replies
- 1747 Views
- 1 replies
- 1 kudos
For a production workload containing around 15k gzip-compressed JSON files per hour, all in a YYYY/MM/DD/HH/id/timestamp.json.gz directory layout: what would be the better approach to ingesting this into a Delta table, in terms of not only the incremental load...
Latest Reply
@Kaniz Fatma So I've not found a fix for the small-file problem using Auto Loader. It seems to struggle really badly against large directories: I had a cluster running for 8h stuck on the "listing directory" part with no end, and the cluster seemed completely idle to...
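A minimal Auto Loader sketch for this scenario: file-notification mode sidesteps the slow directory listing over the deep YYYY/MM/DD/HH tree, and an availableNow trigger runs batch-style incremental loads. The bucket paths, checkpoint and schema locations, and the target table name are all hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sketch: Auto Loader in file-notification mode, so new files are discovered
# via cloud notifications instead of repeated directory listing.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")            # .json.gz is decompressed transparently
    .option("cloudFiles.useNotifications", "true")  # avoid the "listing directory" phase
    .option("cloudFiles.schemaLocation",
            "s3://my-bucket/_schemas/ingest")       # hypothetical; needed for schema inference
    .load("s3://my-bucket/landing/")                # hypothetical root above YYYY/MM/DD/HH/...
)

(
    stream.writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/ingest")  # hypothetical
    .trigger(availableNow=True)   # incremental batch run; drop for continuous streaming
    .toTable("bronze.events")     # hypothetical target Delta table
)
```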
- 1599 Views
- 2 replies
- 0 kudos
I have a streaming workload using the S3-SQS connector, and the streaming job is running fine within the SLA. Should I migrate my job to use Auto Loader? If yes, what are the benefits? Who should migrate and who should not?
Latest Reply
That makes sense, @Anand Ladda! One major improvement with a direct impact on performance is the architectural difference: S3-SQS uses an internal Delta table implementation to store the checkpoint details about the source files...
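A hedged before/after sketch of the migration the reply describes. The queue URL, bucket path, and schema are hypothetical, and Auto Loader cannot reuse the old S3-SQS checkpoint, so plan for a fresh checkpoint location when switching.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Hypothetical schema for the incoming events
event_schema = StructType([
    StructField("id", StringType()),
    StructField("ts", TimestampType()),
])

# Before: legacy S3-SQS source, reading file names from an SQS queue you manage
legacy = (
    spark.readStream.format("s3-sqs")
    .option("fileFormat", "json")
    .option("queueUrl",
            "https://sqs.us-east-1.amazonaws.com/123456789012/events")  # hypothetical
    .schema(event_schema)
    .load()
)

# After: Auto Loader in file-notification mode; it manages its own queue and
# tracks already-ingested files in a scalable RocksDB-backed store rather than
# the S3-SQS connector's internal Delta-table-based checkpoint
migrated = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useNotifications", "true")
    .schema(event_schema)
    .load("s3://my-bucket/events/")  # hypothetical source path
)
```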
1 More Reply