by gyapar • New Contributor II
- 7186 Views
- 0 replies
- 0 kudos
Hi all, I'm trying to create one job cluster with a single configuration or specification that has a workflow, and this workflow needs to have 3 dependent tasks in a straight line, for example t1->t2->t3. In Databricks there are some constraints also...
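A minimal sketch of what the question describes, shaped like a Databricks Jobs API 2.1 `jobs/create` payload: one shared job cluster declared under `job_clusters` and referenced by each task via `job_cluster_key`, with the three tasks chained through `depends_on`. The notebook paths, node type, and Spark version below are illustrative placeholders, not values from the original post.

```python
# Hypothetical Jobs API 2.1 payload: one shared job cluster, tasks t1 -> t2 -> t3.
# Notebook paths, node type, and Spark version are placeholders.
job_spec = {
    "name": "sequential-three-task-job",
    "job_clusters": [
        {
            "job_cluster_key": "shared_cluster",
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
        }
    ],
    "tasks": [
        {
            "task_key": "t1",
            "job_cluster_key": "shared_cluster",
            "notebook_task": {"notebook_path": "/Jobs/t1"},
        },
        {
            "task_key": "t2",
            # t2 starts only after t1 succeeds
            "depends_on": [{"task_key": "t1"}],
            "job_cluster_key": "shared_cluster",
            "notebook_task": {"notebook_path": "/Jobs/t2"},
        },
        {
            "task_key": "t3",
            # t3 starts only after t2 succeeds
            "depends_on": [{"task_key": "t2"}],
            "job_cluster_key": "shared_cluster",
            "notebook_task": {"notebook_path": "/Jobs/t3"},
        },
    ],
}
```

Because every task points at the same `job_cluster_key`, all three run on one job cluster rather than spinning up a cluster per task.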
- 1408 Views
- 0 replies
- 0 kudos
Hello, We are using DLT pipelines for many of our jobs, with notifications on failures sent to Slack. Wondering if there is a clean way to disable the alerts when in development mode. It does make sense to have them turned off in dev, doesn't it?
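One possible approach, sketched below under the assumption that the pipeline settings are generated programmatically (e.g. by a deployment script): drop the `notifications` block whenever the pipeline is deployed with `development: true`. The pipeline name and recipient address are placeholders, and the `settings_for` helper is hypothetical, not part of any Databricks API.

```python
# Hypothetical sketch: omit the DLT "notifications" block in development mode.
# The "development" and "notifications" keys follow the Delta Live Tables
# pipeline settings JSON; name and recipient below are placeholders.
def settings_for(dev_mode: bool) -> dict:
    settings = {
        "name": "my_dlt_pipeline",      # placeholder pipeline name
        "development": dev_mode,
        "notifications": [
            {
                "email_recipients": ["slack-channel@example.com"],  # placeholder
                "alerts": ["on-update-failure", "on-update-fatal-failure"],
            }
        ],
    }
    if dev_mode:
        # No failure alerts while iterating in development mode
        settings.pop("notifications")
    return settings
```

Production deployments keep the alerts; development deployments silently drop them, so nothing pages Slack while you iterate.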
- 2008 Views
- 0 replies
- 0 kudos
Hi Team, I have attended the Advantage Lakehouse: Fueling Innovation in the Era of Data and AI webinar. I also completed Databricks Lakehouse Fundamentals and the feedback survey, but I still have not received the Databricks voucher. Could you please look i...
- 3362 Views
- 2 replies
- 0 kudos
1. How to use the cloudFiles.backfillInterval option in a notebook?
2. Does any property need to be set for it?
3. Where exactly is it placed in the readStream portion of the code or the writeStream portion of the code?
4. Do you have any sample code?
5. Where we find ...
Latest Reply
1. Is the following code correct for specifying the .option("cloudFiles.backfillInterval", 300)?

df = spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "csv") \
    .option("cloudFiles.schemaLocation", f"dbfs:/FileStore/xyz/back_fill_opti...
1 More Replies
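For reference, a minimal sketch of where `cloudFiles.backfillInterval` sits in an Auto Loader configuration. Note that, per the Auto Loader documentation, the option takes an interval string such as "1 day" rather than a bare number like 300. All paths are placeholders, and the `build_stream` helper is hypothetical; attaching the options to `spark.readStream` requires a Databricks runtime.

```python
# Hypothetical Auto Loader option map; all paths are placeholders.
autoloader_options = {
    "cloudFiles.format": "csv",
    "cloudFiles.schemaLocation": "dbfs:/FileStore/example/schema",
    # Interval string, not an integer: trigger an asynchronous backfill
    # once per day to pick up any files missed by event notifications.
    "cloudFiles.backfillInterval": "1 day",
}

def build_stream(spark):
    """Attach the options to readStream; call this inside a Databricks notebook."""
    reader = spark.readStream.format("cloudFiles")
    for key, value in autoloader_options.items():
        reader = reader.option(key, value)
    # The backfill option belongs on the read side; the writeStream side
    # only needs its usual checkpointLocation.
    return reader.load("dbfs:/FileStore/example/input")
```

So the answer to question 3 is: `backfillInterval` is a readStream option; the writeStream portion is unaffected by it.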