When adding parameters to the job under "Edit Parameters", you provide a `Key` and then the `Value`. In my example, the `Key` was `iso_datetime` (which is what was referenced in the notebook) and the `Value` was `{{job.start_time.[iso_datetime]}}`. W...
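Inside the notebook, the parameter value arrives as a string and can be read with `dbutils.widgets.get("iso_datetime")`. A minimal sketch of parsing such a value with the standard library, using a hypothetical timestamp in place of what `{{job.start_time.[iso_datetime]}}` would resolve to at run time:

```python
from datetime import datetime

# Hypothetical value standing in for the job-supplied parameter;
# in a Databricks notebook it would come from:
#   iso_datetime = dbutils.widgets.get("iso_datetime")
iso_datetime = "2024-03-15T08:30:00"

# Parse the ISO 8601 string so individual components are available.
start = datetime.fromisoformat(iso_datetime)
print(start.year, start.month)  # -> 2024 3
```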
Hi,
You have a few options. One would be to set up the job with multiple parameters: one parameter called `year` represented by `{{job.start_time.[year]}}`, another called `month` represented by `{{job.start_time.[month]}}`, etc.
Alternatively, yo...
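With the multi-parameter approach above, a notebook might combine the values into a partition path. A small sketch with hypothetical parameter values and a hypothetical base path (in Databricks the values would come from `dbutils.widgets.get`):

```python
# Hypothetical values standing in for what the job would pass via
# {{job.start_time.[year]}} and {{job.start_time.[month]}}; in a notebook:
#   year = dbutils.widgets.get("year")
#   month = dbutils.widgets.get("month")
year = "2024"
month = "03"

# Hypothetical base path; adjust to your own storage layout.
path = f"/mnt/landing/{year}/{month}/"
print(path)  # -> /mnt/landing/2024/03/
```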
Hi,

Hopefully this is just for testing and any production data would be persisted to a table, but one example is:

```python
spark.range(10).write.format("delta").mode("append").save("file:/tmp/data")
```

```sql
ALTER TABLE delta.`file:/tmp/data` CLUSTER BY...
```
Hi,

I tried replicating your test and it worked for me, so I'm not sure what's going on without knowing more about which Runtime, Python version, etc. you're using. But regardless, perhaps `dbutils.fs.cp` would be an easier way of doing what you're tryi...
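For illustration, here is a local-filesystem analogue of that copy using only the standard library; in a Databricks notebook the equivalent would be roughly `dbutils.fs.cp("file:/tmp/src.txt", "file:/tmp/dst.txt")`. The file names are hypothetical:

```python
import shutil
import tempfile
from pathlib import Path

# Work in a scratch directory so the sketch is self-contained.
tmp = Path(tempfile.mkdtemp())
src = tmp / "src.txt"   # hypothetical source file
dst = tmp / "dst.txt"   # hypothetical destination

src.write_text("example data")
shutil.copy(src, dst)   # analogous to dbutils.fs.cp(src, dst)
print(dst.read_text())  # -> example data
```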