Do Delta Live Tables have different built-in SQL functions than the corresponding Databricks runtime?
I created a cluster with Databricks Runtime 10.3 (the current DLT runtime) so I could test my Delta Live Tables code before running it as a pipeline. The built-in SQL functions split() and element_at() worked in my interactive notebook, but when I ran the same code in a DLT pipeline, it produced a null value.
Is there a reference guide for Delta Live Tables that shows the built-in functions (if different from the corresponding Databricks runtime built-in functions)?
The code splits a file path to extract the base file name (without the extension). It takes a string like this:
"s3://example-bucket/folder/secondfolder/thirdfolder/b4618a0e-ffb8-11ec-9bbe-00163e43d8f1.json"
and extracted this:
"b4618a0e-ffb8-11ec-9bbe-00163e43d8f1"
Here's the code, which works fine on the interactive 10.3 runtime but not in DLT:
spark.sql("""
    SELECT
        element_at(split(element_at(split(file_name, '/'), -1), '\\.'), 1) AS file_guid,
        *
    FROM my_tablename
""")