Hi @Werner Stinckens, this is an Apache Spark notebook that reads the contents of a file stored in Azure Blob Storage and loads it into an on-premises SQL table.
The Databricks Runtime is 9.1 LTS (includes Apache Spark 3.1.2, Scala 2.12), with Standard_DS3_v2 as both the worker and driver node type.
The notebook reads the file content using the code below:
val SourceDataFrame = spark
  .read
  .option("header", "false")       // the source file has no header row
  .option("delimiter", "|")        // pipe-delimited file
  .schema(SourceSchemaStruct)      // schema supplied up front instead of inferred
  .csv(SourceFilename)
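The snippet assumes SourceSchemaStruct and SourceFilename are defined earlier in the notebook. A minimal sketch of what those definitions might look like, assuming account-key access to a storage account (the column names, storage account, container, and secret scope below are all hypothetical placeholders):

import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

// Hypothetical schema for a pipe-delimited file with three columns
val SourceSchemaStruct = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("name", StringType, nullable = true),
  StructField("amount", StringType, nullable = true)
))

// Hypothetical storage account configuration; the key is pulled from a secret scope
spark.conf.set(
  "fs.azure.account.key.mystorageacct.blob.core.windows.net",
  dbutils.secrets.get(scope = "my-scope", key = "storage-key"))

val SourceFilename = "wasbs://mycontainer@mystorageacct.blob.core.windows.net/input/source.csv"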
Then it writes the DataFrame into the target table in overwrite mode (SourceDataFrame2 here is the DataFrame after any intermediate transformations applied to SourceDataFrame):
SourceDataFrame2
  .write
  .format("jdbc")
  .mode("overwrite")               // replaces the existing table contents
  .option("driver", driverClass)
  .option("url", jdbcUrl)
  .option("dbtable", TargetTable)
  .option("user", jdbcUsername)
  .option("password", jdbcPassword)
  .save()
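For completeness, here is a minimal sketch of the JDBC connection variables used above, assuming the on-premises target is SQL Server reachable from the cluster; the hostname, database, table, and secret names are hypothetical placeholders:

// Assumed SQL Server target; adjust driver and URL for a different database engine
val driverClass  = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
val jdbcHostname = "onprem-sql.example.com"   // placeholder on-prem server
val jdbcPort     = 1433
val jdbcDatabase = "StagingDB"                // placeholder database name
val jdbcUrl      = s"jdbc:sqlserver://$jdbcHostname:$jdbcPort;database=$jdbcDatabase"

// Credentials read from a Databricks secret scope rather than hard-coded
val jdbcUsername = dbutils.secrets.get(scope = "my-scope", key = "sql-user")
val jdbcPassword = dbutils.secrets.get(scope = "my-scope", key = "sql-password")
val TargetTable  = "dbo.SourceData"           // placeholder target table

One point worth noting about the design: mode("overwrite") drops and recreates the target table by default, which discards indexes and other table metadata. Adding .option("truncate", "true") to the writer keeps the existing table definition and only replaces the rows.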