- 715 Views
- 0 replies
- 0 kudos
I have created a custom transformer to be used in an ML pipeline. I was able to write the pipeline to storage by extending the transformer class with DefaultParamsWritable. Reading the pipeline back in, however, does not seem possible in Scala. I have... (see the sketch below)
by Nik • New Contributor III
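One common fix for the load side (hedged here, since the post is truncated) is to pair the custom transformer with a companion object that extends DefaultParamsReadable, so Spark ML can locate a reader for the class when the saved pipeline is read back. A minimal Scala sketch with a hypothetical MyTransformer:

import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.{DefaultParamsReadable, DefaultParamsWritable, Identifiable}
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.types.StructType

// Hypothetical custom transformer; a real one would carry its own params and logic.
class MyTransformer(override val uid: String) extends Transformer with DefaultParamsWritable {
  def this() = this(Identifiable.randomUID("myTransformer"))

  // Placeholder logic: pass the data through unchanged.
  override def transform(dataset: Dataset[_]): DataFrame = dataset.toDF()
  override def transformSchema(schema: StructType): StructType = schema
  override def copy(extra: ParamMap): MyTransformer = defaultCopy(extra)
}

// The companion object is what makes loading work: DefaultParamsReadable supplies
// the static read/load methods that the pipeline loader looks up by class name
// when it encounters this stage in the saved metadata.
object MyTransformer extends DefaultParamsReadable[MyTransformer]

With the companion object in place, a pipeline saved via write.save(path) can usually be restored with Pipeline.load(path) (or PipelineModel.load(path) for a fitted pipeline), because the loader resolves the stage's read method by reflection.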
- 13473 Views
- 19 replies
- 0 kudos
Hi,
I am reading a text file from a blob:
val sparkDF = spark.read.format(file_type)
  .option("header", "true")
  .option("inferSchema", "true")
  .option("delimiter", file_delimiter)
  .load(wasbs_string + "/" + PR_FileName)
Then I test my Datafra...
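A runnable variant of the snippet above, for context only: the values below for file_type, file_delimiter, wasbs_string and PR_FileName are pure assumptions (the originals are defined elsewhere in the poster's notebook), and spark is the session predefined in a Databricks notebook.

val file_type = "csv"            // assumed: delimited text read through the csv reader
val file_delimiter = "|"         // assumed delimiter
val wasbs_string = "wasbs://container@account.blob.core.windows.net/input"  // assumed blob path
val PR_FileName = "PR_File.txt"  // assumed file name

val sparkDF = spark.read.format(file_type)
  .option("header", "true")
  .option("inferSchema", "true")
  .option("delimiter", file_delimiter)
  .load(wasbs_string + "/" + PR_FileName)

// Quick checks on the resulting DataFrame.
sparkDF.printSchema()
println(s"row count: ${sparkDF.count()}")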
Latest Reply
Create a temp folder inside the output folder, copy the part-00000* file to the output folder under the desired file name, then delete the temp folder. Python code snippet to do the same (see also the Scala sketch below):
fpath = output + '/' + 'temp'
def file_exists(path):
  try:
    dbutils.fs.ls(path)
    return...
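The preview cuts the snippet off, so here is a minimal Scala sketch of the same three steps (write into a temp folder, copy the part-00000* file out under the desired name, delete the temp folder). It assumes a Databricks notebook where dbutils is available; output, fileName and sparkDF are hypothetical names.

val output = "dbfs:/mnt/mycontainer/output"   // hypothetical output folder
val fileName = "PR_result.csv"                // hypothetical final file name
val tempPath = output + "/temp"

// 1. Write the single-partition result into a temp folder inside the output folder.
sparkDF.coalesce(1)
  .write
  .option("header", "true")
  .mode("overwrite")
  .csv(tempPath)

// 2. Copy the part-00000* file Spark produced to the output folder under the desired name.
val partFile = dbutils.fs.ls(tempPath)
  .map(_.path)
  .find(_.contains("part-00000"))
  .getOrElse(sys.error("no part file found in " + tempPath))
dbutils.fs.cp(partFile, output + "/" + fileName)

// 3. Delete the temp folder.
dbutils.fs.rm(tempPath, true)

Writing through a temp folder works around the fact that Spark always emits part-* files; coalesce(1) keeps the output to a single part file so there is only one file to copy.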
18 More Replies