Hello everyone,
I would like to know whether it is possible, with PySpark, to read a flat file stored in a directory in Azure Blob Storage as raw bytes so that I can parse it, while using the connection already integrated into the cluster between Databricks and Azure Blob Storage. I have found some code that uses BlobServiceClient, but I would prefer to rely on the already-integrated connection instead.
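For context, here is a minimal sketch of the kind of thing I am after, assuming the container is already mounted at a hypothetical mount point like /mnt/mycontainer (i.e. the Databricks-integrated connection, no BlobServiceClient). On Databricks, DBFS mounts are also exposed through the local /dbfs FUSE path, so plain Python file I/O can return the raw bytes:

```python
def read_file_bytes(path: str) -> bytes:
    """Read a file fully into memory as raw bytes."""
    with open(path, "rb") as f:
        return f.read()

# On a Databricks cluster, a mounted blob path would be read via the
# /dbfs FUSE prefix (mount point and file name are hypothetical):
# raw = read_file_bytes("/dbfs/mnt/mycontainer/input/flatfile.dat")
# ...then parse `raw` however the flat-file format requires.
```

Alternatively, I believe Spark's built-in `binaryFile` data source can do this distributed, e.g. `spark.read.format("binaryFile").load("/mnt/mycontainer/input/")`, which yields each file's contents in a binary `content` column. Is one of these the recommended approach?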
Regards,