Hi @Jothia
I believe you need to replicate the display format yourself: read the raw values and implement the formatting logic in Spark after reading.
Use spark-excel normally, which will give you the raw numeric/text values -
df = (spark.read.format("com.crealytics.spark.excel")
.opt...
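Once the raw values are loaded, the display formatting can be re-created in plain Python (for example inside a UDF). A minimal sketch, assuming a hypothetical helper and a few common Excel format strings; the actual formats in your workbook may differ:

```python
def apply_excel_format(value, fmt):
    """Re-create a few common Excel display formats on a raw value.

    Hypothetical helper: covers only percent, two-decimal, and
    thousands-separator formats as an illustration.
    """
    if fmt == "0.00%":
        return f"{value * 100:.2f}%"
    if fmt == "0.00":
        return f"{value:.2f}"
    if fmt == "#,##0":
        return f"{value:,.0f}"
    return str(value)

# A cell storing the raw value 0.1234 but displayed as a percent:
print(apply_excel_format(0.1234, "0.00%"))  # → 12.34%
```

In Spark you could wrap this in a UDF and apply it per column once you know each column's display format.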
Hi team, In interactive notebooks on personal clusters, you're attached directly to the Spark driver inside the cluster, and the Spark session is the legacy PySpark session. In job clusters, especially when running with newer runtimes (e.g. DBR 14.x+ or SQL wa...
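A quick way to check which flavor of session a notebook or job is using is to inspect the session object's module path: Spark Connect sessions are defined under `pyspark.sql.connect`, while the classic session lives under `pyspark.sql.session`. A minimal sketch (the helper name is mine, not a Databricks API):

```python
def session_flavor(spark):
    """Classify a SparkSession as classic or Spark Connect.

    Hypothetical helper: relies on Spark Connect's SparkSession class
    being defined in the pyspark.sql.connect package.
    """
    module = type(spark).__module__
    return "spark-connect" if module.startswith("pyspark.sql.connect") else "classic"
```

Calling `session_flavor(spark)` in a notebook tells you which code path you are on before you hit an API that only exists in one of the two.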
If the “reporting” layer is essentially micro-batching over bounded backlogs, run it with availableNow (or a scheduled batch job) so each run is naturally bounded and exits cleanly on its own, with no manual cancel. This greatly reduces the chance of partial...
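The availableNow pattern can be sketched as below. The function name, paths, table name, and the Delta source are assumptions for illustration; `trigger(availableNow=True)` itself is the standard Structured Streaming API (Spark 3.3+):

```python
def run_bounded_reporting(spark, source_path, checkpoint_path, target_table):
    """One bounded micro-batch pass over the current backlog.

    trigger(availableNow=True) processes everything available when the
    query starts, then the query stops on its own -- no manual cancel.
    """
    query = (
        spark.readStream.format("delta").load(source_path)
        .writeStream.format("delta")
        .option("checkpointLocation", checkpoint_path)
        .trigger(availableNow=True)
        .toTable(target_table)
    )
    query.awaitTermination()  # returns once the backlog is drained
```

Scheduling this function as a job gives you the same exactly-once checkpointed semantics as a long-running stream, but each run terminates by itself.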
Hi @Srajole, There are several possible reasons why the data is not being written into the table: you may be writing to a path different from the table's storage location, or using a write mode that doesn't replace data as expected. spark.sql("DESCR...
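To rule out the path mismatch, compare the table's registered storage location (from DESCRIBE DETAIL, which Delta tables support) with the path the job writes to. A sketch, where the helper name and arguments are placeholders:

```python
def check_table_location(spark, table_name, write_path):
    """Compare a Delta table's registered location with a write path.

    Returns True when they match; a mismatch is a common reason writes
    seem to "disappear" -- data lands in a path the table doesn't read.
    """
    detail = spark.sql(f"DESCRIBE DETAIL {table_name}").collect()[0]
    table_location = detail["location"]
    return table_location.rstrip("/") == write_path.rstrip("/")
```

If this returns False, either point the write at the table's location or write with `saveAsTable` so the catalog resolves the path for you.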
Hi team, The underlying cause of this issue is an incorrect Network Connectivity Configuration (NCC) for Azure Storage in the Databricks environment. The NCC determines which resources are accessible from within the Databricks environment. If the NCC ...