Resolved! Spark Driver Crash Writing Large Text
I'm working with a large text variable, massaging it into single-line JSON that Spark can process beautifully. Using a single-node 256 GB, 32-core Standard_E32d_v4 "cluster", which should be plenty of memory for this dataset (haven't seen cluster memory u...
- 9048 Views
- 5 replies
- 2 kudos
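The post excerpt is truncated, but the pattern it describes appears to be: flatten a large multi-line text variable into single-line JSON, then write it out with `dbutils.fs.put`. A minimal sketch of that pattern, with assumed paths and variable names (the `put` call is the driver-side step that can crash with very large strings):

```python
import json

# Assumed source: a large multi-line text blob read on the driver.
large_text = open("/dbfs/tmp/big_input.txt").read()  # illustrative path

# json.dumps escapes embedded newlines, so the result is one line
# that Spark can later read as a single JSON record.
single_line_json = json.dumps({"body": large_text})

# dbutils.fs.put (available in Databricks notebooks) materializes the
# whole string on the driver before writing -- with very large strings
# this is where the driver can run out of memory.
dbutils.fs.put("/tmp/big_output.json", single_line_json, overwrite=True)
```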
Latest Reply
@David Toft Hi, the current implementation of dbutils.fs is single-threaded: it performs the initial listing on the driver and subsequently launches a Spark job to perform the per-file operations. So I guess the put operation is running on a single core...
- 2 kudos
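Building on the reply above, a hedged sketch of the usual workaround: wrap the string in a DataFrame so the file write happens as a distributed Spark job on executors rather than through the single-threaded, driver-side `dbutils.fs.put`. Names and paths here are illustrative, not from the thread:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder for the large single-line JSON string from the question.
single_line_json = '{"body": "..."}'

# One row per output line; df.write.text writes each row as a line,
# and the I/O runs on executors instead of the driver.
df = spark.createDataFrame([(single_line_json,)], ["value"])
df.write.mode("overwrite").text("/tmp/big_output_dir")  # illustrative path
```

Note that `createDataFrame` on a single giant string still ships that string from the driver; for truly huge inputs, reading the source data distributedly (e.g. with `spark.read`) and transforming it in Spark avoids holding it on the driver at all.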