jenshumrich
New Contributor III
since 03-13-2024
Monday

User Stats

  • 7 Posts
  • 0 Solutions
  • 3 Kudos given
  • 0 Kudos received

User Activity

I have the following code:
spark.sparkContext.setCheckpointDir("dbfs:/mnt/lifestrategy-blob/checkpoints")
result_df.repartitionByRange(200, "IdStation")
result_df_checked = result_df.checkpoint(eager=True)
unique_stations = result_df.select("IdStation...
Hello, I tried to schedule a long-running job and, surprisingly, it seems neither to terminate (and thus does not let the cluster shut down) nor to continue running, even though its state is still "Running". But the truth is that the job has miserably ...
Yesterday I created a ton of CSV files via joined_df.write.partitionBy("PartitionColumn").mode("overwrite").csv(output_path, header=True). Today, when working with them, I realized that they were not loaded. Upon investigation I saw...