tanin
Contributor
since 08-27-2021
06-26-2023

User Stats

  • 7 Posts
  • 0 Solutions
  • 1 Kudos given
  • 11 Kudos received

User Activity

I profiled it, and it seems the slowness comes from Spark planning, especially for more complex jobs (e.g. 100+ joins). Is there a way to speed it up (e.g. by disabling certain optimizations)?
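One possible direction (a sketch, not the fix the poster eventually used) is to exclude individual Catalyst optimizer rules via spark.sql.optimizer.excludedRules, or to turn off constraint propagation, which is a known planning hotspot for jobs with many joins. The specific rule chosen here is an illustrative assumption; Spark ignores non-excludable rules with a warning.

    import org.apache.spark.sql.SparkSession

    // Sketch: trim Spark SQL planning work for a join-heavy job.
    // The excluded rule is an example only; measure planning time before and after.
    val spark = SparkSession.builder()
      .appName("planning-speedup-sketch")
      .master("local[*]")
      // Comma-separated fully qualified optimizer rule names to skip.
      .config("spark.sql.optimizer.excludedRules",
        "org.apache.spark.sql.catalyst.optimizer.ConstantFolding")
      // Constraint propagation can dominate planning time with many joins/columns.
      .config("spark.sql.constraintPropagation.enabled", "false")
      .getOrCreate()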
Here's the code:

    val result = spark
      .createDataset(List("test"))
      .rdd
      .repartition(100000)
      .map { _ => "test" }
      .collect()
      .toList

    println(result)

I write tests to test for correctness, so I wonde...
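Since the post is truncated, the intent is an assumption, but one common way to keep such a test fast is to make the partition count injectable so tests use a small value while production keeps 100000. buildResult and numPartitions below are hypothetical names.

    import org.apache.spark.sql.SparkSession

    // Sketch: parameterize the partition count so unit tests avoid scheduling
    // 100,000 near-empty tasks just to collect a single element.
    def buildResult(spark: SparkSession, numPartitions: Int): List[String] = {
      import spark.implicits._
      spark
        .createDataset(List("test"))
        .rdd
        .repartition(numPartitions)
        .map(_ => "test")
        .collect()
        .toList
    }

    // In a unit test:  buildResult(spark, numPartitions = 2)
    // In production:   buildResult(spark, numPartitions = 100000)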
I converted a data job from RDD to Dataset, and I've found that, in prod, the job runs faster, which is nice. But the unit tests run 3x slower than before. My best guess is that Dataset spends time doing a lot of work like encoding, optimizing, query...
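The usual test-side mitigations (assumptions here, not taken from the original thread) are to reuse one local SparkSession across tests and shrink per-query overhead; note they do not remove the Dataset encoder cost itself.

    import org.apache.spark.sql.SparkSession

    // Sketch of a test-tuned SparkSession; the specific values are assumptions.
    lazy val testSpark: SparkSession = SparkSession.builder()
      .appName("unit-tests")
      .master("local[2]")
      .config("spark.sql.shuffle.partitions", "1") // avoid 200 tiny shuffle partitions
      .config("spark.ui.enabled", "false")         // skip starting the Spark UI
      .getOrCreate()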