sarvesh
Contributor III
since 11-16-2021
06-26-2023

User Stats

  • 28 Posts
  • 2 Solutions
  • 13 Kudos given
  • 12 Kudos received

User Activity

I am trying to use Vertica's AUDIT function from Spark, and I do get the correct table size from it, but the smallest unit the AUDIT function can report is bytes, while we have data even smaller than a byte, i.e. measured in bits. val size = f"select audit('table_name');"
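For later readers, a minimal sketch of issuing such an AUDIT call from Spark over JDBC. The host, credentials, schema, and alias below are placeholder assumptions, not taken from the post; note that Vertica's AUDIT returns a raw byte count, so sub-byte precision is not available from it.

// Sketch: run Vertica's AUDIT through Spark's JDBC reader.
// URL, credentials, and table name are placeholder assumptions.
val sizeDf = spark.read
  .format("jdbc")
  .option("url", "jdbc:vertica://vertica-host:5433/dbname")
  .option("user", "dbuser")
  .option("password", "dbpass")
  .option("query", "SELECT AUDIT('schema_name.table_name') AS size_bytes")
  .load()

sizeDf.show() // size in whole bytes; AUDIT does not report bits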
I have an xlsx file which has a single column, percentage, with the values: 30%, 40%, 50%, -10%, 0.00%, 0%, 0.10%, 110%, 99.99%, 99.98%, -99.99%, -99.98%. When I read this using Apache Spark, the output I get is:
|percentage|
+----------+
|       0.3|
|       0.4|
|       0.5|
|      -0.1|
|       0.0|
...
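A note on why this happens: Excel stores a cell formatted as 30% as the underlying number 0.3, so Spark is reading the stored values faithfully. A minimal sketch of restoring the display form (the column name percentage matches the post; the two-decimal formatting is an assumption):

import org.apache.spark.sql.functions.{col, format_string}

// Excel stores 30% as 0.3; multiply back and append "%" to recover the label.
val formatted = df.withColumn(
  "percentage",
  format_string("%.2f%%", col("percentage") * 100)
)
formatted.show()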
I am trying to read an Excel file which has 3 sheets that have integers as their names: sheet 1 name = 21, sheet 2 name = 24, sheet 3 name = 224. I got this data from a user, so I can't change the sheet names, but reading these with Spark is an issue. code - v...
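For what it's worth, a sketch of selecting a sheet by name with the spark-excel (com.crealytics) reader's dataAddress option; the file path is a placeholder. Quoting the sheet name should make a numeric name like 21 unambiguous:

// Sketch: read the sheet literally named "21" via spark-excel's dataAddress.
// The file path is a placeholder assumption.
val df21 = spark.read
  .format("com.crealytics.spark.excel")
  .option("header", "true")
  .option("dataAddress", "'21'!A1") // sheet name quoted, starting at cell A1
  .load("/path/to/workbook.xlsx")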
Solution: I didn't need to add any executor or driver memory; all I had to do in my case was add this: .option("maxRowsInMemory", 1000). Before, I couldn't even read a 9 MB file; now I just read a 50 MB file without any error. { val df = spark.read .f...
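Since the code block in the preview is cut off, here is a minimal reconstruction of what such a read looks like with that option; the path and header setting are assumptions:

// Sketch: maxRowsInMemory switches spark-excel to a streaming reader,
// keeping only a window of rows in memory instead of the whole workbook.
val df = spark.read
  .format("com.crealytics.spark.excel")
  .option("header", "true")          // assumption
  .option("maxRowsInMemory", 1000)   // the fix described above
  .load("/path/to/large-file.xlsx")  // placeholder path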
I am trying to read a 16 MB Excel file and I was getting a GC overhead limit exceeded error. To resolve that, I tried to increase my executor memory with spark.conf.set("spark.executor.memory", "8g"), but I got the following stack: Using Spark's default l...
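One likely reason the conf.set call had no effect (my reading, not stated in the truncated post): spark.executor.memory is a static setting that must be fixed before the SparkSession starts, so setting it on a live session is ignored. A sketch of setting it at session-build time instead:

import org.apache.spark.sql.SparkSession

// spark.executor.memory cannot be changed on a running session; set it
// when the session is built (or via spark-submit --executor-memory 8g).
val spark = SparkSession.builder()
  .appName("excel-read")                 // placeholder app name
  .config("spark.executor.memory", "8g")
  .getOrCreate()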