I wanted to read a Parquet file compressed with Snappy into a Spark RDD.
The input file name is: part-m-00000.snappy.parquet
I have used sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")
Hi there, I'm just getting started with Spark and I've got a moderately sized DataFrame created by collating CSVs in S3 (88 columns, 860k rows) that seems to be taking an unreasonable amount of time to insert (using SaveMode.Append) into Postgres. I...
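A common cause of slow JDBC appends is the writer's default batch size (1000 rows) combined with too few parallel connections. A hedged sketch using the standard Spark JDBC options `batchsize` and `numPartitions`; the connection details and table name below are placeholders, not from the question:

```python
# Placeholder connection details; "batchsize" and "numPartitions" are the
# tuning knobs of interest. Both are standard Spark JDBC writer options.
JDBC_OPTS = {
    "url": "jdbc:postgresql://host:5432/db",   # placeholder URL
    "dbtable": "public.my_table",              # placeholder table
    "user": "user",
    "password": "secret",
    "batchsize": "10000",      # rows per INSERT batch (Spark default: 1000)
    "numPartitions": "8",      # upper bound on parallel JDBC connections
}

def append_to_postgres(df):
    """Append a Spark DataFrame to Postgres with batched, parallel writes."""
    # Repartition so each of the numPartitions tasks gets its own connection.
    (df.repartition(int(JDBC_OPTS["numPartitions"]))
       .write
       .format("jdbc")
       .options(**JDBC_OPTS)
       .mode("append")
       .save())
```

If even tuned JDBC writes stay slow, bulk-loading via Postgres `COPY` from an intermediate CSV dump is a common workaround, since `COPY` bypasses per-statement overhead entirely.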
Hi, I have a Spark job which does a group-by, and I can't avoid it because of my use case. I have a large dataset, around 1 TB, which I need to process/update in a DataFrame. Now my job shuffles a huge amount of data and slows down because of the shuffling and the group-by. One r...
I'm building notebooks for tutorial sessions and I want to clear all the output results from the notebook before distributing it to the participants.
This functionality exists in Jupyter, but I can't find it in Databricks. Any pointers?