- 2151 Views
- 2 replies
- 0 kudos
SparkSession spark = SparkSession.builder()
    .appName("SparkS3Example")
    .master("local[1]")
    .getOrCreate();
spark.sparkContext().hadoopConfiguration().set("fs.s3a.access.key", S3_ACCOUNT_KEY);
spark.sparkContext().hadoopConf...
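The snippet above is truncated; here is a minimal sketch of the usual fs.s3a credential setup, written in Scala and assuming static access keys (the credential values and bucket path are placeholders, not taken from the post):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SparkS3Example")
  .master("local[1]")
  .getOrCreate()

// Sketch: standard hadoop-aws (S3A) settings for static credentials.
// The values are placeholders; read them from a secret store in practice.
val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", sys.env.getOrElse("S3_ACCESS_KEY", "<access-key>"))
hadoopConf.set("fs.s3a.secret.key", sys.env.getOrElse("S3_SECRET_KEY", "<secret-key>"))
hadoopConf.set("fs.s3a.endpoint", "s3.amazonaws.com")  // optional; default AWS endpoint

// Read from the bucket once credentials are in place (placeholder path).
val df = spark.read.parquet("s3a://my-bucket/path/")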
Latest Reply
Hi @Sweetnesh Dholariya, does @Debayan Mukherjee's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? Thanks!
- 944 Views
- 0 replies
- 2 kudos
Here is an article I wrote that puts Databricks in a historical context (why was it developed?) and provides introductory steps to help a newbie get started. Feel free to copy/link as you want.
https://www.linkedin.com/pulse/databricks-introduction-ch...
- 893 Views
- 0 replies
- 0 kudos
Hi Team,
I am facing an issue: "java.io.IOException: While processing file s3://test/abc/request_dt=2021-07-28/someParquetFile. [XYZ] BINARY is not in the store"
The things I did before getting the above exception:
1. Alter table tableName1 add colum...
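The post is cut off, but this exception commonly appears after a column is added to a table whose older Parquet files do not contain that column. A minimal sketch of one common mitigation, assuming that scenario; the path, table, and column names are placeholders, not taken from the post:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SchemaMergeSketch").getOrCreate()

// Sketch: after ALTER TABLE ... ADD COLUMNS, older Parquet files lack the
// new column. Merging schemas across files lets Spark return nulls for the
// missing column instead of failing on a per-file column lookup.
val df = spark.read
  .option("mergeSchema", "true")
  .parquet("s3://test/abc/")  // placeholder path based on the post's prefix

// For a metastore-backed table, refreshing cached metadata after the ALTER
// can also help:
spark.sql("REFRESH TABLE tableName1")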
- 1210 Views
- 1 replies
- 0 kudos
I have used Ranger in Apache Hadoop and it works fine for my use case. Now that I am migrating my workloads from Apache Hadoop to Databricks
Latest Reply
Currently, Table ACLs do not support column-level security. There are several tools, such as Privacera, that offer better integration with Databricks for this use case.
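For context, a minimal sketch of what Table ACLs do support, table-level grants issued as SQL; the principal and table names below are placeholders:

// Sketch: Table ACLs grant privileges at table granularity, not per column.
spark.sql("GRANT SELECT ON TABLE sales_db.orders TO `analysts`")
spark.sql("REVOKE SELECT ON TABLE sales_db.orders FROM `analysts`")
// There is no per-column GRANT form, which is why column-level security
// requires an external tool such as Privacera.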
- 11120 Views
- 1 replies
- 0 kudos
Hi All,
I wanted to read a Parquet file compressed with Snappy into a Spark RDD.
The input file name is: part-m-00000.snappy.parquet
I have used sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")
val inputRDD = sqlContext.parquetFile(args(0))
whe...
Latest Reply
raela
Databricks Employee
Have you tried sqlContext.read.parquet("/filePath/")?
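A minimal sketch expanding on that suggestion, assuming a DataFrame read followed by a conversion to an RDD; the path is a placeholder. Snappy compression is detected from the Parquet file metadata at read time, so spark.sql.parquet.compression.codec (a write-side setting) is not needed here:

// Sketch: reading a Snappy-compressed Parquet file. The codec is picked up
// from the file's own metadata, so no codec configuration is required.
val df = sqlContext.read.parquet("/filePath/part-m-00000.snappy.parquet")
df.show()

// If an RDD (rather than a DataFrame) is needed:
val inputRDD = df.rdd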