03-25-2022 11:04 AM
I am trying to migrate a Spark job from an on-premises Hadoop cluster to Databricks on Azure. Currently, we keep many values in a properties file. When executing spark-submit we pass the parameter --properties /prop.file.txt, and inside the Spark code we use spark.conf.get("spark.param1") to get individual parameter values. How can we implement a properties file in a Databricks notebook?
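For context, the on-premises pattern described above usually looks like the sketch below (illustrative only; spark-submit's standard flag for an external properties file is --properties-file, and "spark.param1" is just the placeholder key from the question):

```python
# On the cluster, the job would be launched with something like:
#   spark-submit --properties-file /prop.file.txt --class com.example.Job job.jar
#
# Any key in that file prefixed with "spark." is then visible to the running job
# through the Spark configuration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# "spark.param1" is a placeholder key taken from the question
param1 = spark.conf.get("spark.param1")
print(param1)
```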
Accepted Solutions
03-28-2022 01:46 AM
I use JSON files and .conf files that reside on the data lake or in the FileStore of DBFS, and then read those files using Python/Scala.
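A minimal sketch of that approach in a Python notebook, assuming a JSON config file at a hypothetical path /dbfs/FileStore/configs/job_config.json (adjust the path and keys to your own setup):

```python
import json

# Hypothetical location of the config file in the DBFS FileStore
CONFIG_PATH = "/dbfs/FileStore/configs/job_config.json"

# DBFS is mounted at /dbfs on the driver, so plain Python file I/O works here
with open(CONFIG_PATH, "r") as f:
    config = json.load(f)

# Look up individual parameter values, analogous to spark.conf.get("spark.param1")
param1 = config["param1"]
print(param1)
```

The same idea works for a file on the data lake (e.g. ADLS): read it with dbutils.fs or spark.read.json instead of the local file API, then pull the values you need into plain Python/Scala variables.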

