I've set the partition mode to nonstrict in Hive, but Spark is not seeing it

max522over
New Contributor II

I've got a partitioned table I want to add some data to. I want to use dynamic partitioning, but I get this error:

org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:168)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:263)

I've set

hive.exec.dynamic.partition.mode=nonstrict

and restarted Hive in Ambari, but when I re-run the spark-shell job I still get the error.

Should I set it somewhere else, in the Hive config?

Here is the command:

df2.write.mode("append").partitionBy("p_date", "p_store_id").saveAsTable("TLD.ticket_payment_testinsert")

df2 is a DataFrame with a bunch of CSV data read into it.
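For context, a minimal sketch of how df2 might have been built with the spark-csv package referenced below; the file path and options are assumptions, not from the original post:

// Hypothetical sketch (Spark 1.x + spark-csv): read CSV data into a DataFrame
val df2 = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")       // treat the first line as column names
  .option("inferSchema", "true")  // infer column types from the data
  .load("/path/to/ticket_payments.csv")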

I've tried setting it in my spark-shell command:

spark-shell --master yarn-client --packages com.databricks:spark-csv_2.11:1.4.0 --num-executors 4 --executor-cores 5 --executor-memory 8G --queue hadoop-capq --conf "hive.exec.dynamic.partition.mode=nonstrict"

but I get this warning

Warning: Ignoring non-spark config property: hive.exec.dynamic.partition.mode=nonstrict
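The warning is expected: --conf only accepts properties whose names start with spark. and drops everything else. A commonly suggested workaround (an assumption here; behavior varies across Spark versions) is the spark.hadoop. prefix, which Spark copies into the underlying Hadoop/Hive configuration:

spark-shell --master yarn-client --conf "spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict" ...

Setting the property on the HiveContext inside the shell, as in the accepted solution below, sidesteps the issue entirely.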

1 ACCEPTED SOLUTION

User16789201666
Contributor II

Try this:

hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

http://stackoverflow.com/questions/31341498/save-spark-dataframe-as-dynamic-partitioned-table-in-hiv...
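Putting it together in spark-shell, a minimal sketch of the full flow (the table and column names are from the question; the rest follows the answer above):

// Enable dynamic partitioning on the HiveContext before writing
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

// Append df2 into the Hive table, partitioning dynamically by both columns
df2.write.mode("append").partitionBy("p_date", "p_store_id").saveAsTable("TLD.ticket_payment_testinsert")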


REPLIES

I ran into a similar problem and solved it with the method above. Thanks @peyman!

import org.apache.spark.sql.SparkSession;

// Lazily builds a single Hive-enabled SparkSession with dynamic partitioning turned on (Spark 2.x)
public class JavaSparkSessionSingletonUtil {
    private static transient SparkSession instance = null;

    public static SparkSession getInstance(String appName) {
        if (instance == null) {
            instance = SparkSession.builder().appName(appName)
                    .config("hive.exec.dynamic.partition", "true")
                    .config("hive.exec.dynamic.partition.mode", "nonstrict")
                    .enableHiveSupport()
                    .getOrCreate();
        }
        return instance;
    }
}
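For spark-shell users on Spark 2.x, where the session is already available as spark, the same two settings can be applied directly; a minimal sketch, not from the original thread:

spark.conf.set("hive.exec.dynamic.partition", "true")
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")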

max522over
New Contributor II

I got it working. This was exactly what I needed. Thank you @Peyman Mohajerian!
