I've got a partitioned table I want to add some data to. I want to use dynamic partitioning, but I get this error:
org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:168)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:263)
I've set

hive.exec.dynamic.partition.mode=nonstrict

in Ambari and restarted Hive, but when I re-run the spark-shell job I still get the error. Should I be setting it somewhere else, in the Hive config?
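For reference, this is the property I changed via Ambari (as far as I understand, it ends up in hive-site.xml on the Hive side; shown here just to confirm what I set):

```xml
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>
```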
Here is the command:

df2.write.mode("append").partitionBy("p_date", "p_store_id").saveAsTable("TLD.ticket_payment_testinsert")
df2 is a DataFrame with a bunch of CSV data read into it.
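For context, df2 is built roughly like this (the path is a placeholder, not my real one, and I've simplified the options; the data does contain the p_date and p_store_id columns):

```scala
// Read CSV data with the spark-csv package (Spark 1.x, sqlContext from spark-shell).
// "/data/tickets/*.csv" is an illustrative path only.
val df2 = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/data/tickets/*.csv")
```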
I've also tried setting it in my spark-shell command:
spark-shell --master yarn-client --packages com.databricks:spark-csv_2.11:1.4.0 --num-executors 4 --executor-cores 5 --executor-memory 8G --queue hadoop-capq --conf "hive.exec.dynamic.partition.mode=nonstrict"
but I get this warning:
Warning: Ignoring non-spark config property: hive.exec.dynamic.partition.mode=nonstrict