Can you share some code so we can get the gist of what it does? Are the Parquet files accessed as a catalog table? Could it be that some other job makes changes to the input tables?
Those are exactly my words! I'd not be surprised if this were the author of DAB, judging by the nickname (https://github.com/databricks/cli/commits?author=pietern)
Why not use substitutions and custom variables, which can be specified on the command line using `--var="<key>=<value>"`? With all these features your databricks.yml would look as follows:

```yaml
variables:
  git_branch:
    default: main

git_source:
  git_url: https://git...
```
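To make the idea concrete, here is a minimal sketch of a databricks.yml that wires a custom variable into a job's Git source via the `${var.…}` substitution. The bundle name, job name, and Git URL are hypothetical; only the `variables` block, the substitution syntax, and the `--var` flag come from the approach described above.

```yaml
bundle:
  name: my_bundle  # hypothetical bundle name

variables:
  git_branch:
    description: Branch to check out when the job runs
    default: main

resources:
  jobs:
    my_job:  # hypothetical job name
      git_source:
        git_provider: gitHub
        git_url: https://github.com/example/repo  # hypothetical URL
        git_branch: ${var.git_branch}
```

You could then override the default per deployment, e.g. `databricks bundle deploy --var="git_branch=feature-x"`, without touching the YAML.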
That's exactly my case! This is what I saw in `databricks bundle run my_job`:

AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

And...
After a very short review of the available source code and the SPIP itself, I think the answer is yes. It is especially clear for `spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled`, whose description says: This is an optimization on ske...
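For reference, here is a sketch of how that setting could be turned on together with the related storage-partitioned-join configs. Only `partiallyClusteredDistribution.enabled` is named above; the other two keys are my assumption based on the Spark 3.4-era SPIP, so verify them against your Spark version's SQL configuration docs.

```properties
# spark-defaults.conf (sketch; config names assumed from Spark 3.4)
spark.sql.sources.v2.bucketing.enabled=true
spark.sql.sources.v2.bucketing.pushPartValues.enabled=true
spark.sql.sources.v2.bucketing.partiallyClusteredDistribution.enabled=true
```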