@pooja_bhumandla delta.targetFileSize controls the target file size for file-tuning operations, including OPTIMIZE, Z-ordering, auto compaction, and optimized writes.
Unsetting this config during a data load will not cause failures or inconsistent behavior. After unse...
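For reference, a minimal sketch of setting and unsetting the property from PySpark; the table name and size value below are only examples:

# Hypothetical table; '128mb' is only an example target size
spark.sql("ALTER TABLE main.default.events SET TBLPROPERTIES ('delta.targetFileSize' = '128mb')")

# Unset the property to return to the default behavior
spark.sql("ALTER TABLE main.default.events UNSET TBLPROPERTIES ('delta.targetFileSize')")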
Hi @susmitsircar, the spark.databricks.rocksDB.verifyBeforeUpload config determines whether a verification check is performed before data is uploaded to RocksDB. The default value is true. Since the SST files are lost, disabling the above config w...
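If you do decide to turn it off, a minimal sketch of disabling the check for the current session (setting it as a cluster Spark config works the same way):

# Disable the pre-upload verification check for the RocksDB state store
spark.conf.set("spark.databricks.rocksDB.verifyBeforeUpload", "false")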
@fhameed The error occurs when the Iceberg metadata written by Snowflake does not match the files actually present in object storage. When the table is read in Databricks, a verification process checks whether the Iceberg metadata ...
@pavlosskev Could you try adding the following option as well to your read?
.option("sessionInitStatement", "ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS'")
df = (
    spark.read.format("jdbc")
    .option("url", jdbcUrl)
    .opti...
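Putting it together, a minimal sketch of the full read; jdbcUrl, the table name, the credentials, and the Oracle driver class below are placeholders/assumptions, not taken from your setup:

df = (
    spark.read.format("jdbc")
    .option("url", jdbcUrl)
    .option("dbtable", "MY_SCHEMA.MY_TABLE")  # placeholder table
    .option("user", user)                     # placeholder credentials
    .option("password", password)
    .option("driver", "oracle.jdbc.OracleDriver")  # assumed Oracle source, given the ALTER SESSION syntax
    .option("sessionInitStatement", "ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS'")
    .load()
)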
@Sainath368 OPTIMIZE and VACUUM are compute-intensive operations, so for both the driver and the workers you can choose a compute-optimized instance type, such as the F series, which has a higher CPU-to-memory ratio.
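For example, a minimal sketch of running both maintenance commands from a notebook or job; the table name, Z-order column, and retention window are only examples:

# Compact small files and co-locate data on a frequently filtered column
spark.sql("OPTIMIZE main.sales.orders ZORDER BY (order_date)")

# Remove files no longer referenced by the table; 168 hours matches the default 7-day retention
spark.sql("VACUUM main.sales.orders RETAIN 168 HOURS")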
If it's a UC managed table, I recommend enabling Pr...