Hey guys, I'm stuck on a loading task and I simply can't spot what's wrong.
The following query fails:
COPY INTO `test`.`test_databricks_tokenb3337f88ee667396b15f4e5b2dd5dbb0`.`pipeline_state`
FROM '/Volumes/test/test_databricks_tokenb3337f88ee667396b15f4e5b2dd5dbb0/_temp_load_volume/file_1737154045813408000'
FILEFORMAT = PARQUET;
with the following error:
python:
DatabaseTerminalException(ServerOperationError("The source directory did not contain any parsable files of type PARQUET. Please check the contents of '/Volumes/test/test_databricks_tokenb3337f88ee667396b15f4e5b2dd5dbb0/_temp_load_volume/file_1737154045813408000'."))
databricks query:
[COPY_INTO_SOURCE_SCHEMA_INFERENCE_FAILED] The source directory did not contain any parsable files of type PARQUET. Please check the contents of '/Volumes/test/test_databricks_tokenb3337f88ee667396b15f4e5b2dd5dbb0/_temp_load_volume/file_1737154045813408000'.
The error can be silenced by setting 'spark.databricks.delta.copyInto.emptySourceCheck.enabled' to 'false'.
I have also downloaded the parquet file and read it with pandas - the file is perfectly fine...
What is wrong here? I am stuck...