I am trying to read an external Iceberg table from an S3 location using the following command:

```python
df_source = (
    spark.read.format("iceberg")
    .load(source_s3_path)
    .drop(*source_drop_columns)
    .filter(f"{date_column} <= '{date_filter}'")
)
```

B...
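In case it is the classpath rather than the read itself: `format("iceberg")` is only recognized once the Iceberg Spark runtime JAR is on the cluster and an Iceberg catalog is configured. Below is a minimal sketch of that session setup; the package coordinates (Spark 3.3, Scala 2.12, Iceberg 1.4.3), the catalog name `iceberg_cat`, and the warehouse path are illustrative assumptions, not values from the article.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-read")
    # The Iceberg Spark runtime must be on the classpath; without it,
    # Spark rejects "iceberg" as a data source format.
    .config(
        "spark.jars.packages",
        "org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.4.3",  # assumed versions
    )
    # Enable Iceberg's SQL extensions and register a Hadoop-style catalog
    # whose warehouse lives in S3.
    .config(
        "spark.sql.extensions",
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
    )
    .config("spark.sql.catalog.iceberg_cat", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.iceberg_cat.type", "hadoop")
    .config("spark.sql.catalog.iceberg_cat.warehouse", "s3://my-bucket/warehouse")  # assumed path
    .getOrCreate()
)
```

On Databricks specifically, the runtime JAR is usually installed as a cluster library rather than via `spark.jars.packages`, and the catalog settings go in the cluster's Spark config.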
Nothing; I followed the exact steps from the article: https://www.dremio.com/subsurface/getting-started-with-apache-iceberg-in-databricks/. I even used the same runtime version and the same library to see whether the problem was related to versioning, but I k...
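When ruling out versioning, it is worth confirming that the Iceberg runtime artifact matches both the Spark minor version and the Scala version of the Databricks runtime, and that the classes actually made it onto the driver. A rough notebook check, under the assumption that a missing or mismatched JAR is the cause:

```python
# The Iceberg artifact must match Spark's major.minor and Scala version,
# e.g. iceberg-spark-runtime-3.3_2.12 for Spark 3.3 on Scala 2.12.
print(spark.version)  # e.g. "3.3.0"

# If the runtime JAR is not on the driver classpath, this raises
# ClassNotFoundException, which would explain Spark rejecting the format.
spark.sparkContext._jvm.java.lang.Class.forName(
    "org.apache.iceberg.spark.source.IcebergSource"
)
```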
Thanks for your answer. I have tried SQL as well, and it did not work for me either; Spark does not detect Iceberg as a valid format. I might have missed something in the steps, so I will give it another try.
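If it helps on the retry: once a catalog like the one sketched above is configured, the SQL path reads the table through the catalog name instead of `format("iceberg")`. A minimal example, where the catalog, schema, table, and column names are illustrative assumptions:

```python
# Read via the configured Iceberg catalog; names here are placeholders.
df = spark.sql(
    "SELECT * FROM iceberg_cat.db.source_table WHERE event_date <= '2023-01-01'"
)
df.show()
```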