Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

vinaykumar
by New Contributor III
  • 7389 Views
  • 6 replies
  • 6 kudos

Reading an Iceberg table in S3 from the Databricks console using Spark gives a None error

Hi Team, I am facing an issue while reading an Iceberg table from S3 and getting a None error when reading the data. Below are the steps I followed: 1. Added the Iceberg Spark connector library to the Databricks cluster. 2. Cluster configuration to enable Iceberg ...
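For reference, a minimal sketch of the setup being described, assuming the Iceberg Spark runtime jar is attached to the cluster; the catalog name, bucket, and table names below are placeholders, and on Databricks the catalog settings are normally placed in the cluster's Spark config rather than in code:

    from pyspark.sql import SparkSession

    # Register Iceberg's SQL extensions and a Hadoop-type catalog whose warehouse lives in S3
    spark = (
        SparkSession.builder
        .config("spark.sql.extensions",
                "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
        .config("spark.sql.catalog.iceberg_cat", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.iceberg_cat.type", "hadoop")
        .config("spark.sql.catalog.iceberg_cat.warehouse", "s3://my-bucket/iceberg/warehouse")
        .getOrCreate()
    )

    # Read through the configured catalog...
    df = spark.table("iceberg_cat.db.events")
    # ...or directly from the table's S3 path
    df = spark.read.format("iceberg").load("s3://my-bucket/iceberg/warehouse/db/events")
    df.show()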

Latest Reply
Ohad-upriver
New Contributor II
  • 6 kudos

I want to use both Unity Catalog and Iceberg that is in an S3 path. To use Unity Catalog I can't use the access mode "No isolation shared". Is there a solution for this?

5 More Replies
lrodcon
by New Contributor III
  • 10956 Views
  • 6 replies
  • 4 kudos

Read an external Iceberg table into a Spark DataFrame within Databricks

I am trying to read an external Iceberg database from an S3 location using the following command: df_source = (spark.read.format("iceberg")   .load(source_s3_path)   .drop(*source_drop_columns)   .filter(f"{date_column}<='{date_filter}'")   )B...
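A complete form of the truncated command above, assuming the Iceberg Spark runtime is attached to the cluster; source_s3_path, source_drop_columns, date_column, and date_filter are the poster's own variables, given placeholder values here:

    # Placeholder inputs for the poster's variables
    source_s3_path = "s3://my-bucket/warehouse/db/events"
    source_drop_columns = ["_ingest_ts"]
    date_column, date_filter = "event_date", "2023-01-31"

    # Read the Iceberg table from its S3 path, drop unwanted columns, and filter by date
    df_source = (
        spark.read.format("iceberg")
        .load(source_s3_path)
        .drop(*source_drop_columns)
        .filter(f"{date_column} <= '{date_filter}'")
    )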

Latest Reply
dynofu
New Contributor II
  • 4 kudos

https://issues.apache.org/jira/browse/SPARK-41344

5 More Replies
youssefmrini
by Databricks Employee
  • 2350 Views
  • 1 reply
  • 2 kudos
Latest Reply
youssefmrini
Databricks Employee
  • 2 kudos

Clone can now be used to create and incrementally update Delta tables that mirror Apache Parquet and Apache Iceberg tables. You can update your source Parquet table and incrementally apply the changes to its cloned Delta table with the clone comman...
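As a rough sketch of the clone workflow described above (catalog, table, and path names are placeholders, and the exact behavior depends on the Databricks Runtime version in use):

    # Create a Delta table that mirrors an Iceberg table stored in S3
    spark.sql("""
        CREATE OR REPLACE TABLE main.analytics.events_delta
        CLONE iceberg.`s3://my-bucket/warehouse/db/events`
    """)

    # Re-running the same statement after the Iceberg source changes applies
    # the new data incrementally to the cloned Delta table.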

samrachmiletter
by New Contributor III
  • 3425 Views
  • 2 replies
  • 5 kudos

Resolved! Is it possible to set the order of precedence of Spark SQL extensions?

I have the Iceberg SQL extension installed, but running commands such as MERGE INTO results in the error pyspark.sql.utils.AnalysisException: MERGE destination only supports Delta sources. This seems to be due to using Delta's MERGE command as opposed ...
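To illustrate the conflict being described (catalog and table names here are placeholders, not from the original post): with the Iceberg extension added via spark.sql.extensions on a cluster where Delta's extension is already built in, a MERGE against an Iceberg target is still picked up by Delta's MERGE resolution and fails with the error quoted above:

    # Cluster Spark config that sets up the conflict:
    #   spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions

    # MERGE against an Iceberg target (placeholder names)
    spark.sql("""
        MERGE INTO iceberg_cat.db.target AS t
        USING iceberg_cat.db.updates AS u
        ON t.id = u.id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
    # -> pyspark.sql.utils.AnalysisException: MERGE destination only supports Delta sources.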

Latest Reply
samrachmiletter
New Contributor III
  • 5 kudos

This does help. I tried going through the DataFrameReader as well but ran into the same error, so it seems it is indeed not possible. Thank you @Hubert Dudek!

1 More Replies