Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

AWS Glue Catalog w/ Delta Tables Connected to Databricks SQL Engine - Incompatible format detected.

nickagel
New Contributor III

I've posted the same question on Stack Overflow to maximize reach and, hopefully, raise this issue with Databricks.

I am trying to query Delta tables from my AWS Glue Catalog on the Databricks SQL engine. They are stored in Delta Lake format, and I have Glue crawlers automating the schemas. The catalog is set up and functioning with non-Delta tables. Databricks loads the available tables per database via the catalog, but queries fail because Databricks reads the tables using Hive instead of Delta.

Incompatible format detected.
 
A transaction log for Databricks Delta was found at `s3://COMPANY/club/attachment/_delta_log`,
but you are trying to read from `s3://COMPANY/club/attachment` using format("hive"). You must use
'format("delta")' when reading and writing to a delta table.
 
To disable this check, SET spark.databricks.delta.formatCheck.enabled=false
To learn more about Delta, see https://docs.databricks.com/delta/index.html
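
For reference, the error message's suggestion is straightforward from a notebook or job cluster; a minimal sketch, reusing the placeholder path from the error above:

# Minimal sketch: read the path with the Delta reader, as the error asks.
# The S3 path is the placeholder from the error message above.
df = spark.read.format("delta").load("s3://COMPANY/club/attachment")
df.limit(10).show()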

SQL Warehouse settings => Data Access Configuration

spark.databricks.hive.metastore.glueCatalog.enabled : true

The crawler, using the Delta Lake setup from AWS, produces the following table metadata:

{
    "StorageDescriptor": {
        "cols": {
            "FieldSchema": [
                {
                    "name": "id",
                    "type": "string",
                    "comment": ""
                },
                {
                    "name": "media",
                    "type": "string",
                    "comment": ""
                },
                {
                    "name": "media_type",
                    "type": "string",
                    "comment": ""
                },
                {
                    "name": "title",
                    "type": "string",
                    "comment": ""
                },
                {
                    "name": "type",
                    "type": "smallint",
                    "comment": ""
                },
                {
                    "name": "clubmessage_id",
                    "type": "string",
                    "comment": ""
                }
            ]
        },
        "location": "s3://COMPANY/club/attachment/_symlink_format_manifest",
        "inputFormat": "org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat",
        "outputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
        "compressed": "false",
        "numBuckets": "-1",
        "SerDeInfo": {
            "name": "",
            "serializationLib": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe",
            "parameters": {}
        },
        "bucketCols": [],
        "sortCols": [],
        "parameters": {
            "UPDATED_BY_CRAWLER": "CRAWLER_NAME",
            "CrawlerSchemaSerializerVersion": "1.0",
            "CrawlerSchemaDeserializerVersion": "1.0",
            "classification": "parquet"
        },
        "SkewedInfo": {},
        "storedAsSubDirectories": "false"
    },
    "parameters": {
        "UPDATED_BY_CRAWLER": "CRAWLER_NAME",
        "CrawlerSchemaSerializerVersion": "1.0",
        "CrawlerSchemaDeserializerVersion": "1.0",
        "classification": "parquet"
    }
}
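
Note that the location points at the _symlink_format_manifest directory with a Parquet SerDe and "classification": "parquet", which is why the SQL engine attempts a Hive read. If it helps anyone reproduce this, the registered metadata can be pulled via boto3; a quick sketch with hypothetical database and table names:

import boto3

# Quick sketch for inspecting what the crawler registered in Glue.
# "my_database" and "attachment" are hypothetical names; substitute your own.
glue = boto3.client("glue")
table = glue.get_table(DatabaseName="my_database", Name="attachment")["Table"]
sd = table["StorageDescriptor"]
print(sd["Location"])      # .../_symlink_format_manifest, not the Delta root
print(sd["InputFormat"])   # SymlinkTextInputFormat => read via Hive, not Delta
print(sd["Parameters"].get("classification"))  # "parquet"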

5 REPLIES

nickagel
New Contributor III

I have an intermediate solution that gets it working with Databricks, but I don't think it's a proper fix for the underlying problem. The deny statement below hides the _delta_log prefix from the SQL warehouse, which presumably stops the format check from detecting the transaction log, so the table gets read as plain Parquet through the symlink manifest:

{
    "Sid": "BlockDeltaLog",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::BUCKET"
    ],
    "Condition": {
        "StringLike": {
            "s3:prefix": [
                "**_delta_log/**"
            ]
        }
    }
}

Hubert-Dudek
Esteemed Contributor III

By default, AWS Glue supports only Parquet.

You can use the connector from the marketplace https://go.aws/3pdgoiG

nickagel
New Contributor III

Unfortunately, I'm not trying to read to or from AWS Glue jobs. This is strictly about querying data through the Databricks SQL engine, where the Glue Catalog references point to Delta Lake tables.

Vidula
Honored Contributor

Hi @Nick Agel,

Hope all is well! Just wanted to check in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

nickagel
New Contributor III

Furthermore, my temporary solution, as you may have noticed, includes all versions of the Delta tables and therefore contains duplicate records.

This is what I have in my production systems now.

sql = f"CREATE TABLE IF NOT EXISTS {deltalake_glue_database_name}.{table_name} USING DELTA LOCATION '{location}'"
spark.sql(sql)

It runs every batch. I'm still conducting more validation, as the reported numbers are still a little off.
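
One more note for anyone keeping the symlink-manifest route from the crawler: Delta can regenerate the manifest after each write so manifest-based readers see only the current snapshot, which may be what the duplicate records above point to. A minimal sketch, reusing the placeholder path from the original post:

# Regenerate the symlink manifest after each batch write so manifest-based
# readers see only files in the current Delta snapshot (no stale duplicates).
# The S3 path is the placeholder from the original post.
spark.sql(
    "GENERATE symlink_format_manifest "
    "FOR TABLE delta.`s3://COMPANY/club/attachment`"
)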
