Data Engineering

Problem with streaming jobs (foreachBatch) with USER_ISOLATION compute cluster

prakharcode
New Contributor II

 

We have been trying to run a streaming job on an all-purpose compute cluster (4 cores, 16 GB) with the “USER_ISOLATION” data security mode, which Databricks recommends for Unity Catalog. The job reads CDC files produced by a table that is refreshed every hour, yielding around ~480k rows per run, which are then merged into a target table of about ~980k rows.
The merge is executed as a streaming `foreachBatch` job: we read the files from S3 and write them like this:
 
(
    spark.readStream
    .format("cloudFiles")
    .schema(df_schema)
    .option("cloudFiles.format", "parquet")
    .load(f"{s3_path_base}/*/*")
    .writeStream
    .foreachBatch(upsert_to_delta)
    .option("checkpointLocation", "<location_in_s3>")
    .trigger(availableNow=True)
    .start()
)
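As far as the snippet above shows, the stream runs with Auto Loader's default batch sizing. For reference, these are the Auto Loader rate-limit options that would bound the size of each micro-batch (a sketch only, with illustrative values, not what we currently run):

(
    spark.readStream
    .format("cloudFiles")
    .schema(df_schema)
    .option("cloudFiles.format", "parquet")
    # Illustrative values: cap how much data lands in each micro-batch.
    .option("cloudFiles.maxFilesPerTrigger", 500)
    .option("cloudFiles.maxBytesPerTrigger", "10g")
    .load(f"{s3_path_base}/*/*")
    .writeStream
    .foreachBatch(upsert_to_delta)
    .option("checkpointLocation", "<location_in_s3>")
    .trigger(availableNow=True)
    .start()
)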
The upsert_to_delta function is included at the end, along with other relevant details.
 
The same job with the same tables runs perfectly on a cluster without data_security_mode set to “USER_ISOLATION”. As soon as “USER_ISOLATION” is turned on, with the same cluster specification and configuration, the job starts hitting OOM errors. We are also seeing a general degradation in job performance: presumably due to some internal overhead of Unity Catalog, the jobs run slower.
Jobs that used to finish within a minute on a cluster with “NO_ISOLATION”, with the same cluster configuration and a similar data size, now sometimes take twice as long or more. No change has been made to the cluster settings whatsoever, and we are still seeing OOM errors and performance hits.
 
Important questions:
Is there anything we can do to overcome the OOM error and improve the performance of the job?
Also, why does the same job run successfully on a cluster with exactly the same configuration in "NO_ISOLATION" mode, but fail in "USER_ISOLATION" mode?
 
Any help is appreciated! Thank you.
 
General information:
Source data format: Parquet.
Target table: Delta table.
DBR version: 14.3 LTS (Spark 3.5, Scala 2.12)
Driver and worker type: m6gd.xlarge (2 workers)
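For completeness, the cluster spec for both runs is roughly the following (written here as a Clusters API-style payload; the field names are assumed from the standard API, and data_security_mode is the only field that differs between the working and the failing runs):

cluster_spec = {
    "spark_version": "14.3.x-scala2.12",    # DBR 14.3 LTS
    "node_type_id": "m6gd.xlarge",
    "driver_node_type_id": "m6gd.xlarge",
    "num_workers": 2,
    # "NONE" (no isolation) runs fine; "USER_ISOLATION" hits OOM.
    "data_security_mode": "USER_ISOLATION",
}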
 
error returned:
INTERNAL: Job aborted due to stage failure: Task 2 in stage 546.0 failed 4 times, most recent failure: Lost task 2.3 in stage 546.0 (TID 3942) (10.48.255.186 executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Command exited with code 52

relevant code:
def upsert_to_delta(micro_batch_df, batch_id):
    # Spark DF of the columns and their types from the source CDC files
    spark.createDataFrame(
        micro_batch_df.dtypes, schema=self.schema  # schema here is just <column_name, data_type>
    ).createOrReplaceGlobalTempView("SOURCE_CDC_FILES_VIEW_COLUMNS")

    # Spark DF of the columns and their types from the Delta target table
    spark.createDataFrame(
        spark.read.table(target_table).dtypes,
        schema=self.schema,  # schema here is just <column_name, data_type>
    ).createOrReplaceGlobalTempView("TARGET_DBX_TABLE_COLUMNS")

    # (Left) join the columns from source and target to build a CAST expression
    # for every source column, taking the target table's column type for any
    # common column and falling back to the source type otherwise.
    df_col = spark.sql(
        """SELECT
            'CAST(sc.' || s.column_name || ' AS ' || COALESCE(t.data_type, s.data_type) || ') AS ' || s.column_name AS column_name
          FROM global_temp.SOURCE_CDC_FILES_VIEW_COLUMNS s
            LEFT JOIN global_temp.TARGET_DBX_TABLE_COLUMNS t
              ON (s.column_name = t.column_name)"""
    )
    columns = ", ".join(list(df_col.toPandas()["column_name"]))

    # Expose the streaming micro-batch as a global temp view
    micro_batch_df.createOrReplaceGlobalTempView("SOURCE_CDC_FILES_VIEW")

    # Build the MERGE statement for the micro-batch: keep only the latest CDC
    # record per key (MAX(transact_seq)) and upsert it into the target table.
    sql_query_for_micro_batch = f"""MERGE INTO {target_table} s
      USING (
        SELECT
          {columns}
        FROM global_temp.SOURCE_CDC_FILES_VIEW sc
          INNER JOIN (
            SELECT {self.unique_key},
                   MAX(transact_seq) AS transact_seq
            FROM global_temp.SOURCE_CDC_FILES_VIEW
            GROUP BY 1) mc
            ON (sc.{self.unique_key} = mc.{self.unique_key}
                AND sc.transact_seq = mc.transact_seq)) b
      ON b.{self.unique_key} = s.{self.unique_key}
      WHEN MATCHED AND b.Op = 'U'
        THEN UPDATE SET *
      WHEN MATCHED AND b.Op = 'D'
        THEN DELETE
      WHEN NOT MATCHED AND (b.Op = 'I' OR b.Op = 'U')
        THEN INSERT *"""

    LOGGER.info("Executing the merge")
    LOGGER.info(f"Merge SQL: {sql_query_for_micro_batch}")
    spark.sql(sql_query_for_micro_batch)
    LOGGER.info("Merge is done")

    spark.catalog.dropGlobalTempView("SOURCE_CDC_FILES_VIEW_COLUMNS")
    spark.catalog.dropGlobalTempView("TARGET_DBX_TABLE_COLUMNS")
    spark.catalog.dropGlobalTempView("SOURCE_CDC_FILES_VIEW")
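For reference, here is roughly what the same upsert would look like with the Delta Lake Python API (delta.tables.DeltaTable) instead of building the MERGE SQL string over global temp views. This is only a sketch: it assumes the same Op, transact_seq, and unique-key columns, takes unique_key and target_table as plain variables, replaces the MAX(transact_seq) self-join with an equivalent row_number() dedupe, and omits the column-casting step above.

from delta.tables import DeltaTable
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def upsert_to_delta_via_api(micro_batch_df, batch_id):
    # Keep only the latest CDC record per key within the micro-batch
    # (same intent as the MAX(transact_seq) join in the SQL version).
    latest = (
        micro_batch_df
        .withColumn(
            "rn",
            F.row_number().over(
                Window.partitionBy(unique_key).orderBy(F.col("transact_seq").desc())
            ),
        )
        .filter("rn = 1")
        .drop("rn")
    )

    (
        DeltaTable.forName(spark, target_table).alias("s")
        .merge(latest.alias("b"), f"b.{unique_key} = s.{unique_key}")
        .whenMatchedUpdateAll(condition="b.Op = 'U'")
        .whenMatchedDelete(condition="b.Op = 'D'")
        .whenNotMatchedInsertAll(condition="b.Op = 'I' OR b.Op = 'U'")
        .execute()
    )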