Azure Application Insights logging not working after upgrading cluster to Databricks Runtime 14.x
12-09-2024 11:46 PM - edited 12-09-2024 11:50 PM
I have a basic setup that reads a stream from a Delta table and writes it into another Delta table. I use the Python logging module to send logs to Application Insights. However, logs written inside the foreachBatch function are not being sent to Application Insights. For example, in the code below, logging works in the _write_stream method because it runs outside foreachBatch, but it does not work in the _merge_streaming_data method because it runs inside foreachBatch. By "doesn't work" I mean the logs are never received or captured in Application Insights. Below is the sample code.
properties = {'custom_dimensions': {'test': 'test'}}

def _merge_streaming_data(batch_df, batch_id):
    spark = batch_df.sparkSession
    logger = get_logger()
    logger.warning('_merge_streaming_data1 amit', extra=properties)
    batch_df.write.mode("overwrite").saveAsTable("table_name2")

def _write_stream(main_df):
    logger = get_logger()
    logger.warning('_write_stream', extra=properties)
    write_df = (
        main_df.writeStream
        .foreachBatch(lambda batch_df, batch_id: _merge_streaming_data(batch_df, batch_id))
        .option("checkpointLocation", checkpoint)
        .trigger(availableNow=True)
        .queryName("Query : test")
        .start()
    )
    return write_df

checkpoint = '****'
raw_full_table_name = 'table_name'

source_df = spark.readStream.table(raw_full_table_name)
write_stream = _write_stream(source_df)

try:
    write_stream.awaitTermination()
    print(f"Completed Processing table: {raw_full_table_name}")
except Exception as e:
    print(e)
    raise
- Labels: Delta Lake
12-09-2024 11:57 PM
@abaghel There are behavior changes for foreachBatch in Databricks Runtime 14.0.
Please check: https://docs.databricks.com/en/structured-streaming/foreach.html#behavior-changes-for-foreachbatch-i...
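One consequence of those changes is that the foreachBatch function may run in a separate process from the driver, so a logger whose Application Insights handler was configured on the driver may have no handler attached where the batch actually executes. A minimal sketch of a workaround, assuming the logger is backed by opencensus-ext-azure's AzureLogHandler (the logger name, connection string, and table name below are placeholders, not your actual setup), is to build the handler inside the batch function itself:

# Sketch, not verified against your environment: create the Application Insights
# handler inside the batch function so it exists in whatever process executes
# foreachBatch. Assumes opencensus-ext-azure is installed on the cluster; the
# connection string and table name are placeholders.
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

APPINSIGHTS_CONNECTION_STRING = "InstrumentationKey=..."  # placeholder

def _get_batch_logger():
    logger = logging.getLogger("foreach_batch_logger")
    logger.setLevel(logging.INFO)
    # Avoid attaching a duplicate handler on every micro-batch.
    if not any(isinstance(h, AzureLogHandler) for h in logger.handlers):
        logger.addHandler(AzureLogHandler(connection_string=APPINSIGHTS_CONNECTION_STRING))
    return logger

def _merge_streaming_data(batch_df, batch_id):
    # Logger is built here rather than captured from the driver scope.
    logger = _get_batch_logger()
    logger.warning(
        "_merge_streaming_data",
        extra={"custom_dimensions": {"batch_id": str(batch_id)}},
    )
    batch_df.write.mode("overwrite").saveAsTable("table_name2")
    # AzureLogHandler buffers telemetry; flush so it is exported before the batch ends.
    for handler in logger.handlers:
        handler.flush()

The explicit flush at the end is there because AzureLogHandler exports telemetry asynchronously, and the batch function may finish before the buffer is sent.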
12-10-2024 12:05 AM - edited 12-10-2024 12:05 AM
@MuthuLakshmi Thank you for getting back to me. I have read the article and understand that "Any files, modules, or objects referenced in the function must be serializable and available on Spark." However, based on the code I provided, can you help me identify where I might be running into serialization issues? The code seems quite basic. Additionally, could you suggest sample code for reference?

