01-12-2022 02:40 PM
The problem:
We have a DataFrame that is based on the query:

SELECT *
FROM Very_Big_Table

This table returns over 4 GB of data, and when we try to push the data to Power BI we get the error message:

ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 87 tasks (4.0 GiB) is bigger than spark.driver.maxResultSize 4.0 GiB.'.
To deal with this error, we have done the following (a sketch of both fixes is shown after this list):
1. We changed the cluster's Spark configuration, raising spark.driver.maxResultSize to 10 GB (spark.driver.maxResultSize 10g). Now the data comes in perfectly.
2. We added a WHERE clause to the query on Very_Big_Table, limiting the data to the past 7 days.
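For reference, this is roughly how we apply both fixes (a sketch from our notebook; `event_date` is a placeholder for whatever date column the source table actually has):

```python
# 1. spark.driver.maxResultSize is read when the driver starts, so we set it
#    in the cluster's Spark config (cluster > Advanced options > Spark)
#    rather than at runtime:
#
#        spark.driver.maxResultSize 10g
#
# 2. The dated filter on the source query. `spark` is the SparkSession that
#    Databricks notebooks provide; `event_date` is a placeholder column name.
df = spark.sql("""
    SELECT *
    FROM Very_Big_Table
    WHERE event_date >= date_sub(current_date(), 7)
""")
```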
What do we want to achieve?
We want to be proactive about this process. To make sure the error doesn't happen again, we are thinking about an early-warning check: we want to know, in advance, when the data being pulled is close to the driver's result-size limit, so that refreshes keep running smoothly. If a pull is too big, we would stop the refresh and get some sort of notification. Alternatively, if we see that we are approaching the 10 GB limit, we could raise the driver configuration further, or limit the data pulled from the source table, before the failure occurs.
Is this information available in the logs? Can we get the size of a DataFrame inside Databricks before we try to send it to Power BI, so we can confirm that the configured limit can accommodate the data?
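Something along these lines is what we have in mind for the check. It is only a sketch: it reaches into Spark internals through the DataFrame's _jdf handle, which is not a stable public API, and the estimate is only as good as the table statistics, so we are not sure it is a supported approach.

```python
from pyspark.sql import DataFrame

def estimated_size_bytes(df: DataFrame) -> int:
    # Catalyst's size estimate for the optimized logical plan, in bytes.
    # Accuracy depends on table statistics (ANALYZE TABLE ... COMPUTE
    # STATISTICS helps); without them the estimate can be far off.
    return int(str(df._jdf.queryExecution().optimizedPlan().stats().sizeInBytes()))

# Hypothetical guardrail run before the Power BI refresh pulls the data.
MAX_RESULT_SIZE = 10 * 1024 ** 3   # mirrors spark.driver.maxResultSize 10g
SAFETY_MARGIN = 0.8                # arbitrary cutoff, tune as needed

df = spark.table("Very_Big_Table")
size = estimated_size_bytes(df)
if size > SAFETY_MARGIN * MAX_RESULT_SIZE:
    raise RuntimeError(
        f"Estimated result size {size / 1024 ** 3:.1f} GiB is close to the "
        f"{MAX_RESULT_SIZE / 1024 ** 3:.0f} GiB spark.driver.maxResultSize; "
        "stopping the refresh."
    )
```

The estimate is for the optimized plan's output, which may differ from what is actually serialized over ODBC, so we would treat it as a rough signal rather than an exact number. Is there a better or supported way to do this?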
Please let us know.
Thanks!
- Labels:
  - DriverLogs
  - Power-bi