01-12-2022 02:40 PM
The problem:
We have a DataFrame that is based on the query:
SELECT *
FROM Very_Big_Table
This query returns over 4 GB of data, and when we try to push it to Power BI we get the error message:
ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 87 tasks (4.0 GiB) is bigger than spark.driver.maxResultSize 4.0 GiB.'.
To deal with this error, we've done the following (a short sketch of both steps follows the list):
1. We changed the cluster Spark configuration to raise the driver's result-size limit to 10 GB (spark.driver.maxResultSize 10g). Now the data comes in perfectly.
2. We added a WHERE clause to limit the data pulled from Very_Big_Table to the past 7 days.
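A minimal sketch of both steps, assuming a PySpark notebook and a hypothetical date column event_date on Very_Big_Table:

# Verify the configured limit from a notebook. spark.driver.maxResultSize
# itself must be set in the cluster's Spark config (e.g.
# "spark.driver.maxResultSize 10g") before startup; it cannot be changed
# at runtime.
print(spark.conf.get("spark.driver.maxResultSize", "4g"))  # default is 4g

# Pull only the last 7 days; event_date is a hypothetical date column.
df = spark.sql("""
    SELECT *
    FROM Very_Big_Table
    WHERE event_date >= current_date() - INTERVAL 7 DAYS
""")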
What do we want to achieve?
We want to be proactive about this process. To make sure the error doesn't happen again, we are thinking about a clearance warning: we want to know in advance when we are close to hitting the driver's result-size limit, so that a refresh that fits goes through smoothly, while one that doesn't is stopped and we get some sort of notification that the data being pulled is too big. Alternatively, if we see that a pull is approaching the 10 GB limit, we could raise the driver configuration before it fails, or further limit the data pulled from the source table.
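In other words, something along these lines (a sketch only; estimate_dataframe_size_bytes is a hypothetical helper, and the 80% margin is arbitrary):

# Sketch of the desired clearance warning before a Power BI refresh.
MAX_RESULT_SIZE = 10 * 1024**3     # matches spark.driver.maxResultSize 10g
SAFETY_MARGIN = 0.8                # warn at 80% of the limit (arbitrary)

size_bytes = estimate_dataframe_size_bytes(df)  # hypothetical helper
if size_bytes > SAFETY_MARGIN * MAX_RESULT_SIZE:
    raise RuntimeError(
        f"Refresh aborted: estimated pull of {size_bytes / 1024**3:.1f} GiB "
        f"is close to the {MAX_RESULT_SIZE / 1024**3:.0f} GiB driver limit."
    )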
Is this information in the logs? Can we get the size of the DataFrame inside Databricks before we try to send it to Power BI, so we can be sure the driver can accommodate the data?
Please let us know.
Thanks!
01-12-2022 05:28 PM
@Hila Galapo - Welcome and thanks for your question! We'll give the community a chance to respond before we circle back around.
01-14-2022 12:22 AM
There is a size estimator.
But it is only an estimate, so its reliability may vary.
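For example, Catalyst's own size estimate for the optimized logical plan can be read from PySpark through the JVM gateway (a sketch; the figure is a planner estimate, not a measurement, and can be far off for complex queries):

# Catalyst's estimated size (in bytes) of the DataFrame's optimized plan.
stats = df._jdf.queryExecution().optimizedPlan().stats()
estimated_bytes = int(str(stats.sizeInBytes()))  # Scala BigInt -> Python int
print(f"Estimated size: {estimated_bytes / 1024**3:.2f} GiB")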
Here is an option you can use, but performance-wise it is suboptimal (as you have to cache):
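A sketch of that cache-based option in PySpark (materializing the cache scans the whole table, which is where the performance cost comes from):

# Cache and fully materialize the DataFrame, then read the actual
# cached size from the driver's storage info via the JVM gateway.
df.cache()
df.count()  # action that forces the cache to be populated

total_bytes = sum(
    info.memSize() + info.diskSize()
    for info in spark.sparkContext._jsc.sc().getRDDStorageInfo()
)
print(f"Cached size: {total_bytes / 1024**3:.2f} GiB")
df.unpersist()  # release the cache when done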
01-14-2022 06:35 AM
As it is just a SELECT for a BI tool, I strongly recommend starting to use a serverless SQL endpoint. It is available in the Premium tier (you can always have two Azure workspaces, Standard and Premium, at the same time). In my opinion it is more stable, and it is also sometimes cheaper as you don't need VMs.
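If you go that route, Power BI only needs the endpoint's Server Hostname and HTTP Path; from code, the same endpoint can be queried with the databricks-sql-connector package (a sketch; all three connection values below are placeholders to copy from the endpoint's Connection Details tab):

# pip install databricks-sql-connector
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/endpoints/0123456789abcdef",               # placeholder
    access_token="dapi-REDACTED",                                  # placeholder
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT count(*) FROM Very_Big_Table")
        print(cursor.fetchone())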
01-26-2022 08:27 AM
@Hila Galapo - Do these answers help you? If yes, would you be happy to mark one as best so that other members can find the solution more quickly?
05-13-2022 05:23 AM
Hey @Hila Galapo
Hope everything is going well. Just wanted to check in to see whether you were able to resolve your issue or whether you need more help. We'd love to hear from you.
Thanks!