Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to proactively monitor the use of the cache on the driver node?

Hila_DG
New Contributor II

The problem:

We have a dataframe based on the following query:

SELECT *
FROM Very_Big_Table

This query returns over 4 GB of data, and when we try to push it to Power BI we get the following error:

ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 87 tasks (4.0 GiB) is bigger than spark.driver.maxResultSize 4.0 GiB.'.
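For context, the limit is enforced on the driver: any fetch that gathers the full result set there, as the ODBC connector does during a Power BI refresh, counts against spark.driver.maxResultSize. A sketch of the same failure reproduced inside a Scala notebook (assuming spark is in scope):

val df = spark.sql("SELECT * FROM Very_Big_Table")

// collect() serializes every task's result back to the driver; once their
// combined size exceeds spark.driver.maxResultSize, the job aborts as above
val rows = df.collect()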

To deal with this error, we've done the following:

1. We changed the cluster's Spark configuration to raise the driver result size limit to 10 GB (spark.driver.maxResultSize 10g). The data now comes through without errors.

2. We limited the data coming from Very_Big_Table (a WHERE clause restricting the query to the past 7 days); a sketch follows this list.
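For illustration, the 7-day limit might look like this; the date column event_date is an assumption, since the real column name isn't shown in the post:

val recentDf = spark.sql("""
  SELECT *
  FROM Very_Big_Table
  WHERE event_date >= date_sub(current_date(), 7)
""")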

What do we want to achieve?

We want to be proactive about this process. To make sure the error doesn't happen again, we are thinking about an early-warning check: we want to know, in advance, when we are close to hitting the driver's result size limit, so that the refresh either completes smoothly or is stopped with some sort of notification telling us that the data being pulled is too big. Alternatively, if we see that a pull is approaching the 10 GB limit, we could raise the driver configuration before the failure happens, or further limit the data pulled from the source table.

Is this information available in the logs? Can we get the size of the dataframe inside Databricks before we try to send it to Power BI, so we can check that the driver can accommodate the data?

Please let us know. 

Thanks!

ACCEPTED SOLUTION

-werners-
Esteemed Contributor III

There is a size estimator (org.apache.spark.util.SizeEstimator), but it only produces an estimate, so the reliability may vary.

Here is an option you can use, but performance-wise it is suboptimal (as you have to cache):
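A minimal sketch of that cache-based check, assuming a Scala notebook where spark is in scope; the 90% warning threshold and the exception are illustrative choices:

val df = spark.sql("SELECT * FROM Very_Big_Table")

// Materialize the cache so the plan statistics reflect the real data size
df.cache().count()

// Size estimate (in bytes) from the query plan statistics of the cached dataframe
val sizeInBytes = df.queryExecution.optimizedPlan.stats.sizeInBytes

// Warn before the driver limit is reached (spark.driver.maxResultSize is 10g here)
val maxResultBytes = 10L * 1024 * 1024 * 1024
if (sizeInBytes > BigInt((maxResultBytes * 0.9).toLong)) {
  // Stop the refresh and alert instead of letting the collect fail mid-flight
  throw new IllegalStateException(
    s"Estimated result size of $sizeInBytes bytes is close to spark.driver.maxResultSize")
}

// Release the cached data once the check is done
df.unpersist()

Because the dataframe has to be fully cached before the statistics are trustworthy, this roughly doubles the work of the refresh, which is the performance cost mentioned above.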


5 REPLIES

Anonymous
Not applicable

@Hila Galapo - Welcome and thanks for your question! We'll give the community a chance to respond before we circle back around.


Hubert-Dudek
Esteemed Contributor III

As it is just a SELECT for a BI tool, I strongly recommend starting to use a serverless SQL endpoint. It is available in the Premium tier (you can always have two Azure workspaces, standard and premium, at the same time). In my opinion it is more stable and sometimes cheaper, as you don't need to manage VMs.

Anonymous
Not applicable

@Hila Galapo - Do these answers help you? If yes, would you be happy to mark one as best so that other members can find the solution more quickly?

Anonymous
Not applicable

Hey @Hila Galapo

Hope everything is going well. Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you.

Thanks!
