Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

brickster_2018
by Esteemed Contributor
  • 3757 Views
  • 2 replies
  • 0 kudos

Resolved! Is the Spark driver a synonym for the Spark Master daemon?

If I understand correctly, the Spark driver is a master process. Is it the same as the Spark Master? I get confused between the Spark Master and the Spark driver.

Latest Reply
brickster_2018
Esteemed Contributor
  • 0 kudos

This is a common misconception. The Spark Master and the Spark driver are two independent, isolated JVMs running on the same instance. The Spark Master's responsibilities are to ensure the Spark worker daemons are up and running and to monitor their health. Als...

1 More Replies
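A quick way to see that these really are two separate JVMs is to list the Java processes on the node, as the heap-dump walkthrough further down this page does. A minimal sketch (the exact process names vary by deployment; this is an illustration, not something confirmed in the thread):

    %sh
    # List Java processes with their main class. On Databricks the driver runs
    # inside a DriverDaemon JVM; in a standalone Spark deployment the Master
    # appears separately as org.apache.spark.deploy.master.Master.
    jps -l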
SaraCorralLou
by New Contributor III
  • 21019 Views
  • 5 replies
  • 2 kudos

Resolved! Error: The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.

What is the problem? I am getting this error every time I run a Python notebook from my Repo in Databricks. Background: the notebook where I am getting the error creates a dataframe, and the last step is to write the dataframe to a Delta ...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @Sara Corral, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers y...

4 More Replies
Nis
by New Contributor II
  • 1371 Views
  • 1 reply
  • 2 kudos

Best sequence for using the VACUUM, OPTIMIZE, FSCK REPAIR, and REFRESH commands

I have a Delta table whose size increases gradually; we now have around 1.5 crore (15 million) rows. While running the VACUUM command on that table I am getting the below error. ERROR: Job aborted due to stage failure: Task 7 in stage 491.0 failed 4 times, most...

Latest Reply
jose_gonzalez
Moderator
  • 2 kudos

Do you have access to the executor 7 logs? Is there high GC or some other event that is making the heartbeat time out? Would you be able to check the failed stages?

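On the ordering question in the title, one plausible sequence (an assumption for illustration; the reply above does not confirm it) is to repair metadata first, compact next, and vacuum last. A minimal PySpark sketch, with a hypothetical table name:

    # "my_db.events" is a hypothetical table name used for illustration.
    table = "my_db.events"

    # 1. Repair the table's file metadata so later commands see a consistent listing.
    spark.sql(f"FSCK REPAIR TABLE {table}")

    # 2. Compact small files into larger ones.
    spark.sql(f"OPTIMIZE {table}")

    # 3. Remove files no longer referenced by the table (default retention is 7 days).
    spark.sql(f"VACUUM {table}")

    # 4. Refresh the table's cached metadata in the current session.
    spark.sql(f"REFRESH TABLE {table}")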
yunna_wei
by New Contributor II
  • 665 Views
  • 0 replies
  • 3 kudos

In any Spark application, the Spark driver plays a critical role and performs the following functions: 1. Initiating a SparkSession 2. Communicating with...

In any Spark application, the Spark driver plays a critical role and performs the following functions: 1. Initiating a SparkSession. 2. Communicating with the cluster manager to request resources (CPU, memory, etc.) for Spark's exec...

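The first of those functions is easy to make concrete. A minimal sketch of initiating a SparkSession (on Databricks a session named spark already exists, so this simply returns it; the app name is arbitrary):

    from pyspark.sql import SparkSession

    # Create (or retrieve) the session; this starts the driver-side machinery
    # that negotiates executor resources with the cluster manager.
    spark_session = (
        SparkSession.builder
        .appName("driver-role-demo")  # arbitrary name
        .getOrCreate()
    )
    print(spark_session.sparkContext.master)  # which cluster manager is in use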
JKR
by Contributor
  • 2315 Views
  • 2 replies
  • 0 kudos

The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.

Getting the below error. Context: using a Databricks shared interactive cluster to schedule multiple parallel jobs that run at the same time, every 5 minutes. When I check Ganglia, the driver node's memory reaches almost max and then a driver restart happens an...

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Please check the driver's logs, for example the log4j output and the GC logs.

1 More Replies
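If cluster log delivery is enabled, those driver logs can also be read from a notebook. A sketch, where the destination path and cluster ID are hypothetical:

    # Hypothetical log-delivery destination and cluster ID.
    log_root = "dbfs:/cluster-logs/0101-123456-abcd123/driver"

    # List the driver's log files (log4j output, stdout, stderr).
    for f in dbutils.fs.ls(log_root):
        print(f.name, f.size)

    # Peek at one of them for OOM or GC messages (file names may vary).
    print(dbutils.fs.head(log_root + "/stdout", 10000))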
uzairm
by New Contributor III
  • 4798 Views
  • 12 replies
  • 3 kudos

Resolved! Concurrent Jobs - The spark driver has stopped unexpectedly!

Hi, I am running concurrent notebooks in concurrent workflow jobs on a c5a.8xlarge job compute cluster with 5-7 worker nodes. Each job has 100 concurrent child notebooks and there are 10 job instances. 8/10 jobs give the error: the spark driver has sto...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @uzair mustafa, hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so...

11 More Replies
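Independently of the resolution, the usual mitigation for this pattern is to bound how many child notebooks run at once, since the driver tracks every in-flight child run. A sketch using a thread pool around dbutils.notebook.run (the paths and the cap of 8 are hypothetical):

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical child notebook paths.
    notebooks = ["/Jobs/child_%d" % i for i in range(100)]

    def run_child(path):
        # Blocks until the child notebook finishes; 3600 is a timeout in seconds.
        return dbutils.notebook.run(path, 3600)

    # Cap concurrency so the driver is not juggling 100 child runs at once.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_child, notebooks))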
Data_Engineer3
by Contributor III
  • 10184 Views
  • 4 replies
  • 5 kudos

How can I use the same Spark session from one notebook in another notebook in Databricks?

I want to use the same Spark session that was created in one notebook in another notebook within the same environment. For example, if some variable (object) got initialized in the first notebook, I need to use the same object in t...

Latest Reply
Manoj12421
Valued Contributor II
  • 5 kudos

You can use %run followed by the location of the notebook: %run "/folder/notebookname"

3 More Replies
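To make that concrete: %run executes the referenced notebook in the calling notebook's context, so objects it defines become available there (one session shared by the included code, rather than two independently attached notebooks sharing a session). A sketch with hypothetical notebook paths:

    # In the notebook /Shared/setup, define the objects to share:
    shared_df = spark.range(10)   # any dataframe or variable
    env = "dev"

    # In the calling notebook, put the magic alone in its own cell:
    #   %run "/Shared/setup"

    # After that cell runs, the objects defined in /Shared/setup are in scope:
    #   display(shared_df)
    #   print(env)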
oleole
by Contributor
  • 3968 Views
  • 3 replies
  • 3 kudos

Resolved! How to delay a new job run after a job failure

I have a daily job run that occasionally fails with the error: The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached. After I get the notification that this scheduled job failed, I manually run ...

Latest Reply
oleole
Contributor
  • 3 kudos

According to this documentation, you can specify the wait time between the "start" of the first run and the retry start time.

2 More Replies
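For reference, in a Jobs API 2.1 task definition that wait is controlled by the retry fields below; the values shown are hypothetical:

    # Retry settings inside a task of a Databricks job definition,
    # expressed here as a Python dict.
    task_retry_settings = {
        "max_retries": 2,                      # retry up to twice on failure
        "min_retry_interval_millis": 600000,   # minimum gap between the start of
                                               # the failed run and the retry start
        "retry_on_timeout": False,
    }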
uzairm
by New Contributor III
  • 4995 Views
  • 2 replies
  • 1 kudos

My whole code is running on the driver node. I want my code to run on worker nodes so that the driver node's memory is not exhausted. Please suggest improvements to my code. My Spark job crashes frequently when the data pulled from S3 is huge.

I am running a process which has 4 steps. 1) Query S3 file paths from DynamoDB based on certain parameters given by the user (the function to do so is provided by the client, I just have to import it); this returns a list of files. 2) Check if those file paths have already been qu...

Latest Reply
Vartika
Moderator
  • 1 kudos

Hi @uzair mustafa, thank you for posting your question in our community! We are happy to assist you. Does @Suteja Kanuri's answer help? If it does, would you be happy to mark it as best? This will help other community members who may have similar ques...

1 More Replies
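The general fix for this pattern is to hand the whole file list to Spark instead of looping over it in driver-side Python. A sketch, assuming the DynamoDB helper returns a list of S3 paths (all names here are hypothetical):

    # Hypothetical: S3 paths returned by the client's DynamoDB helper.
    paths = ["s3://my-bucket/data/part-0001.json",
             "s3://my-bucket/data/part-0002.json"]

    # Anti-pattern: downloading and parsing each file in a Python for-loop keeps
    # all the work and data on the driver.

    # Distributed alternative: pass the list to the reader so the executors
    # fetch and parse the files in parallel; the driver only holds the plan.
    df = spark.read.json(paths)
    df.write.mode("append").saveAsTable("my_db.staged")  # hypothetical target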
jayallenmn
by New Contributor III
  • 1326 Views
  • 2 replies
  • 0 kudos

Analyzing 23 GB JSON file

Hey all, we're trying to analyze the data in a 23 GB JSON file. We're using the basic starter cluster: one node, 2 CPUs x 8 GB. We can read the JSON file into a Spark dataframe and print out the schema, but if we try to do any operations that won't c...

Latest Reply
Prabakar
Esteemed Contributor III
  • 0 kudos

Hi @Jay Allen, you can refer to the cluster sizing doc.

1 More Replies
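Besides resizing the cluster, one concrete thing that helps on a cluster that small is supplying the schema up front, since schema inference alone forces a scan of the full 23 GB. A sketch with a hypothetical schema and path:

    from pyspark.sql.types import StructType, StructField, StringType, LongType

    # Hypothetical schema; providing it skips the inference pass over the file.
    schema = StructType([
        StructField("id", LongType()),
        StructField("event", StringType()),
        StructField("payload", StringType()),
    ])

    df = spark.read.schema(schema).json("dbfs:/data/big.json")  # hypothetical path
    df.groupBy("event").count().show()  # aggregation runs on the executors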
chandan_a_v
by Valued Contributor
  • 13648 Views
  • 6 replies
  • 6 kudos

Resolved! Spark Driver Out of Memory Issue

Hi, I am executing a simple job in Databricks for which I am getting the below error. I increased the driver size but still faced the same issue. Spark config: from pyspark.sql import SparkSession; spark_session = SparkSession.builder.appName("Demand Forecasting...

Latest Reply
chandan_a_v
Valued Contributor
  • 6 kudos

I am getting the above issue while writing a Spark DF as a parquet file to AWS S3. I am not actually doing any broadcast join.

5 More Replies
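Since a parquet write to S3 runs on the executors, a driver OOM during a write usually points to something else happening on the driver, such as collected results or a very large task count. A sketch contrasting the driver-safe write path with the collect-style calls to avoid (paths and values are hypothetical):

    # The write itself is distributed; no rows should land on the driver.
    df.write.mode("overwrite").parquet("s3://my-bucket/forecasts/")  # hypothetical

    # Avoid these on large dataframes: both pull every row into driver memory.
    #   rows = df.collect()
    #   pdf = df.toPandas()

    # If the driver genuinely needs more headroom, set it in the cluster's Spark
    # config (applies at cluster start, not from a running notebook):
    #   spark.driver.memory 16g   # hypothetical value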
NOOR_BASHASHAIK
by Contributor
  • 1439 Views
  • 1 reply
  • 0 kudos

Resolved! Databricks PAT (personal access token) with access to databases selectively

Hi all, I am establishing a connection to Databricks from Collibra through the Spark driver. Collibra expects these details for the connection (token-based): a personal access token (PAT), the server/workspace name, and the httpPath. Upon successful connection, Collibra d...

Latest Reply
Atanu
Esteemed Contributor
  • 0 kudos

A PAT is tied to the workspace, so it will get access to all of Hive. Is there any way you can filter on the Collibra side?

brickster_2018
by Esteemed Contributor
  • 2221 Views
  • 1 reply
  • 1 kudos
Latest Reply
brickster_2018
Esteemed Contributor
  • 1 kudos

Find the DriverDaemon:
%sh jps
Take the heap dump (2413 is the DriverDaemon PID reported by jps):
%sh jmap -dump:live,format=b,file=pbs_worker_DriverDaemon.hprof 2413
Copy it out to download:
%sh cp pbs_worker_DriverDaemon.hprof /dbfs/FileStore/pbs_worker_04-30-2021T15-50-00.hprof

brickster_2018
by Esteemed Contributor
  • 5863 Views
  • 1 reply
  • 0 kudos

Resolved! Why do I always see "Executor heartbeat timed out" messages in the Spark driver logs?

Often I see "Executor heartbeat timed out" messages in the Spark driver logs. Sometimes the job fails with this error. Will increasing "spark.executor.heartbeatInterval" help to mitigate the issue?

Latest Reply
brickster_2018
Esteemed Contributor
  • 0 kudos

It is a common misconception that increasing "spark.executor.heartbeatInterval" will help to mitigate or resolve heartbeat issues. In fact, increasing spark.executor.heartbeatInterval will increase the chance of the error and worsen the situ...

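The underlying relationship (from Spark's configuration documentation, not this thread) is that spark.executor.heartbeatInterval must stay well below spark.network.timeout: raising the interval leaves fewer heartbeats that can be missed before the executor is declared lost. A sketch of the arithmetic using the Spark defaults:

    # Spark defaults: a heartbeat every 10s; an executor silent for 120s
    # (spark.network.timeout) is considered lost.
    heartbeat_s = 10
    network_timeout_s = 120

    # Margin: how many consecutive heartbeats can be missed before timeout.
    # Raising the interval shrinks this margin, making the error more likely.
    print(network_timeout_s // heartbeat_s, "missed heartbeats until timeout")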