Hi Team,

We have a complex ETL job that runs in Databricks for 6 hours. The cluster has the below configuration:

Min workers: 16
Max workers: 24
Worker and driver node type: Standard_DS14_v2 (16 cores, 128 GB RAM)

I have monitored the job progress in Spark...
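For reference, the configuration above corresponds roughly to the following autoscale spec, as it would appear in the job's cluster definition. A minimal sketch; the `spark_version` value is an assumption, not something stated in the post:

```python
# Sketch of the cluster configuration described above, in the shape
# Databricks expects for a cluster / job "new_cluster" definition.
cluster_spec = {
    "autoscale": {
        "min_workers": 16,  # Min workers from the post
        "max_workers": 24,  # Max workers from the post
    },
    "node_type_id": "Standard_DS14_v2",         # 16 cores, 128 GB RAM
    "driver_node_type_id": "Standard_DS14_v2",  # same node type for the driver
    "spark_version": "13.3.x-scala2.12",        # assumed runtime, not from the post
}
```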
Hi Team,

I can see logs in the Databricks console by navigating to Workflows -> job name -> Logs. These logs are very generic: stdout, stderr, and log4j-active.log. How can I download the event, driver, and executor logs for a job all at once?

Regards,
Rajesh.
Hi Team,

We have a job that completes in 3 minutes on one Databricks cluster, but when we run the same job on another Databricks cluster it takes 3 hours to complete. I am quite new to Databricks and need your guidance on how to find out where Databricks s...
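When the same job behaves that differently on two clusters, the first thing worth checking is how the two cluster specs differ (node type, worker counts, runtime version, Spark conf). A rough sketch, assuming the `databricks-cli` package is installed and configured; the cluster IDs would be placeholders you supply:

```python
import json
import subprocess

def get_cluster_spec(cluster_id):
    """Fetch a cluster's spec via the Databricks CLI (assumes `databricks
    configure` has already been run)."""
    out = subprocess.check_output(
        ["databricks", "clusters", "get", "--cluster-id", cluster_id]
    )
    return json.loads(out)

def diff_specs(fast, slow, keys=("node_type_id", "driver_node_type_id",
                                 "spark_version", "num_workers",
                                 "autoscale", "spark_conf")):
    """Return the settings that differ between the two cluster specs,
    as {key: (fast_value, slow_value)}."""
    return {k: (fast.get(k), slow.get(k))
            for k in keys if fast.get(k) != slow.get(k)}
```

Comparing the two specs usually surfaces the obvious suspects first (fewer or smaller workers, a different runtime, missing `spark_conf` tuning); after that, the Spark UI stage timings on the slow cluster show where the extra time actually goes.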
@Kaniz Fatma @John Lourdu @Vidula Khanna

Hi Team,

I managed to download the logs using the Databricks command line as below:

1) Installed the Databricks command line on my desktop (pip install databricks-cli)
2) Configured the Databricks cluster URL and perso...
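For anyone who would rather script this than run CLI commands one by one, the same download can be done against the DBFS REST API (`/api/2.0/dbfs/list` and `/api/2.0/dbfs/read`, which returns base64-encoded chunks). A minimal sketch, assuming cluster logs were delivered under `dbfs:/cluster-logs`; the workspace URL is a placeholder and the token comes from an environment variable:

```python
import base64
import json
import os
import urllib.parse
import urllib.request

# Placeholders: set these for your own workspace.
HOST = "https://<workspace-url>"
TOKEN = os.environ.get("DATABRICKS_TOKEN", "")

def _api_get(endpoint, **params):
    """Call a Databricks REST API endpoint and return the parsed JSON body."""
    url = f"{HOST}/api/2.0/{endpoint}?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def log_root(cluster_id):
    """DBFS directory the cluster's logs land in when cluster_log_conf points
    at dbfs:/cluster-logs (REST API paths omit the dbfs: scheme)."""
    return f"/cluster-logs/{cluster_id}"

def download_file(dbfs_path, local_path, chunk=1 << 20):
    """Download one DBFS file via /api/2.0/dbfs/read, chunk by chunk."""
    offset, parts = 0, []
    while True:
        body = _api_get("dbfs/read", path=dbfs_path, offset=offset, length=chunk)
        if body.get("bytes_read", 0) == 0:
            break
        parts.append(base64.b64decode(body["data"]))
        offset += body["bytes_read"]
    with open(local_path, "wb") as f:
        f.write(b"".join(parts))

# Example (requires a live workspace and a valid cluster ID):
#   for entry in _api_get("dbfs/list", path=log_root("0123-456789-abcd123"))["files"]:
#       if not entry["is_dir"]:
#           download_file(entry["path"], entry["path"].rsplit("/", 1)[-1])
```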
@Lakshay Goel

Hi Lakshay,

It will take a couple of days to test this recommendation. I will run the job with the new recommendations and update this thread.

Regards,
Rajesh.
@John Lourdu @Kaniz Fatma @Vidula Khanna

Hi Team,

We use a job cluster, and the logs default to the DBFS file system. The cluster is terminated immediately after the job execution. Is there any way to download the logs from DBFS from the terminated clu...
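One way to keep logs after a job cluster terminates is to set `cluster_log_conf` in the job's `new_cluster` definition: Databricks then delivers driver, executor, and event logs to the chosen DBFS destination every few minutes, and they remain there after the cluster is gone. A minimal sketch; the destination path and `spark_version` are assumptions:

```python
# Hypothetical "new_cluster" fragment for a job definition. With
# cluster_log_conf set, logs are delivered to the DBFS destination
# periodically and survive cluster termination.
new_cluster = {
    "autoscale": {"min_workers": 16, "max_workers": 24},
    "node_type_id": "Standard_DS14_v2",
    "spark_version": "13.3.x-scala2.12",  # assumed runtime version
    "cluster_log_conf": {
        "dbfs": {"destination": "dbfs:/cluster-logs/etl-job"}  # assumed path
    },
}
```

After a run, the delivered logs should sit under `dbfs:/cluster-logs/etl-job/<cluster-id>/` (in `driver/`, `executor/`, and `eventlog/` subfolders) and can be copied locally with the CLI, e.g. `dbfs cp -r dbfs:/cluster-logs/etl-job/<cluster-id> ./logs`.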
Hi Teja,

Thank you for replying. From the Databricks workspace:

1) First, I navigated to Workflows -> Jobs and then searched for the job
2) Opened the job
3) Clicked "Logs", which directed me to "Spark Driver Logs"
4) There is no option for "Log Storage"...