Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
We have a date (DD/MM/YYYY) partitioned BQ table. We want to update the data of a specific partition in 'overwrite' mode using PySpark. To do this, I set 'spark.sql.sources.partitionOverwriteMode' to 'DYNAMIC' as per the spark bq connector documentat...
@soumiknow
This is not the output from -verbose:class; what you see is likely coming from importing the library from an external repository, and it's showing the add-dependencies process, indicating it has pulled and downloaded the "com.google.cloud.spar...
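For the original question about overwriting a single partition, here is a minimal sketch of the usual approach, assuming a hypothetical project/dataset/table name, a hypothetical temporary GCS bucket, and that the installed spark-bigquery-connector version supports dynamic partition overwrite:

# Hedged sketch: enable dynamic partition overwrite so only the partitions
# present in the incoming DataFrame are replaced, not the whole table.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "DYNAMIC")

(
    df.write.format("bigquery")
      .option("table", "my-project.my_dataset.my_table")  # hypothetical table
      .option("writeMethod", "indirect")                  # assumption; "direct" also exists
      .option("temporaryGcsBucket", "my-temp-bucket")     # needed for the indirect method
      .mode("overwrite")
      .save()
)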
I have a Unity Catalog Enabled cluster with Node type Standard_DS4_v2 (28 GB Memory, 8 Cores). When "Use Photon Acceleration" option is disabled spark.executor.memory is 18409m. But if I enable Photon Acceleration it shows spark.executor.memory as 46...
The memory allocated to the Photon engine is not fixed; it is based on a percentage of the node’s total memory.
To calculate the value of spark.executor.memory based on a specific node type, you can use the following formula:
container_size = (vm_si...
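As a rough illustration only, here is a small sketch of that calculation. The 0.97 overhead factor, the 4800 MB reservation, and the 0.8 executor fraction are assumptions based on commonly cited Databricks sizing guidance, not values taken from the thread, although they happen to reproduce the 18409m figure mentioned in the question; the exact fractions may differ and change when Photon is enabled, since Photon reserves part of the container.

# Hedged sketch of deriving an executor memory figure from node memory.
# All constants below are assumptions and may not match Databricks exactly.
vm_memory_mb = 28 * 1024                            # Standard_DS4_v2 has 28 GB of memory

container_size_mb = vm_memory_mb * 0.97 - 4800      # assumed OS/daemon overhead
executor_memory_mb = container_size_mb * 0.8        # assumed executor fraction

print(f"container_size ~ {container_size_mb:.0f}m, spark.executor.memory ~ {executor_memory_mb:.0f}m")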
I'm currently testing materialized views and I need some help understanding the refresh behavior. Specifically, I want to know if my materialized view is querying the full table (performing a full refresh) or just doing an incremental refresh. From so...
To validate the status of your materialized view (MV) refresh, run a DESCRIBE EXTENDED command and check the row corresponding to the "last refresh status type." RECOMPUTE indicates a full load execution was completed. NO_OPERATION means no operation w...
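A minimal sketch of that check from a notebook, assuming a hypothetical catalog.schema.view name:

# Hedged sketch: inspect the materialized view's metadata and look for the
# rows describing the last refresh; the MV name below is hypothetical.
info = spark.sql("DESCRIBE EXTENDED main.analytics.my_mv")
info.filter("lower(col_name) LIKE '%refresh%'").show(truncate=False)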
I want to use the google-cloud-bigquery library in my PySpark code even though I know that the spark-bigquery-connector is available. The reason I want to use it is that Databricks Runtime 15.4 LTS comes with the 0.22.2-SNAPSHOT version of spark-bigquery-connector, wh...
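A minimal sketch of using the client library directly from a notebook, assuming the cluster can authenticate to BigQuery and using a hypothetical project and query; note this bypasses the Spark connector entirely, so results come back as a pandas DataFrame rather than a distributed one:

# Hedged sketch: install and use the google-cloud-bigquery client directly.
# %pip install google-cloud-bigquery   # run in a notebook cell first
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # hypothetical project id
rows = client.query("SELECT 1 AS x").result()    # hypothetical query
pdf = rows.to_dataframe()                        # small results only; not distributed
sdf = spark.createDataFrame(pdf)                 # convert to Spark if needed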
I have a DLT pipeline running to ingest files from storage using Auto Loader. We have a bronze table and a silver table. A question came up from the team on how to restore DLT tables to a previous version in case of an incorrect transformation. When ...
The RESTORE command is not supported on streaming tables, which is why you encountered the error. Instead, you can use the TIME TRAVEL feature of Delta Lake to query previous versions of the table. You can use the VERSION AS OF or TIMESTAMP AS OF c...
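A minimal sketch of that kind of time travel read, assuming a hypothetical table name and version number:

# Hedged sketch: read an older version of a Delta table and inspect its history.
spark.sql("DESCRIBE HISTORY main.bronze.events").show()                  # hypothetical table name

old = spark.read.option("versionAsOf", 5).table("main.bronze.events")    # hypothetical version
# Or in SQL: SELECT * FROM main.bronze.events VERSION AS OF 5
old.show()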
I have a complex join that I'm trying to optimize. df1 has cols id, main_key, col1, col1_isnull, col2, col2_isnull ... col30; df2 has cols id, main_key, col1, col2 ... col_30. I'm trying to run this SQL query in PySpark: select df1.id, df2.id from df1 join df2 on df1.m...
@Omri thanks for your question!
To help optimize your complex join further, we need clarification on a few details:
Data Characteristics:
Approximate size of df1 and df2 (in rows and/or size). Distribution of main_key in both dataframes: are the top...
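While those details determine the best approach, here is a minimal sketch of two common levers for this kind of key join, under the assumption (not stated in the thread) that df2 is small enough to broadcast and that NULL keys need null-safe matching:

# Hedged sketch: broadcast the smaller side and use null-safe equality on the
# join key; both choices are assumptions, not conclusions from the thread.
from pyspark.sql import functions as F

joined = df1.join(
    F.broadcast(df2),                              # only if df2 comfortably fits in memory
    df1["main_key"].eqNullSafe(df2["main_key"]),
    "inner",
).select(df1["id"].alias("df1_id"), df2["id"].alias("df2_id"))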
Hey! I'm new to the forums but not to Databricks, and I'm trying to get some help with this question: the error is also fickle since it appears at what seems to be random. When running the same code it works, then on the next run with a new set of dat...
@ls thanks for your question!
Since this is a PySpark application, the "Connection reset by peer" error seems to mask the actual exception. This type of issue is often linked to memory problems where Python workers are terminated, so the JVM <-> Pyth...
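If memory pressure on the Python workers turns out to be the cause, a minimal sketch of the kind of settings often adjusted first; the values below are illustrative placeholders, not recommendations from the thread:

# Hedged sketch: reduce per-task memory pressure at runtime, and note the
# executor-level setting that must go in the cluster's Spark config instead.
spark.conf.set("spark.sql.shuffle.partitions", "400")   # illustrative; smaller partitions, less memory per task

# In the cluster's Spark config (not settable at runtime):
#   spark.executor.memoryOverhead 4g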
Hi All, regarding creating clusters in Databricks: I'm getting a quota error. I have tried to increase quotas in the region where the resource is hosted but am still unable to increase the limit. Is there any workaround, or could you help select the right cluster ...
Hi @svm_varma, you can try to create a Standard_DS3_v2 cluster. It has 4 cores, and your current subscription limit for the given region is 6 cores. The one you're trying to create needs 8 cores, hence you're getting a quota exceeded exception. You can also...
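For reference, a minimal sketch of a cluster spec that stays within a 6-core quota; the single-node shape is my suggestion (a driver plus even one DS3_v2 worker would already need 8 cores), and the name and runtime version are placeholders:

# Hedged sketch: a minimal Clusters API payload using the 4-core node type
# as a single-node cluster, so the total core count stays under 6.
cluster_spec = {
    "cluster_name": "small-ds3-cluster",       # hypothetical name
    "spark_version": "15.4.x-scala2.12",       # illustrative runtime version
    "node_type_id": "Standard_DS3_v2",         # 4 cores, fits a 6-core quota
    "num_workers": 0,                          # single node: driver only
    "spark_conf": {
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",
    },
    "custom_tags": {"ResourceClass": "SingleNode"},
}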
I'm currently diving deep into Spark SQL and its capabilities, and I'm facing an interesting challenge. I'm eager to learn how to write recursive CTE queries in Spark SQL, but after thorough research, it seems that Spark doesn't natively support recu...
Hi @singhanuj2803,
It is correct that Spark SQL does not natively support recursive Common Table Expressions (CTEs). However, there are some workarounds and alternative methods you can use to achieve similar results.
Using DataFrame API with Loops:...
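A minimal sketch of that loop-based approach, assuming a hypothetical parent/child edges DataFrame and a graph without cycles (add a depth cap for cyclic data):

# Hedged sketch: emulate a recursive CTE by iterating until no new rows appear.
from pyspark.sql import functions as F

edges = spark.createDataFrame(
    [("a", "b"), ("b", "c"), ("c", "d")], ["parent", "child"]
)

# Anchor member: direct children of the root node "a".
frontier = edges.filter(F.col("parent") == "a").select(F.col("child").alias("node"))
result = frontier

# Recursive member: join the current frontier back to the edges until it is empty.
while frontier.count() > 0:
    frontier = (
        edges.alias("e")
        .join(frontier.alias("f"), F.col("e.parent") == F.col("f.node"))
        .select(F.col("e.child").alias("node"))
    )
    result = result.union(frontier)

result.distinct().show()   # all descendants of "a"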
I'm encountering an issue with incomplete Spark event logs. When I run a local Spark History Server using the cluster logs, my application appears as "incomplete". Sometimes I also see a few queries listed as still running, even though the appl...
Thanks for your question!
I believe Databricks has its own Spark History Server (SHS) implementation, so its logs are not expected to work with the vanilla SHS. Regarding the queries marked as still running, this can also happen when there are event logs which were not properly c...
I think I found a bug where a job stays Pending indefinitely when it has a library requirement and the user of the job does not have Manage permission on the cluster. In my case I was trying to start a dbt job with dbt-databricks=1.8.5 as a library. Th...
I don't have the complete context of the issue, but here is what I know. A friend of mine is facing this: "I am fetching data from Oracle in Databricks using Python, but every time I do it the schema changes, so if the column is of type decimal f...
Thanks for your question! To address schema issues when fetching Oracle data in Databricks, define the data types programmatically on the JDBC read or batch-cast columns dynamically after loading. For performance, enable predicate pushdown and...
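A minimal sketch of pinning column types on the JDBC read so repeated loads keep a stable schema; the connection string, table, secret, and column names are hypothetical:

# Hedged sketch: use the JDBC reader's customSchema option to fix Oracle NUMBER
# columns to explicit decimal types instead of relying on inference each run.
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")        # hypothetical
    .option("dbtable", "SALES.ORDERS")                                # hypothetical
    .option("user", "etl_user")
    .option("password", dbutils.secrets.get("scope", "oracle-pwd"))   # hypothetical secret
    .option("customSchema", "ORDER_ID DECIMAL(38,0), AMOUNT DECIMAL(38,10)")
    .load()
)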
I need to increase the stack size (from the default of 16384) to run a subprocess that requires a larger stack size. I tried following this: https://community.databricks.com/t5/data-engineering/increase-stack-size-databricks/td-p/71492 and this: https:...
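One possible approach, sketched minimally under the assumption that raising the soft limit in the child process is enough and that the driver's hard limit allows it; the binary name and the 64 MB target are hypothetical:

# Hedged sketch: raise the stack soft limit for a child process before exec.
import resource
import subprocess

def raise_stack_limit():
    # setrlimit takes bytes; 64 MB is an illustrative target, capped at the hard limit.
    _, hard = resource.getrlimit(resource.RLIMIT_STACK)
    new_soft = 64 * 1024 * 1024
    if hard != resource.RLIM_INFINITY:
        new_soft = min(new_soft, hard)
    resource.setrlimit(resource.RLIMIT_STACK, (new_soft, hard))

# preexec_fn runs in the child just before exec, so the limit applies to the subprocess.
subprocess.run(["./my_binary"], preexec_fn=raise_stack_limit, check=True)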
I am experiencing performance issues when loading a table with 50 million rows into Delta Lake on AWS using Databricks. Despite successfully handling other larger tables, this specific table/process takes hours and doesn't finish. Here's the command...
Thank you for your question! To optimize your Delta Lake write process:
Disable Overhead Options: Avoid overwriteSchema and mergeSchema unless necessary. Use:
df.write.format("delta").mode("overwrite").save(sink)
Increase Parallelism: Use repartition...
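A minimal sketch of that combination; the sink path and partition count are placeholders to tune against the cluster's cores and data size:

# Hedged sketch: repartition before the write so the 50M rows are spread across
# many tasks, and keep the write itself free of schema-evolution options.
sink = "s3://my-bucket/delta/my_table"   # hypothetical path

(
    df.repartition(200)                  # illustrative count; roughly 2-4x total cores
      .write.format("delta")
      .mode("overwrite")
      .save(sink)
)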