Temp table vs. temp view vs. temp table function: which is better for large Databricks data processing?
05-22-2023 01:23 PM
Hello,
1) Which is better for large data processing: a temp table, a temporary view, or a temp table function?
2) How does lazy evaluation improve processing, and which of the above supports lazy evaluation?
- Labels: Tempview
07-29-2024 06:47 AM
Does anyone have any suggestions regarding the question above?
07-29-2024 07:09 AM
Hi @Abhot ,
It depends. If you're going to reference it multiple times in your processing, a temp view can be slower because of lazy evaluation (unless you persist it, e.g. with a DataFrame cache). Every time the temp view is referenced, Spark re-executes the view's definition; the cost depends on your cluster capacity, how complex the definition is, and how much data the view reads.
So every use of the view name triggers a recomputation. In such scenarios it's better to materialize the result as a temporary table, especially if the transformations are complex.
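To make the difference concrete, here is a minimal Spark SQL sketch (table and column names are hypothetical, for illustration only):

```sql
-- A temporary view stores only the query, not the result:
-- every statement that references it re-runs the aggregation.
CREATE OR REPLACE TEMPORARY VIEW customer_totals AS
SELECT customer_id, SUM(amount) AS total
FROM sales
GROUP BY customer_id;

-- Each of these re-executes the GROUP BY above.
SELECT * FROM customer_totals WHERE total > 1000;
SELECT COUNT(*) FROM customer_totals;

-- Option 1: cache the view so repeated reads hit memory/disk cache.
CACHE TABLE customer_totals;

-- Option 2: materialize it once as a table and query that instead.
CREATE TABLE customer_totals_tmp AS
SELECT * FROM customer_totals;
```

Whether caching or materializing wins depends on how often the result is reused and whether it fits in the cluster's cache.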
07-29-2024 11:31 AM
What about a temporary view vs. a temporary function, since both are lazily evaluated? Is there any guidance on when to use a view vs. a function? UDFs are usually not recommended in WHERE clauses; does that still hold true in Databricks?
07-29-2024 11:34 AM - edited 07-29-2024 11:35 AM
Most often it's better to use a temp view over UDFs. UDFs can introduce significant processing bottlenecks into code execution. Databricks automatically applies a number of optimizers to code written with the built-in Apache Spark, SQL, and Delta Lake syntax, but when custom logic is introduced via UDFs, those optimizers cannot plan tasks efficiently around it. In addition, logic that executes outside the JVM incurs extra data serialization costs.
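One nuance worth noting: SQL UDFs (as opposed to Python/Scala UDFs) are defined in plain SQL, so the optimizer can inline them into the query plan and they avoid the serialization cost described above. A minimal sketch, with a hypothetical function and table:

```sql
-- A SQL UDF: the body is pure SQL, so Spark can inline it
-- into the plan rather than treating it as an opaque black box.
CREATE OR REPLACE TEMPORARY FUNCTION discounted(price DOUBLE, pct DOUBLE)
RETURNS DOUBLE
RETURN price * (1 - pct / 100);

-- Usable in SELECT and WHERE like any built-in expression.
SELECT order_id, discounted(amount, 10) AS net_amount
FROM sales
WHERE discounted(amount, 10) > 100;
```

By contrast, a Python UDF in the same WHERE clause would force row-by-row evaluation outside the JVM and block predicate optimizations, which is why the usual "avoid UDFs in WHERE clauses" advice still applies to non-SQL UDFs in Databricks.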

