03-21-2023 03:03 AM
I have a table with data for each month end, and for each month I want the LEAD and LAG data points on either side of it. For example:

```sql
SELECT month_date,
       LEAD(month_date) OVER (PARTITION BY id ORDER BY month_date) AS next_month_date,
       LAG(month_date)  OVER (PARTITION BY id ORDER BY month_date) AS previous_month_date
FROM t
```

I want to know if this is better than the equivalent self-join:

```sql
SELECT t.month_date,
       t2.month_date AS next_month_date,
       t3.month_date AS previous_month_date
FROM t
LEFT JOIN t t2 ON t.id = t2.id AND t2.month_date = ADD_MONTHS(t.month_date, 1)
LEFT JOIN t t3 ON t.id = t3.id AND t3.month_date = ADD_MONTHS(t.month_date, -1)
```
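To sanity-check that the two forms return the same rows, here is a minimal, self-contained sketch using SQLite's window functions (available in SQLite 3.25+) in place of Databricks SQL. SQLite has no `ADD_MONTHS`, so the join keys use SQLite's `date(..., '+1 month')` modifier instead, and first-of-month dates are used to sidestep SQLite's month-end date normalization; the table `t` and its columns are from the question, the data is made up:

```python
# Sketch: window-function form vs. self-join form of the query, run against
# SQLite (3.25+ for window functions) as a stand-in for Databricks SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, month_date TEXT);
    INSERT INTO t VALUES
        (1, '2023-01-01'), (1, '2023-02-01'), (1, '2023-03-01');
""")

# Window-function version: a single pass over each partition, sorted by date.
window_rows = conn.execute("""
    SELECT month_date,
           LEAD(month_date) OVER (PARTITION BY id ORDER BY month_date) AS next_month_date,
           LAG(month_date)  OVER (PARTITION BY id ORDER BY month_date) AS previous_month_date
    FROM t
    ORDER BY month_date
""").fetchall()

# Self-join version: two extra lookups into t, one per neighbouring month.
join_rows = conn.execute("""
    SELECT t.month_date,
           t2.month_date AS next_month_date,
           t3.month_date AS previous_month_date
    FROM t
    LEFT JOIN t t2 ON t.id = t2.id AND t2.month_date = date(t.month_date, '+1 month')
    LEFT JOIN t t3 ON t.id = t3.id AND t3.month_date = date(t.month_date, '-1 month')
    ORDER BY t.month_date
""").fetchall()

print(window_rows == join_rows)  # both forms return the same neighbours
```

Note that the `LEFT JOIN`s preserve the first and last months of each id, whose missing neighbour comes back as NULL, exactly as `LAG`/`LEAD` return NULL at partition edges.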
- Labels: DBR, Window functions
Accepted Solutions
03-21-2023 05:35 AM
Hi @Matthew Elsham, as @Lakshay Goel pointed out, I would expect the window function to perform better: Spark first partitions the rows by the partition key, and the aggregation then happens within each partition on a single worker. That said, it is worth looking at the query plan for both versions to see what works best with your data.
A good blog post on this topic is https://blog.knoldus.com/using-windows-in-spark-to-avoid-joins/ - please have a look.
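On Databricks you would compare the two plans with `EXPLAIN` (or `df.explain()` in PySpark). As a runnable stand-in, the same contrast shows up in SQLite's `EXPLAIN QUERY PLAN` output: the self-join version accesses the table once per alias (`t`, `t2`, `t3`), while the window version does not. This sketch uses a hypothetical empty table mirroring the question's schema, and drops the date conditions from the join keys since they do not change the plan's shape:

```python
# Sketch: comparing query plans with SQLite's EXPLAIN QUERY PLAN as a stand-in
# for Databricks' EXPLAIN / df.explain(). Each plan row's 4th column is the
# human-readable step description.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, month_date TEXT)")

# Window-function version: the plan touches t only once.
window_plan = [row[3] for row in conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT month_date,
           LEAD(month_date) OVER (PARTITION BY id ORDER BY month_date),
           LAG(month_date)  OVER (PARTITION BY id ORDER BY month_date)
    FROM t
""")]

# Self-join version: the plan contains a separate access for t2 and t3.
join_plan = [row[3] for row in conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT t.month_date, t2.month_date, t3.month_date
    FROM t
    LEFT JOIN t t2 ON t.id = t2.id
    LEFT JOIN t t3 ON t.id = t3.id
""")]

print(window_plan)
print(join_plan)
```

The exact step wording varies by SQLite version, but the extra per-alias table accesses in the join plan are the analogue of the extra shuffle/join stages you would see in the Spark plan.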
03-21-2023 04:27 AM
Hi @Matthew Elsham, in this case I would expect the window functions to do better, because the second query performs two joins.
That said, join performance depends heavily on the join strategy and the volume of data being joined. Either way, the window functions give you much cleaner code to work with.

03-29-2023 10:48 PM
Hi @Matthew Elsham
Thank you for posting your question in our community! We are happy to assist you.
To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?
This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance!

