Dynamically specify pivot column in SQL
12-13-2023 11:50 PM
Hello everyone!
I am looking for a way to dynamically specify pivot columns in a SQL query, so it can be used in a view. However, we don't want to hard-code the values that need to become columns; we'd rather extract them from another table.
I've seen other posts on the internet asking the same question, but without any luck 😕
Here's what we have currently:
SELECT *
FROM table
PIVOT (
  SUM(my_value)
  FOR mapping IN ('col1', 'col2', ...)
)
Instead, we want to use a table `table_mapping` containing all the values that should go in the list. Is there any way to do that in SQL?
Thank you!
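Since SQL's `PIVOT` clause requires a literal `IN` list, one common workaround is to read the distinct values from the mapping table first and build the pivot query as a string before executing it. Here is a minimal sketch in Python; the mapping values are shown as a plain list (in practice they would be fetched first, e.g. from `table_mapping`), and the table and column names are placeholders:

```python
# Sketch: build a PIVOT query whose IN list comes from data rather than
# being hard-coded. The `values` list stands in for the result of reading
# the distinct mapping values from table_mapping.

def build_pivot_sql(table: str, value_col: str, pivot_col: str, values: list) -> str:
    """Return a PIVOT query with the given values as pivot columns."""
    quoted = ", ".join(f"'{v}'" for v in values)
    return (
        f"SELECT * FROM {table} "
        f"PIVOT (SUM({value_col}) FOR {pivot_col} IN ({quoted}))"
    )

# Example with placeholder mapping values:
sql = build_pivot_sql("my_table", "my_value", "mapping", ["col1", "col2"])
print(sql)
```

Note that this still cannot power a truly dynamic view: a view's column set is fixed when it is created, so the generated statement would have to be re-run (recreating the view) whenever the mapping table changes.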
12-03-2024 06:20 AM
I'm facing the same issue as you, and apart from the PySpark way (specifying a `.groupBy(cols, ...).pivot(col)`) and putting it in a view to query from SQL, I didn't find a convenient Spark SQL way of doing it.
Is there any plan to add a feature to get dynamic columns in SQL for Pivot on the Databricks side?
01-31-2025 09:46 AM
Did you find an answer? Can you share your thoughts on meeting this use case?

