Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.

Dynamic populated multiple choice query parameters - Documentation clarification

luksquaresma
New Contributor

Dear colleagues,

I need to finish a dashboard built on the new Databricks dashboard framework released fairly recently. It will include some calculations that must be percentages over certain partitions, such as `input_amount_percentage_per_input_key_type`.

I've been having a lot of trouble making the following query work:

select
    conciliation_status,
    input_key_type,
    input_key,
    acquirer,
    input_date,
    first_effect_date,
    sum(input_amount) as input_amount,
    sum(1) as event_amount,
    sum(input_amount) / sum(sum(input_amount)) over (partition by input_key_type) as input_amount_percentage_per_input_key_type,
    count(*) / sum(count(*)) over (partition by input_key_type) as event_amount_percentage_per_input_key_type
from (
    select 
        * except(conciliation_id, conciliation_metadata, input_id, effect_date),
        effect_date[0] as first_effect_date,
        regexp_extract(input_key, '^[^.]+', 0) as input_key_type
    from acquirer_conciliation.final_edge
    where input_date >= :p_input_date.min and input_date <= :p_input_date.max
)
group by conciliation_status, input_date, first_effect_date, input_key_type, input_key, acquirer
order by all
 

My problem is that my dashboard needs to filter data on columns such as `acquirer` and `input_key_type`. However, when I use these columns as field filters, they do not recalculate the aggregation; they only filter the already-calculated table, so the total of `input_amount_percentage_per_input_key_type` is obviously less than 100%.
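To make the symptom concrete, here is a toy sketch with made-up data (the values and the `pix` key type are hypothetical): the percentage is computed over the full partition first, and a dashboard field filter only trims rows afterwards:

```sql
-- Percentages are computed over the whole partition first...
WITH agg AS (
  SELECT acquirer, input_key_type,
         SUM(input_amount) AS input_amount,
         SUM(input_amount) / SUM(SUM(input_amount)) OVER (PARTITION BY input_key_type)
           AS pct_per_type
  FROM VALUES ('A', 'pix', 40.0), ('B', 'pix', 60.0)
    AS t(acquirer, input_key_type, input_amount)
  GROUP BY acquirer, input_key_type
)
-- ...and a dashboard field filter on acquirer behaves like this outer WHERE:
SELECT * FROM agg WHERE acquirer = 'A';
-- The surviving row still carries pct_per_type = 0.4, so the filtered
-- percentages sum to 40%, not 100%.
```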

Hence, my current understanding is that I must apply the filter on `acquirer` via a parameter (either in the subquery or in the outer query), but then the problem arises: I could not, for the life of me, create a parameter whose default values are all distinct possible `acquirer` values (such as the result of `select distinct acquirer from acquirer_conciliation.final_edge`). The only thing I was able to do was add these values as a static list, which does not solve my problem.

However, the documentation seems to imply that this should be possible. So I need some clarification on what that doc snippet actually means, and whether it is in fact possible to populate the possible values of `acquirer` dynamically.

I tried everything, from changing where the calculation happens (in the query and in the dashboard) to using Custom Calculations, with no luck at all.

1 REPLY 1

mark_ott
Databricks Employee

It is possible to create dynamic parameter lists in Databricks dashboards, so yes, the documentation is correct. The crucial detail is that not all data sources or query environments support this feature in exactly the same way, and certain limitations or configurations may prevent it from working out of the box.

Databricks Dynamic Parameter Lists Clarification

According to the documentation, dynamic parameters can use query results to populate their value list, for example, using a query like SELECT DISTINCT acquirer FROM .... When configured correctly, this enables dashboards to always reflect the current set of dimension values such as acquirer or input_key_type.

However, dynamic parameter population depends on the dashboard type and the data connection:

  • For SQL Warehouses (Databricks SQL), this is a supported feature in the Databricks SQL interface.

  • In the new dashboards (Databricks Workspace "Dashboard" UI, not "Legacy"), parameter values for dropdowns or multi-selects can be populated from query results.

  • Sometimes, the UI may only give you an option for a static list if the widget or data source is not set up optimally, or if you are operating outside Databricks SQL (e.g., in notebooks).

How It Works (and Why It Might Not)

  • In Databricks SQL dashboards: You can add a parameter by clicking "Add Parameter" and then selecting "Dynamic", providing your SQL code (like SELECT DISTINCT acquirer FROM ...).

  • Parameter values: are refreshed every time the dashboard reloads, pulling the current list of distinct values.

Main reasons it may not work:

  • Permissions: The user must have permission to run the query supplying parameter values.

  • Data Source Type: The feature is for Databricks SQL Warehouses, not always for clusters or in the notebook UI.

  • Old Widgets: If using classic widgets in notebook or legacy Dashboards, dynamic parameters are limited or unavailable.

  • Version: Recent features may not be available if the workspace is behind on updates.

Action Steps to Enable Dynamic Parameter Lists

  1. Create a new parameter in your Dashboard (not notebook) in the Databricks SQL UI.

  2. Choose Dynamic (or similar wording depending on UI).

  3. Input a query like:

    SELECT DISTINCT acquirer FROM acquirer_conciliation.final_edge
  4. Use the parameter in your underlying data query, e.g. WHERE acquirer IN (:acquirer) with the new dashboards' named-parameter syntax (matching your :p_input_date usage); legacy SQL editor queries use the WHERE acquirer IN ({{acquirer}}) mustache syntax instead.

  5. If it only lets you add static values, check:

    • Are you in Databricks SQL, not in the notebook UI?

    • Is your workspace updated and does your SQL Warehouse support this?

    • Do you have query permissions on the supplying table?

Calculation Correction for Partitions and Filtering

For your percentage measures to remain correct under dashboard filtering, you must:

  • Apply the filters in the subquery (WHERE clause) that determines your denominator.

  • Use parameters in the filter, not just straight filters in the dashboard.

For example, your query should reference the parameter directly:

select
    conciliation_status,
    input_key_type,
    input_key,
    acquirer,
    input_date,
    first_effect_date,
    sum(input_amount) as input_amount,
    sum(1) as event_amount,
    sum(input_amount) / sum(sum(input_amount)) over (partition by input_key_type) as input_amount_percentage_per_input_key_type,
    count(*) / sum(count(*)) over (partition by input_key_type) as event_amount_percentage_per_input_key_type
from (
    select
        * except(conciliation_id, conciliation_metadata, input_id, effect_date),
        effect_date[0] as first_effect_date,
        regexp_extract(input_key, '^[^.]+', 0) as input_key_type
    from acquirer_conciliation.final_edge
    where input_date >= :p_input_date.min
      and input_date <= :p_input_date.max
      and acquirer in ({{acquirer}})
)
group by conciliation_status, input_date, first_effect_date, input_key_type, input_key, acquirer

This way, whenever the parameter changes (even if filtered from the dashboard UI), the denominator always represents the filtered set, so the percentages sum up to 100% per partition.
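The effect can be sketched with toy data (hypothetical values; the literal 'A' stands in for whatever the parameter resolves to):

```sql
-- Filtering *inside* the subquery means the window denominator is
-- recomputed over only the selected rows.
SELECT acquirer, input_key_type,
       SUM(input_amount) / SUM(SUM(input_amount)) OVER (PARTITION BY input_key_type)
         AS pct_per_type
FROM (
  SELECT * FROM VALUES ('A', 'pix', 40.0), ('B', 'pix', 60.0)
    AS t(acquirer, input_key_type, input_amount)
  WHERE acquirer IN ('A')   -- stands in for IN (:acquirer) / IN ({{acquirer}})
)
GROUP BY acquirer, input_key_type;
-- Only acquirer A survives, so its pct_per_type is 1.0: the filtered
-- percentages again sum to 100% per partition.
```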

Official Docs

The feature is described in the Databricks documentation for dashboard parameters (link below). If the feature does not work for you, you may be using an unsupported environment or an outdated workspace.

https://docs.databricks.com/aws/en/dashboards/parameters#create-a-dynamic-parameter-list


Summary Table: When Dynamic Parameters Work

Environment                  Dynamic Dropdown Supported   Typical Setup Needed
Databricks SQL Dashboard     Yes                          Must use a Databricks SQL Warehouse
Classic Notebook Widgets     No                           Only static/default lists
Legacy Dashboard             Sometimes                    May need static values

If you set up as above and still cannot see the option for dynamic parameter lists, confirm with your Databricks Workspace admin or support whether the feature is enabled or your environment is up-to-date.