Issue with Pyspark GroupBy GroupedData

Harun
Honored Contributor

Hi Guys,

I am working on streaming data movement from bronze to silver. My bronze table has an entity_name column, and based on that column I need to create multiple silver tables.

I tried the approach below, but it fails with the error "'GroupedData' object has no attribute 'get_group'".

Sample code snippet:

grouped_df = bronze_df.groupBy("entity_name")

entity_names = [row.entity_name for row in grouped_df.agg({"entity_name": "first"}).collect()]

for entity_name in entity_names:
    entity_df = grouped_df.get_group(entity_name)  # raises: GroupedData has no get_group

I think a where/filter clause could do the job, but efficiency-wise I don't think it would be a good solution. Is there another approach?

TIA.

2 REPLIES

Anonymous
Not applicable

@Harun Raseed Basheer:

The issue is that the groupBy operation returns a GroupedData object, which has no get_group method (get_group belongs to pandas' GroupBy API, not Spark's). Instead, you can collect the distinct entity names, filter bronze_df once per name, and write each resulting DataFrame to its own silver table.

Here's an example of how you can modify your code to achieve this:

from pyspark.sql.functions import col
 
# Collect the distinct entity names from the bronze table
entity_names = [row.entity_name for row in bronze_df.select("entity_name").distinct().collect()]
 
# Filter the bronze DataFrame for each entity name and write to a separate silver table
for entity_name in entity_names:
    entity_df = bronze_df.filter(col("entity_name") == entity_name)
    entity_df.write.format("delta").mode("append").save(f"/mnt/silver/{entity_name}")
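One caveat: the snippet above assumes bronze_df is a static DataFrame; collect and a batch write are not allowed directly on a streaming DataFrame. Since the original pipeline is streaming, one way to apply the same filter-and-write pattern is inside foreachBatch, where each micro-batch arrives as a regular DataFrame. Here is a minimal sketch, assuming the bronze data is a Delta table at /mnt/bronze and using an illustrative checkpoint path:

from pyspark.sql.functions import col

# Read the bronze table as a stream (the path is an assumption for illustration)
bronze_stream = spark.readStream.format("delta").load("/mnt/bronze")

def write_entities(batch_df, batch_id):
    # Inside foreachBatch, batch_df is a plain DataFrame, so batch operations work
    names = [row.entity_name for row in batch_df.select("entity_name").distinct().collect()]
    for name in names:
        (batch_df.filter(col("entity_name") == name)
                 .write.format("delta")
                 .mode("append")
                 .save(f"/mnt/silver/{name}"))

(bronze_stream.writeStream
    .foreachBatch(write_entities)
    .option("checkpointLocation", "/mnt/checkpoints/silver_fanout")  # illustrative path
    .start())

If separate tables are not a hard requirement, a single silver table written with .partitionBy("entity_name") avoids the per-entity loop entirely.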

Anonymous
Not applicable

Hi @Harun Raseed Basheer

Thank you for posting your question in our community! We are happy to assist you.

To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance! 
