Trying to use pivot function with pyspark for count aggregate
11-09-2023 12:46 PM
1 REPLY
11-10-2023 09:08 AM
Try this:
from pyspark.sql import functions as F
testDF = (eventsDF
.groupBy("user_id")
.pivot("event_name")
.agg(F.count("event_name")))

