Transforming a pyspark.sql.Column object outside the DataFrame.withColumn method
05-31-2024 07:08 AM - edited 05-31-2024 07:12 AM
Hello,
I applied some transformations to a pyspark.sql.Column object:
file_path_splitted = f.split(df[filepath_col_name], '/')  # returns a Column object
file_name = file_path_splitted[f.size(file_path_splitted) - 1]  # returns a Column object
Next, I used the variable `file_name` in the DataFrame.withColumn method:
df_with_file_name = df.withColumn(
    'is_long_file_name',
    f.when(f.length(file_name) == 100, 'Yes').otherwise('No')
)
My question is: is there any risk that transforming a pyspark.sql.Column outside of the `withColumn` method could mismatch rows between the Column and the DataFrame? I mean a situation where the rows in the Column object end up sorted in a different order than in the result DataFrame, so that the new column's values are attached to the wrong rows.
05-31-2024 09:18 AM
Hello @Marcin_U ,
Thank you for reaching out. Whether you build the transformation inside or outside the `withColumn` call, it ultimately produces the same Spark plan. A `Column` is not a container of row data; it is a lazy expression that is only resolved and evaluated against the DataFrame it is applied to.
The answer is no: rows cannot mismatch as long as the Column expression refers to columns of the same DataFrame.
Raphael Balogo
Sr. Technical Solutions Engineer
Databricks

