Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Making transform on pyspark.sql.Column object outside DataFrame.withColumn method

Marcin_U
New Contributor II

Hello,

I made some transformations on a pyspark.sql.Column object:

 

from pyspark.sql import functions as f

file_path_splitted = f.split(df[filepath_col_name], '/')  # returns a Column object
file_name = file_path_splitted[f.size(file_path_splitted) - 1]  # returns a Column object

 

Next I used the variable "file_name" in the DataFrame.withColumn method:

 

df_with_file_name = df.withColumn(
    'is_long_file_name',
    f.when(f.length(file_name) == 100, 'Yes').otherwise('No')
)

 

My question is:

Is there any risk that transforming a pyspark.sql.Column outside of the withColumn method could mismatch rows between the Column and the DataFrame? I mean the situation where the rows behind the Column object end up sorted in a different order than the rows in the result DataFrame, so the new column is mismatched with the original rows.

1 REPLY

raphaelblg
Databricks Employee

Hello @Marcin_U ,

Thank you for reaching out. Whether you build the transformation inside or outside the `withColumn` call, it ultimately results in the same Spark plan.

The answer is no: rows cannot be mismatched as long as you're referring to a column of the same DataFrame. A Column is a lazy expression, not a container of values; it holds no data or row order of its own and is only evaluated against the DataFrame it is applied to.

 

Best regards,

Raphael Balogo
Sr. Technical Solutions Engineer
Databricks
