Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Making transform on pyspark.sql.Column object outside DataFrame.withColumn method

Marcin_U
New Contributor II

Hello,

I applied some transformations to a pyspark.sql.Column object:

from pyspark.sql import functions as f

file_path_splitted = f.split(df[filepath_col_name], '/')  # returns a Column object
file_name = file_path_splitted[f.size(file_path_splitted) - 1]  # returns a Column object

 

Next, I used the variable "file_name" in the DataFrame.withColumn method:

 

df_with_file_name = df.withColumn(
    'is_long_file_name',
    f.when(f.length(file_name) == 100, 'Yes').otherwise('No')
)

 

My question is:

Is there any risk that applying transformations to a pyspark.sql.Column outside of the withColumn method can mismatch rows between the Column and the DataFrame? I mean the situation where the rows represented by the Column object end up in a different order than the rows of the resulting DataFrame, so the values in the new column are mismatched.

1 REPLY

raphaelblg
Contributor III

Hello @Marcin_U ,

Thank you for reaching out. Whether you build the transformation inside or outside the `withColumn` method, it ultimately results in the same Spark plan: a Column is an unevaluated expression bound to its DataFrame, not a separate, independently ordered set of rows.

So the answer is no: a row mismatch is not possible as long as you're referring to the same column of the same DataFrame.

 

Best regards,

Raphael Balogo
Sr. Technical Solutions Engineer
Databricks