Data Engineering

Difference between Databricks and local pyspark split.

Merchiv
New Contributor III

I have noticed some inconsistent behavior between calling the 'split' function on Databricks and on my local installation.

Running it in a databricks notebook gives

spark.sql("SELECT split('abc', ''), size(split('abc',''))").show()

The output shows [a, b, c] and a size of 3, so the string is split into 3 parts.

If I run on my local install I get 4 parts.

Locally I'm running PySpark 3.2.1; on Databricks I've tried it with multiple versions, but they all give the same result.

4 REPLIES

JAHNAVI
New Contributor III

Hi,

In Spark 3.0 and later versions, the default behavior of the split() function with an empty delimiter is to keep a trailing empty string at the end of the resulting array, which is why it shows 4 elements.
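Spark's split(str, regex, limit) delegates to Java's Pattern.split, where a negative limit (the default) keeps trailing empty strings and limit = 0 drops them. The snippet below is a rough Python model of those semantics, not Spark source code; the function name split_model is made up for illustration, and it needs Python 3.7+ (where re.split accepts a zero-width pattern).

```python
import re

def split_model(s, pattern, limit=-1):
    """Rough model (an assumption, not Spark source) of Spark SQL's
    split(str, regex, limit), which delegates to Java's Pattern.split:
      * a zero-width pattern splits between every character,
      * no leading empty string (Java 8+ behavior),
      * limit < 0 keeps trailing empty strings, limit == 0 drops them.
    """
    parts = re.split(pattern, s)
    # Python's re.split keeps a leading empty string for a zero-width
    # match at position 0; Java 8+ does not, so drop it in that case.
    if parts and parts[0] == "" and re.match(pattern, "") is not None:
        parts = parts[1:]
    if limit == 0:
        # Java drops trailing empty strings when limit == 0.
        while parts and parts[-1] == "":
            parts.pop()
    return parts

print(split_model("abc", ""))           # default limit keeps the trailing empty string
print(split_model("abc", "", limit=0))  # limit 0 drops it
```

Under this model, the 4-element result (['a', 'b', 'c', '']) corresponds to the default limit of -1, and the 3-element result to trailing empty strings being dropped.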

Merchiv
New Contributor III

Hi,

My Databricks cluster runs Spark 3.3, but it gives a length of 3. Is there something different about the Databricks implementation of PySpark, or should it follow the same standards?

Anonymous
Not applicable

@Ivo Merchiers:

The behavior you are seeing is likely due to differences in the underlying version of Apache Spark between your local installation and Databricks.

split() is a function provided by Spark's SQL functions, and different versions of Spark may have differences in their implementation of these functions. You mentioned that you are using PySpark version 3.2.1 locally. To confirm which version of Spark is being used, you can run the following command in your PySpark shell:

import pyspark
print(pyspark.__version__)

You can then check the corresponding Spark SQL functions documentation for the split() function's behavior. On Databricks, you can check the version of Spark being used by running the command:

spark.version

If you are seeing different results for split() between your local installation and Databricks, you may need to adjust your code to handle the differences in behavior or use the same version of Spark across both environments.
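One way to adjust the code, as a sketch: drop empty strings from the array that split() returns, so a trailing empty element no longer changes the result or its size. The helper name drop_empty below is made up for illustration, and the two hard-coded lists simply restate the 3- and 4-element results reported in this thread.

```python
def drop_empty(parts):
    """Normalize a split() result by removing empty strings."""
    return [p for p in parts if p != ""]

# The two results the thread reports for split('abc', ''):
databricks_result = ["a", "b", "c"]     # size 3
local_result = ["a", "b", "c", ""]      # size 4

# After normalization both environments agree.
print(drop_empty(databricks_result) == drop_empty(local_result))
```

In Spark SQL itself, the analogous normalization would be a higher-order function such as filter(split('abc', ''), x -> x != ''), which has been available since Spark 2.4.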

Merchiv
New Contributor III

Thank you for the suggestion, but even with the same Spark version there seems to be a difference between what happens locally and what happens on a Databricks cluster.
