Difference between Databricks and local PySpark split

Merchiv
New Contributor III

I have noticed some inconsistent behavior between calling the split() function on Databricks and on my local installation.

Running it in a Databricks notebook gives:

spark.sql("SELECT split('abc', ''), size(split('abc',''))").show()

So the string is split into 3 parts.

If I run it on my local install, I get 4 parts.

Locally I'm running PySpark 3.2.1; on Databricks I've tried multiple runtime versions, but they all give the same result.
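
For reference, the same check through the DataFrame API (a sketch; it only assumes the spark session that both the notebook and a local PySpark shell already provide):

import pyspark.sql.functions as F

# Build the same expression with the DataFrame API and compare the output
# locally vs. on Databricks.
df = spark.range(1).select(
    F.split(F.lit("abc"), "").alias("parts"),
    F.size(F.split(F.lit("abc"), "")).alias("n_parts"),
)
df.show(truncate=False)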

4 REPLIES

JAHNAVI
Databricks Employee

Hi,

In Spark 3.0 and later versions, split() with an empty delimiter includes an extra empty string in the resulting array, which is why it shows 4.
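
For reference, displaying the array itself makes the extra element visible (a sketch; spark is the session available in the notebook or shell, and the sizes in the comments are the ones reported in this thread, not independently verified):

# Show the raw array next to its size.
# Local open-source PySpark 3.x reportedly returns size 4 (extra empty string);
# the Databricks notebook above returns size 3.
spark.sql("SELECT split('abc', '') AS parts, size(split('abc', '')) AS n_parts") \
    .show(truncate=False)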

Jahnavi N

Merchiv
New Contributor III

Hi,

My Databricks cluster runs Spark 3.3, but it still gives a length of 3. Is there something different about the Databricks implementation of PySpark, or should it follow the same standard?

Anonymous
Not applicable

@Ivo Merchiers:

The behavior you are seeing is likely due to differences in the underlying version of Apache Spark between your local installation and Databricks.

split() is one of Spark's SQL functions, and different versions of Spark can differ in how they implement it. You mentioned that you are using PySpark 3.2.1 locally. To confirm which version of Spark is being used, you can run the following command in your PySpark shell:

# Print the PySpark version installed locally
import pyspark
print(pyspark.__version__)

You can then check the split() behavior in the SQL functions documentation for that Spark version. On Databricks, you can check the Spark version being used by running the command:

spark.version

If you are seeing different results for split() between your local installation and Databricks, you may need to adjust your code to handle the differences in behavior or use the same version of Spark across both environments.
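
For example, one way to get a consistent result in both environments is to strip empty elements from the split output explicitly (a sketch using array_remove, available since Spark 2.4; the column names are just for illustration):

import pyspark.sql.functions as F

# Remove any empty strings produced by split with an empty delimiter, so the
# size is the same whether or not the runtime keeps the extra empty element.
df = spark.sql("SELECT split('abc', '') AS parts")
df = df.withColumn("parts_clean", F.array_remove(F.col("parts"), ""))
df.select("parts", "parts_clean", F.size("parts_clean").alias("n_parts")).show(truncate=False)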

Merchiv
New Contributor III

Thank you for the suggestion, but even with the same Spark version there seems to be a difference between what happens locally and what happens on a Databricks cluster.
