04-14-2023 12:13 AM
I have noticed some inconsistent behavior between calling the 'split' function on Databricks and on my local installation.
Running it in a Databricks notebook gives:
spark.sql("SELECT split('abc', ''), size(split('abc',''))").show()
So the string is split into 3 parts. If I run it on my local install, I get 4 parts.
Locally I'm running PySpark 3.2.1; on Databricks I've tried it with multiple runtime versions, but they all give the same result.
04-14-2023 12:22 AM
Hi,
In Spark 3.0 and later versions, split() with an empty delimiter and the default limit keeps a trailing empty string in the resulting array, which is why you are seeing 4 elements.
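To make the empty-delimiter behavior concrete, here is a small illustration using Python's re module (not Spark itself, just the same general class of regex-splitting semantics, where empty-pattern matches produce empty-string elements whose handling depends on the engine's defaults):

```python
import re

# Splitting 'abc' on an empty regex in Python keeps BOTH a leading and a
# trailing empty string (Python 3.7+ behavior):
parts = re.split(r'', 'abc')
print(parts)  # ['', 'a', 'b', 'c', '']

# Spark's split() delegates to Java's regex engine instead; with Spark's
# default limit of -1 only the trailing empty string is kept, giving 4
# elements for split('abc', ''), while a limit of 0 drops trailing empty
# strings and gives 3.
```

The point is that "how many parts does an empty-pattern split produce" is an engine- and default-dependent choice, which is why two Spark builds can disagree.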
04-16-2023 12:26 AM
@Ivo Merchiers:
The behavior you are seeing is likely due to differences in the underlying version of Apache Spark between your local installation and Databricks.
split() is a function provided by Spark's SQL functions, and different versions of Spark may have differences in their implementation of these functions. You mentioned that you are using PySpark version 3.2.1 locally. To confirm which version of Spark is being used, you can run the following command in your PySpark shell:
import pyspark
print(pyspark.__version__)
You can then check the documented behavior of split() in the SQL functions reference for that Spark version. On Databricks, you can check the Spark version being used by running:
spark.version
If you are seeing different results for split() between your local installation and Databricks, you may need to adjust your code to handle the difference in behavior or use the same Spark version in both environments.
04-17-2023 01:36 AM
Thank you for the suggestion, but even with the same Spark version there still seems to be a difference between what happens locally and what happens on a Databricks cluster.