05-29-2024 01:06 PM
I noticed that on some Databricks 14.3 clusters, I get DataFrames with type pyspark.sql.connect.dataframe.DataFrame, while on other clusters also running Databricks 14.3, the exact same code gets DataFrames of type pyspark.sql.DataFrame.
pyspark.sql.connect.dataframe.DataFrame seems to be causing various issues.
For example:
- Code that checks for isinstance(df, DataFrame) does not recognize df to be a DataFrame, even though pyspark.sql.connect.dataframe.DataFrame inherits from pyspark.sql.DataFrame
- I get this error with pyspark.sql.connect.dataframe.DataFrame and a third-party library (Great Expectations), but not with pyspark.sql.DataFrame: [CANNOT_RESOLVE_DATAFRAME_COLUMN] Cannot resolve dataframe column "<column name>". It's probably because of illegal references like `df1.select(df2.col("a"))`. SQLSTATE: 42704
To help investigate, I would like to know:
- What is the difference between pyspark.sql.connect.dataframe.DataFrame and pyspark.sql.DataFrame?
- What determines if I will get one type of DataFrame or the other?
- Does pyspark.sql.connect.dataframe.DataFrame have limitations that would make the issues I'm seeing expected?
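To illustrate the first issue: on the clusters that return the connect class, a minimal check like the one below fails (a sketch, assuming `spark` is the session Databricks provides in the notebook; the commented results reflect the behavior described here, not verbatim output).

from pyspark.sql import DataFrame

df = spark.range(3)
print(type(df))
# <class 'pyspark.sql.connect.dataframe.DataFrame'> on the affected clusters

print(isinstance(df, DataFrame))
# False on those clusters, True on the others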
05-31-2024 07:13 AM
Hi @Retired_mod,
That's incorrect. I use exactly the same code and either get a pyspark.sql.dataframe.DataFrame or a pyspark.sql.connect.dataframe.DataFrame, depending on the cluster. It doesn't matter if I create the DataFrame using spark.read.table, spark.sql, or even spark.createDataFrame for in-memory data; what determines the class I get is the cluster configuration.
This screenshot illustrates what I mean. I ran the same notebook on two different clusters and got a different DataFrame type depending on the cluster. The only difference I can see between the two clusters is that one is a single-user cluster and the other is a shared (multi-user) cluster. Both clusters use Databricks 14.3.
So the choice of class is an internal implementation decision by Databricks, and the question is what leads Databricks to pick one class or the other and, considering that they don't appear to be 100% interchangeable, what the limitations are.
Also note that both classes have methods like select, filter, groupBy, cache, and persist that can be used the same way. Both can also be used to run SQL queries or directly read a table without using a query.
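For example, on a given cluster all of the following produce the same class, whichever one that happens to be (a sketch; the table name is hypothetical):

df1 = spark.read.table("my_catalog.my_schema.my_table")  # hypothetical table name
df2 = spark.sql("SELECT 1 AS a")
df3 = spark.createDataFrame([(1,), (2,)], ["a"])

# The common transformations behave the same on either class:
df2.select("a").filter("a = 1").groupBy("a").count().cache()

print(type(df1) is type(df2) is type(df3))  # True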
08-28-2024 12:54 PM
What makes the difference is whether the cluster is using Spark Connect or not.
Shared clusters use Spark Connect, so even the spark session is of a different type:
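Something like this (a sketch of what to expect on a DBR 14.3 shared cluster, not a verbatim capture):

print(type(spark))
# <class 'pyspark.sql.connect.session.SparkSession'>

print(type(spark.range(1)))
# <class 'pyspark.sql.connect.dataframe.DataFrame'>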
To compare, on a single-user cluster:
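Here the classic classes should appear (again a sketch, not verbatim output):

print(type(spark))
# <class 'pyspark.sql.session.SparkSession'>

print(type(spark.range(1)))
# <class 'pyspark.sql.dataframe.DataFrame'>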
I also tested that you can disable Spark Connect on the cluster by setting spark.databricks.service.server.enabled to false, but in that case everything stops working.
08-06-2024 11:35 PM
@Retired_mod @ckarrasexo Any updates on this? I'm facing the same issue.
08-27-2024 03:37 PM
@Retired_mod I am also running into this issue, also with Great Expectations as it happens. I have also tried using the read parquet approach like you suggested and am still getting the problematic format. Is it possible to direct Databricks to create one type, or to convert or cast between them?
08-27-2024 03:59 PM
Additional info: in Databricks 13.3, the spark variable we're provided is of type pyspark.sql.SparkSession. In 15.4 it is created as pyspark.sql.connect.session.SparkSession (both on shared clusters; it may behave differently for a single-node configuration).
09-17-2024 01:24 AM - edited 09-17-2024 01:37 AM
Hitting the same problems trying to check the type of variables to pick out DataFrames.
Ended up getting around this (temporarily at least) by importing the following instead:
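Something along these lines (a sketch of the idea, not the original snippet; checking against both classes covers either kind of cluster, and the fallback handles environments where the connect module isn't importable):

from pyspark.sql import DataFrame as ClassicDataFrame

try:
    # The class the runtime actually returns on Spark Connect clusters
    from pyspark.sql.connect.dataframe import DataFrame as ConnectDataFrame
    SPARK_DF_TYPES = (ClassicDataFrame, ConnectDataFrame)
except ImportError:
    # pyspark.sql.connect needs extra dependencies and may not be importable
    SPARK_DF_TYPES = (ClassicDataFrame,)

df = spark.range(1)  # `spark` is the notebook-provided session
print(isinstance(df, SPARK_DF_TYPES))  # True on either kind of cluster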
10-08-2024 03:10 AM
@ckarrasexo wrote: I noticed that on some Databricks 14.3 clusters, I get DataFrames with type pyspark.sql.connect.dataframe.DataFrame, while on other clusters also running Databricks 14.3, the exact same code gets DataFrames of type pyspark.sql.DataFrame.
pyspark.sql.connect.dataframe.DataFrame seems to be causing various issues.
For example:
- Code that checks for isinstance(df, DataFrame) does not recognize df to be a DataFrame, even though pyspark.sql.connect.dataframe.DataFrame inherits from pyspark.sql.DataFrame
- I get this error with pyspark.sql.connect.dataframe.DataFrame and a third-party library (Great Expectations), but not with pyspark.sql.DataFrame: [CANNOT_RESOLVE_DATAFRAME_COLUMN] Cannot resolve dataframe column "<column name>". It's probably because of illegal references like `df1.select(df2.col("a"))`. SQLSTATE: 42704
To help investigate, I would like to know:
- What is the difference between pyspark.sql.connect.dataframe.DataFrame and pyspark.sql.DataFrame?
- What determines if I will get one type of DataFrame or the other?
- Does pyspark.sql.connect.dataframe.DataFrame have limitations that would make the issues I'm seeing expected?
I'm seeing inconsistencies in Databricks 14.3 where some clusters return DataFrames as pyspark.sql.connect.dataframe.DataFrame, while others return pyspark.sql.DataFrame. This affects type checking with isinstance(df, DataFrame), and I'm facing errors with Great Expectations, specifically "CANNOT_RESOLVE_DATAFRAME_COLUMN". Has anyone else dealt with this issue, and what solutions did you find?
2 weeks ago
+1, encountered this for the first time today, over a year after it was first posted. I had a piece of code that was checking whether dataframes were instances of pyspark.sql.DataFrame, and it suddenly stopped working today because my dataframes are now pyspark.sql.connect.dataframe.DataFrame.
2 weeks ago
I have found a workaround for this issue. Basically, I create a dummy_df and then check whether the DataFrame I want to test has the same type as the dummy_df.
from pyspark.sql import DataFrame, SparkSession


def get_dummy_df() -> DataFrame:
    """
    Generates a dummy DataFrame with a range of integers.

    This method creates a DataFrame containing integers starting from 0 up to
    (but not including) 2, using the current Spark session.

    Returns:
        DataFrame: A Spark DataFrame containing a single column with the values [0, 1].
    """
    spark_session = SparkSession.builder.appName("dummy_df").getOrCreate()
    return spark_session.range(0, 2)


def is_spark_df(df_to_check: DataFrame) -> bool:
    """
    Checks if the provided object is a Spark DataFrame.

    This function compares the type of the provided DataFrame with a dummy
    DataFrame created using the `get_dummy_df()` function. This is necessary
    because in Databricks, depending on the cluster configuration, the
    DataFrame type can vary. If you import `pyspark.sql.dataframe`, your type
    check may fail because Databricks can provide
    `pyspark.sql.connect.dataframe`.

    Parameters:
        df_to_check (DataFrame): The DataFrame instance to check.

    Returns:
        bool: True if the object is a Spark DataFrame, False otherwise.

    For more information on this issue, please see:
    https://community.databricks.com/t5/data-engineering/pyspark-sql-connect-dataframe-dataframe-vs-pyspark-sql-dataframe/td-p/71055
    """
    return type(df_to_check) == type(get_dummy_df())
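A quick sanity check (assuming the notebook-provided `spark` session):

df = spark.range(5)
print(is_spark_df(df))           # True, whichever DataFrame class the cluster uses
print(is_spark_df([1, 2, 3]))    # False: a plain list is not a DataFrame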
Regards,
Gleydson C.