Hello
I have a table in my catalog, and I want to load it as a pandas or Spark DataFrame.
I was using the code below to do that, but something has changed recently and it no longer works:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("test").getOrCreate()
source_table = "my_path.test"
df_spark = spark.read.table(source_table)
Now, when I run it, I get this error:
Unsupported cell during execution. SQL warehouses only support executing SQL cells.
If I query the table using SQL instead, how can I store the result as a DataFrame?
I tried the following line, but I get the same error:
df_spark = spark.sql("SELECT * FROM my_path.test")