Data Engineering

Issue with quotes in struct type columns when using ODBC

dcrezee
New Contributor III

I'm trying to connect to Databricks using pyodbc and I'm running into an issue with struct columns.

As far as I understand, struct and array columns are not supported natively by pyodbc, so they are converted to JSON strings. However, when a nested column contains a double quote ("), the JSON that is returned is invalid, i.e. the quote is not escaped.

Dataset I used:

from pyspark.sql import Row

# A single row whose nested struct value contains a double quote.
data = [
  Row(nested=Row(nested_column="nested with a \" character")),
]
df = spark.sparkContext.parallelize(data).toDF()

(
  df.write
  .format("delta")
  .mode("overwrite")
  .option("path", "/delta/quotes_issue/")
  .saveAsTable("default.quotes_issue")
)

Querying the data afterwards through pyodbc (following the docs here):

import pyodbc

conn = pyodbc.connect("DSN=Databricks_Cluster", autocommit=True)

cursor = conn.cursor()
cursor.execute("SELECT * FROM default.quotes_issue")
rows = cursor.fetchall()
for r in rows:
    print(r)

This gives me the following invalid JSON output.

'{"nested_column":"nested with a " character"}'

I've tried this with both the Databricks compute cluster endpoint and the newer SQL endpoint. I also tried the Databricks SQL Connector for Python, but they all have the same problem.

Is there a way to configure Databricks or the Simba driver to escape quotes correctly?
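For reference, this is roughly how I query through the Databricks SQL Connector for Python (the hostname, HTTP path, and access token below are placeholders):

from databricks import sql  # pip install databricks-sql-connector

# Placeholder connection details; substitute your workspace's values.
with sql.connect(
    server_hostname="<workspace-hostname>",
    http_path="<endpoint-http-path>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM default.quotes_issue")
        for row in cursor.fetchall():
            print(row)  # the struct column comes back with the same unescaped quote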


7 REPLIES

dcrezee
New Contributor III
(Accepted solution)

Hi @Kaniz Fatma,

Thanks for the reply. I'm still curious about the answer to the question above, but I did manage to find a workaround.

Workaround

Using the to_json function, you can have Spark convert all the complex columns to JSON itself, instead of relying on the SQL endpoint / Simba driver to do this for you. If you want to automate the use of to_json completely, a DESCRIBE TABLE statement can tell you which data type each column has; a sketch of this is below.
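A minimal sketch of that automation, assuming the DESCRIBE TABLE result comes back as the usual (col_name, data_type, comment) rows with no extra partition-info section:

import pyodbc

conn = pyodbc.connect("DSN=Databricks_Cluster", autocommit=True)
cursor = conn.cursor()

# DESCRIBE TABLE reports each column's data type; use it to find
# the complex (struct/array/map) columns that need to_json().
cursor.execute("DESCRIBE TABLE default.quotes_issue")
complex_cols, plain_cols = [], []
for col_name, data_type, _comment in cursor.fetchall():
    if data_type.startswith(("struct", "array", "map")):
        complex_cols.append(col_name)
    else:
        plain_cols.append(col_name)

# Wrap complex columns in to_json() so Spark does the JSON conversion
# (with proper escaping) instead of the ODBC driver.
select_list = plain_cols + [f"to_json({c}) AS {c}" for c in complex_cols]
cursor.execute(f"SELECT {', '.join(select_list)} FROM default.quotes_issue")
for row in cursor.fetchall():
    print(row)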

dcrezee
New Contributor III

Hi @Kaniz Fatma,

My question wasn't related to connecting Databricks to SQL Server, but about an error I'm getting while connecting to Databricks using pyodbc.

Thanks,

Derk

dcrezee
New Contributor III

Hi @Kaniz Fatma, thanks for the reply. I think I misunderstood. So the suggestion is to install a number of packages and update pyodbc on the Databricks cluster you're using? I'll give it a try.

dcrezee
New Contributor III

Thanks @Kaniz Fatma (Databricks). I've tried adding the above script as a startup script to the cluster configuration I'm using within Databricks, but the issue remains.

BilalAslamDbrx
Databricks Employee
(Accepted solution)

@Derk Crezee - I learned something today. Apparently ODBC does not convert complex types to JSON: there is no defined spec for how to return them, and in fact that was only added in SQL:2016. That's exactly what you are running into!

End of history lesson 😄 Here's a fix:

SELECT to_json(nested)
FROM default.quotes_issue

This will return:

{"nested_column":"nested with a \" character"}	

which is valid JSON.
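If you want to double-check from Python, here is a quick sketch (same DSN as in the question; json.loads raises on invalid JSON, so it doubles as a test):

import json
import pyodbc

conn = pyodbc.connect("DSN=Databricks_Cluster", autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT to_json(nested) FROM default.quotes_issue")
(value,) = cursor.fetchone()

# json.loads() raises ValueError on invalid JSON, so this also
# confirms that the quote is escaped correctly.
print(json.loads(value))  # {'nested_column': 'nested with a " character'}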

dcrezee
New Contributor III

Hey Bilal,

Thanks for your reply. This indeed works and is what I'm currently using as a workaround (see one of the previous posts).

Thanks,

Derk

BilalAslamDbrx
Databricks Employee

@Derk Crezee - sorry, I missed that you already had a fix for this.
