Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Need help with Azure Databricks questions on CTE and SQL syntax within notebooks

Jyo777
Contributor

Hi amazing community folks,

Feel free to share your experience or knowledge regarding the questions below:

1) Can we pass a CTE SQL statement into Spark JDBC? I tried to do it and couldn't, but I can pass normal SQL (SELECT * FROM ...) and it works. I heard that in Spark 3.4 it should be available; is that true? Has anyone faced this?

2) Does anyone have a handy comparison list of SQL functions/syntax? For example:

TOP 1 * works on SQL Server but doesn't in ADB notebooks (we need to use LIMIT).

Thanks in advance 🙂

7 REPLIES

Anonymous
Not applicable

@Jyoti j:

1) Yes, it is possible to pass a CTE (Common Table Expression) SQL statement through Spark JDBC, but it depends on the version of Spark you are using.

In Spark 2.x, CTEs cannot be pushed down through Spark JDBC. Support arrived in Spark 3.4, so if you are on Spark 3.4 you should be able to pass a CTE through Spark JDBC by setting the `prepareQuery` option in the JDBC options (see vijaypavann's reply below for details). Alternatively, you can load the remote table into a DataFrame and run the CTE in Spark SQL.

Here's an example Python code snippet that takes the second approach, loading a JDBC table and running a CTE over it in Spark:

from pyspark.sql import SparkSession

# create a SparkSession
spark = SparkSession.builder \
    .appName("CTE with JDBC in Spark 3.4") \
    .getOrCreate()

# load the remote table into a DataFrame via JDBC
# (the JDBC driver class is set with the "driver" option on the read)
jdbc_df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/my_db") \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("dbtable", "my_table") \
    .option("user", "my_username") \
    .option("password", "my_password") \
    .load()

# register the DataFrame as a temporary view so Spark SQL can see it
jdbc_df.createOrReplaceTempView("my_table")

# run the CTE in Spark SQL over the JDBC-backed view
result = spark.sql("""
    WITH my_cte AS (
        SELECT col1, col2
        FROM my_table
        WHERE col3 = 'some_value'
    )
    SELECT col1, AVG(col2) AS avg_col2
    FROM my_cte
    GROUP BY col1
""")

# show the results
result.show()

2) Some examples are below:

Top function:
SQL Server: SELECT TOP 1 * FROM my_table
Azure Databricks: SELECT * FROM my_table LIMIT 1
 
Date functions:
SQL Server: DATEADD(day, 7, my_date) to add 7 days to a date
Azure Databricks: DATE_ADD(my_date, 7) to add 7 days to a date
 
Substring function:
SQL Server: SUBSTRING(my_string, 1, 3) to get the first 3 characters of a string
Azure Databricks: SUBSTR(my_string, 1, 3) to get the first 3 characters of a string (SUBSTRING works as well)
 
String concatenation:
SQL Server: SELECT 'Hello ' + 'world' to concatenate two strings
Azure Databricks: SELECT CONCAT('Hello ', 'world') to concatenate two strings
 
Date formatting:
SQL Server: SELECT FORMAT(my_date, 'dd/MM/yyyy') to format a date as dd/MM/yyyy
Azure Databricks: SELECT DATE_FORMAT(my_date, 'dd/MM/yyyy') to format a date as dd/MM/yyyy
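
If you want to sanity-check the Databricks-side syntax, a quick notebook cell like this works (literals only, so it doesn't need any tables):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# quick sanity checks of the Databricks-side syntax above;
# literals are used so the cell runs without any tables
spark.sql("SELECT substr('hello', 1, 3)").show()                      # hel
spark.sql("SELECT concat('Hello ', 'world')").show()                  # Hello world
spark.sql("SELECT date_add(current_date(), 7)").show()                # today + 7 days
spark.sql("SELECT date_format(current_date(), 'dd/MM/yyyy')").show()  # e.g. 01/01/2024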

Jyo777
Contributor

Hey, thanks so much for putting all this information together for me...

1) Question: are you sure about the Spark 2.x version? We currently have 3.x but still can't do it. Just to clarify, our table comes directly from a JDBC connection.

For example: my_table is coming from a JDBC connection to Oracle or SQL Server.

2) Regarding SQL syntax: thanks 🙂

Anonymous
Not applicable

@Jyoti j:

You can create a temporary view using your CTE statement and then query that view with Spark SQL. Here's an example of how to create a temporary view from a CTE statement:

spark.sql("WITH cte AS (SELECT * FROM table_name WHERE column_name = 'some_value') \
          SELECT * FROM cte").createOrReplaceTempView("temp_view_name")
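
Note the CTE here runs in Spark SQL, so table_name has to be visible to Spark first. Since your my_table comes straight from a JDBC connection, a minimal sketch (URL, credentials, and names are placeholders) would be to register the remote table as a temp view before running the CTE:

# load the remote table and register it as a temp view first, so the
# CTE above has something to read; URL, credentials, and names are placeholders
spark.read.format("jdbc") \
    .option("url", "jdbc:sqlserver://myhost:1433;databaseName=my_db") \
    .option("dbtable", "table_name") \
    .option("user", "my_username") \
    .option("password", "my_password") \
    .load() \
    .createOrReplaceTempView("table_name")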

vijaypavann
Databricks Employee

CTE expressions are supported with the `prepareQuery` option. 

https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html

 

A prefix that will form the final query together with `query`. As the specified `query` will be parenthesized as a subquery in the FROM clause and some databases do not support all clauses in subqueries, the `prepareQuery` property offers a way to run such complex queries. As an example, Spark will issue a query of the following form to the JDBC source:

<prepareQuery> SELECT <columns> FROM (<user_specified_query>) spark_gen_alias

Below are a couple of examples.

  1. MSSQL Server does not accept WITH clauses in subqueries but it is possible to split such a query to prepareQuery and query:
    spark.read.format("jdbc")
    .option("url", jdbcUrl)
    .option("prepareQuery", "WITH t AS (SELECT x, y FROM tbl)")
    .option("query", "SELECT * FROM t WHERE x > 10")
    .load()
  2. MSSQL Server does not accept temp table clauses in subqueries but it is possible to split such a query to prepareQuery and query:
    spark.read.format("jdbc")
    .option("url", jdbcUrl)
    .option("prepareQuery", "(SELECT * INTO #TempTable FROM (SELECT * FROM tbl) t)")
    .option("query", "SELECT * FROM #TempTable")
    .load()
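
For a PySpark notebook, the first example would look something like the sketch below (the URL and credentials are placeholders, and prepareQuery needs Spark 3.4 or later):

# PySpark version of the first example above; the URL and
# credentials are placeholders, and prepareQuery needs Spark 3.4+
jdbc_url = "jdbc:sqlserver://myhost:1433;databaseName=my_db"

df = (spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("user", "my_username")
    .option("password", "my_password")
    # prepended to the generated query, so the WITH clause reaches
    # the database ahead of the parenthesized subquery
    .option("prepareQuery", "WITH t AS (SELECT x, y FROM tbl)")
    .option("query", "SELECT * FROM t WHERE x > 10")
    .load())

df.show()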

csanjay100
New Contributor II

Nope, prepareQuery doesn't work either.

Rjdudley
Contributor II

1) Rather than a CTE, you might be better served by creating another DataFrame and querying from that (see the sketch at the end of this reply). DataFrames are more native to the Spark platform, and you can have more than one in a notebook.

2) The DB-SQL language reference is SQL language reference - Azure Databricks - Databricks SQL | Microsoft Learn.  Regarding TOP/LIMIT, almost every SQL dialect has its own way of limiting results but LIMIT is common in open source databases like MySQL and PostgreSQL, which is probably why it was chosen for DB-SQL.
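
A rough sketch of the DataFrame approach from point 1, reusing the CTE from the original question (jdbc_df and the column names are placeholders for a JDBC-loaded DataFrame):

from pyspark.sql import functions as F

# DataFrame equivalent of the earlier CTE; jdbc_df and the
# column names are placeholders for a JDBC-loaded DataFrame
filtered = jdbc_df.filter(F.col("col3") == "some_value").select("col1", "col2")
filtered.groupBy("col1").agg(F.avg("col2").alias("avg_col2")).show()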

Rjdudley
Contributor II

Not a comparison, but there is a DB-SQL cheatsheet at https://www.databricks.com/sites/default/files/2023-09/databricks-sql-cheatsheet.pdf/
