Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

WHERE 1=0, Error message from Server

pritam_epam
New Contributor II

Hi,

I am getting this error:

WHERE 1=0, Error message from Server: Configuration db table is not available. I am using PySpark and a JDBC connection. Please help with this.

9 REPLIES

szymon_dybczak
Contributor

Hi @pritam_epam ,

Could you share more details with us, such as the code you want to execute?

pritam_epam
New Contributor II

@szymon_dybczak 

 

jdbc_url = f"jdbc:databricks://{os.environ[db_const.DATA_BRICKS_HOST]};" \
    f"transportMode=http;" \
    f"ssl=1;" \
    f"httpPath={os.environ[db_const.DATA_BRICKS_HTTP_PATH]};" \
    f"AuthMech=3;" \
    f"UID=token;" \
    f"PWD={os.environ[db_const.DATA_BRICKS_TOKEN]};" \
    f"Catalog=scna_qa;" \
    f"Schema=dsml"

conn = jdbc_url
print("JDBC connection url: ", conn)

# Error - configuration query not available
parameters = {}
df = pyspark.pandas.read_sql_query(sql=query, con=conn)
print("pyspark dataframe: ", df)
 
Exception coming:

com.databricks.client.support.exceptions.GeneralException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: SELECT * FROM (select * from scna_qa.dsml.geonode_sku_dategrp_kpi) SPARK_GEN_SUBQ_0 WHERE 1=0, Error message from Server: Configuration query is not available..
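For context: the WHERE 1=0 in that statement does not come from the submitted SQL. Spark's JDBC reader wraps the user query in a zero-row subquery to probe the result schema before fetching any data. An illustrative reconstruction of that wrapping (not Spark's actual internal code):

```python
# Spark's JDBC data source wraps the supplied query in a generated zero-row
# subquery (alias names like SPARK_GEN_SUBQ_0 are auto-generated) so it can
# read only the schema. Illustrative reconstruction, not Spark internals.
def schema_probe(user_query, alias="SPARK_GEN_SUBQ_0"):
    return f"SELECT * FROM ({user_query}) {alias} WHERE 1=0"

print(schema_probe("select * from scna_qa.dsml.geonode_sku_dategrp_kpi"))
```

This is why the failing statement in the driver error looks different from the query that was passed in.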

Below, you're passing the query variable to the read_sql_query function, but I don't see any place in your code where you defined it.

 

# Error - configuration query not available
parameters = {}
df = pyspark.pandas.read_sql_query(sql=query, con=conn)
print("pyspark dataframe: ", df)
 
 
Also, you don't need to use pandas; you can leverage PySpark to read from JDBC sources, or you can try Lakehouse Federation:
 

 

table = (spark.read
  .format("jdbc")
  .option("url", "<jdbc-url>")
  .option("dbtable", "<table-name>")
  .option("user", "<username>")
  .option("password", "<password>")
  .load()
)
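For the Lakehouse Federation route mentioned above, the setup is done in SQL rather than through a JDBC URL in your client code. A rough sketch of the Databricks-to-Databricks variant (the connection/catalog names and OPTIONS keys here are assumptions to check against the Databricks documentation for your workspace):

```sql
-- Create a connection to the remote workspace (names/keys are placeholders).
CREATE CONNECTION IF NOT EXISTS remote_dbx TYPE databricks
OPTIONS (
  host '<workspace-host>',
  httpPath '<http-path>',
  personalAccessToken '<token>'
);

-- Expose the remote catalog locally as a foreign catalog.
CREATE FOREIGN CATALOG IF NOT EXISTS scna_qa_remote
USING CONNECTION remote_dbx
OPTIONS (catalog 'scna_qa');

-- After this, the table can be queried directly:
SELECT * FROM scna_qa_remote.dsml.geonode_sku_dategrp_kpi;
```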

Query databases using JDBC - Azure Databricks | Microsoft Learn

 


 

pritam_epam
New Contributor II

This is the query I already defined:

query = 'select * from scna_qa.dsml.geonode_sku_dategrp_kpi'
 
jdbc_url = f"jdbc:databricks://{os.environ[db_const.DATA_BRICKS_HOST]};" \
    f"transportMode=http;" \
    f"ssl=1;" \
    f"httpPath={os.environ[db_const.DATA_BRICKS_HTTP_PATH]};" \
    f"AuthMech=3;" \
    f"UID=token;" \
    f"PWD={os.environ[db_const.DATA_BRICKS_TOKEN]};" \
    f"Catalog=scna_qa;" \
    f"Schema=dsml"

conn = jdbc_url
print("JDBC connection url: ", conn)

# Error - configuration query not available
parameters = {}
df = pyspark.pandas.read_sql_query(sql=query, con=conn)
print("pyspark dataframe: ", df)
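As an aside, the backslash-continued f-string used to build the URL is easy to mistype. One illustrative way to assemble the same URL from its parts (host, path, and token values below are placeholders; in the thread they come from os.environ):

```python
def build_jdbc_url(host, http_path, token, catalog, schema):
    """Assemble the Databricks JDBC URL from its key=value parts."""
    parts = {
        "transportMode": "http",
        "ssl": "1",
        "httpPath": http_path,
        "AuthMech": "3",
        "UID": "token",
        "PWD": token,
        "Catalog": catalog,
        "Schema": schema,
    }
    # Join the properties with semicolons after the host segment.
    suffix = ";".join(f"{k}={v}" for k, v in parts.items())
    return f"jdbc:databricks://{host};{suffix}"

# Placeholder values for illustration only.
print(build_jdbc_url("example.cloud.databricks.com:443",
                     "/sql/1.0/warehouses/abc123",
                     "dapiXXXX", "scna_qa", "dsml"))
```

Keeping the properties in one mapping makes a missing semicolon or misspelled key much easier to spot.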

pritam_epam
New Contributor II

@szymon_dybczak I also tried the other approach you provided:

 

jdbc_url = f"jdbc:databricks://{os.environ[db_const.DATA_BRICKS_HOST]};" \
    f"transportMode=http;" \
    f"ssl=1;" \
    f"httpPath={os.environ[db_const.DATA_BRICKS_HTTP_PATH]};" \
    f"AuthMech=3;" \
    f"UID=token;" \
    f"PWD={os.environ[db_const.DATA_BRICKS_TOKEN]};" \
    f"Catalog=scna_qa;" \
    f"Schema=dsml"

df = spark.read.format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "scna_qa.dsml.geonode_sku_dategrp_kpi") \
    .load()
print("pyspark dataframe: ", df)

Here scna_qa is the catalog and dsml is the schema.

Neither approach is working.
Exception:
[Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: SELECT * F***, Error message from Server: Configuration Schema is not available..

 

Hi @pritam_epam ,

Try something similar to the following code and see if it works for you.

 

databricks_url = f'''jdbc:databricks://adb-774941343460743.3.azuredatabricks.net:443/default;
transportMode=http;
ssl=1;
httpPath=sql/protocolv1/o/your_org_id/your_cluster_id;
AuthMech=3;
UID=token;
PWD=your_access_token;
UseNativeQuery=0;
'''

df = spark.read.format("jdbc").option("url", databricks_url) \
    .option("query", 'SELECT * FROM main.default.department') \
    .load()

display(df)

 

In my case it worked without any issue:

[screenshot: the query ran and displayed results without error]

 

 

pritam_epam
New Contributor II

@szymon_dybczak ,

No, it's again throwing the same type of exception:

[Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: SELECT * FROM (SELECT * FROM scna_qa.dsml.geonode_sku_dategrp_kpi) SPARK_GEN_SUBQ_0 WHERE 1=0, Error message from Server: Configuration query is not available..

 

pritam_epam
New Contributor II

@szymon_dybczak 

Can we do a meeting to understand what the issue is?

pritam_epam
New Contributor II

@szymon_dybczak 

Can you help us with this? Or could you provide a complete structure/steps for connecting to Databricks using PySpark and JDBC, step by step: initiate the Spark session, build the JDBC connection URL, then do the SQL read, all in detail.

Also, what should the JDBC version be? I am using databricks==0.2 and the latest DatabricksJDBC42.jar; what is the compatible JDBC driver?
