Warehousing & Analytics
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Community. Share insights, tips, and best practices for leveraging data for informed decision-making.

[UNBOUND_SQL_PARAMETER] error

AnnaP
Visitor

Hi, I'd appreciate it if anyone could help!

We are using the official ODBC driver (Simba Spark ODBC Driver 64-bit, 2.08.02.1013) in our application, through the C++ ODBC APIs.

All of the following SQL statements are passed to Databricks through the ODBC API:

This executes successfully: CREATE TABLE hive_metastore.default.a_dbTab1( id DOUBLE, name STRING )

We wanted to do this:  insert into hive_metastore.default.a_dbTab1(id, name) values(1, "Alice");

1. Preparing the statement succeeds:

INSERT INTO hive_metastore.default.a_dbTab1(`id` , `name` ) VALUES (?,?)

2. Binding the parameters for the insert statement also succeeds.

3. But when we execute the insert statement, we get this error:

42P02: [Simba][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query. Error message from server: org.apache.hive.service.cli.HiveSQLException: Error running query: [UNBOUND_SQL_PARAMETER] org.apache.spark.sql.catalyst.ExtendedAnalysisException: [UNBOUND_SQL_PARAMETER] Found the unbound parameter: _68. Please, fix `args` and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as `map()`, `array()`, `struct()`. SQLSTATE: 42P02; line 1 pos 68
    at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:49)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:805)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
    at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
    at org.apache.spark.sql.hiv
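For reference, this is roughly the ODBC call sequence we are using (a simplified sketch: handle allocation, connection, error checking, variable names, and the exact C/SQL type mappings shown here are illustrative, not our exact production code):

    // Simplified sketch of the prepare / bind / execute sequence.
    // Assumes hstmt was allocated from a connected SQLHDBC via
    // SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt).
    #include <sql.h>
    #include <sqlext.h>

    // Step 1: prepare the parameterized INSERT
    SQLPrepare(hstmt,
               (SQLCHAR *)"INSERT INTO hive_metastore.default.a_dbTab1(`id`, `name`) VALUES (?, ?)",
               SQL_NTS);

    // Step 2: bind both parameter markers (types/lengths here are illustrative)
    SQLDOUBLE id = 1.0;
    SQLCHAR   name[] = "Alice";
    SQLLEN    nameInd = SQL_NTS;
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE,
                     0, 0, &id, 0, NULL);
    SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     255, 0, name, sizeof(name), &nameInd);

    // Step 3: execute -- this is the call that comes back with [UNBOUND_SQL_PARAMETER]
    SQLRETURN rc = SQLExecute(hstmt);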
