[UNBOUND_SQL_PARAMETER] error
11-07-2024 10:02 AM
Hi, I'd appreciate it if anyone could help!
We are using the official ODBC driver (Simba Spark ODBC Driver 64-bit, 2.08.02.1013) in our application through the C++ ODBC APIs.
All of the following SQL statements are passed to Databricks through the ODBC API.
This executes successfully: CREATE TABLE hive_metastore.default.a_dbTab1( id DOUBLE, name STRING )
We wanted to do this: insert into hive_metastore.default.a_dbTab1(id, name) values(1, "Alice");
1. Preparing the statement succeeds:
INSERT INTO hive_metastore.default.a_dbTab1(`id` , `name` ) VALUES (?,?)
2. Binding the parameters of the insert statement also succeeds.
3. But when we execute the prepared insert statement, we get the error below (a sketch of our prepare/bind/execute sequence follows the stack trace):
42P02: [Simba][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query. Error message from server: org.apache.hive.service.cli.HiveSQLException: Error running query: [UNBOUND_SQL_PARAMETER] org.apache.spark.sql.catalyst.ExtendedAnalysisException: [UNBOUND_SQL_PARAMETER] Found the unbound parameter: _68. Please, fix `args` and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as `map()`, `array()`, `struct()`. SQLSTATE: 42P02; line 1 pos 68
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:49)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:805)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
at org.apache.spark.sql.hiv
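For reference, here is a minimal sketch of the kind of prepare/bind/execute sequence we are running. Handle setup, buffer sizes, and error checking are illustrative placeholders, not our exact code:

#include <sql.h>
#include <sqlext.h>

void insertRow(SQLHDBC hdbc)
{
    SQLHSTMT hstmt = SQL_NULL_HSTMT;
    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    // 1. Prepare the parameterized INSERT.
    SQLCHAR* sql = (SQLCHAR*)
        "INSERT INTO hive_metastore.default.a_dbTab1(`id`, `name`) VALUES (?,?)";
    SQLPrepare(hstmt, sql, SQL_NTS);

    // 2. Bind both parameters (DOUBLE and STRING columns).
    SQLDOUBLE id = 1.0;
    char name[] = "Alice";
    SQLLEN idInd = 0;
    SQLLEN nameInd = SQL_NTS;
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_DOUBLE, SQL_DOUBLE,
                     0, 0, &id, 0, &idInd);
    SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     sizeof(name) - 1, 0, name, sizeof(name), &nameInd);

    // 3. Execute the prepared statement -- this is the call that fails
    //    with [UNBOUND_SQL_PARAMETER].
    SQLExecute(hstmt);

    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
}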
12-06-2024 10:53 AM
Hey @AnnaP,
Have you tried the following?
To disable the SQL Connector feature, select the Use Native Query check box.
Important:
- When this option is enabled, the connector cannot execute parameterized queries.
- By default, the connector applies transformations to the queries emitted by an application to convert the queries into an equivalent form in HiveQL. If the application is Spark-aware and already emits HiveQL, then turning off the translation avoids the additional overhead of query transformation.
(from the Simba Spark ODBC Driver documentation)


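If you are connecting without a DSN, the check box should correspond to the UseNativeQuery key in the connection string, so you can verify it is left off (0) so that the connector can still bind parameters. A rough C++ sketch, where the DSN name and other keys are placeholders for your own configuration:

#include <sql.h>
#include <sqlext.h>

SQLRETURN connectWithTranslation(SQLHENV henv, SQLHDBC* hdbc)
{
    SQLAllocHandle(SQL_HANDLE_DBC, henv, hdbc);

    // UseNativeQuery=0 keeps the SQL Connector (query translation) enabled,
    // which the driver needs in order to execute parameterized queries;
    // UseNativeQuery=1 is the equivalent of ticking "Use Native Query".
    SQLCHAR* connStr = (SQLCHAR*)"DSN=Databricks;UseNativeQuery=0;";

    return SQLDriverConnect(*hdbc, NULL, connStr, SQL_NTS,
                            NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
}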