Data Engineering

GET_COLUMNS fails with Unexpected character ('t' (code 116)): was expecting comma to separate Object entries - how to fix?

yzaehringer
New Contributor

I just ran `cursor.columns()` via the Python client and got back an `org.apache.hive.service.cli.HiveSQLException` as the response. There is also a long stack trace; I'll paste just the last bit because it might be illuminating:

```
org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$:hiveOperatingError:HiveThriftServerErrors.scala:66
org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$:hiveOperatingError:HiveThriftServerErrors.scala:60
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation$$anonfun$onError$1:applyOrElse:SparkAsyncOperation.scala:196
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation$$anonfun$onError$1:applyOrElse:SparkAsyncOperation.scala:181
scala.runtime.AbstractPartialFunction:apply:AbstractPartialFunction.scala:38
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:$anonfun$wrappedExecute$1:SparkAsyncOperation.scala:169
scala.runtime.java8.JFunction0$mcV$sp:apply:JFunction0$mcV$sp.java:23
com.databricks.unity.EmptyHandle$:runWith:UCSHandle.scala:103
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:org$apache$spark$sql$hive$thriftserver$SparkAsyncOperation$$wrappedExecute:SparkAsyncOperation.scala:144
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:runInternal:SparkAsyncOperation.scala:79
org.apache.spark.sql.hive.thriftserver.SparkAsyncOperation:runInternal$:SparkAsyncOperation.scala:44
org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation:runInternal:SparkGetColumnsOperation.scala:54
org.apache.hive.service.cli.operation.Operation:run:Operation.java:383
org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation:org$apache$spark$sql$hive$thriftserver$SparkOperation$$super$run:SparkGetColumnsOperation.scala:54
org.apache.spark.sql.hive.thriftserver.SparkOperation:run:SparkOperation.scala:113
org.apache.spark.sql.hive.thriftserver.SparkOperation:run$:SparkOperation.scala:111
org.apache.spark.sql.hive.thriftserver.SparkGetColumnsOperation:run:SparkGetColumnsOperation.scala:54
org.apache.hive.service.cli.session.HiveSessionImpl:getColumns:HiveSessionImpl.java:704
org.apache.hive.service.cli.CLIService:getColumns:CLIService.java:411
org.apache.hive.service.cli.thrift.OSSTCLIServiceIface:GetColumns:ThriftCLIService.java:1159
com.databricks.sql.hive.thriftserver.thrift.DelegatingThriftHandler:GetColumns:DelegatingThriftHandler.scala:81
```

The request looked as follows:

```
TGetColumnsReq(
    sessionHandle=TSessionHandle(
        sessionId=THandleIdentifier(...), serverProtocolVersion=None
    ),
    catalogName=None,
    schemaName=None,
    tableName=None,
    columnName=None,
    getDirectResults=TSparkGetDirectResults(maxRows=100000, maxBytes=10485760),
    runAsync=False,
    operationId=None,
    sessionConf=None,
)
```

The summary is:

  • Databricks receives the Thrift request
  • Databricks propagates it down to the Hive Thrift layer
  • the Hive layer fails with a SQL error

Has anybody encountered this before? What would be the solution here?
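While waiting for an answer, one possible workaround is to skip the Thrift GetColumns metadata path entirely and query `information_schema` directly. A minimal sketch, assuming a Unity Catalog-enabled workspace; the `columns_query` helper and its unescaped-identifier handling are my own illustration, not part of any client library:

```python
# Sketch of a workaround: build an information_schema query that returns
# roughly what cursor.columns() would, and run it with cursor.execute().
# NOTE: identifiers are interpolated without escaping -- trusted input only.

def columns_query(catalog=None, schema=None, table=None):
    """Build a SELECT against system.information_schema.columns."""
    query = (
        "SELECT table_catalog, table_schema, table_name, "
        "column_name, data_type "
        "FROM system.information_schema.columns"
    )
    filters = []
    if catalog:
        filters.append(f"table_catalog = '{catalog}'")
    if schema:
        filters.append(f"table_schema = '{schema}'")
    if table:
        filters.append(f"table_name = '{table}'")
    if filters:
        query += " WHERE " + " AND ".join(filters)
    return query + " ORDER BY table_name, ordinal_position"

# Usage against a live connection:
#   cursor.execute(columns_query(schema="default"))
#   print(cursor.fetchall())
```

Narrowing the request with explicit catalog/schema/table filters (rather than sending all `None`, as in the dump above) may also be worth trying with `cursor.columns()` itself.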

1 REPLY

Aviral-Bhardwaj
Esteemed Contributor III

This can be a package issue or a runtime issue; try changing both.
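Before swapping versions, it can help to confirm which client package and version is actually in use. A small sketch; the package names listed are the usual Python clients for this endpoint, not confirmed from the thread:

```python
# Sketch: report which candidate client packages are installed and at
# what version, so you know what to pin or upgrade.
import importlib.metadata

def installed_version(pkg: str):
    """Return the installed version of pkg, or None if it is absent."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return None

for pkg in ("databricks-sql-connector", "pyhive"):
    print(pkg, installed_version(pkg) or "not installed")
```

On the runtime side, the cluster's Databricks Runtime version is set in the cluster configuration, not from the client.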