Data Engineering

Error in SQL statement: AnalysisException: Cannot up cast documents from array

sage5616
Valued Contributor

Hi Everyone,

I am getting the following error when running a SQL query and do not understand what it means or what can be done to resolve it. Any recommendations?

View DDL:

CREATE VIEW myschema.table (
  accountId,
  agreementType,
  capture_file_name,
  capture_file_path,
  createdDate,
  currency,
  de_agentid,
  de_applicationshardid,
  de_datacontenttype,
  de_eventapplicationtime,
  de_eventmode,
  de_eventpublishtime,
  de_eventsequenceid,
  de_id,
  de_partitionkey,
  de_source,
  documents,
  effectiveDate,
  eh_EnqueuedTimeUtc,
  eh_Offset,
  eh_SequenceNumber,
  eh_SystemProperties_x_opt_enqueued_time,
  eh_SystemProperties_x_opt_kafka_key,
  endDate,
  expirationDate,
  externalId,
  externalSource,
  id,
  isInWorkflow,
  isSigned,
  name,
  notice,
  noticeDate,
  noticePeriod,
  parties,
  reconciled_file_name_w_path,
  requestor,
  resourceVersion,
  status,
  terminateForConvenience,
  updatedDate,
  value,
  de_action,
  de_eventapplication_year,
  de_eventapplication_month,
  de_eventapplication_day,
  de_eventapplication_hour,
  de_eventapplication_minute)
TBLPROPERTIES (
  'transient_lastDdlTime' = '1664473495')
AS select * from parquet.`/mnt/store/de_entitytype=Agreement`

SQL query:

select de_id from myschema.table;

Error:

Error in SQL statement: AnalysisException: Cannot up cast documents from array<struct<accountId:string,agreementId:string,createdBy:string,createdDate:string,id:string,obligations:array<string>,resourceVersion:bigint,updatedBy:string,updatedDate:string>> to array<string>.
The type path of the target object is:
 
You can either add an explicit cast to the input data or choose a higher precision type of the field in the target object

Any recommendations on how to resolve this error would be much appreciated 🙂

11 REPLIES

-werners-
Esteemed Contributor III

Seems like the nested struct gives you some issues.

Can you check this to see if it helps?

sage5616
Valued Contributor

Thank you @Werner Stinckens, but it does not help much. The article you've linked explains new features of Databricks Runtime 3.0 for working with complex data types.

I am simply selecting a single column with a simple data type from a view that also has a column with a complex data type. This query should not fail like that. I hope this makes sense.

-werners-
Esteemed Contributor III

Sorry, I missed the select query.

Can you check if the view returns data when using the explorer (so in the Data tab)? Maybe it is corrupt.

sage5616
Valued Contributor

@Werner Stinckens: Same error occurs in the explorer (so in the Data tab).

-werners-
Esteemed Contributor III

OK, that means the table/view no longer corresponds to the actual data.

Can you create a temp view on the delta lake/parquet/... and run the same query?

It will probably work. And if it does, you will have to re-create the table to have a correct schema.
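
A minimal sketch of that check, using the mount path from the original post (the temp view name is arbitrary):

# Point a temp view straight at the parquet files, bypassing the
# persistent view's stored schema, and run the same query
df = spark.read.parquet("/mnt/store/de_entitytype=Agreement")
df.createOrReplaceTempView("agreement_tmp")
spark.sql("SELECT de_id FROM agreement_tmp").show()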

sage5616
Valued Contributor

I re-created the view and got a similar error. It looks like when the complex data type changes slightly (in columns like "documents" from my original post), the Spark view cannot handle it, which is not right.
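
(For anyone hitting the same schema drift: Spark's mergeSchema read option merges compatible schema differences across parquet part files, though it will still fail when a field's type genuinely conflicts, e.g. array<string> vs. array<struct<...>>. A minimal sketch, using the path from the original post:)

# Merge the schemas of all parquet part files instead of sampling a subset;
# this only helps when the per-file differences are compatible
df = spark.read.option("mergeSchema", "true").parquet("/mnt/store/de_entitytype=Agreement")
df.printSchema()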

Hubert-Dudek
Esteemed Contributor III

Just read the parquet in Python and retrieve the auto-generated DDL schema:

# Read the parquet and let Spark infer the schema ("people.parquet" is an example path)
parquetFile = spark.read.parquet("people.parquet")
parquetFile.createOrReplaceTempView("parquetFile")
# Serialize the inferred schema to JSON, then convert it to a DDL string via the JVM
schema_json = spark.sql("SELECT * FROM parquetFile").schema.json()
ddl = spark.sparkContext._jvm.org.apache.spark.sql.types.DataType.fromJson(schema_json).toDDL()
print(ddl)

sage5616
Valued Contributor

Thank you for chiming in @Hubert Dudek, but I'm not sure how this helps the view read its parquet files when some of them have columns with structs and arrays of varying schemas, which appears to be the problem.

P.S.

I tried converting the complex data types to strings, but the struct/array schema inside the column disappears, leaving just the data values without their field names.

Hubert-Dudek
Esteemed Contributor III

It helps in that it gives you the exact schema 🙂, which you can copy-paste into your code.
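
As a sketch of how that DDL could then be used: pass it to the reader as an explicit schema, so every partition is read with the same full definition of the complex columns (the DDL string here is shortened for illustration, and this is not a confirmed fix for the view itself):

# Reading with an explicit DDL schema pins the types, so empty or missing
# nested fields no longer change the inferred schema (shortened example DDL)
ddl = "de_id STRING, documents ARRAY<STRUCT<accountId: STRING, agreementId: STRING>>"
df = spark.read.schema(ddl).parquet("/mnt/store/de_entitytype=Agreement")
df.createOrReplaceTempView("agreement_tmp")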

sage5616
Valued Contributor

Thanks again.

  1. I need to use a Spark persistent view, and I don't think there is a way to specify a schema when creating one (though perhaps I'm missing something).
  2. Copy/pasting the schema into my code is hard-coding, which will not work in my case. I need the schema determined dynamically as new partitions are added under the view. Sometimes fields within a complex-type column contain no data, which apparently changes the inferred schema of the entire column. Somehow, the persistent view needs to recognize and handle this dynamically rather than failing.
  3. Querying the same parquet files without a persistent view yields the same error. Flattening columns with complex data types into separate simple-type columns takes care of the problem (see the sketch below).
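
A sketch of the flattening from point 3, assuming the documents column from the original post (the selected fields are illustrative, not exhaustive):

from pyspark.sql.functions import col, explode_outer

# One row per document, with the nested struct fields pulled out into
# plain top-level columns
df = spark.read.parquet("/mnt/store/de_entitytype=Agreement")
flat = (df
        .withColumn("document", explode_outer(col("documents")))
        .select("de_id",
                col("document.accountId").alias("document_accountId"),
                col("document.agreementId").alias("document_agreementId")))
flat.createOrReplaceTempView("agreement_flat")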

Anonymous
Not applicable

Hi @Michael Okulik

Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!
