Creating a Delta table: mismatched input error

Brose
New Contributor III

I am trying to create a Delta table for streaming data, but I am getting the following error:

Error in SQL statement: ParseException:
mismatched input 'CREATE' expecting {<EOF>, ';'}(line 2, pos 0).

My statement is as follows:

%sql
DROP TABLE IF EXISTS table1
CREATE TABLE table1 (
  col_1 STRING,
  col_2 STRING,
  col_date_1 DATE,
  col_dat_2 DATE,
  col_3 Double,
  col_4 Double,
) USING DELTA

I can't figure out why I am getting this error.

10 REPLIES

Kaniz
Community Manager

Hi @Brose! My name is Kaniz, and I'm the technical moderator here. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer first; if not, I'll get back to you soon. Thanks.

-werners-
Esteemed Contributor III

can you try putting a semicolon (;) at the end of the drop table statement?
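That is, something like this (a minimal sketch with a shortened column list, just to show the semicolon separating the two statements in one %sql cell):

%sql
DROP TABLE IF EXISTS table1;

CREATE TABLE table1 (
  col_1 STRING,
  col_2 STRING
) USING DELTA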

Brose
New Contributor III

I did that but still getting the same error.

Thanks @werners

-werners-
Esteemed Contributor III

can you try CREATE OR REPLACE TABLE table1 (instead of drop and create)?
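i.e. along these lines (a sketch with a shortened column list; CREATE OR REPLACE TABLE works for Delta tables):

%sql
CREATE OR REPLACE TABLE table1 (
  col_1 STRING,
  col_2 STRING
) USING DELTA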

Brose
New Contributor III

Thanks again @werners, I tried that but I am still getting the same error.

-werners-
Esteemed Contributor III

strange,

can you try to execute both statements in separate cells?

That way you can see exactly where the problem is.
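For example, one statement per cell (a rough sketch with a shortened column list):

Cell 1:
%sql
DROP TABLE IF EXISTS table1

Cell 2:
%sql
CREATE TABLE table1 (
  col_1 STRING,
  col_2 STRING
) USING DELTA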

Brose
New Contributor III

Thank you @Werner Stinckens, I was able to resolve this issue. Your assistance is much appreciated.

Jose

hi @Ambrose Walker,

Here is the link to the docs https://docs.databricks.com/spark/latest/spark-sql/language-manual/sql-ref-syntax-ddl-create-table-d...

Here is the syntax:

CREATE TABLE [ IF NOT EXISTS ] table_identifier
  [ ( col_name1 col_type1 [ COMMENT col_comment1 ], ... ) ]
  USING data_source
  [ OPTIONS ( key1 [ = ] val1, key2 [ = ] val2, ... ) ]
  [ PARTITIONED BY ( col_name1, col_name2, ... ) ]
  [ CLUSTERED BY ( col_name3, col_name4, ... )
    [ SORTED BY ( col_name [ ASC | DESC ], ... ) ]
    INTO num_buckets BUCKETS ]
  [ LOCATION path ]
  [ COMMENT table_comment ]
  [ TBLPROPERTIES ( key1 [ = ] val1, key2 [ = ] val2, ... ) ]
  [ AS select_statement ]
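Applied to the table in your question, a version that should parse looks like the sketch below (column names kept as posted; note the trailing comma after the last column has to be removed, and the type is spelled DOUBLE):

%sql
CREATE TABLE IF NOT EXISTS table1 (
  col_1 STRING,
  col_2 STRING,
  col_date_1 DATE,
  col_dat_2 DATE,
  col_3 DOUBLE,
  col_4 DOUBLE
) USING DELTA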

Brose
New Contributor III

Thanks Jose.

Anonymous
Not applicable

@Ambrose Walker - If Jose's answer resolved your issue, would you be happy to mark that post as best? That will help others find the solution more quickly.
