09-13-2022 12:21 AM
I have created a database called retail, and inside it there is a table called sales_order. I want to add an identity column to the sales_order table, but I am getting an error while creating it.
- Labels:
  - Column
  - Databricks SQL
  - Identity Column
Accepted Solutions
09-14-2022 11:44 PM
Thanks @Priya Ananthram, the issue got solved.
I used the same DBR.
My "Job aborted" issue was solved by using the try_cast function while inserting the data from the source table into the Delta table. I used try_cast for every column I was inserting.
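A minimal sketch of that pattern (the table and column names here are hypothetical, not from the original post; try_cast returns NULL instead of failing when a value cannot be converted):
%sql
-- sales_order_new is assumed to be the new Delta table with the identity column,
-- which is omitted from the column list so it gets generated automatically
INSERT INTO sales_order_new (order_id, order_amount)
SELECT try_cast(order_id AS BIGINT),
       try_cast(order_amount AS DECIMAL(10, 2))
FROM sales_order;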
09-13-2022 01:57 AM
That is because you can't add an identity column to an existing table.
Instead, create a new table from scratch and copy the data:
CREATE TABLE tname_ (
<tname columns>,
id BIGINT GENERATED BY DEFAULT AS IDENTITY
);
INSERT INTO tname_ (<tname columns>) SELECT * FROM tname;
DROP TABLE tname;
ALTER TABLE tname_ RENAME TO tname;
09-13-2022 04:29 AM
While creating a new table I'm getting an error like this:
Error in SQL statement: AnalysisException: Cannot create table ('`spark_catalog`.`retaildb`.`sales_order1`'). The associated location ('dbfs:/user/hive/warehouse/retaildb.db/sales_order1') is not empty but it's not a Delta table
09-13-2022 04:41 AM
What Databricks Runtime version do you use?
Delta only became the default table format from a certain version onward, so if you use an older release you should add 'USING DELTA' (and in that case I doubt the identity column will work).
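For example, the CREATE TABLE from the earlier reply would then look roughly like this (just a sketch; identity columns still require a runtime version that supports them):
%sql
CREATE TABLE tname_ (
  <tname columns>,
  id BIGINT GENERATED BY DEFAULT AS IDENTITY
) USING DELTA;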
09-13-2022 04:44 AM
I'm using the Community Edition.
09-13-2022 04:54 AM
It is possible that it is not supported with the CE.
09-23-2022 02:46 PM
@Raman Gupta - For the error mentioned:
Error in SQL statement: AnalysisException: Cannot create table ('`spark_catalog`.`retaildb`.`sales_order1`'). The associated location ('dbfs:/user/hive/warehouse/retaildb.db/sales_order1') is not empty but it's not a Delta table
Could you please manually clean up the data directory specified in the error message using the command below, and then try to create the table again?
%scala
dbutils.fs.rm("<path-to-directory>", true)
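Based on the location shown in the error message above, that would presumably be something like:
%scala
// recursively delete the leftover non-Delta files at the table location
dbutils.fs.rm("dbfs:/user/hive/warehouse/retaildb.db/sales_order1", true)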
09-14-2022 12:46 AM
Can you please tell me why it is showing "Job aborted"? Same identity column issue.
I'm able to create the table, but while inserting the data it says:
Error in SQL statement: SparkException: Job aborted.
The error file is attached, named: blocker5
09-14-2022 02:03 AM
The above error is from the Community Edition.
The Azure version also shows the error, i.e.:
org.apache.spark.SparkException: Job aborted.
09-14-2022 03:58 PM
I just tried this on a Community Edition cluster:
%sql
create or replace table picklist
( picklist_id BIGINT not null GENERATED ALWAYS AS IDENTITY(start with 1 increment by 1),
  picklist_Name string
);
insert into picklist(picklist_Name) values('a');
insert into picklist(picklist_Name) values('b');
insert into picklist(picklist_Name) values('c');
select * from picklist;
Also check DESCRIBE to see where the Delta table is saved (see the sketch below).
I have a feeling it's the version of the DBR that you're using.
Could you use the latest and see if that helps?
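For instance, something like this shows where the table is stored (DESCRIBE DETAIL on a Delta table returns a location column):
%sql
DESCRIBE DETAIL picklist;
-- or: DESCRIBE EXTENDED picklist; and look at the Location row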

