Hi!
I'm working on a project at my company on Databricks, using Scala and Spark. I'm new to both Spark and Databricks, and I would like to know how to create a table at a specific location (in my company's Delta Lake). In SQL, with some Delta features, I would have done it like so:
CREATE OR REPLACE TABLE delta.`/mnt/path/to/MyTable` (
  id SERIAL PRIMARY KEY,
  m1 TIMESTAMP NOT NULL,
  m2 TIMESTAMP NOT NULL
) USING DELTA
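On the Scala side, I assume this would just be the same statement wrapped in spark.sql from a notebook cell, something along these lines (the path is a placeholder for our real mount point, and `spark` is the session Databricks provides):

// Presumably the Scala equivalent of the SQL above, run from a notebook cell
spark.sql("""
  CREATE OR REPLACE TABLE delta.`/mnt/path/to/MyTable` (
    id SERIAL PRIMARY KEY,
    m1 TIMESTAMP NOT NULL,
    m2 TIMESTAMP NOT NULL
  ) USING DELTA
""")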
However, it seems that neither PRIMARY KEY nor SERIAL is recognized by Spark. So how can I tell it that I want this column to be an auto-incrementing signed integer, so that I can simply add new values like this:
INSERT INTO MyTable VALUES (m1Value, m2Value)
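To make the intent concrete, here is a hypothetical helper showing how I would like to add rows from Scala once the auto-increment id works (the helper name, path, and values are only illustrative, and it again assumes the notebook's `spark` session):

import java.sql.Timestamp

// Hypothetical helper: insert one measurement, letting the table assign the id
def addRow(m1: Timestamp, m2: Timestamp): Unit =
  spark.sql(s"INSERT INTO delta.`/mnt/path/to/MyTable` VALUES (TIMESTAMP '$m1', TIMESTAMP '$m2')")

// Successive calls, one row each
addRow(Timestamp.valueOf("2024-01-01 08:00:00"), Timestamp.valueOf("2024-01-01 08:05:00"))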
Thank you
PS: I tried using DataFrames, but when making unions to add a new row, Spark kept only the last row of the table plus the new row, so I would like to avoid DataFrames if possible.
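For reference, my DataFrame attempt looked roughly like this (simplified from memory, so the details may differ; in that attempt the table only had the two timestamp columns, since I never got the id column to work):

import java.sql.Timestamp
import spark.implicits._

// Read the current contents of the table
val existing = spark.read.format("delta").load("/mnt/path/to/MyTable")

// One-row DataFrame for the new measurement (placeholder values)
val newRow = Seq(
  (Timestamp.valueOf("2024-01-01 08:00:00"), Timestamp.valueOf("2024-01-01 08:05:00"))
).toDF("m1", "m2")

// Union the new row with the existing rows and write everything back
existing
  .union(newRow)
  .write
  .format("delta")
  .mode("overwrite")
  .save("/mnt/path/to/MyTable")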
PS2: MyTable will not be used by many processes simultaneously; it will only be accessed through successive calls.