01-18-2023 09:44 PM
We have identified a workaround to resolve this issue. First, export the source table to JSON, keeping null-valued fields (`ignoreNullFields=False`):

# Export the source table to JSON; keep null fields so downstream
# IGNORE NULL UPDATES can distinguish "null" from "absent".
df_table = spark.sql("SELECT * FROM Employee")
df_table.write.mode("append").json("/mnt/temp_table/Employee", ignoreNullFields=False)
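Why `ignoreNullFields=False` matters: with the default (`True`), Spark drops null-valued fields from each JSON record, so a downstream reader cannot tell an explicit null apart from a missing column. A minimal plain-Python sketch of the difference (the `record` dict is a hypothetical employee row, not from the source):

```python
import json

# Hypothetical employee record with an explicitly null field.
record = {"employeeid": 1, "name": "Alice", "manager": None}

# Analogous to ignoreNullFields=False: null fields are kept in the output.
kept = json.dumps(record)

# Analogous to the default ignoreNullFields=True: null fields are dropped.
dropped = json.dumps({k: v for k, v in record.items() if v is not None})

print(kept)     # "manager" is present with value null
print(dropped)  # "manager" is absent entirely
```

Preserving the nulls is what lets the `IGNORE NULL UPDATES` clause below treat them meaningfully.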
Next, load the exported JSON back in as a streaming live table via Auto Loader:

-- Ingest the exported JSON files as a streaming source.
CREATE STREAMING LIVE TABLE Employee_temp
COMMENT "Employee temp"
AS SELECT * FROM cloud_files("/mnt/temp_table/Employee", "json");
-- Create and populate the target table.
CREATE OR REFRESH STREAMING LIVE TABLE dim_employee;

APPLY CHANGES INTO live.dim_employee
FROM stream(live.Employee_temp)
KEYS (employeeid)
IGNORE NULL UPDATES
SEQUENCE BY load_datetime
STORED AS SCD TYPE 2;
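For pipelines authored in Python rather than SQL, the same two steps can be sketched with the `dlt` module. This is a sketch assuming the Databricks Delta Live Tables Python API (`dlt.table`, `dlt.create_streaming_table`, `dlt.apply_changes`); it only runs inside a DLT pipeline, and the table/path names mirror the SQL above:

```python
import dlt

@dlt.table(name="Employee_temp", comment="Employee temp")
def employee_temp():
    # Auto Loader equivalent of cloud_files("/mnt/temp_table/Employee", "json").
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/temp_table/Employee")
    )

# Create the target table, then apply changes into it as SCD Type 2.
dlt.create_streaming_table("dim_employee")

dlt.apply_changes(
    target="dim_employee",
    source="Employee_temp",
    keys=["employeeid"],
    sequence_by="load_datetime",
    ignore_null_updates=True,
    stored_as_scd_type=2,
)
```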