zero234
New Contributor III
since 02-16-2024
05-15-2024

User Stats

  • 7 Posts
  • 0 Solutions
  • 3 Kudos given
  • 0 Kudos received

User Activity

So I have created a Delta Live Table which uses spark.sql() to execute a query and uses df.write.mode("append").insertInto() to insert data into the respective table, and at the end I return a dummy table, since this was the requirement. So now I have also ...
I am trying to create 2 streaming tables in one DLT pipeline; both read JSON data from different locations and both have different schemas. The pipeline executes, but no data is inserted into either table, whereas when I try to run each table indiv...
So I have this nested data with 200+ columns, and I have extracted this data into JSON files. When I use the below code to read the JSON files, if there are a few columns in the data which have no value at all, it doesn't include those columns in the schema...
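A plausible cause (hedged, since the post is truncated): `spark.read.json` infers the schema from the data it sees, so a field that is absent from every record never appears in the schema; passing an explicit schema to the reader avoids this. The sketch below illustrates the principle with plain Python dicts rather than a Spark cluster; the field names (`id`, `city`) are made up for illustration.

```python
import json

# Two sample records; neither contains the "city" field at all
# (the field is absent, not null).
records = ['{"id": 1}', '{"id": 2}']

# Inference from data alone: a field absent from every record is never seen,
# analogous to spark.read.json omitting all-empty columns from the schema.
inferred = set()
for r in records:
    inferred.update(json.loads(r).keys())
print(inferred)  # {'id'} -- "city" is lost

# Declaring the expected fields up front (like passing an explicit StructType
# via spark.read.schema(...).json(...)) keeps every column, defaulting to None.
expected_fields = ["id", "city"]
rows = [{f: json.loads(r).get(f) for f in expected_fields} for r in records]
print(rows)  # every row now has both fields
```

The same idea in Spark is to build a `StructType` covering all 200+ columns (or capture it once from a complete sample) and supply it to the reader instead of relying on inference.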
I have created a DLT pipeline which reads data from JSON files stored in a Databricks volume and puts the data into a streaming table. This was working fine. When I tried to read the data that was inserted into the table and compare the values with t...
I have created a materialized view using a Delta Live Tables pipeline, and for some reason it is overwriting the data every day. I want it to append data to the table instead of doing a full refresh. Suppose I had 8 million records in the table; if I run the...
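For context (an illustrative sketch, not the poster's code): in Delta Live Tables a materialized view is recomputed from its source on each pipeline update, while a streaming table processes only new input and appends it, so switching the table type is the usual way to get append behavior. The plain-Python toy below just contrasts the two refresh semantics; the row values are made up.

```python
# Rows already in the table and rows arriving in today's input.
existing_rows = ["r%d" % i for i in range(8)]
todays_input = ["r8", "r9"]

# Materialized-view semantics: the table is rebuilt from the current source,
# so only today's input survives (an overwrite / full refresh).
mv_table = list(todays_input)

# Streaming-table semantics: only the new input is processed and appended
# to what is already there.
st_table = existing_rows + list(todays_input)

print(len(mv_table), len(st_table))  # 2 10
```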