rt-slowth
New Contributor III
since 08-29-2023
Last active a week ago

User Stats

  • 10 Posts
  • 0 Solutions
  • 1 Kudos given
  • 1 Kudos received

User Activity

I created a separate pipeline notebook to generate the table via DLT, and a separate notebook to write the entire output to Redshift at the end. The table created via DLT is read with spark.read.table("{schema}.{table}"). This way, I can import [MATERIALI...
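A minimal sketch of that downstream notebook, assuming it runs as a Databricks notebook (so spark is predefined) and uses the Databricks-bundled Redshift connector; the schema/table names, JDBC URL, S3 temp directory, and IAM role are placeholders:

    # Downstream notebook: read the table materialized by the DLT pipeline
    # and write the full output to Redshift.
    schema = "analytics"       # placeholder
    table = "daily_summary"    # placeholder

    df = spark.read.table(f"{schema}.{table}")

    (
        df.write.format("redshift")
        .option("url", "jdbc:redshift://example-cluster:5439/dev?user=USER&password=PASS")  # placeholder
        .option("dbtable", "public.daily_summary")                                          # placeholder
        .option("tempdir", "s3a://my-bucket/redshift-temp/")                                # placeholder
        .option("aws_iam_role", "arn:aws:iam::123456789012:role/redshift-copy")             # placeholder
        .mode("overwrite")
        .save()
    )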
If anyone has example code for building a CDC live streaming pipeline over data generated by AWS DMS using import dlt, I'd love to see it. I'm currently able to see the parquet file starting with LOAD on the first full load to S3 and the CDC parquet file after ...
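Not a tested end-to-end example, but a sketch of the shape such a pipeline could take, assuming Auto Loader ingests the DMS output and dlt.apply_changes merges it into a streaming table; the bucket path, the order_id key, and the dms_timestamp ordering column (its name depends on the DMS task's TimestampColumnName setting) are placeholders:

    import dlt
    from pyspark.sql import functions as F

    # Placeholder S3 prefix where AWS DMS writes the LOAD* full-load parquet
    # files and the subsequent CDC parquet files for one source table.
    DMS_PATH = "s3://my-dms-bucket/public/orders/"

    @dlt.view(name="orders_dms_raw")
    def orders_dms_raw():
        # Auto Loader incrementally picks up both the initial LOAD* files
        # and the later CDC files from the same prefix.
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "parquet")
            .option("cloudFiles.inferColumnTypes", "true")
            .load(DMS_PATH)
        )

    dlt.create_streaming_table("orders")

    dlt.apply_changes(
        target="orders",
        source="orders_dms_raw",
        keys=["order_id"],                    # placeholder primary key
        sequence_by=F.col("dms_timestamp"),   # placeholder DMS timestamp column
        apply_as_deletes=F.expr("Op = 'D'"),  # DMS marks CDC deletes with Op = 'D'
        except_column_list=["Op", "dms_timestamp"],
    )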
I want to test a pipeline created with dlt and Python in VS Code.
    from databricks.connect import DatabricksSession
    from data.dbx_conn_info import DbxConnInfo

    class SparkSessionManager:
        _instance = None
        _spark = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = s...
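A minimal completed sketch of that singleton, assuming the standard __new__ pattern behind the truncated excerpt and that Databricks Connect picks up credentials from the default profile or environment variables (the DbxConnInfo helper from the excerpt is omitted here); get_spark is an assumed method name:

    from databricks.connect import DatabricksSession


    class SparkSessionManager:
        # Singleton wrapper so local tests in VS Code reuse one remote
        # Databricks Connect session.
        _instance = None
        _spark = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)
            return cls._instance

        def get_spark(self):
            # Lazily create the session; DatabricksSession.builder reads the
            # connection details from the default configuration.
            if self._spark is None:
                self._spark = DatabricksSession.builder.getOrCreate()
            return self._spark


    if __name__ == "__main__":
        spark = SparkSessionManager().get_spark()
        spark.sql("SELECT 1").show()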
When connecting to Redshift from Spark on a shared cluster in Databricks, if there is no abnormality in the source data but the data suddenly decreases, what causes should I check? Also, is there any way to check the widget values or code variables on each execution?
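One simple way to check the widget values used on each run, assuming the notebook defines widgets (the names run_date and target_table below are placeholders) and runs on Databricks where dbutils is available; the printed values then appear in that execution's job run output:

    # Record the widget values at the top of the notebook so each
    # execution's parameters can be reviewed afterwards.
    run_date = dbutils.widgets.get("run_date")            # placeholder widget name
    target_table = dbutils.widgets.get("target_table")    # placeholder widget name

    print(f"run_date={run_date}, target_table={target_table}")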