159312
New Contributor III
since 06-24-2022
06-26-2023

User Stats

  • 8 Posts
  • 0 Solutions
  • 0 Kudos given
  • 4 Kudos received

User Activity

I tried to load a static table as a source to a streaming DLT pipeline. I understand this is not optimal, but it provides the best path toward eventually having a full streaming pipeline. When I do, I get the following error: pyspark.sql.utils.Analysis...
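A minimal sketch of one way to mix a static table into a streaming DLT pipeline: define the static table as a plain batch read and join it into the streaming table. The table names (lookup_db.static_dimension, raw_events) and the join key are assumptions for illustration only.

```python
import dlt

STATIC_TABLE = "lookup_db.static_dimension"  # assumed existing Delta table

@dlt.table(comment="Batch read of the static table, re-read on every pipeline update")
def static_dimension():
    # A plain batch read sidesteps the AnalysisException raised when a
    # non-streaming source is handed directly to a streaming table.
    return spark.read.table(STATIC_TABLE)

@dlt.table(comment="Streaming table enriched with the static dimension")
def enriched_events():
    events = dlt.read_stream("raw_events")  # assumed upstream streaming table
    dim = dlt.read("static_dimension")      # batch read of the table defined above
    return events.join(dim, on="id", how="left")  # "id" is a hypothetical key
```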
I have a notebook used for a DLT pipeline. The pipeline should perform an extra task when it is run as a full refresh. Right now I have to set an extra configuration parameter whenever I run a full refresh. Is there a way to programmatically...
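A sketch of the workaround the post describes: a custom key added to the pipeline configuration before triggering a full refresh is read with spark.conf.get inside the notebook. The key name mypipeline.full_refresh and the table names are assumptions; this is not a built-in DLT flag or automatic detection of a full refresh.

```python
import dlt

# Assumption: "mypipeline.full_refresh" is set to "true" in the pipeline's
# configuration before a full refresh is triggered; it is not a built-in setting.
IS_FULL_REFRESH = spark.conf.get("mypipeline.full_refresh", "false").lower() == "true"

@dlt.table
def curated_events():
    df = dlt.read_stream("raw_events")  # hypothetical upstream table
    if IS_FULL_REFRESH:
        # Extra work that should only happen on a full refresh.
        df = df.dropDuplicates(["event_id"])  # hypothetical key column
    return df
```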
From a notebook I can import the log4j logger from sc and write to a log like so: log4jLogger = sc._jvm.org.apache.log4j; LOGGER = log4jLogger.LogManager.getLogger(__name__); LOGGER.info("pyspark script logger initialized"). But this does not work in a Delt...
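For reference, the snippet above reformatted, plus a possible fallback for DLT notebooks using Python's standard logging module. Whether and where the Python logger's output surfaces in the pipeline's driver logs is an assumption, not a documented guarantee.

```python
import logging

# In an ordinary notebook, the JVM log4j logger is reachable through the
# notebook-provided SparkContext `sc`:
log4jLogger = sc._jvm.org.apache.log4j
LOGGER = log4jLogger.LogManager.getLogger(__name__)
LOGGER.info("pyspark script logger initialized")

# Possible fallback for a DLT pipeline notebook, where `sc` may not be exposed
# the same way: the standard Python logging module (assumption: its output
# ends up in the pipeline's driver logs).
logging.basicConfig(level=logging.INFO)
py_logger = logging.getLogger("dlt_pipeline")
py_logger.info("python logger initialized")
```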
I'm new to Spark and Databricks and I'm trying to write a pipeline to take CDC data from a Postgres database, stored in S3, and ingest it. The file names are numerically ascending unique ids based on datetime (i.e. 20220630-215325970.csv). Right now auto...
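A minimal Auto Loader sketch for ingesting the CSV files described above. The bucket paths, schema location, checkpoint location, and target table name are all hypothetical placeholders.

```python
# Incrementally pick up new CSV files from the landing path with Auto Loader.
df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/postgres_cdc")  # hypothetical
    .option("header", "true")
    .load("s3://my-bucket/cdc/postgres/")  # hypothetical landing path
)

# Write to a bronze table, processing all available files and then stopping.
(
    df.writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/postgres_cdc")  # hypothetical
    .trigger(availableNow=True)
    .toTable("bronze.postgres_cdc")  # hypothetical target table
)
```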
When trying to ingest parquet files with Auto Loader using the following code: df = (spark .readStream .format("cloudFiles") .option("cloudfiles.format","parquet") .load(filePath)) I get the following error: java.lang.UnsupportedOperationException:...
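If the UnsupportedOperationException concerns schema inference (a common cause when Auto Loader reads parquet without a schema on older runtimes), one workaround is to supply the schema explicitly, or to set cloudFiles.schemaLocation on runtimes that support inference. The schema, path, and option choice here are assumptions for illustration.

```python
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

filePath = "s3://my-bucket/raw/parquet/"  # hypothetical path standing in for the original

# Hypothetical schema; adjust to match the actual parquet layout.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("event_time", TimestampType(), True),
])

df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .schema(schema)  # an explicit schema avoids relying on schema inference
    # Alternatively, on newer runtimes:
    # .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/parquet")
    .load(filePath)
)
```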