Error when executing spark.readStream Script
08-25-2025 01:29 PM - edited 08-25-2025 01:39 PM
Hi all,
When I try to execute the script (as per the screenshot below) in a notebook cell, I get an error message. I am using Databricks Free Edition, and I am not sure whether the error relates to the compute cluster I am using.
Any guidance would be greatly appreciated.
Thanks
Giuseppe
08-25-2025 01:45 PM
Can you please copy the entire error message and paste it here?
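When a notebook cell only shows a truncated error, the full stack trace can be captured programmatically and pasted into a reply. A minimal sketch, using only the standard library; `failing_stream` is a hypothetical stand-in for the streaming call that errors in the notebook:

```python
import traceback

def run_and_capture(fn):
    """Run a callable; return None on success, otherwise the full stack trace as a string."""
    try:
        fn()
        return None
    except Exception:
        return traceback.format_exc()

def failing_stream():
    # Hypothetical stand-in for the query that errors in the notebook.
    raise RuntimeError("simulated streaming failure")

trace = run_and_capture(failing_stream)
```

Printing `trace` gives the complete traceback, which is usually more useful for diagnosis than the one-line summary the cell output shows.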
08-25-2025 01:53 PM
Yep, as requested.
Thanks
08-26-2025 02:17 AM
Free Edition clusters come with restricted resources, and certain features may not be supported. You may want to review your cluster settings, check that your script is compatible with the available compute, and consider whether upgrading to a paid tier is necessary for your use case.
08-26-2025 03:43 AM
Hi @giuseppe_esq ,
Can you please share the DBR version, cluster configuration, and the log4j file containing the error stack trace for further review?
08-26-2025 08:59 AM - edited 08-26-2025 09:01 AM
Hi,
Thanks for your reply.
Apologies if I am sending the incorrect information. I am in the process of learning Databricks and adding content to my personal blog.
DBR Version
Cluster Configurations
From the Compute menu, this is the only cluster available in the Databricks Free Edition:
From the Workspace Notebook, I also tried to attach the General Compute option, though this didn't resolve the issue either:
Log4j File
I have attached the log4j file for the time I executed the following query, which errored:
# Incrementally (or stream) data using Auto Loader
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("header", "true")
    .option("sep", ",")
    .option("inferSchema", "true")
    .option("cloudFiles.schemaLocation", f"{checkpoint_file_location}")
    .load("/Volumes/workspace/python_auto_loader/csv_staging")
    .writeStream
    .option("checkpointLocation", f"{checkpoint_file_location}")
    .trigger(once=True)
    .toTable("workspace.python_auto_loader.python_csv_autoloader")
)
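Before re-running the query on a cluster, the static pieces of the call can be sanity-checked locally. A minimal sketch, assuming the option keys and table name from the snippet above (the `schemaLocation` path shown here is an assumption standing in for `checkpoint_file_location`); it verifies that the target table uses the three-level `catalog.schema.table` form that Unity Catalog expects:

```python
# Option keys copied from the Auto Loader call above; the path is an
# assumed placeholder for the notebook's checkpoint_file_location variable.
AUTOLOADER_OPTIONS = {
    "cloudFiles.format": "csv",
    "header": "true",
    "sep": ",",
    "inferSchema": "true",
}

TARGET_TABLE = "workspace.python_auto_loader.python_csv_autoloader"

def is_three_level_name(name: str) -> bool:
    """Unity Catalog tables are addressed as catalog.schema.table."""
    parts = name.split(".")
    return len(parts) == 3 and all(part.isidentifier() for part in parts)
```

A malformed table name (for example, a one- or two-part name when the workspace expects three) is a common cause of `toTable` failures, so this check is a cheap first step before digging into cluster logs.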
If you require any additional information, please message me again.
Thank you for your help.
Giuseppe
09-06-2025 06:36 AM
Hi Sidhant,
Are there any updates on this please?
Thanks for your time and help
Giuseppe