I can use pandas to read local files in a notebook, such as those located in /tmp. However, when I run two consecutive notebooks within the same job and read files with pandas in both, I encounter a permission error in the second notebook stating that ...
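I can't confirm the exact cause from the truncated error, but on serverless jobs each notebook task can run in its own isolated environment, so a file written to /tmp by one task is not necessarily visible or readable to the next. A minimal sketch of the usual workaround, handing the file between tasks through a Unity Catalog Volume instead of /tmp (the main/default/scratch names are hypothetical):

# Notebook 1: read locally, then write to a shared Volume path
import pandas as pd

pdf = pd.read_csv("/tmp/input.csv")  # works inside this notebook's own environment
pdf.to_csv("/Volumes/main/default/scratch/input.csv", index=False)

# Notebook 2: read it back from the Volume, which both tasks can reach
pdf2 = pd.read_csv("/Volumes/main/default/scratch/input.csv")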
Can Databricks' default serverless compute install Scala packages? I need to use the spark-sftp package, but it seems that serverless is different from all-purpose compute, and I can only install Python packages. There is another question. I can use p...
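Serverless compute does only support Python libraries, so the Scala spark-sftp package is out of reach there. One possible pure-Python substitute is the paramiko SFTP client; a minimal sketch, with hypothetical host, credentials, and paths:

# In a Databricks notebook, install first in its own cell:
# %pip install paramiko

import paramiko

# Hypothetical connection details; replace with your own host and credentials.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

# Download the remote file to a path this notebook can write to.
sftp.get("/remote/data.csv", "/tmp/data.csv")

sftp.close()
transport.close()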
An error occurred while converting a timestamp in the yyyyMMddHHmmssSSS format:

from pyspark.sql.functions import to_timestamp_ntz, col

df = spark.createDataFrame([("20250730090833000",)], ["datetime"])  # trailing comma: a one-element tuple, not a bare string
df2 = df.withColumn("dateformat", to_timestamp_ntz(col("datetime"), "yyyyMMddHHmmssSSS"))
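Spark 3's Java-time parser is known to reject millisecond digits glued directly onto the seconds field, which is the usual trigger for this kind of failure. One commonly suggested knob is the legacy parser policy; I'm not certain it changes the behavior of to_timestamp_ntz specifically, so treat this as something to verify on your runtime:

# Hedged sketch: fall back to the pre-Spark-3 parser, which accepts some
# patterns the new Java-time parser rejects. This is the standard knob for
# SparkUpgradeException-style parse failures; whether it applies to
# to_timestamp_ntz is worth confirming on your Databricks runtime.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")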
When I execute the statement dbutils.fs.ls("file:/tmp/"), I receive the following error:

ExecutionError: (java.lang.SecurityException) Cannot use com.databricks.backend.daemon.driver.WorkspaceLocalFileSystem - local filesystem access is forbidden

Does an...
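For what it's worth, dbutils.fs itself still works on compute where the driver-local filesystem is blocked, as long as the path is not a file:/ path. A short sketch, with a hypothetical Unity Catalog Volume path:

# On compute where file:/ access is forbidden, list a Unity Catalog Volume
# (hypothetical path below) or a DBFS path instead of the local filesystem.
display(dbutils.fs.ls("/Volumes/main/default/scratch/"))
display(dbutils.fs.ls("dbfs:/tmp/"))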
Hello @Pilsner, thank you for your reply. The situation is slightly different: I transferred the file from the SFTP system to the local path of Databricks, read the file into pandas, and then passed it to Spark. In this job, although the two notebooks ac...
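A minimal sketch of that flow, assuming hypothetical paths and table names, with the pandas-to-Spark handoff kept inside the same notebook task so the local file never has to survive across tasks:

import pandas as pd

# Hypothetical local path, populated by the SFTP download earlier in this
# same notebook task.
pdf = pd.read_csv("/tmp/data.csv")

# Hand the pandas DataFrame to Spark within the same task, rather than
# relying on a second notebook being able to read the same local path.
sdf = spark.createDataFrame(pdf)
sdf.write.mode("overwrite").saveAsTable("main.default.sftp_data")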
Hi @szymon_dybczak, thank you for your response. I did apply some substring concatenation logic (sketched below) to make the conversion work, but the most straightforward way would still be using the yyyyMMddHHmmssSSS format directly. I checked the link you shared; this appears to be ...
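For readers hitting the same error, a minimal sketch of that kind of substring workaround, assuming the input column is always exactly 17 digits and reusing the df from the original question: split off the millisecond digits, rejoin them behind a "." separator, and parse with a pattern the new Java-time parser accepts.

from pyspark.sql.functions import col, concat, lit, substring, to_timestamp_ntz

df3 = df.withColumn(
    "dateformat",
    to_timestamp_ntz(
        concat(
            substring(col("datetime"), 1, 14),   # yyyyMMddHHmmss part
            lit("."),
            substring(col("datetime"), 15, 3),   # SSS (millisecond) part
        ),
        "yyyyMMddHHmmss.SSS",
    ),
)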
Thank you very much. I created a dedicated compute instance, and I'm now able to access local files. Can I ask one more question? When creating a compute instance, what options should I pay attention to?