I only have an AWS Access Key ID and Secret Access Key, and I want to use this information to access S3. However, the official documentation states that I need to set the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables, but I cannot ...
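For reference, a minimal sketch of what this can look like on a classic cluster, where the S3A keys are set as Spark configuration at notebook scope; the secret scope name, key names, and bucket path below are hypothetical placeholders, not anything from the docs:

# Minimal sketch: pass the AWS key pair to the S3A connector via Spark conf.
# The secret scope ("my_scope"), key names, and bucket path are hypothetical.
access_key = dbutils.secrets.get(scope="my_scope", key="aws_access_key_id")
secret_key = dbutils.secrets.get(scope="my_scope", key="aws_secret_access_key")

spark.conf.set("fs.s3a.access.key", access_key)
spark.conf.set("fs.s3a.secret.key", secret_key)

# Read directly from the bucket once the keys are visible to the session.
df = spark.read.csv("s3a://my-bucket/path/data.csv", header=True)
df.show(5)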
I can use pandas to read local files in a notebook, such as those located in tmp. However, when I run two consecutive notebooks within the same job and read files with pandas in both, I encounter a permission error in the second notebook stating that ...
Can the default Serverless cluster of Databricks install Scala packages?
I need to use the spark-sftp package, but it seems that serverless is different from all-purpose compute, and I can only install Python packages? There is another question. I can use p...
An error occurred while converting a timestamp in the yyyyMMddHHmmssSSS format
from pyspark.sql.functions import to_timestamp_ntz, col, lit
df = spark.createDataFrame(
[("20250730090833000")], ["datetime"])
df2 = df.withColumn("dateformat", to_t...
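The code above is cut off, but if the compact pattern yyyyMMddHHmmssSSS is being rejected by the parser, one possible workaround is sketched below: rewrite the 17-digit string into a delimited form first and then convert it with to_timestamp_ntz. The regex and target pattern are assumptions, not the original code.

# Minimal sketch of a workaround, assuming every value is exactly 17 digits
# (yyyyMMddHHmmssSSS): rewrite into a delimited form, then parse.
from pyspark.sql.functions import to_timestamp_ntz, regexp_replace, col, lit

df = spark.createDataFrame([("20250730090833000",)], ["datetime"])

delimited = regexp_replace(
    col("datetime"),
    r"^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{3})$",
    "$1-$2-$3 $4:$5:$6.$7",
)

df2 = df.withColumn(
    "dateformat",
    to_timestamp_ntz(delimited, lit("yyyy-MM-dd HH:mm:ss.SSS")),
)
df2.show(truncate=False)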
When I execute the statement dbutils.fs.ls("file:/tmp/") I receive the following error: ExecutionError: (java.lang.SecurityException) Cannot use com.databricks.backend.daemon.driver.WorkspaceLocalFileSystem - local filesystem access is forbidden. Does an...
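As a small sketch, assuming the goal is just to inspect files under /tmp on the driver, plain Python file APIs can be used instead of dbutils.fs, which is blocked here:

# Minimal sketch: list /tmp on the driver with standard Python instead of dbutils.fs.
import os

for name in os.listdir("/tmp"):
    path = os.path.join("/tmp", name)
    size = os.path.getsize(path) if os.path.isfile(path) else "<dir>"
    print(path, size)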
@szymon_dybczak Sorry, I'm a newbie. Currently, I can only add S3 as external data through a role. Regarding your suggestion about using UC, can I turn the CSV file in S3 into a Volume in UC using only an access key ID and secret key, or is there another method?
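One possible sketch, assuming the key-based Spark configuration from earlier in the thread works in your workspace: read the CSV straight from S3 and land it in a UC Volume path. The bucket, catalog, schema, and volume names are hypothetical.

# Minimal sketch: land a CSV that lives in S3 into a Unity Catalog Volume.
# Bucket, catalog, schema, and volume names are hypothetical, and the S3A keys
# are assumed to already be configured for the session.
df = spark.read.csv("s3a://my-bucket/path/data.csv", header=True)

(df.write
   .mode("overwrite")
   .option("header", True)
   .csv("/Volumes/my_catalog/my_schema/my_volume/data_csv"))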
Hi @szymon_dybczak, thank you very much for your answer. But I can't use another cluster, I can only use serverless; can I set them for serverless? I also configured it at notebook scope, but it is not working properly at the moment. I have been tol...
Hello @Pilsner, thank you for your reply. The situation is slightly different: I transferred the file from the SFTP system to the local path of Databricks, read the file into pandas, and then passed it to Spark. In this job, although the two notebooks ac...
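A minimal sketch of the flow described here, with a hypothetical local path; the earlier SFTP transfer is assumed to have already written the file:

# Minimal sketch of the described flow: local file -> pandas -> Spark.
# The path and file name are hypothetical placeholders.
import pandas as pd

local_path = "/tmp/sftp_download/data.csv"   # written earlier by the SFTP step
pdf = pd.read_csv(local_path)                # read the local file with pandas
sdf = spark.createDataFrame(pdf)             # hand the data over to Spark
sdf.show(5)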