We read data from CSV files in the volume into the table using COPY INTO. The first 200 files were added without problems, but now we are no longer able to add any new data to the table and the error is FAILED_READ_FILE.NO_HINT. The CSV format is always th...
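For reference, a minimal sketch of the COPY INTO pattern described above; the catalog, schema, table, and volume path are placeholders, and the PATTERN clause is optional (it only restricts which files in the volume are picked up):

spark.sql("""
  COPY INTO main.default.target_table
  FROM '/Volumes/main/default/landing/csv/'
  FILEFORMAT = CSV
  PATTERN = '*.csv'                              -- optional: only load CSV files
  FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true')
  COPY_OPTIONS ('mergeSchema' = 'true')
""")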
I'm trying to perform a merge inside a streaming foreachBatch using the command: microBatchDF._jdf.sparkSession().sql(self.merge_query). Streaming runs fine if I use a Personal cluster, while if I use a Shared cluster streaming fails with the following ...
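For context, a minimal sketch of the foreachBatch merge pattern that avoids the private _jdf accessor (which Shared / Unity Catalog clusters block), assuming a runtime with PySpark 3.3+ where DataFrame.sparkSession is available; the source stream, table, and column names are placeholders:

from pyspark.sql import DataFrame

def merge_micro_batch(microBatchDF: DataFrame, batch_id: int) -> None:
    # Expose the micro-batch as a temp view, then run the MERGE through the
    # batch-local session rather than through microBatchDF._jdf.
    microBatchDF.createOrReplaceTempView("updates")
    microBatchDF.sparkSession.sql("""
        MERGE INTO dev.gold.target AS trg
        USING updates AS src
        ON trg.id = src.id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)

# source_stream is a hypothetical streaming DataFrame (e.g. from spark.readStream)
(source_stream.writeStream
    .foreachBatch(merge_micro_batch)
    .option("checkpointLocation", "/Volumes/main/default/checkpoints/merge_demo")
    .start())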
According to the documentation, the WHERE predicate in a DELETE statement should support subqueries, including IN, NOT IN, EXISTS, NOT EXISTS, and scalar subqueries. If I try to run a query like the following (a complete sketch of the pattern follows the excerpt): DELETE FROM dev.gold.table AS trg
WHERE EXISTS (
...
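For completeness, a minimal sketch of the correlated EXISTS form being described; dev.gold.table comes from the excerpt above, while the source table and join column are placeholders:

spark.sql("""
    DELETE FROM dev.gold.table AS trg
    WHERE EXISTS (
        SELECT 1
        FROM dev.silver.deleted_keys AS src
        WHERE src.id = trg.id
    )
""")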
I would like to create a Databricks Job where the 'Run as' field is set to a Service Principal. The Job points to notebooks stored in Azure DevOps. The steps I've already performed are: I created the Service Principal and I'm now able to see it in the ...
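As a reference for the 'Run as' setup, a hedged sketch using the databricks-sdk for Python (field and enum names as in recent SDK versions); the repo URL, notebook path, cluster id, and the service principal's application id are all placeholders:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

job = w.jobs.create(
    name="devops-notebook-job",
    git_source=jobs.GitSource(
        git_url="https://dev.azure.com/<org>/<project>/_git/<repo>",
        git_provider=jobs.GitProvider.AZURE_DEV_OPS_SERVICES,
        git_branch="main",
    ),
    tasks=[
        jobs.Task(
            task_key="run_notebook",
            notebook_task=jobs.NotebookTask(
                notebook_path="notebooks/my_notebook",   # path inside the repo
                source=jobs.Source.GIT,
            ),
            existing_cluster_id="<cluster-id>",          # or a new_cluster spec
        )
    ],
    # 'Run as' the service principal, identified by its application id
    run_as=jobs.JobRunAs(service_principal_name="<application-id>"),
)
print(job.job_id)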
I added the service principal in Admin Settings > Service Principal and then enabled all the Configurations "allow cluster creation", "databricks SQL access" and "workspace access". In the Permission settings I have enabled "Service principal: Manage...
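For the workspace-level part of that setup, a hedged sketch of adding the service principal with the same entitlements via the databricks-sdk for Python; the display name and Azure application id are placeholders:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import iam

w = WorkspaceClient()

sp = w.service_principals.create(
    display_name="jobs-sp",
    application_id="<azure-application-id>",
    entitlements=[
        iam.ComplexValue(value="allow-cluster-create"),
        iam.ComplexValue(value="databricks-sql-access"),
        iam.ComplexValue(value="workspace-access"),
    ],
)
print(sp.id)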
Py4JJavaError: An error occurred while calling o392.sql. : org.apache.spark.SparkException: [FAILED_READ_FILE.NO_HINT] Error while reading file dbfs:/Volumes/...txt. SQLSTATE: KD001 at org.apache.spark.sql.errors.QueryExecutionErrors$.cannotReadFiles...
Hi @Walter_C, I have all the Experimental Features enabled, is there something else to activate? Thanks, Diego