- 2756 Views
- 2 replies
- 2 kudos
Using (hostname is hidden): kafka = spark.readStream.format("kafka").option("kafka.sasl.mechanism", "SCRAM-SHA-512").option("kafka.security.protocol", "SASL_SSL").option("kafka.sasl.jaas.config", f'org.apache.kafka.common.security...
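A minimal sketch of how the truncated snippet above likely continues, assuming the standard Spark Structured Streaming Kafka options; the broker host, topic name, and the `user`/`password` variables are placeholders, not values from the post.

```python
# Hedged reconstruction of the truncated reader; broker, topic, and credentials
# are illustrative placeholders.
jaas_config = (
    "org.apache.kafka.common.security.scram.ScramLoginModule required "
    f'username="{user}" password="{password}";'
)

kafka = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<broker-host>:9092")  # hidden in the original post
    .option("kafka.sasl.mechanism", "SCRAM-SHA-512")
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.jaas.config", jaas_config)
    .option("subscribe", "<topic-name>")                      # placeholder topic
    .load()
)
```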
Latest Reply
With a Table ACL (TACL) enabled cluster you get many restrictions, so streaming will not work. Generally, you can only read objects registered in the metastore, so please disable Table ACLs for your use case. Additionally, remember that Unity Catalog doesn't support streaming...
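A rough sketch, not from the original thread: the Spark conf keys below are the ones typically associated with table access control on a cluster, so a cluster created without them (a standard, non-TACL cluster, as the reply suggests) would not carry these restrictions. Verify the exact keys against your workspace's cluster configuration before relying on them.

```python
# Assumed TACL-related cluster Spark conf (illustrative; check your cluster's
# configuration). A streaming job would run on a cluster without these settings.
tacl_cluster_conf = {
    "spark.databricks.acl.dfAclsEnabled": "true",            # marks the cluster as TACL-enabled
    "spark.databricks.repl.allowedLanguages": "python,sql",  # language restriction on TACL clusters
}
```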
1 More Reply
- 1083 Views
- 0 replies
- 0 kudos
Hi, I have a solution design question on which I am looking for some help. We have 2 environments in Azure (dev and prod); each environment has its own ADLS storage account with a different name, of course. Within the Databricks code we are NOT leveraging the mou...
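A hedged sketch, not the poster's design: the question is truncated, but assuming the cut-off sentence refers to not using mount points, one common pattern is to parameterize the storage account name per environment and build abfss:// paths directly. The widget name, account names, and container name here are illustrative placeholders.

```python
# Pick the environment at run time; account and container names are placeholders.
dbutils.widgets.text("env", "dev")
env = dbutils.widgets.get("env")

storage_accounts = {
    "dev": "mydevadlsaccount",    # placeholder dev storage account
    "prod": "myprodadlsaccount",  # placeholder prod storage account
}

container = "data"  # placeholder container
base_path = f"abfss://{container}@{storage_accounts[env]}.dfs.core.windows.net"

df = spark.read.format("parquet").load(f"{base_path}/input/some_dataset")
```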
by kjoth • Contributor II
- 12531 Views
- 9 replies
- 7 kudos
I have created an external table using Spark via the command below (using Data Science & Engineering): df.write.mode("overwrite").format("parquet").saveAsTable(name=f'{db_name}.{table_name}', path="dbfs:/reports/testing"). I have tried to delete a row based on...
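A hedged side note, not the answer given in this thread: DELETE FROM is only supported on Delta tables, so row-level deletes against a Parquet external table like the one above will fail. A sketch of the same table written as Delta follows; db_name, table_name, and the path mirror the question, while the WHERE clause is an illustrative placeholder.

```python
# Write the external table in Delta format so that row-level DELETE is supported.
(df.write
   .mode("overwrite")
   .format("delta")
   .saveAsTable(name=f"{db_name}.{table_name}", path="dbfs:/reports/testing"))

# Delete a row; the predicate is a placeholder, not from the original post.
spark.sql(f"DELETE FROM {db_name}.{table_name} WHERE id = 42")
```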
Latest Reply
Hi @karthick J, can you try to delete the row and execute your command on a non-high-concurrency cluster? The reason I'm asking is that we first need to isolate the error message and understand why it is happening to be able to find the best ...
8 More Replies
- 708 Views
- 0 replies
- 0 kudos
What is the use if I am able to upload but not able to read? I only have read access on the cluster.
- 1256 Views
- 1 replies
- 0 kudos
I have used Ranger in Apache Hadoop and it works fine for my use case. Now that I am migrating my workloads from Apache Hadoop to Databricks...
Latest Reply
Currently, Table ACLs do not support column-level security. There are several tools, such as Privacera, that have better integration with Databricks.
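A hedged workaround sketch, not mentioned in the thread: a common way to approximate column-level security with Table ACLs on Databricks is to expose a view that masks sensitive columns per group, then grant access on the view instead of the underlying table. Database, table, view, column, and group names below are placeholders.

```python
# Dynamic view that hides the salary column from users outside a given group;
# is_member() is evaluated per querying user. All object names are illustrative.
spark.sql("""
    CREATE OR REPLACE VIEW hr.employees_masked AS
    SELECT
      id,
      name,
      CASE WHEN is_member('payroll_admins') THEN salary ELSE NULL END AS salary
    FROM hr.employees
""")

# Grant access to the masked view only, not the base table.
spark.sql("GRANT SELECT ON VIEW hr.employees_masked TO `analysts`")
spark.sql("DENY SELECT ON TABLE hr.employees TO `analysts`")
```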