- 8 Views
- 0 replies
- 0 kudos
Hi, I want to set a default ACL that applies to all created jobs and clusters, according to a cluster policy for example, but currently I need to apply my ACL to every created job/cluster separately. Is there a way to do that? BR,
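For reference, the per-job approach described above can at least be scripted instead of done by hand, using the Databricks REST Permissions API. A minimal sketch, where the group name is a hypothetical placeholder and the `PATCH /api/2.0/permissions/jobs/{job_id}` call (shown in comments) should be verified against your workspace's API version:

```python
import json

def build_job_acl_payload(group_name: str, permission_level: str = "CAN_MANAGE_RUN") -> dict:
    """Build a Permissions API payload granting one group a permission level on a job."""
    return {
        "access_control_list": [
            {"group_name": group_name, "permission_level": permission_level}
        ]
    }

# In practice this payload would be PATCHed to each job, e.g. with `requests`:
#   requests.patch(
#       f"{host}/api/2.0/permissions/jobs/{job_id}",
#       headers={"Authorization": f"Bearer {token}"},
#       data=json.dumps(build_job_acl_payload("data-engineers")),
#   )
payload = build_job_acl_payload("data-engineers")  # "data-engineers" is a hypothetical group
```

Looping this over the jobs list is still per-job application, not a true workspace-wide default, which is what the question is asking about.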
- 58 Views
- 2 replies
- 0 kudos
Hi, Is there any native connector available to connect to Salesforce Core (not Cloud) in Databricks? If there is no native connector, what are the recommended approaches to connect to Salesforce Core? Thanks, Subashini
Latest Reply
Hi @sdurai,
Yes. Databricks has a native Salesforce connector for core Salesforce (Sales Cloud / Service Cloud / Platform objects) via Lakeflow Connect - Salesforce ingestion connector. It lets you create fully managed, incremental pipelines from Sal...
1 More Replies
by
IM_01
• Contributor II
- 1010 Views
- 19 replies
- 3 kudos
Hi, A column was deleted on the source table. When I ran LSDP, it failed with the error DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_LOG: Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename or drop or datatype ch...
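The `_USE_LOG` suffix in that error name points at Delta's schema tracking log. A minimal sketch of the option it refers to, assuming the stream reads a Delta source; the path is a hypothetical placeholder, and the exact behaviour for dropped columns should be checked against your runtime's Delta docs:

```python
# Options for a Delta streaming read that must survive a non-additive schema
# change (dropped/renamed column). The path is a hypothetical placeholder.
delta_stream_options = {
    # Directory where Delta records the schema changes this stream has seen;
    # typically placed alongside the stream's checkpoint location.
    "schemaTrackingLocation": "/checkpoints/my_stream/_schema_log",
}

# In a Databricks notebook this would be used as:
# df = (spark.readStream
#           .options(**delta_stream_options)
#           .table("source_table"))
```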
Latest Reply
This looks like a very practical template, especially for teams trying to structure their Data & AI strategy without overcomplicating things. The step-by-step format and examples should be really helpful for workshops and collaborative sessions. Curi...
18 More Replies
by
mits1
• New Contributor III
- 102 Views
- 7 replies
- 0 kudos
Hi, I am exploring schema inference and schema evolution using Auto Loader. I am reading a single-line JSON file and writing to a Delta table which does not exist already (creating it on the fly), using PySpark (below is the code). Code: spark.readStream...
Latest Reply
Hi @mits1, can you try adding this option as well: {"multiLine": "true"}
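To make the suggestion concrete, a minimal sketch of the Auto Loader options with `multiLine` added; the schema location path is a hypothetical placeholder:

```python
# Auto Loader options for reading JSON whose records span multiple lines
# (e.g. one pretty-printed object per file). Paths are hypothetical placeholders.
autoloader_options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "/tmp/autoloader/schema",
    "multiLine": "true",  # parse a JSON record that spans several physical lines
}

# In a notebook this would be used as:
# df = (spark.readStream
#           .format("cloudFiles")
#           .options(**autoloader_options)
#           .load("/path/to/json"))
```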
6 More Replies
- 434 Views
- 7 replies
- 0 kudos
We are using Databricks to connect to a Glue catalog which contains Iceberg tables. We are using DBR 17.2 and adding the JARs org.apache.iceberg:iceberg-spark-runtime-4.0_2.13:1.10.0 and org.apache.iceberg:iceberg-aws-bundle:1.10.0. The Spark config is then...
Latest Reply
Hi @stemill ,
The way of connecting to Iceberg tables managed by a Glue catalog that you described is not officially supported, because spark_catalog is not a generic catalog slot – it's a special, tightly wired session catalog with a lot of assumptio...
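For context, the stock Iceberg-on-Spark pattern registers Glue under its own catalog name rather than replacing spark_catalog. A sketch of such a Spark config, with the catalog name and bucket as hypothetical placeholders (this illustrates upstream Iceberg settings, not a Databricks-supported configuration):

```
spark.sql.catalog.glue_iceberg               org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.glue_iceberg.catalog-impl  org.apache.iceberg.aws.glue.GlueCatalog
spark.sql.catalog.glue_iceberg.io-impl       org.apache.iceberg.aws.s3.S3FileIO
spark.sql.catalog.glue_iceberg.warehouse     s3://my-bucket/warehouse
```

Tables would then be addressed as `glue_iceberg.db.table`, leaving the session catalog untouched.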
6 More Replies
by
mordex
• New Contributor III
- 40 Views
- 0 replies
- 0 kudos
Title: Databricks workflows for APIs with different frequencies (cluster keeps restarting). Hey everyone, I'm stuck with a Databricks workflow design and could use some advice. Currently, we are calling 70+ APIs. Right now the workflow looks something l...
- 1619 Views
- 3 replies
- 2 kudos
Two issues: 1. What is the behavior of cloudFiles.inferColumnTypes with and without cloudFiles.inferSchema? Why would you use both? 2. When can cloudFiles.inferColumnTypes be used without a schema checkpoint? How does that affect the behavior of cloud...
Latest Reply
Behavior of cloudFiles.inferColumnTypes with and without cloudFiles.inferSchema: When cloudFiles.inferColumnTypes is enabled, Auto Loader attempts to identify the appropriate data types for columns instead of defaulting everything to strings, which i...
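To illustrate the setting discussed above, a minimal sketch of an Auto Loader configuration with type inference enabled; the schema location path is a hypothetical placeholder:

```python
# Without inferColumnTypes, schema inference for JSON/CSV lands every column as STRING.
# With it enabled, Auto Loader samples the data and infers types (int, double, ...).
autoloader_options = {
    "cloudFiles.format": "json",
    # Where Auto Loader checkpoints the inferred schema across restarts.
    "cloudFiles.schemaLocation": "/tmp/autoloader/schema",  # hypothetical path
    "cloudFiles.inferColumnTypes": "true",
}

# Usage in a notebook:
# df = spark.readStream.format("cloudFiles").options(**autoloader_options).load(src)
```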
2 More Replies
- 49 Views
- 0 replies
- 0 kudos
Hi All, I'm looking to implement an automated, scalable, and auditable purge mechanism on Azure Databricks to manage data retention, deletion, and archival policies across our Unity Catalog-governed Delta tables. I've come across various approaches, s...
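One possible building block for such a mechanism, as a minimal sketch: a retention DELETE followed by VACUUM, parameterised per table. Table and column names here are hypothetical placeholders; in practice this would be driven from a config table and wrapped with audit logging:

```python
def build_purge_statements(table: str, ts_column: str, retention_days: int) -> list:
    """Build the SQL for one purge pass: delete expired rows, then vacuum old files."""
    return [
        f"DELETE FROM {table} "
        f"WHERE {ts_column} < current_date() - INTERVAL {retention_days} DAYS",
        f"VACUUM {table}",  # removes files no longer referenced, after Delta's retention window
    ]

# In a Databricks job, each statement would be run via spark.sql(stmt).
stmts = build_purge_statements("main.sales.orders", "event_ts", 365)
```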
- 132 Views
- 4 replies
- 5 kudos
Hi all, I’ve been looking into the Python Data Source API and wanted to get some feedback from others who may be experimenting with it. One of the more common challenges I run into is working with applications that expose APIs but don’t have out-of-the...
Latest Reply
Adding on to @edonaire, whose points are accurate.
@beaglerot, your contacts project is the right use case for the pattern you have. Small data, infrequent changes, direct read into bronze. That works. The real question you're asking is what happens when t...
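The "small data, infrequent changes, direct read into bronze" pattern can be sketched without any Spark-specific machinery: a paginated REST pull that yields rows, which a Python Data Source reader (or a plain notebook) can then hand to Spark. Everything here (page shape, cursor field, the in-memory stand-in for the remote API) is a hypothetical placeholder:

```python
from typing import Callable, Iterator, Optional

def fetch_all_pages(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Pull every record from a cursor-paginated API.

    `fetch_page(cursor)` must return {"records": [...], "next_cursor": str | None};
    in real use it would wrap requests.get() against the application's API.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["records"]
        cursor = page.get("next_cursor")
        if cursor is None:
            break

# Hypothetical in-memory stand-in for the remote API, for illustration:
_pages = {None: {"records": [{"id": 1}], "next_cursor": "p2"},
          "p2": {"records": [{"id": 2}], "next_cursor": None}}
rows = list(fetch_all_pages(lambda c: _pages[c]))
# A notebook could then do: spark.createDataFrame(rows).write.saveAsTable("bronze.contacts")
```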
3 More Replies
- 101 Views
- 3 replies
- 1 kudos
Hi, We are planning to rewrite our application (which was originally running in R) in Python. We chose to use Polars as it seems to be faster than pandas. We have functions written in R which we are planning to convert to Python. However, in one of ...
Latest Reply
Thank you @Louis_Frolio and @pradeep_singh for the detailed explanation. I will discuss your inputs with the team and get back in case we have more questions.
2 More Replies