I am wondering how similar the backend execution of the two APIs is. If I have code that performs the same operations written in both styles, is there any functional difference between them when it comes to execution?
Only HTTPS is supported right now. If SSH is required for your use case, please let your Databricks rep know and reference the Idea DB-I-3697 so that it can be prioritized.
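For example, a repo can be added over HTTPS through the Repos REST API. A minimal sketch; the workspace URL, token, and repo details are placeholders:

```python
import requests

# Placeholder workspace URL and personal access token.
host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"

resp = requests.post(
    f"{host}/api/2.0/repos",
    headers={"Authorization": f"Bearer {token}"},
    json={
        # HTTPS clone URL; SSH-style URLs (git@...) are not supported.
        "url": "https://github.com/example-org/example-repo.git",
        "provider": "gitHub",
        "path": "/Repos/someone@example.com/example-repo",
    },
)
resp.raise_for_status()
print(resp.json())
```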
You can clone any repo; the security concern is usually around proprietary code exfiltration, whether intentional or accidental.
It's not currently on the roadmap, but please create an Idea at https://ideas.databricks.com/ so the Product team can consider it based on demand.
Feature table deletion is a potentially dangerous operation, since downstream consumers of feature tables (models, online stores, jobs, etc.) may break due to the deletion. We might support a safe way to do this in the future. In the meantime, we may be ...
In the example response at https://docs.databricks.com/security/network/ip-access-list.html:

```json
{
  "ip_access_list": {
    "list_id": "<list-id>",
    "label": "office",
    "ip_addresses": [
      "1.1.1.1",
      "2.2.2.2/21"
    ],
    "address_co...
```
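For reference, a list like the one in that response can be created with the IP access list API. A minimal sketch; the workspace URL and token are placeholders:

```python
import requests

# Placeholder workspace URL and personal access token.
host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"

resp = requests.post(
    f"{host}/api/2.0/ip-access-lists",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "label": "office",
        "list_type": "ALLOW",
        "ip_addresses": ["1.1.1.1", "2.2.2.2/21"],
    },
)
resp.raise_for_status()
# The response wraps the created list, as in the documented example.
print(resp.json()["ip_access_list"]["list_id"])
```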
The workspace audit logs should capture all workspace conf changes. You can check for the service name accountsManager and the action names createWorkspaceConfiguration or updateWorkspaceConfiguration.
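For example, if audit logs are delivered to cloud storage, a query along these lines surfaces those events. A sketch only; the delivery path is a placeholder for your configured audit log location:

```python
# Read delivered audit log JSON files (placeholder path).
logs = spark.read.json("s3://<audit-log-bucket>/<delivery-path>/workspaceId=*/date=*/")

# Filter to workspace configuration changes.
conf_changes = logs.filter(
    (logs.serviceName == "accountsManager")
    & (logs.actionName.isin("createWorkspaceConfiguration", "updateWorkspaceConfiguration"))
)
conf_changes.select("timestamp", "userIdentity", "requestParams").show(truncate=False)
```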
The code below gives a different result when executed with DB Connect than in a notebook:

```python
sc = spark.sparkContext
a = sc.accumulator(0)
rdd = sc.parallelize([1, 2, 3])

def f(x):
    global a
    a.add(x)

rdd.foreach(f)
rdd.count()
print(a.value)
```
This is a known limitation: accumulators do not work with DB Connect.
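Given that limitation, one workaround is to fold the aggregation into a regular RDD action, which returns its result to the client the same way in both environments. A minimal sketch:

```python
# Workaround sketch: compute the sum as an RDD action instead of mutating an
# accumulator, so the result travels back to the client under DB Connect too.
sc = spark.sparkContext
rdd = sc.parallelize([1, 2, 3])

total = rdd.sum()  # equivalent to rdd.reduce(lambda x, y: x + y)
print(total)       # 6 in both a notebook and a DB Connect session
```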
I have a spark-submit job, and I do not see autoscaling happening on the cluster at the time of execution.
This is working as expected: autoscaling is not available for spark-submit jobs. Run the job as a JAR job instead of spark-submit.
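For illustration, here is a sketch of creating such a JAR job through the Jobs 2.0 API on an autoscaling cluster; the workspace URL, token, JAR path, and main class are placeholders:

```python
import requests

# Placeholder workspace URL and personal access token.
host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"

job_spec = {
    "name": "example-jar-job",
    "new_cluster": {
        "spark_version": "10.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        # Autoscaling range; unavailable to spark-submit tasks.
        "autoscale": {"min_workers": 2, "max_workers": 8},
    },
    "libraries": [{"jar": "dbfs:/jars/example-app.jar"}],
    "spark_jar_task": {"main_class_name": "com.example.Main"},
}

resp = requests.post(
    f"{host}/api/2.0/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
resp.raise_for_status()
print(resp.json()["job_id"])
```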
I am ingesting change data from S3 using Auto Loader jobs. We have some very long string fields in the data. Does Spark/Delta cap the string length by default?
I understand Delta caching for the data files. Do we have anything similar for the metadata files? Will the Delta metadata get cached in the Delta cache?
The Delta log JSON files will be cached on the driver (in memory) if they are small enough (<10 MB); they are not stored in the Delta cache. Before every query, Delta checks whether the snapshot is stale and has to be rebuilt.
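To see the log files in question, you can list a table's `_delta_log` directory in a Databricks notebook; the table path below is a placeholder:

```python
# Sketch: list the Delta transaction log to see the small JSON commit files the
# driver caches in memory, alongside the periodic Parquet checkpoints.
log_files = dbutils.fs.ls("/path/to/table/_delta_log/")
for f in log_files:
    kind = "commit" if f.name.endswith(".json") else "checkpoint/other"
    print(f"{f.name:<40} {f.size:>10} bytes  ({kind})")
```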
And do I have any control over where and how it's saved?
The offline store is backed by Delta tables. As online stores, in AWS we support Amazon Aurora (MySQL-compatible) and Amazon RDS MySQL, and in Azure we support Azure Database for MySQL and Azure SQL Database: https://docs.microsoft.com/en-us/azure/d...
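As a sketch of the publish path, assuming the `databricks-feature-store` client on AWS; the hostname, secret prefixes, and table name below are placeholders:

```python
from databricks.feature_store import FeatureStoreClient
from databricks.feature_store.online_store_spec import AmazonRdsMySqlSpec

fs = FeatureStoreClient()

# Placeholder connection details for a MySQL-compatible RDS online store;
# credentials are resolved from Databricks secrets under these prefixes.
online_store = AmazonRdsMySqlSpec(
    hostname="example-db.cluster-abc123.us-west-2.rds.amazonaws.com",
    port=3306,
    read_secret_prefix="feature-store/read",
    write_secret_prefix="feature-store/write",
)

# Publish the Delta-backed offline feature table to the online store.
fs.publish_table(
    name="feature_store.customer_features",
    online_store=online_store,
)
```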