- 1650 Views
- 3 replies
- 1 kudos
Unity Catalog system tables (table_lineage, column_lineage) not populated
Hi community, we enabled Unity Catalog system schemas (including `access`) more than 24 hours ago in our sandbox. The schemas are showing ENABLE_COMPLETED, and other system tables (like query) are working fine. However, both `system.access.table_linea...
Hi @yavuzmert, is this still an issue? Are you seeing lineage for those tables in the UI? One thing to remember about system tables is that their data is updated throughout the day. Usually, if you don't see a log for a recent event, check back later...
- 1 kudos
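A quick way to confirm whether any lineage rows have landed yet is to query the system table directly. A minimal sketch; the three-level table name is a placeholder:

```sql
-- Check for recent lineage rows targeting one table.
SELECT source_table_full_name, target_table_full_name, event_time
FROM system.access.table_lineage
WHERE target_table_full_name = 'main.my_schema.my_table'
ORDER BY event_time DESC
LIMIT 10;
```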
- 613 Views
- 2 replies
- 1 kudos
Resolved! Asset Bundle Include Glob paths not resolving recursive directories
Hello, when trying to include resource definitions in nested YAML files, the recursive paths I am specifying in the include section are not resolving as expected. With the include path resources/**/*.yml and a directory structure as ...
This behavior is caused by the way the Databricks CLI currently handles recursive globbing for the include section in databricks.yml files. You are not misunderstanding; this is a limitation (and partially a bug) in how the CLI resolves glob patterns...
- 1 kudos
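If recursive globbing is indeed the limitation here, a common workaround is to enumerate each directory depth explicitly in databricks.yml. A sketch, assuming the directory layout from the question:

```yaml
# databricks.yml: list each nesting level instead of relying on `**`,
# which the CLI may not expand recursively inside `include`.
include:
  - resources/*.yml
  - resources/*/*.yml
  - resources/*/*/*.yml
```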
- 3243 Views
- 1 replies
- 0 kudos
How to find the legacy Hive metastore's connection to the SQL DB (used to store metadata)
I am logged into a workspace and checking the schemas in the legacy hive_metastore using serverless compute; I can see the schemas listed. However, when I create an all-purpose cluster and try to check the schemas in the legacy hive_metastore, I...
In Azure Databricks, the visibility difference you observe between Serverless SQL and All-Purpose Clusters when listing schemas in the hive_metastore is due to cluster-level configuration and how each environment connects to the underlying metastore....
- 0 kudos
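If the legacy metastore is an external one, a common cause is that the all-purpose cluster is missing the JDBC connection settings for it. A hedged sketch of the cluster-level Spark config that would be involved; host, database, version, and credentials are placeholders, and the password should come from a secret scope:

```
# Cluster Spark config for an external Hive metastore (all values are placeholders)
spark.sql.hive.metastore.version 2.3.9
spark.sql.hive.metastore.jars builtin
spark.hadoop.javax.jdo.option.ConnectionURL jdbc:sqlserver://<host>:1433;database=<metastore_db>
spark.hadoop.javax.jdo.option.ConnectionDriverName com.microsoft.sqlserver.jdbc.SQLServerDriver
spark.hadoop.javax.jdo.option.ConnectionUserName <user>
spark.hadoop.javax.jdo.option.ConnectionPassword {{secrets/<scope>/<key>}}
```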
- 188 Views
- 1 replies
- 2 kudos
Resolved! Looking for Databricks–Kinaxis Integration or Accelerator Information
Hi Databricks Community, I’m looking for information on the partnership between Databricks and Kinaxis. Specifically: Are there any official integrations or joint solutions available between the two platforms? Does Databricks provide any accelerators, r...
Greetings @vamsi_simbus, I did some digging and have some helpful information for you. Here’s a concise summary of what’s publicly available today on Databricks + Kinaxis. Official partnership and integration scope: A formal strategic partnersh...
- 2 kudos
- 260 Views
- 4 replies
- 2 kudos
Resolved! How to run black code-formating on the notebooks using custom configurations in UI
Hi all, I’m currently exploring how we can format notebook code using Black (installed via libraries) with specific configurations. I understand that we can configure Black locally using a pyproject.toml file. However, I’d like to know if there’s a way...
Hi @szymon_dybczak, thanks for your response. My team has been using the same setup you mentioned. I’d like to know if there’s a way to override the default configuration that Black uses in a cluster environment — for example, adjusting the line-leng...
- 2 kudos
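If the goal is just a custom line length on a cluster where the UI formatter ignores repo-level config, one option is to call Black programmatically. A minimal sketch using Black's public Python API; the line-length value is only an example:

```python
# Format a source string with a custom line length via Black's API.
import black

source = "x = {  'a': 37, 'b': 42 }\n"
mode = black.Mode(line_length=100)  # example value, not the UI formatter's default
print(black.format_str(source, mode=mode))
```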
- 422 Views
- 4 replies
- 4 kudos
Resolved! Disable SQL Warehouse during week-ends
Hello, I massively deployed SQL Warehouses in our data platform. Right now, most of them are running every hour (with some inactivity phases) because of Power BI report/job schedules. To limit cost, I would like to stop/disable some of them on Friday e...
I'd also like to provide you with some alternate options. Tagging & monitoring: use tags and cost dashboards to monitor weekend usage and identify high-cost warehouses for manual intervention. Serverless SQL warehouses: if not already in use, consider swi...
- 4 kudos
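For the stop/disable part specifically, one approach (a sketch, not an official weekend-schedule feature) is a small job, scheduled for Friday evening, that stops the target warehouses via the Databricks SDK; the warehouse ID is a placeholder:

```python
# Stop a SQL warehouse (wraps POST /api/2.0/sql/warehouses/{id}/stop).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up job credentials or a config profile
w.warehouses.stop(id="1234567890abcdef")  # placeholder warehouse ID
```

Scheduled with a cron expression for Friday evening (and a matching Monday-morning job if you want them warm again), this complements the warehouse's auto-stop setting rather than replacing it.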
- 1437 Views
- 5 replies
- 3 kudos
Databricks Runtime 16.4 LTS has inconsistent Spark and Delta Lake versions
Per the release notes for Databricks Runtime 16.4 LTS, the environment has Apache Spark 3.5.2 and Delta Lake 3.3.1: https://docs.databricks.com/aws/en/release-notes/runtime/16.4lts However, Delta Lake 3.3.1 is built on Spark 3.5.3; the newest version o...
Hi @Angus-Dawson, use Databricks Connect for local development/testing against a remote Databricks cluster. This ensures your code runs in the actual Databricks environment on Databricks-managed runtimes, which differ from the open-source versions (DBR...
- 3 kudos
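Getting started with Databricks Connect is short. A minimal sketch, assuming a configured authentication profile or environment variables:

```python
# Run local code against a remote Databricks cluster with
# Databricks Connect (databricks-connect >= 13.x).
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()  # reads your Databricks config
print(spark.range(5).count())  # executes on the remote cluster's DBR, not locally
```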
- 4110 Views
- 3 replies
- 2 kudos
Resolved! Networking Challenges with Databricks Serverless Compute (Control Plane) When Connecting to On-Prem
Hi Databricks Community, I'm working through some networking challenges when connecting Databricks clusters to various data sources and wanted to get advice or best practices from others who may have faced similar issues. Current setup: I have four type...
Thank you Louis for the detailed explanation and guidance!
- 2 kudos
- 195 Views
- 1 replies
- 1 kudos
Resolved! How safe are Databricks workspaces with user files uploaded to the workspace?
With the growing adoption of diverse machine learning, AI, and data science models available in the market, it has become increasingly challenging to assess the safety of processing these models—especially when considering the potential for malicious...
Hi @Chiran-Gajula, thanks for raising this. There are a few complementary controls that can be put in place across models, inference traffic, files, and observability. Is there currently any mechanism in place within Databricks to track and verify the ...
- 1 kudos
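As one illustration of the file-verification side (a generic technique, not a built-in Databricks mechanism): record a SHA-256 digest when an artifact is uploaded and compare it before loading. The path and expected digest below are placeholders:

```python
# Verify an uploaded artifact's integrity before loading it.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "<digest recorded at upload time>"  # placeholder
if sha256_of("/Workspace/Users/someone/model.pkl") != expected:  # placeholder path
    raise ValueError("Artifact hash mismatch; refusing to load the model")
```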
- 366 Views
- 3 replies
- 2 kudos
Resolved! Accessing Databricks data outside Databricks
Hi! What is the best way to access Databricks data outside Databricks, e.g. from Python code? The main problem is authentication: I want to access data to which I have permissions, but I would like to generate the token outside Databricks (e.g. via R...
Hi @maikel - You can set up a Service Principal in Databricks with a client ID and client secret. Then set up a Databricks profile and use Python code with that profile. Look at the profile section in step 2 to see how the profile can be set up with client ...
- 2 kudos
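A minimal sketch of the profile-based flow, assuming a ~/.databrickscfg profile named my-sp-profile that holds the service principal's client_id and client_secret (both names are placeholders):

```python
# Authenticate with a named config profile and list schemas in a catalog.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(profile="my-sp-profile")  # placeholder profile name
for schema in w.schemas.list(catalog_name="main"):  # placeholder catalog
    print(schema.full_name)
```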
- 417 Views
- 2 replies
- 3 kudos
Pre-Commit hook in Databricks
Hi team, does anyone have any idea how to use pre-commit hooks when developing via the Databricks UI? I would specifically want to use something like isort, black, ruff, etc. I have created .pre-commit-config.yaml and pyproject.toml files in my cloned repo folder, b...
Databricks Repos (Git folders) do not support Git hooks natively. The error you're seeing ("git failed. Is it installed, and are you in a Git repository directory?") is expected because: 1. The Databricks notebook environment does not expose a full Git C...
- 3 kudos
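Until hooks are supported, a workable substitute is to invoke the same tools directly, e.g. from a notebook cell or the web terminal. A sketch; the repo path is a placeholder:

```python
# Manually run the formatters pre-commit would have run.
import subprocess

repo = "/Workspace/Repos/me/my-repo"  # placeholder path to the Git folder
for cmd in (["black", "."], ["isort", "."], ["ruff", "check", ".", "--fix"]):
    subprocess.run(cmd, cwd=repo, check=True)
```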
- 15115 Views
- 24 replies
- 19 kudos
How do you disable serverless interactive compute for all users?
I don't want users using serverless interactive compute for their jobs. How do I disable it for everyone or for specific users?
At the local university, we have arranged, for the last few years, a course which uses Spark and Databricks for hands-on coding practice. There are 300 students on the course. We have controlled the cost by having a single common cluster. It has aut...
- 19 kudos
- 4053 Views
- 7 replies
- 2 kudos
Resolved! Best Practices for Mapping Between Databricks and AWS Accounts
Hi everyone, this is my first post here. I'm doing my best to write in English, so I apologize if anything is unclear. I'm looking to understand the best practices for how many environments to set up when using Databricks on AWS. I'm considering the f...
Hey @r_w_, if you think my answer was correct, it would be great if you could mark it as a solution to help future users. Thanks, Isi
- 2 kudos
- 3410 Views
- 2 replies
- 0 kudos
Resolved! New default notebook format (IPYNB) causes unintended changes on release
Dear Databricks, we have noticed the following issue since the new default notebook format was set to IPYNB. When we release our code from (for example) DEV to TST using a release pipeline built in Azure DevOps, we see unintended changes popping ...
Hi @Rvwijk, please take a look at this. This should solve your issue. I suspect the mismatch is happening because the previous commits include output for the notebook cells. You may need to perform a rebase of your repository and allow the output to b...
- 0 kudos
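If rebasing alone doesn't settle it, another option is to strip outputs in the release pipeline before comparing, so only code differences show up. A sketch using nbformat; the glob path is a placeholder:

```python
# Clear outputs and execution counts from .ipynb files in a pipeline step.
import glob
import nbformat

for path in glob.glob("notebooks/**/*.ipynb", recursive=True):  # placeholder glob
    nb = nbformat.read(path, as_version=4)
    for cell in nb.cells:
        if cell.cell_type == "code":
            cell.outputs = []
            cell.execution_count = None
    nbformat.write(nb, path)
```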
- 447 Views
- 1 replies
- 1 kudos
Resolved! AIM with Entra ID Groups – Users and Service Principals not visible in Workspace
Hello Community, I am testing Automatic Identity Management (AIM) in Databricks with Unity Catalog enabled. Steps I did: AIM is activated; in Microsoft Entra ID I created a group g1 and added user u1 and service principal sp1; ...
In Azure Databricks, when AIM is enabled, Entra users, service principals, and groups become available in the workspace as soon as they’re granted permissions. Group memberships, including nested groups, flow directly from Entra ID, so permissions al...
- 1 kudos
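In practice this means the group only shows up in the workspace after it is referenced in a grant. A minimal sketch using the group name from the question; the catalog name is a placeholder:

```sql
-- Granting to the Entra group lets AIM materialize it in the workspace.
GRANT USE CATALOG ON CATALOG main TO `g1`;
```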