Databricks Platform Discussions
Dive into comprehensive discussions covering various aspects of the Databricks platform. Join the conversation to deepen your understanding and maximize your usage of the Databricks platform.

Browse the Community

Data Engineering

Join discussions on data engineering best practices, architectures, and optimization strategies with...

12327 Posts

Data Governance

Join discussions on data governance practices, compliance, and security within the Databricks Commun...

535 Posts

Generative AI

Explore discussions on generative artificial intelligence techniques and applications within the Dat...

398 Posts

Machine Learning

Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithm...

1026 Posts

Warehousing & Analytics

Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Communi...

688 Posts

Activity in Databricks Platform Discussions

by maikel (Contributor II)
  • 336 Views
  • 4 replies
  • 1 kudos

Resolved! Uploading file to volume and start ingestion job

Hello Community! I am writing to you with an idea about a data ingestion job which we have to implement in our project. The data we have is in CSV file format and, depending on the case, it differs a little bit. Before uploading we pivot the CSV file...

Latest Reply
maikel
Contributor II
  • 1 kudos

Yeah, understood. Thank you very much once again! 

3 More Replies
by maikel (Contributor II)
  • 11 Views
  • 0 replies
  • 0 kudos

Job tasks monitoring

Hello Community, We have a case in our project that we would like to solve in an elegant and scalable manner. As always, I would really appreciate your suggestions and experience. In short: We have a multi-step job consisting of 4 stages. In one of the ...

by Danish11052000 (Contributor)
  • 1096 Views
  • 7 replies
  • 1 kudos

Resolved! How should I correctly extract the full table name from request_params in audit logs?

I’m trying to build a UC usage/refresh tracking table for every workspace. For each workspace, I want to know how many times a UC table was refreshed or accessed each month. To do this, I’m reading the Databricks audit logs and I need to extract only ...

Latest Reply
SteveOstrowski
Databricks Employee
  • 1 kudos

Hi @Danish11052000, You are on the right track with the COALESCE approach. The reason for the inconsistency is that different Unity Catalog action types populate different keys in request_params. Here is a breakdown of the key fields and which action...

6 More Replies
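The COALESCE approach described in the reply above can be sketched in plain Python. The key names used here are illustrative assumptions (different Unity Catalog action types populate different `request_params` keys), so check your own audit logs for the exact set:

```python
def extract_table_full_name(request_params):
    """Return the first populated table-name key from an audit-log
    request_params map, mirroring SQL's COALESCE.

    The key names below are illustrative assumptions; inspect your own
    audit logs for the keys each UC action type actually populates.
    """
    for key in ("full_name_arg", "table_full_name", "name"):
        value = request_params.get(key)
        if value:
            return value
    return None
```

In SQL against the audit table this would be the equivalent `COALESCE(request_params.full_name_arg, request_params.table_full_name, ...)` expression.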
by Pranav_1699 (New Contributor)
  • 20 Views
  • 0 replies
  • 0 kudos

Building a Spark Declarative Pipeline OSS with Apache Iceberg and AWS Glue Catalog

Hey everyone, I recently worked on building a modern financial data lakehouse using Spark Declarative Pipeline OSS (SDP OSS), Apache Iceberg, and AWS Glue Catalog. The blog covers: building declarative data pipelines with Spark; using Apache Iceberg a...

Data Engineering
Spark Declarative Pipelines
by jasmin_mbi (New Contributor)
  • 26 Views
  • 0 replies
  • 0 kudos

Solved

the problem disappeared but so did the button for deleting this post. 

by kohei-matsumura (Databricks Partner)
  • 602 Views
  • 3 replies
  • 1 kudos

Resolved! Best practices for health monitoring

Status page: https://status.databricks.com/
REST API: https://docs.databricks.com/api/workspace/workspace/getstatus
I'm trying to perform health monitoring using the above status page and API, but is this the best method? If the API returns an error ...

Latest Reply
kohei-matsumura
Databricks Partner
  • 1 kudos

We are considering using batch processing for system monitoring, but can we obtain the same information as the status page using the following API? https://docs.databricks.com/aws/ja/resources/status#%E5%85%AC%E9%96%8B%E3%82%B9%E3%83%86%E3%83%BC%E3%82...

2 More Replies
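One point raised in the thread is what to do when the status API itself errors. A minimal sketch of a poll classifier, assuming a JSON body with a `status` field (that field name is an assumption; adapt it to the actual response schema):

```python
import json


def classify_status(fetch):
    """Classify one poll of a status endpoint.

    `fetch` is any callable returning the response body as a JSON string
    (e.g. wrapping an HTTP GET of the status API). A network or parse
    failure is reported as "unknown" rather than "down", since an API
    error alone does not prove the platform itself is degraded.
    """
    try:
        body = json.loads(fetch())
    except Exception:
        return "unknown"
    return str(body.get("status", "unknown"))
```

Running this on a schedule (e.g. a small batch job) and alerting on consecutive non-"operational" results is one way to avoid false alarms from transient API errors.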
by mnissen1337 (New Contributor II)
  • 62 Views
  • 1 reply
  • 0 kudos

Managing Unity Catalog Permissions for Databricks Apps via DABs

I’m currently developing a Databricks App, and the app’s service principal needs access to Unity Catalog tables. From what I can tell, it doesn’t seem possible to grant Unity Catalog permissions through DABs yet — only through the UI, based on the cu...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @mnissen1337, there is a way to do this in DABs. Look at the following section in the documentation: Manage Databricks apps using Declarative Automation Bundles | Databricks on AWS. If my answer was helpful, please consider marking it as accepted solutio...

by sminamioka (New Contributor III)
  • 213 Views
  • 5 replies
  • 1 kudos

Compute tab doesn't show and doesn't give the option to create a cluster

I've just created an Azure Databricks workspace (Premium tier), and when trying to create a cluster, clicking on Compute automatically opens the SQL Warehouse menu; I'm not sure if it's a glitch, as shown below. Someone said "Ask the admin to ...

(screenshot attached: sminamioka_0-1778276402869.png)
Data Engineering
cluster
clusters
Latest Reply
gcj0310
Databricks Partner
  • 1 kudos

Hi @sminamioka, this does not look like a UI glitch. In newer Azure Databricks workspaces, access to classic compute / clusters depends on workspace entitlements and compute policy permissions. If clicking Compute takes you directly to SQL Warehouses, ...

4 More Replies
by truplusphi (New Contributor)
  • 40 Views
  • 0 replies
  • 1 kudos

TruProxy - Live Cost Estimator - Clusters

Hi everyone, I'm continuing to build a live cost estimator for Databricks to get immediate cost estimates every second instead of having to wait for the system tables to update. (See Live Cost Estimator - Databricks Community - 156374.) I've finished t...

by Guillermo-HR (New Contributor)
  • 69 Views
  • 1 reply
  • 0 kudos

Streaming read and writing with aggregation

Hi, I have the following problem: in a medallion architecture, on a bronze volume, I get files every month containing the data for each sensor reading during the period from the 1st of the month 00:00 to the last day 23:00. I have a manual job that calls the Python files ...

Latest Reply
Saritha_S
Databricks Employee
  • 0 kudos

Hi @Guillermo-HR  Yes — batch is usually the right fix here. What’s happening is that your query is using event-time window aggregation in Structured Streaming with append output mode. In that mode, Spark only emits a window after it is sure the wind...

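The append-mode behaviour described in the reply above can be illustrated with a toy pure-Python model (not Spark): a tumbling window's aggregate is only emitted once the watermark has passed the window's end, which is why the latest period's result is withheld until later data arrives.

```python
def emitted_windows(event_times, window_size, max_delay):
    """Toy model of event-time windowing in append output mode.

    A tumbling window [start, start + window_size) is emitted only once
    the watermark (max event time seen minus the allowed lateness) has
    passed the window end. This mirrors why a streaming query in append
    mode holds back the current window's aggregate, and why a scheduled
    batch job is often the simpler fix for monthly file drops.
    """
    watermark = max(event_times) - max_delay
    starts = {t - t % window_size for t in event_times}
    return sorted(s for s in starts if s + window_size <= watermark)
```

With events at times 0, 5, 12 and 25, a window size of 10 and a lateness of 3, only the windows starting at 0 and 10 are emitted; the window starting at 20 is still open.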
by liu (Databricks Partner)
  • 188 Views
  • 5 replies
  • 0 kudos

Inquiry regarding Serverless outbound IP ranges and Serverless compute firewall configuration

I am looking to obtain the Serverless outbound IP ranges. I found the instructions for obtaining them in this announcement "Serverless compute firewall configuration", which states that Databricks provides the outbound IP addresses in JSON format via...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @liu, if you're on Azure then this feature is in Private Preview. If you want to try it, you need to reach out to your Azure Databricks account team: IP addresses and domains for Azure Databricks services and assets - Azure Databricks | Microsoft Le...

4 More Replies
by cormierjohn (New Contributor)
  • 95 Views
  • 1 reply
  • 0 kudos

FMAPI Anthropic endpoint rejects requests with trailing assistant message — known limitation?

Hey all — looking for confirmation on a behavior I'm hitting on the Foundation Model API (pay-per-token) Anthropic-compatible endpoint, in case anyone else has worked around it.What I'm doing: serving Claude models through /serving-endpoints/anthropi...

Latest Reply
stbjelcevic
Databricks Employee
  • 0 kudos

Hi @cormierjohn , Your understanding is correct. The validation rejecting a trailing assistant turn is happening at the FMAPI proxy layer before the request reaches Claude, so any client that uses Anthropic's prefill primitive will 400 against this e...

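One client-side workaround for the validation described above is to strip the trailing assistant turn (the Anthropic-style prefill) before sending, then re-apply it to the model's reply yourself. This helper is a hypothetical sketch, not an official API:

```python
def split_prefill(messages):
    """Split off a trailing assistant turn (Anthropic-style prefill).

    Per the thread, the FMAPI Anthropic-compatible endpoint returns a
    400 when the final message has role "assistant". This hypothetical
    helper removes that turn so the request passes validation; the
    caller can prepend the returned prefill text to the model's reply.
    """
    if messages and messages[-1].get("role") == "assistant":
        return messages[:-1], messages[-1].get("content", "")
    return messages, ""
```

The trade-off is that the model no longer sees the prefill as its own partial output, so steering is weaker than true prefill; for strict output formats, a system-prompt instruction may be the better substitute.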
by playnicekids (New Contributor II)
  • 124 Views
  • 2 replies
  • 0 kudos

Recommended local development workflow for dashboard CI/CD with environment-specific catalog/schema?

Hi all, I’m trying to implement CI/CD for Databricks AI/BI dashboards using Declarative Automation Bundles, following guidance published by Databricks. The documentation recommends exporting dashboards as .lvdash.json using databricks bundle generate, ...

Latest Reply
stbjelcevic
Databricks Employee
  • 0 kudos

Hi @playnicekids, you've hit a known dev-UX gap. dataset_catalog and dataset_schema on the dashboard resource are the intended parameterization mechanism, but they only resolve at bundle deploy time, which is why workspace editing of an unqualified ...

1 More Replies
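For reference, the deploy-time parameterization mentioned in the reply above might look roughly like this in a bundle definition. The exact field placement is an assumption, so verify it against the dashboards resource documentation:

```yaml
# Sketch only -- verify against the bundle dashboards resource schema.
resources:
  dashboards:
    sales_dashboard:
      display_name: "Sales"
      file_path: ./src/sales.lvdash.json
      # Resolved at `databricks bundle deploy` time, per the reply above:
      dataset_catalog: ${var.catalog}
      dataset_schema: ${var.schema}
```

Per-environment targets can then override `var.catalog` and `var.schema`, which is what makes the same .lvdash.json deployable to dev and prod catalogs.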
by MWojcicki (New Contributor)
  • 81 Views
  • 1 reply
  • 0 kudos

Genie PDF export corrupts non-ASCII characters (Polish diacritics ł, ż, ś, ź, ę, ą)

When exporting a Genie conversation response to PDF, all Polish diacritical characters are systematically replaced with wrong ASCII characters, making the document unreadable for Polish-speaking users. Character substitution pattern: Expected / Rendered a...

Latest Reply
WiliamRosa
Databricks Partner
  • 0 kudos

Hi @MWojcicki, my understanding is that Genie space skills would not solve this issue. The problem you described appears to be a **PDF rendering/export bug**, not something related to Genie instructions, skills, or table definitions. Based on your techn...

by Radeesh (New Contributor)
  • 78 Views
  • 2 replies
  • 0 kudos

Unable to download the Data Ingestion with Lakeflow notebook

I have registered for the Data Engineer Learning Plan, but I am unable to set up the lab shown in the video. Additionally, I cannot find where to download the notebook ZIP file. Could you please help me with this?

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @Radeesh, Can you clarify which particular module you are referring to? Unfortunately, notebooks are not available for download in the current self-paced course. The narration is inherited from an earlier/instructor-led version of the material whe...

1 More Replies