Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best prac...
Join discussions on data engineering best practices, architectures, and optimization strategies with...
Join discussions on data governance practices, compliance, and security within the Databricks Commun...
Explore discussions on generative artificial intelligence techniques and applications within the Dat...
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithm...
Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Communi...
Is there a way to calculate the percentage of total, and also the cumulative percentage, using the Databricks dashboard's custom calculations? What function would be equivalent to the DAX ALL and ALLSELECTED functions, but using Databricks' aggregation f...
Hi @genebaldorios, I don't use Databricks dashboards on my project (we are a PBI shop), but I'd guess you need an aggregate function with an OVER clause and a cumulative frame:
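The reply above can be sketched in plain Python to show the arithmetic such window functions perform; in dashboard SQL the grand-total denominator would come from something like `SUM(sales) OVER ()` and the running total from `SUM(sales) OVER (ORDER BY ...)` (the `sales` column is a hypothetical example):

```python
# Plain-Python sketch of percent-of-total and cumulative percent,
# i.e. what SUM(x) OVER () and SUM(x) OVER (ORDER BY ...) provide in SQL.

def pct_of_total(values):
    """Each row as a percentage of the grand total (ALL-style denominator)."""
    total = sum(values)
    return [v * 100.0 / total for v in values]

def cumulative_pct(values):
    """Running-total percentage in row order."""
    total = sum(values)
    running = 0.0
    out = []
    for v in values:
        running += v
        out.append(running * 100.0 / total)
    return out

sales = [50, 30, 20]
print(pct_of_total(sales))   # [50.0, 30.0, 20.0]
print(cumulative_pct(sales)) # [50.0, 80.0, 100.0]
```

Whether the denominator respects or ignores dashboard filters (the ALL vs. ALLSELECTED distinction) depends on where the filter is applied relative to the window function.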
I am working on a use case in my personal Databricks account - AgenticDataPipeline. I am getting the error below when using the Anthropic Claude 3.7 Sonnet AI model: 15:52:56.267 | ERROR | [Pipeline] Task failed: ACCOUNT DATA NORMALIZATION: ops_bronze.account...
Hi, we have our Databricks Jobs deployed via DABs, and they have been running fine for a while now (approximately 1 month since we migrated from ADF). However, since yesterday we have been getting a weird issue while writing. See the error below: [STREAM_FAILED...
Hi @BMex, the link I shared about a similar issue contains some solutions; did any of them work for you?
I'm using dbt to run a model in Databricks. I have a view model that holds two months of data (~2 million rows). There are no wide-dependency transformations; all are CASE WHEN statements. The total column count is 234. Until yesterday the view was running fine, but toda...
I tried for 1-2 days and kept getting the same error. Only after I changed to table materialization did it work. Any idea why the view is failing?
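For reference, switching a dbt model's materialization as described above is a one-line config change at the top of the model file (the model and ref names below are placeholders):

```sql
-- models/my_view_model.sql (placeholder name)
{{ config(materialized='table') }}  -- was: materialized='view'

select *
from {{ ref('upstream_model') }}  -- placeholder ref
```

This can also be set per-folder in `dbt_project.yml` under the `models:` key instead of per-file.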
Ask: Can we get a UC catalog (like prod or genie) in the Free Edition of Databricks? Problem I am solving: structuring data in Databricks before sending customer and account data to Salesforce. Issue: cannot see workspace-local tables (workspace.default.structur...
The root cause of not seeing your workspace-local tables (workspace.default.structured_pdf_table) is the unavailability of a Unity Catalog or Delta Sharing connector configuration in your Free Edition workspace. To resolve this, you typically need admin ...
Hi team, I am testing some changes in a UAT/DEV environment and noticed that the model endpoints are very slow to deploy. Since the environment is just for testing and is not serving any production traffic, I was wondering if there is a way to expedite this ...
Hi @gbhatia, I’d need a few more details to fully understand your deployment, but in general, what can help is setting Compute type: CPU (cheaper and sufficient for testing) and Compute scale-out: Small (0–4 concurrency, 0–4 DBU), since you don’t need hig...
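As a concrete sketch of the settings mentioned above, this is roughly what the serving-endpoint config payload could look like, assuming the `/api/2.0/serving-endpoints` REST shape; the entity name and version are placeholders:

```python
# Hedged sketch of a serving-endpoint config for a test environment.
# In practice this would be sent via the Databricks SDK or REST API;
# here it is just the payload structure.
config = {
    "served_entities": [
        {
            "entity_name": "uat.models.my_model",  # placeholder UC model name
            "entity_version": "1",                 # placeholder version
            "workload_type": "CPU",                # cheaper, fine for testing
            "workload_size": "Small",              # low concurrency tier
            "scale_to_zero_enabled": True,         # idle endpoint scales down
        }
    ]
}
entity = config["served_entities"][0]
print(entity["workload_size"], entity["scale_to_zero_enabled"])
```

Scale-to-zero adds a cold-start delay on the first request after idling, which is usually an acceptable trade-off in a DEV/UAT environment.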
Hi, when deletion vectors are enabled on a Delta table, is there a guarantee that MERGE, UPDATE, or DELETE operations will not rewrite unmodified data, but will instead use deletion vectors to soft-delete rows in the original file? For example, suppose the table cur...
Hey @shanisolomonron, yes, you are right. The above sequence of actions always holds for MERGE and UPDATE. For DELETE, you don't see any "Add a new file" (step 3). And yes, if the table has the DV feature enabled, the writer/runtime supports DVs for...
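The commit pattern described above can be illustrated with a toy model (plain Python, not Delta internals): DELETE only records deleted row positions in a deletion vector against the existing file, while UPDATE/MERGE additionally adds a file containing the rewritten rows:

```python
# Toy illustration of deletion-vector behavior; file names and rows are
# made up, and this does not model real Delta transaction-log details.
files = {"part-000": ["a", "b", "c"]}  # one data file with three rows
deletion_vectors = {}                  # file -> set of deleted row positions

def delete_rows(file, positions):
    """DELETE with DVs: mark positions; the original file is untouched."""
    deletion_vectors.setdefault(file, set()).update(positions)

def update_row(file, position, new_value):
    """UPDATE/MERGE with DVs: soft-delete the old row, add a new file."""
    delete_rows(file, {position})
    new_file = f"part-{len(files):03d}"
    files[new_file] = [new_value]
    return new_file

update_row("part-000", 1, "b2")  # rewrite only row 'b'

live = [row for f, rows in files.items()
        for i, row in enumerate(rows)
        if i not in deletion_vectors.get(f, set())]
print(sorted(live))  # ['a', 'b2', 'c']
```

The key point the reply makes survives in the toy: the original file's bytes are never rewritten; only a DV is attached to it.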
Hi everyone, I'm looking to connect the managed MCP server to Claude Code on Linux and I'm struggling to get something working. MCP is activated on my workspace (I'm not sure that's related to what I want to do) and I just want to add the UC endpoint to Clau...
Hi @alxsbn , did you get this resolved? Targets that are a remote HTTP endpoint need `--transport http`. For example... claude mcp add --transport http databricks-server https://xx-yyy-zzz.cloud.databricks.com/api/2.0/mcp/gold/core (see: https://docs...
Hello, I have a Databricks table with a column using the new GEOMETRY type. When I try to access this table from a Spark workload, I am not able to describe the table or operate on any of its columns. My Spark config is the following, per the Databri...
Reading the UC OSS docs (https://docs.unitycatalog.io/usage/tables/formats/), it seems not all column data types are supported: "The columns of the table in SQL-like format 'column_name column_data_type'. Supported data types include BOOLEAN, BYTE, SHORT, INT, ...
I have been able to set up the JDBC driver with Databricks to connect to my Unity Catalog using local Spark sessions. When I try to retrieve tables in my schema I get this error: An error occurred while calling o43.sql.: io.unitycatalog.client.ApiExcepti...
Hi @NUKSY, @Jofes, this should be reported as a bug; see similar issues: https://github.com/unitycatalog/unitycatalog/issues/657 and https://github.com/unitycatalog/unitycatalog/issues/1077. Thanks!
Hi, I have a source table that is a Delta Live streaming table created using dlt.auto_cdc logic, and now I want to create another streaming table that filters the records from that table per client, but it should also have auto CDC logic for the...
Hi @tenzinpro, this is an expected error: "[DELTA_SOURCE_TABLE_IGNORE_CHANGES] Detected a data update". As explained in the error, this is currently not supported. If this is going to happen regularly and you are okay with skipping changes, set the option ...
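One relevant reader option here (my assumption about the truncated advice, based on the usual wording of this Delta error) is `skipChangeCommits`; a sketch of where it would go, since this is not runnable without a Spark session:

```python
# Hedged sketch: reader options for a Delta streaming source that hits
# DELTA_SOURCE_TABLE_IGNORE_CHANGES. Assumes the standard Delta option name;
# downstream consumers will NOT see the skipped updates/deletes.
reader_options = {
    "skipChangeCommits": "true",  # skip commits that update/delete rows
}
# In a real pipeline this would be applied roughly as:
#   spark.readStream.format("delta").options(**reader_options).table("source")
print(reader_options["skipChangeCommits"])
```

Skipping change commits trades correctness of downstream state for pipeline stability, so it only fits when the filtered table can tolerate missing those changes.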
Hi, I've got a problem and I have run out of ideas as to what else I can try. Maybe you can help? I've got a Delta table with hundreds of millions of records on which I have to perform relatively expensive operations. I'd like to be able to process some...
Hi @Michał , One detail/feature to consider when working with Declarative Pipelines is that they manage and auto-tune configuration aspects, including rate limiting (maxBytesPerTrigger or maxFilesPerTrigger). Perhaps that's why you could not see this...
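For readers outside Declarative Pipelines (which, as noted above, auto-tune these knobs), a sketch of setting the rate-limit options manually; the table name is a placeholder and a Spark session is required in practice:

```python
# Hedged sketch: manual micro-batch rate limits for a Delta streaming read.
# Values are illustrative, not recommendations.
rate_limits = {
    "maxFilesPerTrigger": "100",         # at most 100 files per micro-batch
    "maxBytesPerTrigger": str(1 << 30),  # ~1 GiB soft cap per micro-batch
}
# Roughly how it would be applied with a live session:
#   spark.readStream.format("delta").options(**rate_limits).table("big_table")
print(rate_limits["maxFilesPerTrigger"])
```

Bounding each micro-batch this way lets an expensive transformation chew through a very large table incrementally instead of in one oversized batch.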
While creating a Knowledge Assistant in Databricks, I encountered an issue where the endpoint update failed with the following message:"Failed to deploy: served entity creation aborted because the endpoint update timed out. Please see service logs fo...
Hi @sathyavikasini, are you still facing this issue? While creating the agent, do you see that your configured knowledge sources are processing? If you haven't done so already, you might check permissions on the catalog/schema/volume(s) that are ...
Hello, I’m interested in understanding whether it’s possible to embed multiple AI/BI dashboards created in Databricks within a Databricks app. Could you please share the steps or provide any documentation related to this? My goal is to use the app as ...
Just checking if anyone has already implemented this. Please share your thoughts.
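I have not implemented this myself, but one plausible sketch is serving a page from the app that iframes each dashboard's published embed URL; the host, URL path, and dashboard IDs below are placeholders, not a confirmed Databricks API:

```python
# Hypothetical sketch: one app page embedding several dashboards via iframes.
# Host, path scheme, and IDs are placeholders; each dashboard must be
# published and embeddable for something like this to work.
host = "https://my-workspace.cloud.databricks.com"  # placeholder host
dashboard_ids = ["abc123", "def456"]                # placeholder IDs

def embed_page(ids):
    """Return a minimal HTML page with one iframe per dashboard ID."""
    frames = "\n".join(
        f'<iframe src="{host}/embed/dashboards/{d}" '
        f'width="100%" height="600"></iframe>'
        for d in ids
    )
    return f"<html><body>{frames}</body></html>"

html = embed_page(dashboard_ids)
print(html.count("<iframe"))  # 2
```

Authentication is the hard part in practice: the iframe only renders if the viewer's session is authorized for the dashboard, so the workspace's embedding and sharing settings matter more than the HTML.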