Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices.
Anyone know where to see any logs related to Lakebase/Postgres? I have a Tableau Prep flow that is failing but the error is not clear and I'm trying to find out what the database is capturing.
Hi @pdiamond, you can try to use the Lakebase monitoring tools to capture the queries generated by Tableau Prep: Monitor | Databricks on AWS. Alternatively, it seems that you can also use external monitoring tools, so you can connect to your Lakebase instance usi...
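For readers hitting the same issue, a minimal sketch of the external-monitoring route, assuming the Lakebase instance accepts standard Postgres connections (host, database, and credentials below are placeholders): `pg_stat_activity` shows what each session, such as the Tableau Prep connection, is currently running.

```python
# Sketch: inspect live sessions on a Lakebase (Postgres) instance.
# Host/user/password are hypothetical; Lakebase logins are typically token-based.
import psycopg2

conn = psycopg2.connect(
    host="instance-name.database.cloud.databricks.com",  # hypothetical host
    dbname="databricks_postgres",
    user="me@example.com",
    password="<oauth-token>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    # pg_stat_activity is standard Postgres: one row per session,
    # including the text of the currently running statement.
    cur.execute("""
        SELECT pid, state, query_start, left(query, 200) AS query
        FROM pg_stat_activity
        WHERE state <> 'idle'
        ORDER BY query_start
    """)
    for row in cur.fetchall():
        print(row)
```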
The Irish Data Protection Commission has fined LinkedIn for breaches of GDPR related to how user data was processed. This case highlights the ongoing importance of transparent data practices and strict compliance, even for large platforms. It’s a use...
We're experiencing connectivity issues between Power BI (refreshing a dataset) and Databricks. On the Databricks side, we're using a SQL Warehouse (serverless), which reports queries timing out: "Query has been timed out due to inactivity." On the Power B...
Hi, I have created a function that I have applied as a row filter function to multiple tables. The function takes one input parameter (a column value from the table). It then uses session_user() to look up a user in our users table. If the user is foun...
Hi, thanks for the reply! Yeah, no, the raw data has not changed at all. My thought is: can the row filter function handle reading from another table than the raw data table? I mean, I do a read on the users table to find a value, and want to compare t...
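For readers with the same question: row filter functions can reference tables other than the one being filtered; the documented pattern uses exactly this kind of lookup (or mapping) table. A minimal sketch, with all catalog/schema/table/column names as placeholders:

```python
# Sketch: a row filter UDF that consults a separate users table.
# The EXISTS lookup compares the querying user against the row's column value.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.demo.region_filter(region STRING)
    RETURN EXISTS (
        SELECT 1
        FROM main.demo.users u            -- lookup table, not the filtered table
        WHERE u.email = session_user()    -- match the querying user
          AND u.region = region           -- compare against the row's value
    )
""")

# Attach the filter so queries on the table only return permitted rows.
spark.sql("""
    ALTER TABLE main.demo.sales
    SET ROW FILTER main.demo.region_filter ON (region)
""")
```

The lookup table must be resolvable when the filter is evaluated, so check grants on it if rows unexpectedly disappear.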
Is it possible to use the code execution tool with instances of Claude hosted through Databricks? If I try to format the payload like in Anthropic's documentation, I get an error that the function isn't defined properly: tools=[{ "type": ...
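One hedged observation, sketched below: Databricks serves Claude through an OpenAI-compatible endpoint, so client-side *function* tools in the OpenAI schema generally work, while Anthropic server-side tools such as code execution may not be passed through the gateway, which would be consistent with the error above. Endpoint name, host, and the tool itself are placeholders, not confirmed behavior.

```python
# Sketch: calling a Databricks-hosted Claude endpoint with an OpenAI-style
# client-side function tool (workspace host and token are placeholders).
from openai import OpenAI

client = OpenAI(
    api_key="<databricks-token>",
    base_url="https://<workspace-host>/serving-endpoints",
)

resp = client.chat.completions.create(
    model="databricks-claude-3-7-sonnet",   # example endpoint name
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "run_python",           # hypothetical client-side tool
            "description": "Execute a Python snippet and return stdout.",
            "parameters": {
                "type": "object",
                "properties": {"code": {"type": "string"}},
                "required": ["code"],
            },
        },
    }],
)
print(resp.choices[0].message)
```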
I am trying to start a cluster in Azure Databricks; our policy is to use a proxy for outbound traffic. I have configured http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, no_proxy and NO_PROXY in env variables and globally. Made sure proxy is bypassin...
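For reference, a hedged sketch of where proxy settings usually live on a cluster spec (proxy host and port are placeholders). Two caveats: the JVM ignores http_proxy-style env vars and needs its own -D flags, and none of these settings affect the VM's initial bootstrap to the control plane, which is governed by network configuration.

```python
# Sketch: proxy configuration fields on a Clusters API spec (values are
# hypothetical). spark_env_vars covers shell/Python tooling on the nodes;
# spark_conf injects JVM proxy flags for driver and executors.
proxy = "http://proxy.mycorp.example:8080"
no_proxy = "localhost,127.0.0.1,.azuredatabricks.net"

cluster_proxy_conf = {
    "spark_env_vars": {
        "http_proxy": proxy, "https_proxy": proxy,
        "HTTP_PROXY": proxy, "HTTPS_PROXY": proxy,
        "no_proxy": no_proxy, "NO_PROXY": no_proxy,
    },
    "spark_conf": {
        "spark.driver.extraJavaOptions":
            "-Dhttp.proxyHost=proxy.mycorp.example -Dhttp.proxyPort=8080 "
            "-Dhttps.proxyHost=proxy.mycorp.example -Dhttps.proxyPort=8080",
        "spark.executor.extraJavaOptions":
            "-Dhttp.proxyHost=proxy.mycorp.example -Dhttp.proxyPort=8080 "
            "-Dhttps.proxyHost=proxy.mycorp.example -Dhttps.proxyPort=8080",
    },
}
```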
In the Databricks documentation they use the classic TPC-H tables to create some examples: https://docs.databricks.com/aws/en/metric-views/create/ It's pretty easy to create a metric view using orders and do a measure like SUM(totalprice). Now let's s...
Hi @alxsbn, Metric View joins are designed for many-to-one relationships. Because orders and lineitem have a one-to-many relationship (one order has multiple line items), you cannot join lineitem onto an orders-based Metric View and aggregate...
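One possible workaround, sketched against the TPC-H sample schema (the view name is a placeholder): collapse lineitem to the orders grain first, so any join the metric view performs is many-to-one and order-level measures are not double counted.

```python
# Sketch: pre-aggregate lineitem to one row per order before joining.
spark.sql("""
    CREATE OR REPLACE VIEW main.demo.order_lines_agg AS
    SELECT
        l_orderkey,
        SUM(l_extendedprice * (1 - l_discount)) AS net_revenue,
        COUNT(*)                                AS line_count
    FROM samples.tpch.lineitem
    GROUP BY l_orderkey
""")
# A metric view over orders joined to this one-row-per-order view can now
# expose SUM(net_revenue) alongside SUM(totalprice) safely.
```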
Hello everyone, I'm using the Databricks CLI to move several directories from Azure Repos to the Databricks Workspace. The problem is that files are not updating properly, with no error displayed. The self-hosted agent in the pipeline I'm using has installed the ...
Hi @Giuseppe_C, the Databricks CLI is not syncing updates during your pipeline runs. Several teams we work with have faced the same issue with legacy CLI versions and workspace import behavior. We've helped them stabilize CI/CD pipelines for Databricks, i...
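One pattern that tends to behave more predictably in CI than plain imports, sketched below with placeholder paths: the newer CLI's `sync` command with `--full`, which performs a full rather than incremental synchronization and so avoids stale files that an import without overwrite can silently leave behind.

```python
# Sketch: a pipeline step that mirrors a local checkout into the workspace.
import subprocess

subprocess.run(
    [
        "databricks", "sync", "--full",
        "./repos/my-project",             # hypothetical local checkout
        "/Workspace/Shared/my-project",   # hypothetical target path
    ],
    check=True,  # fail the pipeline step if the CLI reports an error
)
```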
Hi everyone, I'm looking to understand the real difference between Agentic AI services and the traditional AI automation tools many businesses already use. In your experience, what makes Agentic AI services more advanced or effective? Are the advantages...
Hi @Jackryan360, I saw your question on Agentic AI vs traditional automation. Many teams are exploring the same distinction as they shift from rule-based workflows to autonomous, multi-step agent systems. At Kanerika, we’ve been helping companies eva...
Hi everyone, sorry, I'm new here. I'm considering migrating to Databricks, but I need to clarify a few things first. When I define and launch an application, I see that I can specify the number of workers, and then later configure the number of execut...
Regarding your Databricks question about workers versus executors: many teams encounter the same sizing and configuration issues when evaluating a migration. At Kanerika, we help companies plan cluster architecture, optimize Spark workloads, and avoid overspen...
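On the underlying question: on Databricks, each worker node runs exactly one executor, so the worker count you set on the cluster is effectively the executor count, and per-executor cores and memory follow from the chosen node type. A rough sketch of how that maps onto a cluster spec (all names and sizes are placeholders):

```python
# Sketch: a hypothetical Clusters API spec illustrating the mapping.
cluster_spec = {
    "cluster_name": "migration-poc",        # hypothetical name
    "spark_version": "16.4.x-scala2.12",    # example DBR string
    "node_type_id": "Standard_D4ds_v5",     # 4 cores -> 4 cores per executor
    "num_workers": 4,                       # => 4 executors, 16 total cores
}
```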
Dear Databricks Community, we encountered a bug in the behaviour of the import method explained in the documentation: https://learn.microsoft.com/en-us/azure/databricks/files/workspace-modules#autoreload-for-python-modules. A couple of months ago we migrated our pipelin...
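For readers following along, this is the documented autoreload pattern the thread refers to, as a minimal notebook-cell sketch (the module name is a placeholder): with autoreload level 2 enabled, edits to workspace .py files are picked up without detaching and reattaching.

```python
# Sketch: enable autoreload in a Databricks notebook cell.
%load_ext autoreload
%autoreload 2

import my_pipeline_utils  # hypothetical module under /Workspace/...
```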
I have a running notebook job where I am doing some processing and writing the tables to a foreign catalog. It has been running successfully for about a year. The job is scheduled and runs on a job cluster with DBR 16.2. Recently, I had to add new noteb...
Thank you @Louis_Frolio! Your suggestions really helped me understand the scenario.
We created a Databricks job with a JAR (Scala code) and provided parameters/JAR parameters, and we are able to read those as arguments in the main method. When we run the job with parameters (run parameters / job parameters), those parameters are not able to re...
Hey @kumarV, I did some digging and here are some hints/tips to help you troubleshoot further. Yep, this really comes down to how parameters flow through Lakeflow Jobs depending on the task type. JAR tasks are the odd duck: they don't get the same ...
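To make that concrete, a hedged sketch of a task spec (class and parameter names are placeholders): a JAR task only sees what is listed in spark_jar_task.parameters as main-method arguments, so job-level parameters have to be forwarded explicitly via dynamic value references.

```python
# Sketch: forwarding job parameters into a JAR task's main(String[] args).
jar_task = {
    "task_key": "main_jar",
    "spark_jar_task": {
        "main_class_name": "com.example.Main",     # hypothetical class
        "parameters": [
            "--env", "{{job.parameters.env}}",     # forward a job parameter
            "--run-date", "{{job.start_time.iso_date}}",
        ],
    },
}
```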
Hello, I'm conducting research on utilizing CLS (column-level security) in a project. We are implementing a lookup table to determine what tags a user can see. The CLS function looks like this: CREATE OR REPLACE FUNCTION {catalog}.{schema}.mask_column(value VARIANT, tag STRIN...
Thank you for an insightful answer @Poorva21. I conclude from your reasoning that this is the result of an optimization/engine error. It seems like I will need to resort to a workaround for the date columns then...
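For context, a sketch of the lookup-table column-mask pattern under discussion (all catalog/schema/table/column names are placeholders, and STRING stands in for the original VARIANT signature): the mask receives the column value plus any extra columns you bind with USING COLUMNS, and can consult a tag-entitlement table for the current user.

```python
# Sketch: a column mask driven by a user/tag lookup table.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.demo.mask_column(value STRING, tag STRING)
    RETURN CASE
        WHEN EXISTS (
            SELECT 1 FROM main.demo.user_tags t
            WHERE t.email = session_user() AND t.tag = tag
        ) THEN value
        ELSE '***'           -- redact when the user lacks the tag
    END
""")

# Bind the mask to a column, passing the row's tag column as the second arg.
spark.sql("""
    ALTER TABLE main.demo.customers
    ALTER COLUMN ssn SET MASK main.demo.mask_column USING COLUMNS (tag_col)
""")
```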
Starting with DBR 17 running Spark 4.0, spark.sql.ansi.enabled is set to true by default. With the flag enabled, strings are implicitly converted to numbers in a very dangerous manner. Consider: SELECT 123 = '123'; SELECT 123 = '123X'; The first one is succe...
FYI, it seems I was mistaken about the behaviour of '::' on Spark 4.0.1. It does indeed work like CAST on both DBR 17.3 and Spark 4.0.1 and raises an exception on '123X'::int. The '?::' operator seems to be a Databricks-only extension at the moment (...
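A small sketch of the safer patterns under ANSI mode: try_cast returns NULL instead of raising, and an explicit cast makes the comparison's intent visible instead of relying on implicit string-to-number coercion (the exact error class observed may vary by version).

```python
# Sketch: explicit vs. implicit coercion with ANSI mode enabled.
spark.conf.set("spark.sql.ansi.enabled", "true")

spark.sql("SELECT 123 = CAST('123' AS INT)").show()      # true, intent explicit
spark.sql("SELECT try_cast('123X' AS INT) AS v").show()  # NULL, no error

try:
    # Implicit coercion: under ANSI, the invalid cast of '123X' raises.
    spark.sql("SELECT 123 = '123X'").show()
except Exception as e:
    print(type(e).__name__)
```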