Lakebase error logs
Anyone know where to see any logs related to Lakebase/Postgres? I have a Tableau Prep flow that is failing but the error is not clear and I'm trying to find out what the database is capturing.
Hi everyone, sorry, I’m new here. I’m considering migrating to Databricks, but I need to clarify a few things first. When I define and launch an application, I see that I can specify the number of workers, and then later configure the number of execut...
Regarding your Databricks question about workers versus executors: many teams encounter the same sizing and configuration issues when evaluating a migration. At Kanerika, we help companies plan cluster architecture, optimize Spark workloads, and avoid overspen...
Dear Databricks Community, we encountered a bug in the behaviour of the import method explained in the documentation: https://learn.microsoft.com/en-us/azure/databricks/files/workspace-modules#autoreload-for-python-modules. A couple of months ago we migrated our pipelin...
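For reference, the autoreload pattern that docs page describes looks roughly like this in a notebook cell (the module name below is a placeholder, not the poster's actual code):

```python
# Reload workspace Python modules on change without detaching/reattaching
# the notebook, per the autoreload section of the linked docs page.
%load_ext autoreload
%autoreload 2

# 'my_utils' is a hypothetical .py file stored in the workspace alongside
# the notebook; workspace files are importable from the notebook's folder.
import my_utils
```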
I have a running notebook job where I am doing some processing and writing the tables to a foreign catalog. It has been running successfully for about a year. The job is scheduled and runs on a job cluster with DBR 16.2. Recently, I had to add new noteb...
Thank you @Louis_Frolio! Your suggestions really helped me understand the scenario.
Hello, I'm conducting research on utilizing CLS in a project. We are implementing a lookup table to determine what tags a user can see. The CLS function looks like this: CREATE OR REPLACE FUNCTION {catalog}.{schema}.mask_column(value VARIANT, tag STRIN...
Thank you for an insightful answer @Poorva21. I conclude from your reasoning that this is the result of an optimization/engine error. It seems like I will need to resort to a workaround for the date columns then...
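For readers landing here, a Unity Catalog column-mask function of the kind quoted in the question generally has this shape. A minimal sketch with hypothetical catalog/table/group names, simplified to a single STRING parameter rather than the poster's VARIANT-plus-tag signature:

```python
# Members of a hypothetical 'pii_readers' group see the raw value;
# everyone else sees a redacted placeholder.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.default.mask_email(value STRING)
    RETURN CASE
        WHEN is_account_group_member('pii_readers') THEN value
        ELSE '***'
    END
""")

# Attach the mask to a column; the masked column's value is passed as the
# function's first argument. Table/column names are placeholders.
spark.sql("""
    ALTER TABLE main.default.customers
    ALTER COLUMN email SET MASK main.default.mask_email
""")
```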
Starting with DBR 17 running Spark 4.0, spark.sql.ansi.enabled is set to true by default. With the flag enabled, strings are implicitly converted to numbers in a very dangerous manner. Consider: SELECT 123='123'; SELECT 123='123X'; The first one is succe...
FYI, it seems I was mistaken about the behaviour of '::' on Spark 4.0.1. It does indeed work like CAST on both DBR 17.3 and Spark 4.0.1 and raises an exception on '123X'::int. The '?::' operator seems to be a Databricks only extension at the moment (...
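To make the behaviour above concrete, here is roughly what the two comparisons do under ANSI mode (the exact error class can vary by version):

```python
# Sketch of the implicit string->number coercion (Spark 4 / DBR 17, ANSI on).
spark.conf.set("spark.sql.ansi.enabled", "true")

# '123' is coerced to a number, so the comparison returns true:
spark.sql("SELECT 123 = '123' AS eq").show()

# '123X' cannot be coerced; under ANSI mode the query raises a cast error
# instead of silently returning NULL as in legacy mode:
try:
    spark.sql("SELECT 123 = '123X' AS eq").show()
except Exception as e:
    print(type(e).__name__)
```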
Hi, we have a setup.py in our Databricks workspace. This script is executed in other transformation scripts using %run /Workspace/Common/setup.py, which consumes a lot of time. This setup.py internally calls other utility notebooks using %run: %run /Workspace/Co...
You can’t “%run a notebook” from a cluster init script—init scripts are shell-only and meant for environment setup (install libs, set env vars), not for executing notebooks or sharing Python state across sessions. +1 to what @Raman_Unifeye said. ...
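If the goal is to shave off the repeated %run time, one common approach, assuming the shared code can live as ordinary .py workspace files rather than notebooks, is plain Python imports. A sketch with hypothetical module and function names:

```python
# Import shared helpers as modules instead of chaining %run calls.
# Assumes /Workspace/Common holds .py files; all names are placeholders.
import sys

if "/Workspace/Common" not in sys.path:
    sys.path.append("/Workspace/Common")

import common_setup            # hypothetical module replacing the %run chain
common_setup.configure(spark)  # hypothetical entry point taking the session
```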
At around 5:30 am (UTC+11) this morning, a number of our scheduled serverless notebook jobs started failing when attempting to retrieve Databricks secrets. We are able to retrieve the secrets using the Databricks CLI, and the jobs are run as a user tha...
Me too. But it looks like there hasn't been any official reply regarding this matter yet.
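For anyone debugging the same thing, the failing call is typically the standard secrets utility; a quick way to narrow down whether it's the scope, the key, or the job's identity (scope/key names below are placeholders):

```python
# Confirm which scopes the job's identity can actually see:
print(dbutils.secrets.listScopes())

# Confirm the key exists in the scope:
print(dbutils.secrets.list("my-scope"))

# The retrieval itself; secret values are redacted in notebook output,
# so print only the length as a sanity check.
value = dbutils.secrets.get(scope="my-scope", key="my-key")
print(len(value))
```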
Hi everyone, I’m trying to connect Databricks to an S3-compatible bucket using a custom endpoint URL and access keys. I’m using an Express account with Serverless SQL Warehouses, but the only external storage options I see are AWS IAM roles or Cloudfla...
Serverless compute does not support setting most Apache Spark configuration properties, irrespective of Enterprise tier, as Databricks fully manages the underlying infrastructure.
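On classic (non-serverless) compute, by contrast, an S3-compatible endpoint can usually be reached with standard Hadoop S3A settings; a sketch with placeholder endpoint, bucket, and secret names:

```python
# Standard S3A options for an S3-compatible store; not settable on serverless.
spark.conf.set("fs.s3a.endpoint", "https://s3.example-provider.com")
spark.conf.set("fs.s3a.access.key", dbutils.secrets.get("s3-scope", "access-key"))
spark.conf.set("fs.s3a.secret.key", dbutils.secrets.get("s3-scope", "secret-key"))
spark.conf.set("fs.s3a.path.style.access", "true")  # many S3 clones need path-style

df = spark.read.parquet("s3a://my-bucket/some/path/")  # placeholder bucket/path
```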
I’m looking for guidance on the differences between dbmanagedidentity (the workspace-managed identity) and Unity Catalog storage credentials based on Azure Managed Identity. Specifically, I’d like to understand: what are the key differences between thes...
Use dbmanagedidentity for non‑storage Azure services, such as Cosmos DB, Azure SQL, Event Hubs, and Key Vault.
When defining a streaming table using DLT (declarative pipelines), we can provide a schema which lets us define primary and foreign key constraints. However, references to self, i.e. the defining table, are not currently allowed (you get a "table not...
Each of these workarounds gives up the optimizations that are enabled by the use of key constraints.
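For concreteness, the pattern in question and one possible post-hoc workaround look roughly like this; all table/column names are hypothetical, and the constraints remain informational either way:

```python
import dlt

# The self-referencing FK cannot be declared inline, so only the PK is kept
# in the pipeline schema; 'employees'/'manager_id' are placeholder names.
@dlt.table(
    name="employees",
    schema="id BIGINT NOT NULL PRIMARY KEY, manager_id BIGINT, name STRING",
)
def employees():
    return spark.readStream.table("main.default.employees_raw")

# One workaround: add the self-reference after the table exists, from a
# separate notebook/job rather than the pipeline itself:
#   ALTER TABLE main.default.employees
#   ADD CONSTRAINT fk_manager FOREIGN KEY (manager_id)
#   REFERENCES main.default.employees (id)
```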
Hi All,We are currently exploring a use case involving migration from IBM DataStage to Databricks. I noticed that LakeBridge supports automated code conversion for this process. If anyone has experience using LakeBridge, could you please share any be...
Hi @Echoes @Hari_P @SebastianRowan you can you Travinto technologies tool, their conversion ratio is 95-100%.
Hi Community, I'm new to Databricks and am trying to build and implement pipeline expectations. The pipelines work without errors and my job works. I've tried multiple ways to implement expectations, in SQL and Python. I keep resolving the errors but end ...
Hey, I think it may be the row_count condition causing the issue. The expectation runs on each row and checks whether the record meets the criteria in the expectation, so you're effectively asking for count(*) on each record, which will always evaluate to 1 and...
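To illustrate the row-level nature of expectations described above, a minimal sketch (source table and rule names are placeholders):

```python
import dlt

# Expectations evaluate per record, so predicates must be row-level checks:
@dlt.table(name="clean_orders")
@dlt.expect("order_id_present", "order_id IS NOT NULL")  # log violations, keep rows
@dlt.expect_or_drop("positive_amount", "amount > 0")     # drop violating rows
def clean_orders():
    return spark.read.table("main.default.raw_orders")

# An aggregate condition like "the table has at least N rows" can't be
# expressed as a per-row expectation; it needs a separate validation step.
```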
I have a Python notebook in Databricks. Within it I have a multiselect widget, which is defined like this:
widget_values = spark.sql(f''' SELECT my_column FROM my_table GROUP BY my_column ORDER BY my_column ''')
widget_values = widget_values.collect(...
Hello @SRJDB, what you’re running into isn’t your Python variable misbehaving—it’s the widget hanging onto its own internal state. A Databricks widget will happily keep whatever value you gave it, per user and per notebook, until you explicitly clea...
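In practice that means clearing and recreating the widget when its choices change; a minimal two-cell sketch with placeholder names (per the widgets docs, a removed widget can't be recreated in the same cell, hence the split):

```python
# Cell 1: drop any stale widget state.
dbutils.widgets.removeAll()
```

```python
# Cell 2: rebuild the multiselect from current data; names are placeholders.
rows = spark.sql(
    "SELECT my_column FROM my_table GROUP BY my_column ORDER BY my_column"
).collect()
choices = [str(r["my_column"]) for r in rows]

dbutils.widgets.multiselect("my_widget", choices[0], choices)
print(dbutils.widgets.get("my_widget"))  # reflects the fresh default, not old state
```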
Hi Community, I am facing a weird problem within my Azure Databricks workspace. I am trying to create and run SDP, but somehow when I try to run more than one pipeline in parallel, it gives me an error (pasting the error message below). I currently only...
Hello @AyushPaldecha09! This error usually appears due to concurrency limits. If you're already on a Premium tier, you typically shouldn’t be hitting this cap, so the best next step is to open a Databricks Support ticket and request an increase to yo...