Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

AlleyCat
by New Contributor II
  • 249 Views
  • 2 replies
  • 0 kudos

How to identify deleted runs from the Workflows/Jobs UI in "system.lakeflow"

Hi, I executed a few runs from the Workflows Jobs UI and then deleted some of them. I am still seeing the deleted runs in "system.lakeflow.job_run_timeline". How do I know which runs are the deleted ones? Thanks

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @AlleyCat, hope you are doing well! The jobs table includes a delete_time column that records the time when the job was deleted by the user. So to identify deleted jobs, you can run a query like the following: SELECT * FROM system.lakeflow.jobs ...
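To illustrate the reply, here is a minimal sketch of such a query in a notebook cell. It assumes read access to the system.lakeflow schema; the delete_time column is taken from the reply above, while the job_id join key and other column names are assumptions:

```python
# Minimal sketch, assuming read access to system.lakeflow.
# delete_time is NULL while a job exists, so a non-NULL value marks a deletion.
deleted_jobs = spark.sql("""
    SELECT job_id, delete_time
    FROM system.lakeflow.jobs
    WHERE delete_time IS NOT NULL
""")

# Runs belonging to deleted jobs (join key assumed to be job_id).
deleted_runs = spark.sql("""
    SELECT r.*
    FROM system.lakeflow.job_run_timeline AS r
    JOIN system.lakeflow.jobs AS j
      ON r.job_id = j.job_id
    WHERE j.delete_time IS NOT NULL
""")
deleted_runs.show()
```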

1 More Reply
nskiran
by New Contributor III
  • 401 Views
  • 3 replies
  • 0 kudos

How to bring in Databricks dbacademy courseware

I have created an account on dbacademy and signed up for the Advanced Data Engineering with Databricks course. I have also subscribed to the Vocareum lab. During the demo, the tutor/trainer opened 'ADE 1.1 - Follow Along Demo - Reading from a Streaming ...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

It appears that we no longer make the notebooks available with self-paced training; they are not available for download.

2 More Replies
jiteshraut20
by New Contributor III
  • 1283 Views
  • 2 replies
  • 0 kudos

Deploying Overwatch on Databricks (AWS) with System Tables as the Data Source

Introduction: Overwatch is a powerful tool for monitoring and analyzing your Databricks environment, providing insights into resource utilization, cost management, and system performance. By leveraging system tables as the data source, you can gain a c...

Latest Reply
raghu2
New Contributor III
  • 0 kudos

Hi @jiteshraut20, thanks for your post. From my setup, validation seems to work: "Wrote 32 bytes. Validation report has been saved to dbfs:/mnt/overwatch_global/multi_ow_dep/report/validationReport". Validation report details: Total validation count: 35 ...

1 More Replies
johnnwanosike
by New Contributor III
  • 597 Views
  • 6 replies
  • 0 kudos

Hive metastore federation, internal and external unable to connect

I enabled the internal hive on the metastore federation using this query command: CREATE CONNECTION IF NOT EXISTS internal-hive TYPE hive_metastore OPTIONS (builtin true); But I can't get a password or username to access the JDBC URL.

Latest Reply
johnnwanosike
New Contributor III
  • 0 kudos

Not really. What I want to achieve is connecting to an external Hive, but I do want to configure the external Hive on our server to interact with the Databricks cluster in such a way that I have access to the Thrift protocol.

5 More Replies
jeremy98
by Contributor III
  • 647 Views
  • 5 replies
  • 2 kudos

Resolved! Catch Metadata Workflow databricks

Hello community, is it possible to get the workflow metadata of a Databricks job while it is running, such as the start time, end time, and who triggered it? Can this be done using dbutils.widgets.get()?

Latest Reply
fmadeiro
Contributor
  • 2 kudos

@jeremy98, Accessing metadata such as the start time, end time, and trigger information of a running Databricks job cannot be accomplished using dbutils.widgets.get(). The dbutils.widgets utility is designed for retrieving the current value of input ...
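Complementing that answer, a hedged sketch of one common alternative: pass the run ID into the task as a base parameter via the {{job.run_id}} dynamic value reference, then look up the run with the Jobs 2.1 REST API. The host, token, and the widget name "run_id" are placeholders:

```python
# Hedged sketch: fetch metadata for the current run via the Jobs REST API.
# Assumes the job passes {{job.run_id}} as a base parameter named "run_id"
# and that a PAT token is available (e.g. from a secret scope).
import requests

host = "https://<workspace-url>"  # placeholder
token = "<pat-token>"             # placeholder

run_id = dbutils.widgets.get("run_id")  # populated from {{job.run_id}}

resp = requests.get(
    f"{host}/api/2.1/jobs/runs/get",
    headers={"Authorization": f"Bearer {token}"},
    params={"run_id": run_id},
)
resp.raise_for_status()
run = resp.json()
# start_time/end_time are epoch milliseconds; trigger describes what fired the run.
print(run.get("start_time"), run.get("end_time"), run.get("trigger"))
```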

4 More Replies
_deepak_
by New Contributor II
  • 2510 Views
  • 4 replies
  • 0 kudos

Databricks regression test suite

Hi, I am new to Databricks and am setting up the non-prod environment. I wanted to know: is there any way I can run a regression suite so that the existing setup does not break in case of any feature addition, and also how can I make available ...

Latest Reply
grkseo7
New Contributor II
  • 0 kudos

Regression testing after code changes can be automated easily. Once you’ve created test cases with Pytest or Great Expectations, you can set up a CI/CD pipeline using tools like Jenkins or GitHub Actions. For a non-prod setup, Docker is great for rep...
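As an illustration of that suggestion, a minimal pytest sketch against a local SparkSession; the transformation build_daily_summary and its expected columns are hypothetical stand-ins for your own pipeline logic:

```python
# Hedged sketch of a regression test case using pytest + PySpark.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local session so the suite runs in CI without a Databricks cluster.
    return SparkSession.builder.master("local[2]").appName("regression").getOrCreate()

def build_daily_summary(df):
    # Stand-in for the production transformation under test.
    return df.groupBy("day").sum("amount").withColumnRenamed("sum(amount)", "total")

def test_daily_summary_schema_and_totals(spark):
    df = spark.createDataFrame([("2024-01-01", 10), ("2024-01-01", 5)], ["day", "amount"])
    out = build_daily_summary(df)
    assert set(out.columns) == {"day", "total"}  # schema contract
    assert out.collect()[0]["total"] == 15       # behavior contract
```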

3 More Replies
hari-prasad
by Valued Contributor II
  • 278 Views
  • 3 replies
  • 1 kudos

Optimize Cluster Uptime by Avoiding Unwanted Library or Jar Installations

Whenever we discuss clusters or nodes in any service, we need to address the cluster bootstrap process. Traditionally, this involves configuring each node using a startup script (startup.sh). In this context, installing libraries in the cluster is par...

Data Engineering
cluster
job
jobs
Nodes
Latest Reply
hari-prasad
Valued Contributor II
  • 1 kudos

I'm sharing my experience here. Thank you for following up!

2 More Replies
korijn
by New Contributor II
  • 236 Views
  • 1 reply
  • 0 kudos

How to set environment (client) on notebook via API/Terraform provider?

I am deploying a job with a notebook task via the Terraform provider. I want to set the client version to 2. I do NOT need to install any dependencies. I just want to use the new client version for the serverless compute. How do I do this with the Te...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Unfortunately, there is no direct way to set the client version for a notebook task via the Terraform provider or the API without using the UI. The error message suggests that the %pip magic command is the recommended approach for installing dependen...

korijn
by New Contributor II
  • 274 Views
  • 2 replies
  • 0 kudos

Git integration inconsistencies between git folders and job git

It's a little confusing and limiting that the git integration support is inconsistent between the two options available. Sparse checkout is only supported when using a workspace Git folder, and checking out by commit hash is only supported when using ...

Latest Reply
Mounika_Tarigop
Databricks Employee
  • 0 kudos

Sparse Checkout: This feature is only supported when using a workspace Git folder. Sparse checkout allows you to clone and work with only a subset of the remote repository's directories, which is useful for managing large repositories. Checking Out b...

1 More Reply
Binnisb
by New Contributor
  • 316 Views
  • 4 replies
  • 2 kudos

model_serving_endpoints in DAB updates every time

Love the model_serving_endpoints in the DAB, but now it takes over 6 minutes to deploy resources when they already exist. It says (updating) in the serving tab in the sidebar even if nothing has changed. Is there a way to not update the endpoints aft...

Latest Reply
Alberto_Umana
Databricks Employee
  • 2 kudos

I have created an internal feature request for this behavior: DB-I-13108

3 More Replies
hari-prasad
by Valued Contributor II
  • 353 Views
  • 0 replies
  • 2 kudos

Databricks UniForm - Bridging Delta Lake and Iceberg

Databricks UniForm enables seamless integration between the Delta Lake and Iceberg formats. Key features of Databricks UniForm include: Interoperability: read Delta tables with Iceberg clients without rewriting data. Automatic Metadata Generation: Asynchrono...
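For context, a hedged sketch of how UniForm is typically enabled on an existing Delta table via table properties; the table name is a placeholder, and the exact property requirements (column mapping, IcebergCompat version) can vary by Databricks Runtime version:

```python
# Hedged sketch: enable Iceberg reads (UniForm) on an existing Delta table.
# Catalog/schema/table names are placeholders.
spark.sql("""
    ALTER TABLE main.default.my_table SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.enableIcebergCompatV2' = 'true',
        'delta.universalFormat.enabledFormats' = 'iceberg'
    )
""")
```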

martindlarsson
by New Contributor III
  • 811 Views
  • 2 replies
  • 0 kudos

Jobs pending indefinitely with library installs

I think I found a bug where jobs that have a library requirement stay Pending indefinitely when the user of the job does not have Manage permission on the cluster. In my case I was trying to start a dbt job with dbt-databricks=1.8.5 as a library. Th...

Latest Reply
VZLA
Databricks Employee
  • 0 kudos

Thanks for your feedback! Just checking: is this still an issue for you? Would you share more details, for example so I could try to reproduce it?

1 More Reply
ls
by New Contributor III
  • 2245 Views
  • 10 replies
  • 0 kudos

Resolved! Py4JJavaError: An error occurred while calling o552.count()

Hey! I'm new to the forums but not to Databricks, and I'm trying to get some help with this question. The error is also fickle since it only appears at what seems to be random: when running the same code it works, then on the next run with a new set of dat...

Latest Reply
VZLA
Databricks Employee
  • 0 kudos

@ls Agree, it doesn't seem to be fixed. Maybe memory management is better optimized on DBR 16; hence I'd like to suggest going through the methods mentioned earlier in this post. Memory Profiling: Try freezing the dataset that reproduces the problem...

9 More Replies
Alby091
by New Contributor
  • 278 Views
  • 1 reply
  • 0 kudos

Multiple schedules in workflow with different parameters

I have a notebook that takes a file from the landing zone, processes it, and saves a Delta table. This notebook contains a parameter (time_prm) that allows it to handle the different versions of files that arrive every day. Specifically, for eac...

Data Engineering
parameters
Workflows
Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Right now jobs support only one schedule per job. As you mentioned, you will need to create a different job for each schedule you require; you can use the clone capability to facilitate the process.
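As a hedged sketch of that approach, the following creates two jobs over the same notebook, each with its own cron schedule and time_prm value, via the Jobs 2.1 REST API. Host, token, notebook path, and cron expressions are placeholders; compute configuration is omitted and would depend on your workspace:

```python
# Hedged sketch: one job per schedule, each passing a different time_prm.
import requests

host, token = "https://<workspace-url>", "<pat-token>"  # placeholders

def create_scheduled_job(name, cron, time_prm):
    payload = {
        "name": name,
        "schedule": {"quartz_cron_expression": cron, "timezone_id": "UTC"},
        "tasks": [{
            "task_key": "process_file",
            "notebook_task": {
                "notebook_path": "/Workspace/etl/process_landing_file",  # placeholder
                "base_parameters": {"time_prm": time_prm},
            },
        }],
    }
    r = requests.post(f"{host}/api/2.1/jobs/create",
                      headers={"Authorization": f"Bearer {token}"}, json=payload)
    r.raise_for_status()
    return r.json()["job_id"]

create_scheduled_job("ingest-morning", "0 0 8 * * ?", "morning")
create_scheduled_job("ingest-evening", "0 0 20 * * ?", "evening")
```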

Maatari
by New Contributor III
  • 576 Views
  • 1 reply
  • 0 kudos

How does Dedicated Access mode work?

I have a question about the dedicated access mode (https://docs.databricks.com/en/compute/group-access.html). It is stated that: "Dedicated access mode is the latest version of single user access mode. With dedicated access, a compute resource can be assig...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

The dedicated access mode allows multiple users within the assigned group to access and use the compute resource simultaneously. This is different from the traditional single-user access mode, as it enables secure sharing of the resource among group ...

