Community Discussions

Forum Posts

by Benedetta (New Contributor III)
  • 625 Views
  • 1 reply
  • 0 kudos

What happened to the JobIds in the parallel runs (again)????

Hey Databricks, why did you take away the jobids from the parallel runs? We use those to identify which output goes with which run. Please put them back. Benedetta

Latest Reply
Kaniz (Community Manager)

Hi @Benedetta,  Thank you for reaching out. I understand your concern regarding the jobids in parallel runs. I will look into this matter and get back to you with more information as soon as possible.

  • 0 kudos
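In the meantime, a minimal workaround sketch, assuming each parallel task is configured with the documented {{job_id}} and {{run_id}} parameter variables: pass the IDs into the task as parameters and tag every output with them.

```python
# Workaround sketch (assumed setup): in the job's task parameters, pass
# {"job_id": "{{job_id}}", "run_id": "{{run_id}}"}, then read them in the
# notebook so each parallel run can label its own output.
dbutils.widgets.text("job_id", "")
dbutils.widgets.text("run_id", "")

job_id = dbutils.widgets.get("job_id")
run_id = dbutils.widgets.get("run_id")

# Prefix all output so it can be matched back to the run that produced it.
print(f"[job {job_id} / run {run_id}] writing results...")
```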
by dm7 (New Contributor)
  • 308 Views
  • 1 reply
  • 0 kudos

DLT CDC/SCD - Taking the latest ID per day

Hi, I'm creating a DLT pipeline which uses DLT CDC to implement SCD Type 1 to take the latest record using a datetime column, which works with no issues: @dlt.view def users(): return spark.readStream.table("source_table") dlt.create_streaming_table(...

Latest Reply
Kaniz (Community Manager)

Hi @dm7, Thank you for providing the details of your DLT pipeline and the desired outcome! It looks like you’re trying to implement a Slowly Changing Dimension (SCD) Type 2 behaviour where you want to capture historical changes over time. Let’s br...

  • 0 kudos
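For reference, a minimal sketch of the SCD Type 1 pattern described in the question, using dlt.apply_changes; the key and ordering columns (user_id, datetime) are assumed names.

```python
import dlt
from pyspark.sql.functions import col

@dlt.view
def users():
    return spark.readStream.table("source_table")

dlt.create_streaming_table("users_latest")

# SCD Type 1: keep only the most recent record per key, ordered by datetime.
dlt.apply_changes(
    target="users_latest",
    source="users",
    keys=["user_id"],             # assumed key column
    sequence_by=col("datetime"),  # assumed ordering column
    stored_as_scd_type=1,
)
```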
by BenCCC (New Contributor)
  • 187 Views
  • 1 reply
  • 0 kudos

Installing R packages for a custom Docker container for compute

Hi, I'm trying to create a custom Docker image with some R packages installed. However, when I try to use it in a notebook, it can't seem to find the installed packages. The build runs fine. FROM databricksruntime/rbase:14.3-LTS ## update system li...

Latest Reply
Kaniz (Community Manager)

Hi @BenCCC, Here are a few things you can check: Package Installation in Dockerfile: In your Dockerfile, you’re using the RUN R -e 'install.packages(...)' command to install R packages. While this approach works, there are alternative methods th...

  • 0 kudos
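A quick diagnostic worth trying, assuming the cluster is already running the custom image: compare R's library search paths at runtime against the path the Dockerfile installed into, since a mismatch is a common reason packages aren't found from notebooks.

```python
# Run from a notebook cell on the custom-image cluster: print where R looks
# for packages, then compare with the install location used in the Dockerfile.
import subprocess

result = subprocess.run(
    ["Rscript", "-e", 'cat(.libPaths(), sep = "\\n")'],
    capture_output=True,
    text=True,
)
print(result.stdout)
```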
by Databricks_S (New Contributor II)
  • 460 Views
  • 2 replies
  • 0 kudos

issue related to Cluster Policy

Hello Databricks Community, I am currently working on creating a Terraform script to provision clusters in Databricks. However, I've noticed that by default, the clusters created using Terraform have the policy set to "Unrestricted." I would like to co...

Latest Reply
Walter_C (Valued Contributor II)

Hello, many thanks for your question. On the cluster creation template there is an optional setting called policy_id; this ID can be retrieved from the UI if you go under Compute > Policies > Select the policy you want to set. By default, if the user ...

  • 0 kudos
1 More Replies
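Complementing the answer above, a hedged sketch of looking up the policy_id programmatically with the Databricks Python SDK rather than the UI, e.g. to feed a Terraform variable; the policy name is a placeholder.

```python
# Requires: pip install databricks-sdk (auth from env, e.g. DATABRICKS_HOST/TOKEN).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Find the ID of the policy to pin clusters to, then pass it to the
# databricks_cluster resource's policy_id argument in Terraform.
for policy in w.cluster_policies.list():
    if policy.name == "My Team Policy":  # placeholder policy name
        print(policy.policy_id)
```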
by egndz (New Contributor)
  • 447 Views
  • 1 reply
  • 0 kudos

Cluster Memory Issue (Termination)

Hi, I have a single-node personal cluster with 56 GB memory (Node type: Standard_DS5_v2, runtime: 14.3 LTS ML). The same configuration is done for the job cluster as well, and the following problem applies to both clusters: To start with: once I start my ...

[Attached screenshots: egndz_2-1712845742934.png, egndz_1-1712845616736.png]
Latest Reply
Kaniz (Community Manager)

Hi @egndz, It seems like you’re dealing with memory issues in your Spark cluster, and I understand how frustrating that can be. Initial Memory Allocation: The initial memory allocation you’re observing (18 GB used + 4.1 GB cached) is likely a com...

  • 0 kudos
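As a sanity check on the numbers discussed above, a small sketch for inspecting how much memory Spark actually claims on the node; the gap between this and the node's 56 GB goes to the OS and Databricks services, which is why usage never starts at zero. psutil availability on the runtime is an assumption.

```python
# Run in a notebook on the cluster (spark is the ambient SparkSession).
print("spark.executor.memory:", spark.conf.get("spark.executor.memory", "not set"))
print("spark.driver.memory:  ", spark.conf.get("spark.driver.memory", "not set"))

# Node-level view, assuming psutil is present on the runtime.
import psutil

mem = psutil.virtual_memory()
print(f"total: {mem.total / 1024**3:.1f} GB, used: {mem.used / 1024**3:.1f} GB")
```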
by DavidKxx (New Contributor III)
  • 180 Views
  • 2 replies
  • 0 kudos

Can't create branch of public git repo

Hi, I have cloned a public git repo into my Databricks account. It's a repo associated with an online training course. I'd like to work through the notebooks, maybe make some changes and updates, etc., but I'd also like to keep a clean copy of it. M...

Latest Reply
NandiniN (Valued Contributor III)

Hi DavidKxx, You can clone public remote repositories without Git credentials (a personal access token and a username). To modify a public remote repository or to clone or modify a private remote repository, you must have a Git provider username and...

  • 0 kudos
1 More Replies
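Following on from the reply, a hedged sketch of registering Git credentials from code with the Databricks Python SDK (equivalent to User Settings > Linked accounts in the UI); the username and token are placeholders.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Register a Git provider credential; creating branches or pushing from Repos
# requires this even when the repository itself is public.
w.git_credentials.create(
    git_provider="gitHub",            # e.g. gitHub, gitLab, bitbucketCloud
    git_username="your-username",     # placeholder
    personal_access_token="ghp_xxx",  # placeholder PAT
)
```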
by groch_adam (New Contributor)
  • 176 Views
  • 1 reply
  • 0 kudos

Usage of SparkMetric_CL, SparkListenerEvent_CL and SparkLoggingEvent_CL

I am wondering if I can retrieve any information from Azure Log Analytics custom tables (already set) for Azure Databricks. I would like to retrieve information about query and data performance for the SQL Warehouse cluster. I am not sure if I can get it fro...

Latest Reply
Kaniz (Community Manager)

Hi @groch_adam, Retrieving information from Azure Log Analytics custom tables for Azure Databricks is possible. Let me guide you through the process. Azure Databricks Monitoring Library: To send application logs and metrics from Azure Databric...

  • 0 kudos
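Once the monitoring library is writing to Log Analytics, the custom tables can also be queried from Python; a minimal sketch using the azure-monitor-query package, with the workspace ID as a placeholder.

```python
# Requires: pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Pull recent rows from the custom Spark metrics table.
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query="SparkMetric_CL | take 10",
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```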
by liormayn (New Contributor III)
  • 234 Views
  • 1 reply
  • 3 kudos

Error while encoding: java.lang.RuntimeException: org.apache.spark.sql.catalyst.util.GenericArrayDa

Hello :) We are trying to run an existing working flow that currently runs on EMR, on Databricks. We use LTS 10.4, and when loading the data we get the following error: at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:...

Latest Reply
Kaniz (Community Manager)

Hi @liormayn, It seems you’re encountering an issue related to the schema of your data when running your existing workflow on Databricks. Let’s explore some potential solutions: Parquet Decimal Columns Issue: The error message you’re seeing might...

  • 3 kudos
by afdadfadsfadsf (New Contributor)
  • 310 Views
  • 1 reply
  • 0 kudos

Create Databricks model serving endpoint in Azure DevOps yaml

Hello, I need to create and destroy a model endpoint as part of CI/CD. I tried mlflow deployments create-endpoint, giving databricks as --target; however, it errors saying that --endpoint is not a known argument, when clearly --endpoint is required....

Latest Reply
Kaniz (Community Manager)

Hi @afdadfadsfadsf, Creating and managing model endpoints as part of your CI/CD pipeline is essential for deploying machine learning models. I can provide some guidance on how to set up a CI/CD pipeline using YAML in Azure DevOps. You can adapt th...

  • 0 kudos
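If the CLI flags keep fighting back, the MLflow deployments Python API is an alternative that also runs fine from an Azure DevOps step; a hedged sketch, with the endpoint name and served model details as placeholders.

```python
# Requires: pip install mlflow (Databricks auth configured in the environment).
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Create the serving endpoint as part of CI/CD...
client.create_endpoint(
    name="my-endpoint",  # placeholder
    config={
        "served_entities": [
            {
                "entity_name": "catalog.schema.my_model",  # placeholder model
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ]
    },
)

# ...and tear it down again when the pipeline is done.
client.delete_endpoint(endpoint="my-endpoint")
```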
by scottbisaillon (New Contributor)
  • 502 Views
  • 1 reply
  • 0 kudos

Databricks Running Jobs and Terraform

What happens to a currently running job when a workspace is deployed again using Terraform? Are the jobs paused/resumed, or are they left unaffected without any downtime? Searching for this specific scenario doesn't seem to come up with anything and...

Latest Reply
Kaniz (Community Manager)

Hi @scottbisaillon, When deploying a workspace again using Terraform, the behaviour regarding currently running jobs depends on the specific Terraform version and the platform you are using. Let’s explore the details: Terraform Cloud (form...

  • 0 kudos
by TinasheChinyati (New Contributor)
  • 144 Views
  • 1 reply
  • 0 kudos

Stream to stream join NullPointerException

I have a DLT pipeline running in continuous mode. I have a stream-to-stream join which runs for the first 5 hours but then fails with a NullPointerException. I need assistance to know what I need to do to handle this. My code is structured as below: @dl...

Latest Reply
Kaniz (Community Manager)

Hi @TinasheChinyati, It looks like you’re encountering a Null Pointer Exception in your DLT pipeline when performing a stream-to-stream join. Let’s break down the issue and explore potential solutions: The error message indicates that the query te...

  • 0 kudos
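As background to the reply, unbounded state is a frequent culprit in long-running stream-stream joins; a minimal sketch of bounding it with watermarks and a time-range join condition, with table and column names assumed.

```python
from pyspark.sql.functions import expr

# Watermarks plus a time-range condition let Spark discard old join state
# instead of growing it forever during continuous execution.
left = (
    spark.readStream.table("stream_a")  # assumed source tables
    .withWatermark("event_time", "2 hours")
    .alias("a")
)
right = (
    spark.readStream.table("stream_b")
    .withWatermark("event_time", "2 hours")
    .alias("b")
)

joined = left.join(
    right,
    expr(
        "a.key = b.key AND "
        "b.event_time BETWEEN a.event_time AND a.event_time + INTERVAL 1 HOUR"
    ),
)
```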
by Sudheer2 (New Contributor II)
  • 180 Views
  • 1 reply
  • 1 kudos

Updating Databricks SQL Warehouse using Terraform

We can update a SQL Warehouse manually in Databricks: click SQL Warehouses in the sidebar; under Advanced options we can find the Unity Catalog toggle button there! While updating an existing SQL Warehouse in Azure to enable Unity Catalog using Terraform, I couldn'...

Latest Reply
Kaniz (Community Manager)

Hi @Sudheer2, The Unity Catalog is a feature in Databricks SQL Warehouse that allows you to query data across multiple databases and tables seamlessly. It provides a unified view of your data. When you enable the Unity Catalog, you can access tables f...

  • 1 kudos
by liormayn (New Contributor III)
  • 764 Views
  • 5 replies
  • 3 kudos

OSError: [Errno 78] Remote address changed

Hello :) As part of deploying an app that previously ran directly on EMR to Databricks, we are running experiments using LTS 9.1 and getting the following error: PythonException: An exception was thrown from a UDF: 'pyspark.serializers.SerializationEr...

Latest Reply
NandiniN (Valued Contributor III)

Hi @liormayn, I understand. I see the fix went out on 20 March 2024; you would have to restart the clusters. Thanks!

  • 3 kudos
4 More Replies
by Ikanip (New Contributor II)
  • 600 Views
  • 4 replies
  • 1 kudos

How to choose a compute, and how to find alternatives for the current compute being used?

We are using a compute for an Interactive Cluster in Production which incurs X amount of cost. We want to know what options are available with about the same processing power as the current compute but incurring a cost of Y, which is less...

Latest Reply
raphaelblg (New Contributor III)

Hello @Ikanip, You can utilize the Databricks Pricing Calculator to estimate costs. For detailed information on compute capacity, please refer to your cloud provider's documentation regarding Virtual Machine instance types.

  • 1 kudos
3 More Replies
by ChristopherS5 (New Contributor)
  • 381 Views
  • 1 reply
  • 0 kudos

Step-by-step guide to creating a Unity Catalog in Azure Databricks.

Hello everyone, There isn't an official document outlining the step-by-step procedure for enabling Unity Catalog in Azure Databricks. If anyone has created documentation or knows the process, please share it here. Thank you in advance.

Latest Reply
PL_db (New Contributor III)

See the docs Setup Unity Catalog on Azure and Unity Catalog best practices. Which guidance/procedure are you missing?

  • 0 kudos