Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Hubert-Dudek (Esteemed Contributor III) • 823 Views • 1 reply • 1 kudos

How much USD are you spending on Databricks?

Join two system tables and get exactly how much USD you are spending. The short version of the query: SELECT u.usage_date, u.sku_name, SUM(u.usage_quantity * p.pricing.default) AS total_spent, p.currency_code FROM system.billing....

[Attachment: system_pig.png]
Latest Reply: jose_gonzalez (Databricks Employee) • 1 kudos

Thank you for sharing this information @Hubert-Dudek
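For readers who want the full pattern, here is a minimal sketch of the join the post describes, assuming the system.billing.usage and system.billing.list_prices system tables are enabled in your account; column names follow the public system-tables documentation, so verify them in your workspace:

    # Hedged sketch: estimate spend by joining usage with list prices.
    # Assumes system billing tables are enabled; verify all column names
    # against your workspace's system.billing schema.
    spend = spark.sql("""
        SELECT u.usage_date,
               u.sku_name,
               SUM(u.usage_quantity * p.pricing.default) AS total_spent,
               p.currency_code
        FROM system.billing.usage AS u
        JOIN system.billing.list_prices AS p
          ON u.sku_name = p.sku_name
         AND u.usage_start_time >= p.price_start_time
         AND (p.price_end_time IS NULL OR u.usage_start_time < p.price_end_time)
        GROUP BY u.usage_date, u.sku_name, p.currency_code
    """)
    display(spend)

Note this reflects list prices, not any negotiated discounts.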
by Fresher (New Contributor II) • 679 Views • 1 reply • 0 kudos

Query is taking too long to run

I have two clusters: cluster A (a Spark cluster) and cluster B (a SQL warehouse). Whenever I run a particular query on cluster B it works fine, but whenever I run the same query on cluster A it takes a long time and never shows the output.

Latest Reply: jose_gonzalez (Databricks Employee) • 0 kudos

Check the physical query plan of the query you are running. Also, check the Spark UI to identify where it is taking time and why.
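A quick way to act on that advice from a notebook; df.explain is a standard PySpark API, and the table and filter here are placeholders for the slow query:

    # Print the formatted physical plan without executing the full query.
    # "my_table" and the filter are placeholders for the query in question.
    df = spark.sql("SELECT * FROM my_table WHERE some_column = 'x'")
    df.explain(mode="formatted")

Comparing this plan with the query profile shown by the SQL warehouse often reveals where the two engines diverge, for example missing partition pruning or a different join strategy.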
by shanebo425 (New Contributor III) • 802 Views • 1 reply • 0 kudos

Databricks OutOfMemory error on code that previously worked without issue

I have a notebook in Azure Databricks that does some transformations on a bronze tier table and inserts the transformed data into a silver tier table. This notebook is used to do an initial load of the data from our existing system into our new datal...

Latest Reply: jose_gonzalez (Databricks Employee) • 0 kudos

Please review your Spark UI from the old job execution versus the new job execution. You might need to check whether the data volume has increased; that could be the reason for the OOM.
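Before diving into the Spark UI, a cheap way to test the data-volume hypothesis; the table name below is a placeholder for the bronze source in the post:

    # Compare the current input volume with what the job used to process.
    # "bronze_table" is a placeholder for the actual source table.
    bronze = spark.table("bronze_table")
    print("row count:", bronze.count())
    print("partitions:", bronze.rdd.getNumPartitions())

If the count has grown substantially since the last successful run, the OOM is likely volume-driven rather than a regression in the notebook itself.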
by Phani1 (Valued Contributor II) • 855 Views • 0 replies • 0 kudos

Execute Pyspark cells concurrently

Hi Team, is it feasible to run PySpark cells concurrently in Databricks notebooks? If so, kindly provide instructions on how to accomplish this. We aim to execute the intermediate steps simultaneously. The given scenario entails the simultaneou...

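Notebook cells themselves run sequentially, but a common workaround is to launch several Spark actions from a single cell with a thread pool, since Spark accepts job submissions from multiple threads. A minimal sketch with placeholder step functions:

    from concurrent.futures import ThreadPoolExecutor

    # Placeholder intermediate steps; each one triggers its own Spark job.
    def step_a():
        return spark.table("table_a").count()

    def step_b():
        return spark.table("table_b").count()

    # Submit both steps at once; Spark schedules the jobs concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(step_a), pool.submit(step_b)]
        results = [f.result() for f in futures]
    print(results)

Whether this helps depends on the cluster having spare capacity; two jobs on a saturated cluster just time-slice the same cores.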
by PrashantAghara (New Contributor II) • 1017 Views • 1 reply • 0 kudos

org.apache.spark.SparkException: Job aborted due to stage failure when writing to Cosmos

I am writing data to Cosmos DB using Python & Spark on Databricks. I am getting the below error: org.apache.spark.SparkException: Job aborted due to stage failure: Authorized committer (attemptNumber=0, stage=192, partition=105) failed; but task commit suc...

Latest Reply: PrashantAghara (New Contributor II) • 0 kudos

The configs for the cluster: worker & driver type: Standard_D16ads_v5; RUs for Cosmos: 1.5L
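For context, a hedged sketch of a typical write with the Azure Cosmos DB Spark connector (format "cosmos.oltp"); the endpoint, key, database, and container values are placeholders, and the option names should be checked against the connector version in use:

    # Hypothetical Cosmos DB write; all connection values are placeholders.
    cfg = {
        "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
        "spark.cosmos.accountKey": "<key>",
        "spark.cosmos.database": "<database>",
        "spark.cosmos.container": "<container>",
    }

    # Minimal example DataFrame; Cosmos items need an "id" column.
    df = spark.createDataFrame([("1", "example")], ["id", "value"])

    df.write.format("cosmos.oltp").options(**cfg).mode("APPEND").save()

If tasks are being retried, for example due to throttling at low RU settings, the committer error above can appear; provisioning more RUs or reducing write concurrency is a common first step.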
by DC3 (New Contributor II) • 1593 Views • 2 replies • 0 kudos

Unable to access unity catalog volume via /Volumes in notebook

I have set up a volume in Unity Catalog in the format catalog/schema/volume, and granted all permissions to all users on the catalog, schema, and volume. From the notebook I can see the /Volumes directory in the root of the file system but am unable to...

Latest Reply: DC3 (New Contributor II) • 0 kudos

Thanks for your comments. The problem turned out to be the compute resource not having Unity Catalog enabled.
1 More Replies
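For anyone hitting the same wall: on a Unity Catalog-enabled cluster, volume paths behave like ordinary paths. A small sketch with placeholder catalog/schema/volume names:

    # List files in a UC volume; the three-level name is a placeholder.
    for f in dbutils.fs.ls("/Volumes/my_catalog/my_schema/my_volume/"):
        print(f.path)

    # Reading works with the same path scheme.
    df = spark.read.csv("/Volumes/my_catalog/my_schema/my_volume/data.csv", header=True)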
by Sagas (New Contributor II) • 809 Views • 1 reply • 0 kudos

SparkR or sparklyr not showing history

Hi, for some reason Azure Databricks doesn't show History if the data is saved with SparkR (2 in the figure below) or sparklyr (3), but it does show it with Data Ingestion (0) or with PySpark (1). Is this a known bug or am I doing something wrong? Is ...

[Attachments: Databricks_history.PNG, SparkR.PNG, Sparklyr.PNG]
Labels: Data Engineering, sparklyr, SparkR
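One check worth making here: the History tab surfaces Delta table history, which Delta Lake records regardless of whether SparkR, sparklyr, or PySpark wrote the data, and it can be queried directly. A minimal sketch with a placeholder table name:

    # Delta history is queryable even when the UI does not display it.
    # "my_table" is a placeholder.
    display(spark.sql("DESCRIBE HISTORY my_table"))

If DESCRIBE HISTORY returns rows for the SparkR/sparklyr writes, the data is fine and the gap is in the UI; if not, those writes may not have targeted a Delta table.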
by patrickw (New Contributor II) • 5681 Views • 2 replies • 0 kudos

connect timed out error - Connecting to SQL Server from Databricks

I am getting a connect timed out error when attempting to access a SQL Server. I can successfully ping the server from Databricks. I have used the JDBC connection and the included SQL Server driver, and both result in the same error. I have also attemp...

Latest Reply: Walter_C (Databricks Employee) • 0 kudos

Can you run the following command in a notebook using the same cluster you are using to connect:

%sh nc -vz <hostname> <port>

This test will confirm whether we are able to communicate with the SQL Server by using the port you are defining to connect. If...
1 More Replies
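If nc is not available on the cluster image, a rough Python equivalent of the same reachability test; the hostname and port below are placeholders:

    import socket

    # Rough equivalent of `nc -vz <hostname> <port>`; values are placeholders.
    try:
        with socket.create_connection(("my-sqlserver-host", 1433), timeout=10):
            print("TCP connection succeeded")
    except OSError as e:
        print("TCP connection failed:", e)

A timeout here (as opposed to a refusal) usually points at a firewall or routing issue between the Databricks subnet and the SQL Server, since ICMP ping can succeed while the SQL port is blocked.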
by madrhr (New Contributor III) • 2427 Views • 3 replies • 3 kudos

Resolved! SparkContext lost when running %sh script.py

I need to execute a .py file in Databricks from a notebook (with arguments, which for simplicity I exclude here). For this I am using:

%sh script.py

script.py:

from pyspark import SparkContext

def main():
    sc = SparkContext.getOrCreate()
    print(sc...

Labels: Data Engineering, %sh, .py, bash shell, SparkContext, SparkShell
Latest Reply: madrhr (New Contributor III) • 3 kudos

I got it eventually working with a combination of:

from databricks.sdk.runtime import *
import sys

spark.sparkContext.addPyFile("/path/to/your/file")
sys.path.append("path/to/your")
2 More Replies
by NOOR_BASHASHAIK (Contributor) • 2073 Views • 2 replies • 1 kudos

Machine Type for VACUUM operation

Dear all, I have a workflow with 2 tasks: one that does OPTIMIZE, followed by one that does VACUUM. I used a cluster with an F32s driver and F64s - 8 workers (auto-scaling enabled). All 8 workers are launched by Databricks as soon as OPTIMIZE starts. As ...

[Attachment: NOOR_BASHASHAIK_0-1710268182562.png]
Labels: Data Engineering, best practice, F series, optimize, vacuum
Latest Reply: ArturOA (New Contributor III) • 1 kudos

Hi, were you able to get any useful help on this?
1 More Replies
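For readers, the two tasks boil down to standard Delta maintenance commands; the table name and retention window below are placeholders:

    # Standard Delta maintenance; "my_table" and the retention are placeholders.
    spark.sql("OPTIMIZE my_table")
    spark.sql("VACUUM my_table RETAIN 168 HOURS")  # 168 hours = the 7-day default

OPTIMIZE does distributed compaction and benefits from many workers, while VACUUM is dominated by file listing and deletion, which is why it is often run on a smaller cluster or with autoscaling allowed to shrink.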
by PrebenOlsen (New Contributor III) • 1043 Views • 2 replies • 0 kudos

How to migrate Git repos with DLT configurations

Hi! I want to migrate all my Databricks-related code from one GitHub repo to another. I knew this wouldn't be straightforward. When I copy my code for one DLT, I get the error org.apache.spark.sql.catalyst.ExtendedAnalysisException: Table 'vessel_batt...

Latest Reply: PrebenOlsen (New Contributor III) • 0 kudos

Does cloning take considerably less time than recreating the tables? Can I resume append operations to a cloned table?
1 More Replies
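On the cloning question, a hedged sketch of a Delta deep clone (all table names are placeholders): a deep clone copies the table's data files rather than recomputing them, so it is typically much faster than rebuilding from source, and the result is an independent table that accepts further appends:

    # Deep clone copies data and metadata; both names are placeholders.
    spark.sql("""
        CREATE TABLE target_catalog.target_schema.my_table
        DEEP CLONE source_catalog.source_schema.my_table
    """)

Note that for DLT-managed tables specifically, the pipeline normally owns the table, so check the DLT docs before cloning into a pipeline's target schema.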
by Phani1 (Valued Contributor II) • 694 Views • 0 replies • 0 kudos

Job cluster configuration for 24/7

Hi Team, we intend to run the job cluster around the clock. We consider the following parameters regarding cost: data volumes, client SLA for job completion, and starting with a small cluster configuration. Please advise on any other options we s...

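Beyond those parameters, autoscaling bounds usually dominate the cost of an always-on job cluster. A hedged sketch of the new_cluster block you might pass when defining the job (runtime version and node type are placeholders to adjust):

    # Hypothetical job-cluster spec for the Jobs API; values are placeholders.
    new_cluster = {
        "spark_version": "14.3.x-scala2.12",   # pick a current LTS runtime
        "node_type_id": "Standard_D4ads_v5",   # placeholder Azure node type
        "autoscale": {"min_workers": 1, "max_workers": 4},
    }

Keeping min_workers small keeps the around-the-clock baseline cheap while max_workers absorbs the SLA-critical peaks.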
by Anshul_DBX (New Contributor) • 963 Views • 1 reply • 1 kudos

Masking rules with Delta Sharing

Hi, we tried Delta Sharing to Power BI, which worked fine, but we are facing issues while trying to apply row/column-level filtering or data masking. It fails with an error that it's not supported. Can anyone please confirm if Delta Sharing with masking rules works w...

[Attachment: 0dcc15fc-597c-460f-b37c-1a678ef60997.jpg]
Latest Reply: Yeshwanth (Databricks Employee) • 1 kudos

Hi @Anshul_DBX, good day! The issue you are encountering is due to a limitation in Delta Sharing. As per the provided information, Delta Sharing does not support row-level security or column masks. This means that you cannot apply row and column level...
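A workaround some teams use (hedged; confirm that view sharing and the masking function behave as expected for your sharing mode and recipient type) is to bake the masking into a view and share the view instead of the base table. A sketch with placeholder names, using the built-in is_account_group_member function:

    # Hypothetical masked view; catalog/schema/table/group names are placeholders.
    spark.sql("""
        CREATE OR REPLACE VIEW main.sales.customers_masked AS
        SELECT
          id,
          CASE WHEN is_account_group_member('pii_readers') THEN email
               ELSE '***' END AS email
        FROM main.sales.customers
    """)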
by Yohannes (New Contributor) • 3092 Views • 1 reply • 0 kudos

Databricks CLI workflow

Is there a way that I can set up and configure a Databricks workflow job and tasks from the Databricks CLI or API tools using Python? Any help would be appreciated. #databricksworkflow #databricks

Latest Reply: steyler-db (Databricks Employee) • 0 kudos

Hello, and yes, you can set up and configure a Databricks workflow job and tasks using the Databricks CLI or API tools with Python. Here are some resources and steps to guide you: Create and run Databricks Jobs: this document ( https://docs.databrick...
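As a concrete starting point, a hedged sketch of creating a one-task job from Python against the documented /api/2.1/jobs/create REST endpoint; the workspace URL, token, notebook path, and cluster values are all placeholders:

    import requests

    # Placeholders: set these for your workspace.
    host = "https://<workspace>.azuredatabricks.net"
    token = "<personal-access-token>"

    job_spec = {
        "name": "example-job",
        "tasks": [{
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Workspace/Users/me/my_notebook"},
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "Standard_D4ads_v5",
                "num_workers": 1,
            },
        }],
    }

    resp = requests.post(f"{host}/api/2.1/jobs/create",
                         headers={"Authorization": f"Bearer {token}"},
                         json=job_spec)
    print(resp.json())  # {"job_id": ...} on success

The databricks-sdk Python package wraps the same endpoint if you prefer a typed client over raw REST calls.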
