Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Brad
by Contributor II
  • 440 Views
  • 3 replies
  • 0 kudos

Will MERGE incur a lot of driver memory?

Hi team, We have a job that runs MERGE on a target table with around 220 million rows. We found it needs a lot of driver memory (just for the MERGE itself). From the job metrics we can see the MERGE needs at least 46 GB of memory. Is there some special thing to mak...

Latest Reply
filipniziol
Contributor III
  • 0 kudos

Hi @Brad, Could you try applying some very standard optimization practices and check the outcome: 1. If your runtime is greater than or equal to 15.2, could you implement liquid clustering on the source and target tables using the JOIN columns? ALTER TABLE <table_name> CL...
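A minimal sketch of the suggested liquid clustering setup (PySpark; the table and column names are placeholders, not from the thread):

    # Cluster source and target on the MERGE join columns, then re-cluster existing data.
    for tbl in ("source_table", "target_table"):
        spark.sql(f"ALTER TABLE {tbl} CLUSTER BY (join_key)")
        spark.sql(f"OPTIMIZE {tbl}")

Clustering both tables on the join key lets MERGE prune files instead of scanning the whole 220M-row table.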

2 More Replies
hcord
by New Contributor II
  • 645 Views
  • 1 reply
  • 2 kudos

Resolved! Trigger a workflow from a different Databricks environment

Hello everyone, In the company I work for we have a lot of different Databricks environments, and we now need deeper integration of processes between environments X and Y. There's a workflow in Y that runs a process that, when finished, we would like ...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 2 kudos

Hi @hcord, You can use the REST API in the last task to trigger a workflow in a different workspace.
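A minimal sketch of that approach (Python; the host, token, and job ID for the other workspace are placeholders you must supply):

    import requests

    HOST = "https://<other-workspace-url>"  # URL of the target workspace
    TOKEN = "<token-valid-in-that-workspace>"  # PAT or OAuth token
    JOB_ID = 123  # ID of the workflow to trigger

    # Call the Jobs API run-now endpoint from the last task of the upstream workflow.
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": JOB_ID},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["run_id"])  # run ID of the triggered workflow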

sshynkary
by New Contributor
  • 873 Views
  • 1 reply
  • 0 kudos

Loading data from spark dataframe directly to Sharepoint

Hi guys! I am trying to load data directly from a PySpark DataFrame to a SharePoint folder and I cannot find a solution for it. I wanted to implement a workaround using volumes and Logic Apps, but there are a few issues. I need to partition the df in a few f...

Data Engineering
SharePoint
spark
Latest Reply
ChKing
New Contributor II
  • 0 kudos

One approach could involve using Azure Data Lake as an intermediary. You can partition your PySpark DataFrames and load them into Azure Data Lake, which is optimized for large-scale data storage and integrates well with PySpark. Once the data is in A...
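A rough sketch of the staging step (PySpark; the ABFSS path and partition count are assumptions, not from the thread):

    # Write the DataFrame to ADLS in a handful of files for a Logic App to pick up.
    (df.repartition(4)
       .write.mode("overwrite")
       .parquet("abfss://staging@<account>.dfs.core.windows.net/sharepoint_export/"))

A Logic App (or Power Automate flow) can then watch that folder and push the files into the SharePoint library.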

dpc
by New Contributor III
  • 3117 Views
  • 4 replies
  • 2 kudos

Resolved! Remove Duplicate rows in tables

Hello, I've seen posts that show how to remove duplicates, something like this: MERGE INTO [deltatable] AS target USING (SELECT *, ROW_NUMBER() OVER (PARTITION BY [primary keys] ORDER BY [date] DESC) AS rn FROM [deltatable] QUALIFY rn > 1) AS source ON ...

Latest Reply
filipniziol
Contributor III
  • 2 kudos

Hi @dpc, if you like using SQL: 1. Test data:
# Sample data
data = [("1", "A"), ("1", "A"), ("2", "B"), ("2", "B"), ("3", "C")]
# Create DataFrame
df = spark.createDataFrame(data, ["id", "value"])
# Write to Delta table
df.write.format("delta").mode(...
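For the dedup itself, a minimal sketch of the ROW_NUMBER pattern discussed in the thread (the table, key, and ordering column are placeholders):

    # Keep only the latest row per key by rewriting the table.
    spark.sql("""
        CREATE OR REPLACE TABLE my_delta_table AS
        SELECT * FROM my_delta_table
        QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) = 1
    """)

Rewriting with CREATE OR REPLACE sidesteps the self-MERGE and keeps exactly one row per key.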

3 More Replies
397973
by New Contributor III
  • 377 Views
  • 1 reply
  • 0 kudos

First time seeing the "Databricks is experiencing heavy load" message. What does it really mean?

Hi, I just went to run a Databricks PySpark notebook and saw this message. This is a notebook I've run before, but I never saw this. Is it referring to my cluster? The Databricks infrastructure? My notebook ran normally, just wondering though. Google sea...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Never saw that message, but my guess is it's not your cluster but the Databricks platform in your region. status.databricks.com perhaps has some info.

MustangR
by New Contributor
  • 1386 Views
  • 2 replies
  • 0 kudos

Delta Table Upsert fails when source attributes are missing

Hi All, I am trying to merge JSON into a Delta table. Since the JSON comes from MongoDB, which does not enforce a schema, there is a chance of missing attributes that the Delta table schema validation expects. Schema evolution is enabled as well. H...
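A minimal sketch of the usual fix for this (PySpark; the table name, key column, and source_df are placeholders, and exact behavior can vary by runtime):

    from delta.tables import DeltaTable

    # Let MERGE evolve the schema so attributes missing from the
    # MongoDB JSON don't fail schema validation.
    spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

    target = DeltaTable.forName(spark, "target_table")
    (target.alias("t")
        .merge(source_df.alias("s"), "t.id = s.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

With schema evolution on, target columns absent from the source are left unchanged on update and set to NULL on insert.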

Latest Reply
JohnM256
New Contributor II
  • 0 kudos

How do I set Existing Optional Columns?

1 More Replies
Paul_Poco
by New Contributor II
  • 70349 Views
  • 5 replies
  • 5 kudos

Asynchronous API calls from Databricks

Hi, I have to send thousands of API calls from a Databricks notebook to an API to retrieve some data. Right now, I am using a sequential approach with the Python requests package. As the performance is not acceptable anymore, I need to send my API c...

Latest Reply
adarsh8304
New Contributor II
  • 5 kudos

Hey @Paul_Poco, what about using ProcessPoolExecutor or ThreadPoolExecutor from the concurrent.futures module? Have you tried them?
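A minimal sketch of the ThreadPoolExecutor approach (the URL list and worker count are placeholders, not from the thread):

    import requests
    from concurrent.futures import ThreadPoolExecutor

    URLS = [f"https://api.example.com/items/{i}" for i in range(1000)]  # hypothetical endpoints

    def fetch(url):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json()

    # Issue up to 32 requests concurrently instead of sequentially.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(fetch, URLS))

Threads (or asyncio/aiohttp) fit I/O-bound API calls; ProcessPoolExecutor is the better pick only for CPU-bound work.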

4 More Replies
RabahO
by New Contributor III
  • 3106 Views
  • 3 replies
  • 1 kudos

Dashboard always displays truncated data

Hello, we're working with a serverless SQL cluster to query Delta tables and display some analytics in dashboards. We have some basic GROUP BY queries that generate around 36k rows, and they are executed without the LIMIT keyword. So in the data ...

Latest Reply
AlexHerbo
New Contributor II
  • 1 kudos

Hello, I am currently facing the same issue. Has there been an update or a solution since the last post?

2 More Replies
Prashanth24
by New Contributor III
  • 937 Views
  • 1 reply
  • 0 kudos

Error connecting Databricks Notebook using managed identity from Azure Data Factory

I am trying to connect to a Databricks notebook using the managed identity authentication type from Azure Data Factory. Below are the settings I used. The error message is appended at the bottom of this message. With the same settings but with a different authenticat...

priyansh
by New Contributor III
  • 902 Views
  • 3 replies
  • 0 kudos

How does Photon Acceleration actually work?

Hey folks! I would like to know how Photon acceleration actually works. I have tested it on samples of 219 MB, 513 MB, 2.7 GB, and 4.1 GB of data, and the difference in seconds between normal and Photon-accelerated compute was not that much. So my questi...

Latest Reply
arch_db
New Contributor II
  • 0 kudos

Try checking a MERGE operation on tables over 200 GB.

2 More Replies
Jorge3
by New Contributor III
  • 857 Views
  • 2 replies
  • 2 kudos

How to Upload Python Wheel Artifacts to a Volume from a DAB Run?

Hello, I'm currently working on a Databricks Asset Bundle (DAB) that builds and deploys a Python wheel package. My goal is to deploy this package to a Volume so that other DAB jobs can use this common library. I followed the documentation and successf...

Latest Reply
dataeng42io
New Contributor III
  • 2 kudos

Hi @Jorge3, Hope I am not too late to answer, but here is my suggestion. If you refer to the docs on consuming a wheel that is in a volume, you can configure your job to reference the wheel in your volume. Documentation: > https://learn.microsoft.com/e...

1 More Replies
EricCournarie
by New Contributor III
  • 421 Views
  • 2 replies
  • 0 kudos

Metadata on a prepared statement returns uppercase column names

Hello, Using the JDBC driver, when I check the metadata of a prepared statement, the column names are all uppercase. This does not happen when running a DESCRIBE on the same select. Any properties to set, or is it a known issue? Or a workaro...

Latest Reply
gchandra
Databricks Employee
  • 0 kudos

Looks like a bug. Can you try using double quotes?  SELECT "ColumnName" instead of backticks?   

1 More Replies
camilo_s
by Contributor
  • 1250 Views
  • 3 replies
  • 0 kudos

Spark SQL vs serverless SQL

Are there any benchmarks showing performance and cost differences between running SQL workloads on Spark SQL vs Databricks SQL (especially serverless SQL)? Our customer is hesitant about getting locked into Databricks SQL as opposed to being able to ru...

Latest Reply
robinhood555
New Contributor II
  • 0 kudos

@camilo_s wrote: Are there any benchmarks showing performance and cost differences between running SQL workloads on Spark SQL vs Databricks SQL (especially serverless SQL)? Our customer is hesitant about getting locked into Databricks SQL ...

2 More Replies
shsalami
by New Contributor III
  • 594 Views
  • 2 replies
  • 0 kudos

Sample streaming table fails

Running the following Databricks sample code in the pipeline: CREATE OR REFRESH STREAMING TABLE customers AS SELECT * FROM cloud_files("/databricks-datasets/retail-org/customers/", "csv") I got the error: org.apache.spark.sql.catalyst.ExtendedAnalysisExcep...
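For reference, a hedged Python sketch of the same pipeline definition (the table name and path are taken from the post; this is an equivalent, not the thread's fix):

    import dlt

    @dlt.table(name="customers")
    def customers():
        # Auto Loader read of the sample CSV folder.
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "csv")
            .load("/databricks-datasets/retail-org/customers/")
        )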

Latest Reply
shsalami
New Contributor III
  • 0 kudos

There is no table with that name. Also, only the following file exists in that folder: dbfs:/databricks-datasets/retail-org/customers/customers.csv

1 More Replies
shsalami
by New Contributor III
  • 604 Views
  • 2 replies
  • 1 kudos

Resolved! Materialized view creation fails

I have ALL_PRIVILEGES and USE_SCHEMA on the lhdev.gld_sbx schema, but the following command failed with the error: DriverException: Unable to process statement for Table 'customermvx'. CREATE MATERIALIZED VIEW customermvx AS SELECT * FROM lhdev.gl...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @shsalami, According to the documentation snippet below, you also need the USE CATALOG privilege on the parent catalog: "The user who creates a materialized view (MV) is the MV owner and needs to have the following permissions: SELECT privilege over the ba...
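A minimal sketch of the grant to add (to be run by the catalog owner or an admin; the principal is a placeholder):

    # USE CATALOG on the parent catalog, per the quoted documentation.
    spark.sql("GRANT USE CATALOG ON CATALOG lhdev TO `user@example.com`")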

1 More Replies
