Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

camilo_s
by Contributor
  • 698 Views
  • 0 replies
  • 1 kudos

Parametrizing query for DEEP CLONE

Update: Hey moderator, I've removed the link to the Bobby Tables XKCD to reassure you that this post is not spam. Hi, I'm somehow unable to write a parameterized query to create a DEEP CLONE. I'm trying really hard to avoid using string interpolation (to p...

greyfine
by New Contributor II
  • 12506 Views
  • 5 replies
  • 5 kudos

Hi everyone, I was wondering whether it is possible to set up alerts at the query level for PySpark notebooks that run on a schedule in Databricks, so that when a query produces a particular result we can receive an email alert?

In the image above you can see we have 3 workspaces. The alert option is available in the SQL workspace but not in our Data Science and Engineering workspace. Is there any way we can incorporate this into our DS and Engineering workspace?

Latest Reply
JKR
Contributor
  • 5 kudos

How can I receive a call on Teams/phone/Slack if any job fails?

4 More Replies
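There is no built-in query-level alert outside the SQL workspace; a common workaround is to evaluate the condition inside the scheduled notebook and post to an incoming webhook (Teams/Slack) yourself. A minimal sketch, where the webhook URL and threshold are hypothetical:

```python
import json

def build_alert(metric_name: str, value: float, threshold: float):
    """Return a webhook payload when the metric breaches the threshold, else None."""
    if value <= threshold:
        return None
    return {"text": f"ALERT: {metric_name}={value} exceeded threshold {threshold}"}

# In the scheduled notebook you might do (webhook URL is hypothetical):
# payload = build_alert("row_count", df.count(), 1_000_000)
# if payload is not None:
#     requests.post("https://hooks.example.com/alert", data=json.dumps(payload))
```

For plain failure notifications, job-level email/webhook notification destinations in Workflows cover that case without any notebook code.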
Aidzillafont
by New Contributor II
  • 1361 Views
  • 1 replies
  • 0 kudos

How to pick the right cluster for your workflow

Hi All, I am attempting to execute a workflow on various job clusters, including general-purpose and memory-optimized clusters. My main bottleneck is that data is being written to disk because I'm running out of RAM. This is due to the large dataset t...

Latest Reply
Ravivarma
Databricks Employee
  • 0 kudos

Hello @Aidzillafont, greetings! Please find below the document explaining compute configuration best practices. Doc: https://docs.databricks.com/en/compute/cluster-config-best-practices.html I hope this helps you! Regards, Ravi

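Beyond picking a memory-optimized instance, spill often comes from partitions that are too large for the available executor memory. A common rule of thumb is to target partitions on the order of 100–200 MB; a back-of-the-envelope helper (a sketch; the target size is an assumption, not official Databricks guidance):

```python
import math

def suggested_shuffle_partitions(dataset_bytes: int, target_partition_mb: int = 128) -> int:
    """Estimate spark.sql.shuffle.partitions so each partition stays near the target size."""
    target = target_partition_mb * 1024 * 1024
    return max(1, math.ceil(dataset_bytes / target))

# e.g. a 512 GB shuffle at ~128 MB per partition:
# spark.conf.set("spark.sql.shuffle.partitions",
#                suggested_shuffle_partitions(512 * 1024**3))
```

Raising partition counts this way (or enabling adaptive query execution, which is on by default in recent runtimes) can reduce per-task memory pressure more cheaply than a bigger cluster.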
Sadam97
by New Contributor III
  • 650 Views
  • 0 replies
  • 0 kudos

Databricks (GCP) Cluster not resolving Hostname into IP address

We have #mongodb hosts that must be resolved to the private internal load balancer IPs (of another cluster), and we are unable to add host aliases in the Databricks GKE cluster so that Spark can connect to MongoDB and resolve t...

feliximmanuel
by New Contributor II
  • 1163 Views
  • 0 replies
  • 1 kudos

Error: oidc: fetch .well-known: Get "https://%E2%80%93host/oidc/.well-known/oauth-authorization-serv

I'm trying to authenticate Databricks using WSL but suddenly getting this error. /databricks-asset-bundle$ databricks auth login –host https://<XXXXXXXXX>.12.azuredatabricks.net Databricks Profile Name: <XXXXXXXXX> Error: oidc: fetch .well-known: Get "ht...

Sudheer_DB
by New Contributor II
  • 966 Views
  • 3 replies
  • 0 kudos

DLT SQL schema definition

Hi All, While defining a schema when creating a table using Auto Loader and DLT with SQL, I am getting a schema mismatch error between the defined schema and the inferred schema. CREATE OR REFRESH STREAMING TABLE csv_test(a0 STRING,a1 STRING,a2 STRING,a3 STRI...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 0 kudos

@Sudheer_DB You can specify your own _rescued_data column name by setting the rescuedDataColumn option. https://docs.databricks.com/en/ingestion/auto-loader/schema.html#what-is-the-rescued-data-column

2 More Replies
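In Python terms, that suggestion amounts to adding the rescued-data option to the Auto Loader reader so the inferred schema's extra column matches what you declared. A sketch (the column name and source path are placeholders; check the option name against the linked docs for your runtime):

```python
# Sketch: Auto Loader reader options with an explicit rescued-data column name,
# so the declared schema and the inferred schema line up. Names are placeholders.
autoloader_options = {
    "cloudFiles.format": "csv",
    "rescuedDataColumn": "_my_rescued_data",  # placeholder column name
}

# In a notebook/pipeline you would use it roughly like:
# df = (spark.readStream.format("cloudFiles")
#       .options(**autoloader_options)
#       .load("/Volumes/.../csv_test/"))
```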
hr959
by New Contributor II
  • 1081 Views
  • 1 replies
  • 0 kudos

Access Control/Management Question

I have two workspaces made with the same account using same metastore and region, and I want the second workspace to be able to access only certain rows of tables from data held in the first workspace based on a user group condition. Is this possible...

Latest Reply
hr959
New Contributor II
  • 0 kudos

Sorry, forgot to mention! When I tried Delta Sharing, all my workspaces had the same sharing identifier, so the data never actually showed up in "Shared with me", and then I wasn't able to access the data I shared. It was in "Shared by me" in bot...

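Since both workspaces share the same metastore, Unity Catalog row filters may be simpler than Delta Sharing here: define a filter function keyed on account group membership and attach it to the table. A sketch of the two statements you would run from SQL or `spark.sql` (the group, schema, table, and column names are all placeholders):

```python
# Sketch: Unity Catalog row filter gated on account group membership.
# Group, schema, table, and column names are placeholders.
create_filter_fn = """
CREATE OR REPLACE FUNCTION main.security.us_only(region STRING)
RETURN is_account_group_member('us_analysts') OR region = 'US'
"""

attach_filter = """
ALTER TABLE main.prod.sales
SET ROW FILTER main.security.us_only ON (region)
"""

# Run each with spark.sql(create_filter_fn) and spark.sql(attach_filter).
```

With this in place, users in the second workspace querying the table through the shared metastore only see the rows the filter function allows for their groups.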
pm71
by New Contributor II
  • 1980 Views
  • 4 replies
  • 3 kudos

Issue with os and sys Operations in Repo Path on Databricks

Hi, Starting from today, I have encountered an issue when performing operations using the os and sys modules within the Repo path in my Databricks environment. Specifically, any operation that involves these modules results in a timeout error. However...

Latest Reply
mgradowski
New Contributor III
  • 3 kudos

https://status.azuredatabricks.net/pages/incident/5d49ec10226b9e13cb6a422e/667c08fa17fef71767abda04 "Degraded performance" is a pretty mild way of saying almost nothing productive can be done ATM...

3 More Replies
hfyhn
by New Contributor
  • 838 Views
  • 0 replies
  • 0 kudos

DLT, combine LIVE table with data masking and row filter

I need to apply data masking and row filters to my table. At the same time I would like to use DLT Live tables. However, as far as I can see, masking and row filters are not compatible with Live tables. What are my options? Move the tables out of the mat...

Hertz
by New Contributor II
  • 1298 Views
  • 1 replies
  • 0 kudos

System Tables / Audit Logs action_name createWarehouse/createEndpoint

I am creating a cost dashboard across multiple accounts. I am working to get SQL warehouse names and warehouse IDs so I can combine them with system.access.billing on warehouse_id. But the only action_names that include both the warehouse_id and warehouse_n...

Data Engineering
Audit Logs
cost monitor
createEndpoint
createWarehouse
Latest Reply
Hertz
New Contributor II
  • 0 kudos

I just wanted to circle back to this. It appears that the ID is returned in the response column of the create action_name.

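Building on that, one way to recover the name-to-id mapping is to parse the `response` column of the create events. A sketch of such a query (the field paths are assumptions based on the reply above; verify them against your own audit rows before relying on them):

```python
# Sketch: map warehouse names to ids from audit-log create events.
# Field paths (request_params.name, response.result, '$.id') are assumptions;
# check them against rows in your own audit table.
warehouse_map_query = """
SELECT
  request_params.name                      AS warehouse_name,
  get_json_object(response.result, '$.id') AS warehouse_id
FROM system.access.audit
WHERE action_name IN ('createWarehouse', 'createEndpoint')
"""

# spark.sql(warehouse_map_query) can then be joined to billing data on warehouse_id.
```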
HASSAN_UPPAL123
by New Contributor II
  • 1528 Views
  • 1 replies
  • 0 kudos

SPARK_GEN_SUBQ_0 WHERE 1=0, Error message from Server: Configuration schema is not available

Hi Community, I'm trying to read data from the sample schema, table nation, in the Databricks catalog via Spark, but I'm getting this error. com.databricks.client.support.exceptions.GeneralException: [Databricks][JDBCDriver](500051) ERROR processing q...

Data Engineering
pyspark
python
Latest Reply
HASSAN_UPPAL123
New Contributor II
  • 0 kudos

Hi Community, I'm still facing this issue. Can someone please suggest a solution for how to fix the above error?

Zume
by New Contributor II
  • 1076 Views
  • 1 replies
  • 0 kudos

Unity Catalog Shared compute Issues

Am I the only one experiencing challenges in migrating to Databricks Unity Catalog? I observed that in Unity Catalog-enabled compute, the "Shared" access mode is still tagged as a Preview feature. This means it is not yet safe for use in production w...

Latest Reply
jacovangelder
Honored Contributor
  • 0 kudos

Have you tried creating a volume on top of the external location, and using the volume in spark.read.parquet? i.e. spark.read.parquet('/Volumes/<volume_name>/<folder_name>/<file_name.parquet>') Edit: also, not sure why the Databricks community mana...

Martin_Pham
by New Contributor III
  • 773 Views
  • 1 replies
  • 1 kudos

Resolved! Is Databricks-Salesforce already available to use?

Reference: Salesforce and Databricks Announce Strategic Partnership to Bring Lakehouse Data Sharing and Shared ...I was going through this article and wanted to know if this is already released. My assumption is that there’s no need to use third-part...

Latest Reply
Martin_Pham
New Contributor III
  • 1 kudos

Looks like it has been released - Salesforce BYOM

Jackson1111
by New Contributor III
  • 686 Views
  • 1 replies
  • 0 kudos

How to use job.run_id as a run parameter of a JAR job when triggering the job through the REST API

Parameter value: "[,\"\{\{job.run_id\}\}\"]"
Error: {"error_code": "INVALID_PARAMETER_VALUE","message": "Legacy parameters cannot contain references."}

Latest Reply
Jackson1111
New Contributor III
  • 0 kudos

How do I get the Job ID and Run ID while the job is running?

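The error says *legacy* parameters (such as `jar_params`) cannot contain `{{job.run_id}}` references, which suggests passing job-level parameters through the Jobs 2.1 `run-now` endpoint instead. A sketch of that workaround (the job id, host, and token are placeholders, and whether the reference resolves in your setup should be verified against the Jobs API docs):

```python
import json

# Sketch: trigger a run with job_parameters instead of legacy jar_params,
# since the error implies only legacy parameters reject references.
# job_id, host, and token are placeholders.
payload = {
    "job_id": 123,
    "job_parameters": {"run_id": "{{job.run_id}}"},
}

# POST to the run-now endpoint, e.g. with requests:
# requests.post(f"{host}/api/2.1/jobs/run-now",
#               headers={"Authorization": f"Bearer {token}"},
#               data=json.dumps(payload))
```

Inside a running job, `{{job.id}}` and `{{job.run_id}}` dynamic value references can also be wired directly into task parameter values in the job definition, which answers the follow-up question above.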
