Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Aidzillafont
by New Contributor II
  • 2425 Views
  • 1 replies
  • 0 kudos

How to pick the right cluster for your workflow

Hi All, I am attempting to execute a workflow on various job clusters, including general-purpose and memory-optimized clusters. My main bottleneck is that data is being written to disk because I’m running out of RAM. This is due to the large dataset t...

Latest Reply
Ravivarma
Databricks Employee
  • 0 kudos

Hello @Aidzillafont, greetings! Please find below the document explaining compute configuration best practices: https://docs.databricks.com/en/compute/cluster-config-best-practices.html I hope this helps! Regards, Ravi

Sadam97
by New Contributor III
  • 957 Views
  • 0 replies
  • 0 kudos

Databricks (GCP) Cluster not resolving Hostname into IP address

We have #mongodb hosts that must be resolved to the private internal load balancer IPs (of another cluster), and we are unable to add host aliases in the Databricks GKE cluster so that Spark can connect to MongoDB and resolve t...

Sudheer_DB
by New Contributor II
  • 1706 Views
  • 3 replies
  • 0 kudos

DLT SQL schema definition

Hi All, While defining a schema when creating a table using Auto Loader and DLT in SQL, I am getting a schema mismatch error between the defined schema and the inferred schema. CREATE OR REFRESH STREAMING TABLE csv_test(a0 STRING,a1 STRING,a2 STRING,a3 STRI...

Latest Reply
daniel_sahal
Databricks MVP
  • 0 kudos

@Sudheer_DB You can specify your own _rescued_data column name by setting the rescuedDataColumn option. https://docs.databricks.com/en/ingestion/auto-loader/schema.html#what-is-the-rescued-data-column
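As a hedged sketch of how that option might be applied to the csv_test table from the original post (the source path and the renamed rescued-data column are illustrative placeholders, not from the thread):

```sql
-- Sketch only: rename the rescued-data column for an Auto Loader DLT table.
-- The source path and the _my_rescued_data name are illustrative placeholders.
CREATE OR REFRESH STREAMING TABLE csv_test (
  a0 STRING,
  a1 STRING,
  a2 STRING,
  _my_rescued_data STRING
)
AS SELECT *
FROM STREAM read_files(
  '/Volumes/my_catalog/my_schema/landing/',
  format => 'csv',
  rescuedDataColumn => '_my_rescued_data'
);
```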

2 More Replies
hr959
by New Contributor II
  • 2538 Views
  • 1 replies
  • 0 kudos

Access Control/Management Question

I have two workspaces made with the same account using same metastore and region, and I want the second workspace to be able to access only certain rows of tables from data held in the first workspace based on a user group condition. Is this possible...

Latest Reply
hr959
New Contributor II
  • 0 kudos

Sorry, forgot to mention! When I tried delta sharing, all my workspaces have the same sharing identifier so the data never actually showed up in the "shared with me", and then I wasn't able to access the data I shared. It was in "shared by me" in bot...

pm71
by New Contributor II
  • 3120 Views
  • 4 replies
  • 3 kudos

Issue with os and sys Operations in Repo Path on Databricks

Hi, Starting from today, I have encountered an issue when performing operations using the os and sys modules within the Repo path in my Databricks environment. Specifically, any operation that involves these modules results in a timeout error. However...

Latest Reply
mgradowski
New Contributor III
  • 3 kudos

https://status.azuredatabricks.net/pages/incident/5d49ec10226b9e13cb6a422e/667c08fa17fef71767abda04 "Degraded performance" is a pretty mild way of saying almost nothing productive can be done ATM...

3 More Replies
hfyhn
by Databricks Partner
  • 1074 Views
  • 0 replies
  • 0 kudos

DLT, combine LIVE table with data masking and row filter

I need to apply data masking and row filters to my table. At the same time I would like to use DLT Live tables. However, as far as I can see, DLT Live tables are not compatible with these features. What are my options? Move the tables from out of the mat...

Hertz
by New Contributor II
  • 2113 Views
  • 1 replies
  • 0 kudos

System Tables / Audit Logs action_name createWarehouse/createEndpoint

I am creating a cost dashboard across multiple accounts. I am working to get SQL warehouse names and warehouse IDs so I can combine them with system.access.billing on warehouse_id. But the only action_names that include both the warehouse_id and warehouse_n...

Data Engineering
Audit Logs
cost monitor
createEndpoint
createWarehouse
Latest Reply
Hertz
New Contributor II
  • 0 kudos

I just wanted to circle back to this. It appears that the ID is returned in the response column of the create action_name.
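A hedged sketch of what that lookup might look like against the audit system table. The exact JSON path inside the response payload is an assumption, not confirmed by the thread; inspect your own audit rows to verify it:

```sql
-- Sketch only: pull warehouse ids out of the create events' response payload.
-- The '$.id' path inside response.result is an assumption and may differ;
-- request_params['name'] is likewise assumed to hold the warehouse name.
SELECT
  request_params['name']                    AS warehouse_name,
  get_json_object(response.result, '$.id')  AS warehouse_id
FROM system.access.audit
WHERE action_name IN ('createWarehouse', 'createEndpoint');
```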

HASSAN_UPPAL123
by New Contributor II
  • 2301 Views
  • 1 replies
  • 0 kudos

SPARK_GEN_SUBQ_0 WHERE 1=0, Error message from Server: Configuration schema is not available

Hi Community, I'm trying to read data from table nation in the sample schema of the Databricks catalog via Spark, but I'm getting this error: com.databricks.client.support.exceptions.GeneralException: [Databricks][JDBCDriver](500051) ERROR processing q...

Data Engineering
pyspark
python
Latest Reply
HASSAN_UPPAL123
New Contributor II
  • 0 kudos

Hi Community, I'm still facing the issue. Can someone please suggest a solution to fix the above error?

Zume
by New Contributor II
  • 1629 Views
  • 1 replies
  • 0 kudos

Unity Catalog Shared compute Issues

Am I the only one experiencing challenges in migrating to Databricks Unity Catalog? I observed that in Unity Catalog-enabled compute, the "Shared" access mode is still tagged as a Preview feature. This means it is not yet safe for use in production w...

Latest Reply
jacovangelder
Databricks MVP
  • 0 kudos

Have you tried creating a volume on top of the external location, and using the volume in spark.read.parquet? i.e. spark.read.parquet('/Volumes/<volume_name>/<folder_name>/<file_name.parquet>') Edit: also, not sure why the Databricks community mana...

Martin_Pham
by New Contributor III
  • 1307 Views
  • 1 replies
  • 1 kudos

Resolved! Is Databricks-Salesforce already available to use?

Reference: Salesforce and Databricks Announce Strategic Partnership to Bring Lakehouse Data Sharing and Shared ... I was going through this article and wanted to know if this has already been released. My assumption is that there’s no need to use third-part...

Latest Reply
Martin_Pham
New Contributor III
  • 1 kudos

Looks like it has been released - Salesforce BYOM

Jackson1111
by New Contributor III
  • 1147 Views
  • 1 replies
  • 0 kudos

How to use job.run_id as the running parameter of jar job to trigger job through REST API

Passing "[,\"\{\{job.run_id\}\}\"]" as a legacy jar parameter returns: {"error_code": "INVALID_PARAMETER_VALUE", "message": "Legacy parameters cannot contain references."}

Latest Reply
Jackson1111
New Contributor III
  • 0 kudos

How do I get the Job ID and Run ID while a job is running?
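One hedged reading of the error is that dynamic value references like {{job.run_id}} cannot appear in legacy jar_params sent at trigger time, but can be configured on the task definition itself. A sketch of that approach, assuming the Jobs 2.1 API shape; task key, class name, and the {{job.id}} reference are illustrative:

```json
{
  "tasks": [
    {
      "task_key": "my_jar_task",
      "spark_jar_task": {
        "main_class_name": "com.example.Main",
        "parameters": ["{{job.run_id}}", "{{job.id}}"]
      }
    }
  ]
}
```

With the references resolved in the task definition, the REST trigger call no longer needs to pass them as runtime parameters.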

ttamas
by New Contributor III
  • 5619 Views
  • 1 replies
  • 0 kudos

Get the triggering task's name

Hi, I have tasks that depend on each other. I would like to get variables from task1 that triggers task2. This is how I solved my problem: following the suggestion in https://community.databricks.com/t5/data-engineering/how-to-pass-parameters-to-a-quot-...
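A hedged sketch of the usual pattern for passing values between job tasks. In a real Databricks job the calls are dbutils.jobs.taskValues.set(...) and .get(...); since dbutils only exists inside a job run, a plain dict stands in here so the flow is runnable anywhere, with the real calls noted in comments:

```python
# Local stand-in for dbutils.jobs.taskValues so the flow runs outside Databricks.
task_values = {}

def set_task_value(task_key, key, value):
    # Real call in task1: dbutils.jobs.taskValues.set(key=key, value=value)
    task_values[(task_key, key)] = value

def get_task_value(task_key, key, default=None):
    # Real call in task2:
    # dbutils.jobs.taskValues.get(taskKey=task_key, key=key, default=default)
    return task_values.get((task_key, key), default)

# task1 publishes its name for downstream tasks
set_task_value("task1", "triggering_task", "task1")

# task2 reads which task triggered it
print(get_task_value("task1", "triggering_task", default="unknown"))
```

The taskKey/key/default arguments mirror the dbutils API; everything else here is scaffolding for illustration.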

Kjetil
by Contributor
  • 3351 Views
  • 3 replies
  • 2 kudos

Resolved! Autoloader to concatenate CSV files that updates regularly into a single parquet dataframe.

I have multiple large CSV files. One or more of these files changes now and then (a few times a day). The changes in the CSV files are of both types, append and update (so both new rows and updates of old ones). I want to combine all CSV files into a datafr...

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

Hi @Kjetil, Please let us know if you still have an issue or if @-werners-'s response could be marked as the best solution. Thank you

2 More Replies
KSI
by New Contributor II
  • 1556 Views
  • 1 replies
  • 0 kudos

Variant datatype

I'm checking on the variant datatype and noted that whenever a JSON string is stored as a variant datatype, in order to filter on a value it needs to be cast, i.e. SELECT sum(jsondatavar:Value::double) FROM table WHERE jsondatavar:customer::int = 1000. Here j...

Latest Reply
Mounika_Tarigop
Databricks Employee
  • 0 kudos

Could you please try using SQL functions:  SELECT SUM(CAST(get_json_object(jsondatavar, '$.Value') AS DOUBLE)) AS total_value FROM table WHERE CAST(get_json_object(jsondatavar, '$.customer') AS INT) = 1000
