Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

alvaro_databric
by New Contributor III
  • 2599 Views
  • 2 replies
  • 2 kudos

How to access hard disk attached to cluster?

Hi, I am using the VM family Lasv3, which incorporates an NVMe SSD. I would like to take advantage of this huge amount of space, but I cannot find where this disk is mounted. Does someone know where this disk is mounted and if it can be used as local dri...

Latest Reply
JosiahJohnston
New Contributor III
  • 2 kudos

Great question; I've been trying to hunt that down also. `/local_disk0` looks like a good candidate, but it has restricted access and I can't confirm or use it. Would love to learn a solution someday. This is a big need for hybrid workflows & libraries c...
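As a hedged sketch of how one might probe candidate mount points from a notebook (the path names are assumptions; `/local_disk0` is commonly cited for Databricks, but access varies by runtime and cluster access mode, so verify on your own cluster):

```python
import os
import shutil

# Candidate mount points for instance-local NVMe scratch space
# (assumed names; confirm what your cluster actually exposes).
CANDIDATES = ["/local_disk0", "/local_disk0/tmp", "/tmp"]

def first_usable_local_path(candidates=CANDIDATES):
    """Return the first candidate directory that exists and is writable,
    printing its free space along the way."""
    for path in candidates:
        if os.path.isdir(path) and os.access(path, os.W_OK):
            free_gb = shutil.disk_usage(path).free / 1e9
            print(f"{path}: {free_gb:.1f} GB free")
            return path
    return None
```

On some runtimes the NVMe-backed scratch space also underlies `/tmp`, so writing there may already land on the local SSD.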

1 More Replies
Anand4
by New Contributor II
  • 1239 Views
  • 1 reply
  • 2 kudos

Resolved! Delta Table - Partitioning

Created a streaming job with a Delta table as a target. The table did not have a partition when created earlier; however, I would like to add an existing column as a partition column. I am getting the following error. com.databricks.sql.transaction.tahoe...

Latest Reply
Alberto_Umana
Databricks Employee
  • 2 kudos

Hi @Anand4, Delta Lake does not support altering the partitioning of an existing table directly. Therefore, the way forward is to rewrite the entire table with the new partition column.
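Since the partitioning cannot be altered in place, the rewrite can be done with a create-then-swap sequence. A minimal sketch that only builds the SQL statements (the table and column names are placeholders, and the staging-table approach is one option among several):

```python
def rebuild_with_partition_sql(table: str, partition_col: str) -> list[str]:
    """Build the SQL statements to rewrite a Delta table with a new
    partition column via a staged copy, then swap the names."""
    staged = f"{table}_repartitioned"
    return [
        f"CREATE TABLE {staged} USING DELTA "
        f"PARTITIONED BY ({partition_col}) AS SELECT * FROM {table}",
        f"DROP TABLE {table}",
        f"ALTER TABLE {staged} RENAME TO {table}",
    ]

# Each statement would then be run in order, e.g. via spark.sql(stmt).
```

For a streaming target, the stream would need to be stopped (and its checkpoint reconsidered) before the swap.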

mvmiller
by New Contributor III
  • 7593 Views
  • 4 replies
  • 2 kudos

Troubleshooting _handle_rpc_error GRPC Error

I am trying to run the following chunk of code in the cell of a Databricks notebook (using Databricks Runtime 14.3 LTS, Apache Spark 3.5.0, Scala 2.12): spark.sql("CREATE OR REPLACE table sample_catalog.sample_schema.sample_table_tmp AS SELECT * FROM...

Latest Reply
kunalmishra9
New Contributor III
  • 2 kudos

Following. Also having this issue, but within the context of pivoting a DF, then aggregating by *

3 More Replies
ChristianRRL
by Valued Contributor
  • 1245 Views
  • 7 replies
  • 3 kudos

DLT Potential Bug: File Reprocessing Issue with "cloudFiles.allowOverwrites": "true"

Hi there, I ran into a peculiar case and I'm wondering if anyone else has run into this and can offer an explanation. We have a DLT process to pull CSV files from a landing location and insert (append) them into target tables. We have the setting "cl...

Latest Reply
NandiniN
Databricks Employee
  • 3 kudos

Apologies, that could be an internet or networking issue. In DLT you will be able to change the DBR but will have to use a custom image; it may be tricky if you have not done it before. By default, Photon will be used in serverless. It may be a ...

6 More Replies
FabianGutierrez
by Contributor
  • 1714 Views
  • 3 replies
  • 1 kudos

Issue with DAB (Databricks Asset Bundle) requesting Terraform files

Hi community, since two days ago we have been receiving the following error when validating and deploying our DAB (Databricks Asset Bundle): "Error: error downloading Terraform: Get "https://releases.hashicorp.com/terraform/1.5.5/index.json": ...

Latest Reply
FabianGutierrez
Contributor
  • 1 kudos

Some update: we cannot get the FW cleared on time, so we need to go for the offline option, that is, download everything from Terraform and DB templates, but it is not as clear or intuitive as described. Using their container is unfortunately not an option ...

2 More Replies
pjv
by New Contributor III
  • 1124 Views
  • 1 reply
  • 0 kudos

How to ensure pyspark udf execution is distributed across worker nodes

Hi, I have the following Databricks notebook code defined:
pyspark_dataframe = create_pyspark_dataframe(some input data)
MyUDF = udf(myfunc, StringType())
pyspark_dataframe = pyspark_dataframe.withColumn('UDFOutput', DownloadUDF(input data columns))
outp...

Latest Reply
VZLA
Databricks Employee
  • 0 kudos

@pjv Can you please try the following; you'll basically want to have more than a single partition:
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# Initialize Spark session (if not...
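The key point is that a UDF only runs as parallel as the partitions of the DataFrame it is applied to. A minimal sketch of that idea (`df`, `my_udf`, and the column names are placeholders; assumes an existing PySpark session):

```python
def apply_udf_distributed(df, my_udf, in_col, out_col, num_partitions=64):
    """Repartition before applying the UDF so the work is spread across
    worker nodes instead of running in one or two partitions."""
    return df.repartition(num_partitions).withColumn(out_col, my_udf(df[in_col]))
```

A sensible `num_partitions` is often a small multiple of the cluster's total worker cores.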

Vasu_Kumar_T
by New Contributor II
  • 336 Views
  • 1 reply
  • 0 kudos

Larger than Max error

Hi, we are trying to pass the keys to decrypt a file and are receiving the above error as attached. Please help in case we need to change any configuration or set any options to avoid this error. Thanks, Vasu

Latest Reply
VZLA
Databricks Employee
  • 0 kudos

@Vasu_Kumar_T can you provide some more details or context? Feel free to replace sensitive data. Where are you getting this? How are you passing the keys to decrypt a file? Is there a more comprehensive stacktrace apart from this message in the image...

ChingizK
by New Contributor III
  • 2481 Views
  • 1 reply
  • 1 kudos

Hyperopt Error: There are no evaluation tasks, cannot return argmin of task losses.

The trials succeed when the cell in the notebook is executed manually. However, the same process fails when executed as a Workflow. The error simply says that there's an issue with the objective function. However, how can that be the case if I'm able t...

Labels: Data Engineering, hyperopt, Workflows
Latest Reply
honj
New Contributor II
  • 1 kudos

I've run into the same issue using SparkTrials. Runs fine manually. Runs using only Trials in the workflow. Get this error when using SparkTrials. I've tried dropping parallelism right down, making sure there's only one experiment on that cluster. Did yo...

sangram11
by New Contributor
  • 837 Views
  • 4 replies
  • 0 kudos

Myths about vacuum command

I identified some myths while working with the VACUUM command in Spark 3.5.x. 1. The VACUUM command does not work with days. Instead, its RETAIN clause explicitly asks for values in hours. I tried many times, and it throws a parse syntax error (wh...

Latest Reply
VZLA
Databricks Employee
  • 0 kudos

Thanks for reporting this, Sangram. Are these YouTube and educational contents on the Databricks channel? > set delta.databricks.delta.retentionDurationCheck.enabled = false. It works if I want to delete obsolete files whose lifespan is less than the defa...
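On the hours-versus-days point: since the RETAIN clause takes hours, a small helper can make the conversion explicit. A sketch (the table name is a placeholder; the resulting string would be passed to spark.sql):

```python
def vacuum_sql(table: str, retain_days: float) -> str:
    """Build a VACUUM statement, converting a retention period given in
    days into the hours that the RETAIN clause expects."""
    hours = retain_days * 24
    return f"VACUUM {table} RETAIN {hours:g} HOURS"

# e.g. vacuum_sql("events", 7) -> "VACUUM events RETAIN 168 HOURS"
```

Retentions below the default 7 days additionally require disabling the retention-duration safety check, as quoted above.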

3 More Replies
kidexp
by New Contributor II
  • 25047 Views
  • 7 replies
  • 2 kudos

Resolved! How to install python package on spark cluster

Hi, how can I install Python packages on a Spark cluster? Locally, I can use pip install. I want to use some external packages which are not installed on the Spark cluster. Thanks for any suggestions.

Latest Reply
Mikejerere
New Contributor II
  • 2 kudos

If --py-files doesn't work, try this shorter method:
1. Create a Conda environment and install your packages: conda create -n myenv python=3.x, then conda activate myenv, then pip install your-package.
2. Package and submit: use conda-pack and spark-submit with --archives. cond...

6 More Replies
Akshay_127877
by New Contributor II
  • 44780 Views
  • 8 replies
  • 1 kudos

How to open Streamlit URL that is hosted by Databricks in local web browser?

I have run this webapp code on a Databricks notebook. It works properly without any errors. With Databricks acting as the server, I am unable to open this link in my browser for this webapp. But when I run the code on my local IDE, I am able to just open the U...

Latest Reply
navallyemul
New Contributor III
  • 1 kudos

@Akshay_127877 : Were you able to resolve this issue?

7 More Replies
IoannaV
by New Contributor
  • 967 Views
  • 1 reply
  • 0 kudos

Issue with Uploading Oracle Driver in Azure Databricks Cluster

Hi, could you please help me with the following? I am facing the below issue when I try to upload a jar file in the Azure Databricks Libraries. Only Wheel and requirements file from /Workspace are allowed on Assigned UC cluster. Denied library is Jar...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hey, this is by design. I understand the jobs are failing when run on a UC single-user cluster since it is unable to install a Jar package located in the /Workspace path. This is, however, a known behaviour and is already documented below: https://docs...

lprevost
by Contributor II
  • 571 Views
  • 1 reply
  • 0 kudos

Using Autoloader in DLT: ErrorClass=INVALID_PARAMETER_VALUE.LOCATION_OVERLAP]

I've been using Autoloader in a DLT pipeline loading data from an S3 location to my hive_metastore shared with AWS Glue. I'm now trying to migrate this over to Unity Catalog to take advantage of liquid clustering and data quality. However, I'm getting...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

https://kb.databricks.com/unity-catalog/invalid_parameter_valuelocation_overlap-overlaps-with-managed-storage-error 

nagendrapruthvi
by New Contributor
  • 642 Views
  • 2 replies
  • 0 kudos

Cannot login to databricks using SSO

 Hi, I created accounts with Databricks for both production and staging environments at my company, but I made a mistake with the case of the email addresses. For production, I used Xyz@company.com, and for staging, I used xyz@company.com.Now that my...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Okay, so I checked some documents - The email addresses will also be case-insensitive, the same behavior as in AWS, Azure and GCP. This means that email addresses will be stored in lowercase in Databricks. So, the issue is not with case sensitivity b...
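Given that stored email addresses are lowercased, normalizing before any comparison avoids ending up with accidental duplicate accounts. A tiny illustrative helper (not a Databricks API, just the comparison rule spelled out):

```python
def normalize_login_email(email: str) -> str:
    """Lowercase an email address the way a case-insensitive identity
    store does, so Xyz@company.com and xyz@company.com compare equal."""
    return email.strip().lower()

# normalize_login_email("Xyz@company.com") == normalize_login_email("xyz@company.com")
```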

1 More Replies
ElaPG1
by New Contributor
  • 452 Views
  • 1 reply
  • 0 kudos

all-purpose compute for Oracle queries

Hi, I am looking for any guidelines or best practices regarding compute configuration for extracting data from an Oracle DB and saving it as parquet files. Right now I have a DBR workflow with a for-each task, concurrency = 31 (as I need to copy the data fro...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hi @ElaPG1, while the cluster sounds like a pretty good one with autoscaling, it depends on the workload too. The Standard_D8s_v5 instances you are using have 32 GB memory and 8 cores. While these are generally good, you might want to experiment with...

