Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Vadim1
by New Contributor III
  • 3603 Views
  • 3 replies
  • 3 kudos

Resolved! Error on Azure Databricks writing an RDD to a storage account with wasbs://

Hi, I'm trying to write data from an RDD to the storage account. Adding the storage account key: spark.conf.set("fs.azure.account.key.y.blob.core.windows.net", "myStorageAccountKey"). Read and write to the same storage: val path = "wasbs://x@y.blob.core.windows...
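For reference, a minimal Python sketch of the setup described above (the question uses an RDD; a DataFrame write is shown here for brevity). Container "x", account "y", and the key value are placeholders from the question:

```
# Placeholder account key from the question; do not hardcode real keys.
spark.conf.set(
    "fs.azure.account.key.y.blob.core.windows.net",
    "myStorageAccountKey",
)

# Write a small DataFrame to the wasbs:// container ("x") on account "y".
path = "wasbs://x@y.blob.core.windows.net/output"
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.mode("overwrite").parquet(path)
```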

Latest Reply
TheoDeSo
New Contributor III

Hello @Vadim1 and @User16764241763. I'm wondering if you found a way to avoid adding the hardcoded key in the advanced options Spark config section of the cluster configuration. Is there a similar command to spark.conf.set("spark.hadoop.fs.azure.accou...
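A minimal sketch of one common way to avoid the hardcoded key: store it in a Databricks secret scope and read it at runtime. The scope and key names below are hypothetical:

```
# Read the storage key from a secret scope instead of hardcoding it.
# "storage-scope" and "account-key" are hypothetical names.
account_key = dbutils.secrets.get(scope="storage-scope", key="account-key")

spark.conf.set(
    "fs.azure.account.key.y.blob.core.windows.net",
    account_key,
)
```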

2 More Replies
jdobken
by New Contributor III
  • 8745 Views
  • 8 replies
  • 11 kudos

As the Databricks account manager, I cannot log in: "Your user already belongs to a Databricks account"

On GCP I subscribed to Databricks in one project within the organization. Then I canceled this subscription and subscribed to Databricks in another project. When I try to log in to the newly subscribed Databricks with Google SSO:> There was an error s...

Latest Reply
Anonymous
Not applicable

I can see the issue might be related to organizations or billing accounts. The new Databricks project I tried creating was in a different organization/billing account than the test Databricks subscription I created a month back. I went back to the ori...

7 More Replies
Distributed_Com
by New Contributor III
  • 12712 Views
  • 4 replies
  • 6 kudos

Resolved! Location not empty but not a Delta table

I need help or insight regarding the following errors. My instructors (Brooke Wenig and Conor Murphy) ran this code successfully in our course video, but I cannot replicate what they did. Here is the code, and below it is the outcome from my Cours...

Latest Reply
gilo12
New Contributor III

"DELETE the original Parquet table as a separate statement": how can this be done? A simple "DROP TABLE ...." query still fails with "cannot be found".
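A minimal sketch of one common workaround when DROP TABLE fails because the location is not a Delta table: remove the files at the location directly, then clean up the metastore entry. The path and table name below are hypothetical:

```
# Remove the files at the non-Delta location directly.
# Hypothetical path; verify it before deleting recursively.
dbutils.fs.rm("dbfs:/user/hive/warehouse/my_parquet_table", recurse=True)

# Then drop whatever metastore entry remains, ignoring it if absent.
spark.sql("DROP TABLE IF EXISTS my_parquet_table")
```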

3 More Replies
Siravich
by New Contributor
  • 585 Views
  • 0 replies
  • 0 kudos

Permission on Unity catalog

I am facing an issue when assigning permissions on a view created in Unity Catalog. The problem is I had created a user-defined function (UDF) in order to encrypt a sensitive column, and I created a view which calls the function and the source table within the catalo...
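A hedged sketch of the grants typically involved when a Unity Catalog view calls a UDF: readers need SELECT on the view itself, while the view's owner needs EXECUTE on the function the view calls. All object and group names below are hypothetical:

```
# Readers query the view; they only need SELECT on the view itself.
spark.sql("GRANT SELECT ON VIEW main.secure.masked_view TO `data-readers`")

# The view owner needs EXECUTE on the UDF referenced by the view.
spark.sql("GRANT EXECUTE ON FUNCTION main.secure.encrypt_udf TO `view-owners`")
```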

glebex
by New Contributor II
  • 9734 Views
  • 7 replies
  • 7 kudos

Resolved! Accessing workspace files within cluster init script

Greetings all! I am currently facing an issue while accessing workspace files from an init script. As explained in the documentation, it is possible to place an init script inside workspace files (link). This works perfectly fine and the init script i...

Latest Reply
jacob_hill_prof
New Contributor II

@Gleb Smolnik You might also want to try cloning a GitHub repo in your init script and then storing dependencies like requirements.txt files and other init scripts there. By doing this you can pull a whole slew of init scripts to be utilized in your...

6 More Replies
Raviiit
by New Contributor II
  • 4230 Views
  • 4 replies
  • 5 kudos

Resolved! Spark managed tables

Hi, I recently started learning about Spark. I was studying Spark managed tables. As per the docs, "Spark manages both the data and the metadata". Assume that I have a CSV file in S3 and I read it into a DataFrame like below: df = spark.read .for...
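A minimal sketch of the scenario described, with a hypothetical bucket path and table name. saveAsTable creates a managed table: Spark copies the data into the metastore-managed location, so it owns both the data and the metadata, and dropping the table later deletes both:

```
# Read the CSV from S3 into a DataFrame (hypothetical bucket path).
df = (
    spark.read.format("csv")
    .option("header", "true")
    .load("s3://my-bucket/data/input.csv")
)

# saveAsTable creates a *managed* table: Spark owns the copied data
# and the metadata, so DROP TABLE removes both.
df.write.mode("overwrite").saveAsTable("my_managed_table")
```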

Latest Reply
Tharun-Kumar
Databricks Employee

Yes, @Raviiit DBFS (Databricks File System) is a distributed file system used by Databricks clusters. DBFS is an abstraction layer over cloud storage (e.g. S3 or Azure Blob Store), allowing external storage buckets to be mounted as paths in the DBFS ...

3 More Replies
databicky
by Contributor II
  • 6106 Views
  • 5 replies
  • 0 kudos

File copy in ADLS

I am using dbutils.fs.cp("abfss://container/provsn/filen[ame.txt", "abfss://container/data/sasam.txt"). While trying this copy method to copy the files, it throws a URISyntaxException near the square bracket. How can I read and copy it?

Latest Reply
dplante
Contributor II

From looking at the stack trace, it looks like a URISyntaxException. The easiest solution would be renaming the file so that there are no square brackets in the filename. If this is not an option, it might help to URL-encode the path - https://stackoverflow.com/que...
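A sketch of the URL-encoding approach, assuming the filesystem client accepts percent-encoded path segments. The paths are the hypothetical ones from the question, with a full abfss authority filled in for illustration:

```
from urllib.parse import quote

# Hypothetical source path containing the bracket that breaks URI parsing.
src = "abfss://container@account.dfs.core.windows.net/provsn/filen[ame.txt"
dst = "abfss://container@account.dfs.core.windows.net/data/sasam.txt"

# Percent-encode only the path segment, leaving scheme://authority intact.
prefix, _, path = src.partition(".net/")
encoded_src = prefix + ".net/" + quote(path, safe="/")

dbutils.fs.cp(encoded_src, dst)
```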

4 More Replies
brickster
by New Contributor II
  • 4453 Views
  • 3 replies
  • 2 kudos

Passing values between notebook tasks in Workflow Jobs

I have created a Databricks workflow job with notebooks as individual tasks, sequentially linked. I assign a value to a variable in one notebook task (ex: batchid = int(time.time())). Now, I want to pass this batchid variable to the next notebook task. What...

Latest Reply
fijoy
Contributor

@brickster You would use dbutils.jobs.taskValues.set() and dbutils.jobs.taskValues.get(). See the docs for more details: https://docs.databricks.com/workflows/jobs/share-task-context.html
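A minimal sketch of the pattern, assuming a job with two notebook tasks where the first has the hypothetical task key "set_batch":

```
import time

# In the first notebook task (task key "set_batch"):
batch_id = int(time.time())
dbutils.jobs.taskValues.set(key="batch_id", value=batch_id)

# In a downstream notebook task of the same job run:
batch_id = dbutils.jobs.taskValues.get(
    taskKey="set_batch",  # task key of the notebook that set the value
    key="batch_id",
    default=0,            # returned if the key was never set
    debugValue=0,         # used when running the notebook interactively
)
```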

2 More Replies
Enzo_Bahrami
by New Contributor III
  • 6723 Views
  • 6 replies
  • 1 kudos

Resolved! On-Premise SQL Server Ingestion to Databricks Bronze Layer

Hello everyone! I want to ingest tables with schemas from the on-premise SQL Server into the Databricks Bronze layer with Delta Live Tables, using Azure Data Factory, and I want the load to be a snapshot batch load, not an incremental lo...

Latest Reply
Anonymous
Not applicable

Hi @Parsa Bahraminejad, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best an...

5 More Replies
Chaitanya_Raju
by Honored Contributor
  • 2862 Views
  • 2 replies
  • 0 kudos

Creating new group

Can someone help me by providing the steps for creating a new group, as I could not find them anywhere? Actually, I wanted to create a new group for Hyderabad, India, which I could not find in the Groups section. @Kaniz Fatma @Sujitha Ramam...

Latest Reply
jose_gonzalez
Databricks Employee

Adding @Vidula Khanna for visibility. Is this possible to do?

1 More Replies
jhgorse
by New Contributor III
  • 1597 Views
  • 0 replies
  • 0 kudos

mqtt to Delta Live Table

Greetings, I see that Delta Live Tables has various real-time connectors such as Kafka, Kinesis, Google Pub/Sub, and so on. I also see that Apache maintained an MQTT connector for Spark through the 2.x series, called Bahir, but dropped it in versi...

chorongs
by New Contributor III
  • 5707 Views
  • 4 replies
  • 3 kudos

Resolved! I have a question about the VACUUM feature!

History is piled up as above. For testing, I want to erase the history of the table with the VACUUM command. After setting the option "spark.databricks.delta.retentionDurationCheck.enabled = false", the command "VACUUM del_park RETAIN 0 HOURS;" w...

Latest Reply
Vinay_M_R
Databricks Employee

Executing VACUUM performs garbage cleanup on the table directory. By default, a retention threshold of 7 days will be enforced. Please follow the steps below to perform VACUUM: 1.) SET spark.databricks.delta.retentionDurationCheck.enabled = false; This...
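A minimal sketch of those steps from a Python notebook, using the table name from the question. Retaining 0 hours permanently removes the ability to time-travel to older versions, so this is for test tables only:

```
# Disable the safety check that normally blocks retention < 7 days.
spark.sql("SET spark.databricks.delta.retentionDurationCheck.enabled = false")

# Vacuum everything older than 0 hours (destroys time-travel history).
spark.sql("VACUUM del_park RETAIN 0 HOURS")

# Re-enable the safety check afterwards.
spark.sql("SET spark.databricks.delta.retentionDurationCheck.enabled = true")
```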

3 More Replies
kll
by New Contributor III
  • 667 Views
  • 0 replies
  • 0 kudos

Mosaic's grid_boundary method returns inconsistent geometries

I am applying Mosaic's `grid_boundary` method on a Spark DataFrame containing a set of `h3_hex_ids`. The geometries returned are not consistent, i.e. they could be either `lat, long` or `long, lat`. Here's some sample data: ```import pyspark.sql.functions a...

Data Engineering
geospatial
mosaic
442027
by New Contributor II
  • 5751 Views
  • 2 replies
  • 3 kudos

Resolved! Delta Log checkpoints not being created?

It is mentioned in the Delta protocol that checkpoints for Delta tables are created every 10 commits - however, when I modify a table with >10 separate operations (producing >10 separate JSON files in the _delta_log directory), no checkpoint files ar...

Latest Reply
Vinay_M_R
Databricks Employee

As of the latest update, checkpoints for Delta tables are now created every 100 commits; this was done as a performance improvement. If you want a checkpoint file for a Delta table every 10 commits, or after any desired number of commits, you can cust...
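A minimal sketch of customizing the interval via a table property, assuming your Delta Lake version supports delta.checkpointInterval; the table name is hypothetical:

```
# Ask Delta to write a checkpoint every 10 commits for this table.
spark.sql("""
    ALTER TABLE my_table
    SET TBLPROPERTIES ('delta.checkpointInterval' = '10')
""")
```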

1 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group