Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Anonymous
by Not applicable
  • 1663 Views
  • 1 reply
  • 1 kudos

Resolved! "policy_id" parameter in JOB API

I can't find information about that parameter in https://docs.databricks.com/dev-tools/api/latest/jobs.html. Where is it documented?

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 1 kudos

I believe it is just "policy_id". As an incomplete example, the specification via the API would be something like: { "cluster_id": "1234-567890-abd35gh", "spark_context_id": 1234567890, "cluster_name": "my_cluster", "spark_version": "9.1.x-scala2....
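To make the shape a bit more concrete, here is a sketch of a Jobs API create payload as a Python dict; the values are placeholders, and the node type and notebook path are assumptions. It would be POSTed to /api/2.0/jobs/create.

job_spec = {
    "name": "my_job",
    "new_cluster": {
        "spark_version": "9.1.x-scala2.12",
        "node_type_id": "i3.xlarge",    # placeholder node type
        "num_workers": 2,
        "policy_id": "ABC1234DEF5678",  # the cluster policy to apply to this job cluster
    },
    "notebook_task": {"notebook_path": "/Repos/me/my_notebook"},
}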

  • 1 kudos
sgannavaram
by New Contributor III
  • 3282 Views
  • 3 replies
  • 4 kudos

Resolved! Write output of DataFrame to a file with tilde (~) separator in Databricks Mount or Storage Mount with VM.

I need to write the output of a DataFrame to a file with a tilde (~) separator in a Databricks mount or storage mount with a VM. Could you please help with some sample code if you have any?

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 4 kudos

@Srinivas Gannavaram, does it have to be CSV with fields separated by ~? If yes, it is enough to add .option("sep", "~"): df.write.option("sep", "~").csv(mount_path)
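A slightly fuller PySpark sketch, assuming a hypothetical mount path and that a header row and a single output file are wanted:

output_path = "/mnt/my_storage/output/tilde_files"  # hypothetical mount path

(df.coalesce(1)                    # optional: collapse to a single output file
   .write
   .option("sep", "~")             # use tilde as the field separator
   .option("header", "true")
   .mode("overwrite")
   .csv(output_path))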

  • 4 kudos
2 More Replies
Braxx
by Contributor II
  • 2361 Views
  • 1 reply
  • 2 kudos

Resolved! list users having access to scope credentials

Hello! How do I list all the users or groups having access to the key-vault backed scope credentials? Let's say I have a scope called MyScope for which all the secrets are stored in MyKeyVault. I would like to see what users have access there and ideal...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 2 kudos

@Bartosz Wachocki, as secrets use ACLs at the scope level, you need to make an API call (it can also be done via the CLI) to list the ACLs for the given scope: 2.0/secrets/acls/list. More info here: https://docs.databricks.com/dev-tools/api/latest/secrets.html#list-secre...
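As a sketch, the same endpoint can be called from a notebook with Python; the workspace URL and the token environment variable are assumptions, while the endpoint and scope name come from the thread.

import os
import requests

host = "https://<your-workspace>.cloud.databricks.com"  # hypothetical workspace URL
token = os.environ["DATABRICKS_TOKEN"]                   # assumes a personal access token

resp = requests.get(
    f"{host}/api/2.0/secrets/acls/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"scope": "MyScope"},
)
resp.raise_for_status()
for acl in resp.json().get("items", []):
    print(acl["principal"], acl["permission"])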

  • 2 kudos
BeginnerBob
by New Contributor III
  • 5282 Views
  • 2 replies
  • 2 kudos

Bronze, silver, gold layers

Is there a best practice guide on setting up the delta lake for these 3 layers? I'm looking for documents or scripts to run that will assist me.

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

Hi @Lloyd Vickery, I would highly recommend using Databricks Delta Live Tables (DLT). Docs here: https://databricks.com/product/delta-live-tables
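There is no single mandated script for the three layers, but as a minimal sketch, a DLT pipeline in Python can express bronze/silver/gold as three tables; the source path, column names, and table names below are hypothetical.

import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw events ingested as-is (bronze)")
def bronze_events():
    return spark.read.format("json").load("/mnt/landing/events")  # hypothetical landing path

@dlt.table(comment="Cleaned and typed events (silver)")
def silver_events():
    return dlt.read("bronze_events").where(col("event_id").isNotNull())

@dlt.table(comment="Aggregates for reporting (gold)")
def gold_daily_counts():
    return dlt.read("silver_events").groupBy("event_date").count()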

  • 2 kudos
1 More Replies
AdamRink
by New Contributor III
  • 4423 Views
  • 3 replies
  • 0 kudos

Try catch multiple write streams on a job

We are having issues with checkpoints and schema versions getting out of date (no idea why), which causes jobs to fail. We have jobs that are running 15-30 streaming queries, so if one fails, that creates an issue. I would like to trap the checkpo...

Latest Reply
AdamRink
New Contributor III
  • 0 kudos

The problem is that on startup, if a stream fails, it would never hit the awaitAnyTermination. I almost want to take that while loop, put it on a background thread started at the beginning, and then fire all the streams afterward... not sure ...
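A minimal PySpark sketch of the pattern being discussed: start each stream in its own try/except so one bad start does not stop the others, then block on awaitAnyTermination. Source paths, checkpoint locations, and table names are hypothetical.

def start_stream(source_path, checkpoint_path, target_table):
    # Starts one streaming query and returns its handle
    return (spark.readStream.format("delta").load(source_path)
            .writeStream
            .option("checkpointLocation", checkpoint_path)
            .toTable(target_table))

queries = []
for i in range(3):
    try:
        queries.append(start_stream(f"/mnt/raw/src_{i}", f"/mnt/chk/src_{i}", f"bronze_{i}"))
    except Exception as e:
        # A failed start is logged but does not prevent the remaining streams from starting
        print(f"Stream {i} failed to start: {e}")

# Block until any running stream terminates; a failed query surfaces its exception here
spark.streams.awaitAnyTermination()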

  • 0 kudos
2 More Replies
TS
by New Contributor III
  • 3843 Views
  • 3 replies
  • 3 kudos

Resolved! Turn spark.sql query into scala function

Hello, I'm learning Scala / Spark and am trying to understand what's wrong with my function. I have a spark.sql query, stored in a variable: val uViewName = spark.sql(""" SELECT v.Data_View_Name FROM apoHierarchy AS h INNER JOIN apoView AS v ON h.View_N...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 3 kudos

Try adding .first()(0); it will return only the value from the first row/column, as currently you are returning a Dataset: var uViewName = spark.sql(s""" SELECT v.Data_View_Name FROM apoHierarchy AS h INNER JOIN apoView AS v ON h.View_Name = v.Context_View_N...

  • 3 kudos
2 More Replies
brickster_2018
by Databricks Employee
  • 3020 Views
  • 2 replies
  • 1 kudos

Resolved! How to test Kafka connectivity from a Databricks notebook

My structured streaming job is failing as it's unable to connect to Kafka. I believe the issue is with Spark. How can I isolate whether it's a Spark library issue or an actual network issue?

Latest Reply
brickster_2018
Databricks Employee
  • 1 kudos

The below code snippet can be used to test the connectivity: import java.util.Arrays import java.util.Properties import org.apache.kafka.clients.admin.AdminClient import org.apache.kafka.clients.admin.AdminClientConfig import org.apache.kafka.clients.a...
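To separate network problems from Spark library problems, a plain socket check from the same notebook is often enough; the broker host and port below are hypothetical. If this fails, the issue is connectivity (firewall, peering, DNS) rather than the Spark Kafka integration.

import socket

def can_reach(host, port, timeout=5):
    # Returns True if a TCP connection to the broker can be opened from the driver
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as e:
        print(f"Cannot reach {host}:{port} -> {e}")
        return False

print(can_reach("my-kafka-broker.example.com", 9092))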

  • 1 kudos
1 More Replies
Mr__E
by Contributor II
  • 5004 Views
  • 5 replies
  • 5 kudos

Resolved! Using shared python wheels for job compute clusters

We have a GitHub workflow that generates a Python wheel and uploads it to a shared S3 bucket available to our Databricks workspaces. When I install the Python wheel to a normal compute cluster using the path approach, it correctly installs the Python wheel and...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 5 kudos

You can mount S3 as a DBFS folder, then set that library in the "cluster" -> "libraries" tab -> "install new" -> "DBFS".
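A minimal sketch of the mount step, assuming the bucket name and mount point are placeholders and that cluster credentials (for example an instance profile) already grant access to the bucket:

dbutils.fs.mount(
    source="s3a://my-shared-wheels-bucket",  # hypothetical bucket
    mount_point="/mnt/wheels",
)
# The wheel can then be selected as a DBFS library, e.g.
# dbfs:/mnt/wheels/my_package-0.1.0-py3-none-any.whl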

  • 5 kudos
4 More Replies
yoniau
by New Contributor II
  • 2633 Views
  • 2 replies
  • 5 kudos

Resolved! Different configurations for same Databricks Runtime version

Hi all, on my DBR installations, the s3a scheme is mapped to shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem. On my customer's DBR installations it is mapped to com.databricks.s3a.S3AFileSystem. We both use the same DBR runtime, and none of us has...

Latest Reply
Prabakar
Databricks Employee
  • 5 kudos

@Yoni Au, if both of you are using the same DBR version, then you should not find any difference. As @Hubert Dudek mentioned, there might be some Spark configuration change made on one of the clusters. Also, it's worth checking for any cluster sco...
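One quick way to compare the two environments is to print the effective Hadoop setting that maps the s3a scheme on each cluster; this is a sketch to run in a notebook on both sides and compare the output.

# fs.s3a.impl is the Hadoop key that decides which FileSystem class backs s3a://
impl = spark._jsc.hadoopConfiguration().get("fs.s3a.impl")
print(f"fs.s3a.impl -> {impl}")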

  • 5 kudos
1 More Replies
susan1234567
by New Contributor
  • 1878 Views
  • 1 reply
  • 2 kudos

I cannot access databricks community edition account

Last week, I could not log into https://community.cloud.databricks.com/login.html all of a sudden. I tried to reset the password, but didn't receive the reset email. It says "Invalid email address or password. Note: Emails/usernames are case-sensitive". I e...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 2 kudos

@Kaniz Fatma can help; additionally, you can open a ticket here: https://help.databricks.com/s/contact-us

  • 2 kudos
Serhii
by Contributor
  • 4130 Views
  • 4 replies
  • 8 kudos

Resolved! init_script error during cluster creation - 101: Network is unreachable

When I run the init_script during cluster creation (apt-get update && apt-get install -y ffmpeg libsndfile1-dev), I get an error in the cluster logs: E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/universe/o/openal-soft/libopenal1_1.19.1-1_amd64.deb ...

Latest Reply
Anonymous
Not applicable
  • 8 kudos

Hi @Sergii Ivakhno, could you please check if outbound TCP access on port 80 is allowed in the security group?

  • 8 kudos
3 More Replies
JBear
by New Contributor III
  • 4155 Views
  • 4 replies
  • 4 kudos

Resolved! Can't find reason but suddenly new Jobs are getting huge job ID numbers, for example 945270539673815

Created Job IDs have suddenly started to be huge numbers, and that is now causing problems in the Terraform plan because the int is too big: Error: strconv.ParseInt: parsing "945270539673815": value out of range. I'm new on the board and pretty new with Databricks ...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Jere Karhu, in case you are using the Job/Run ID in the API, please be advised that you will need to change the client-side logic to process int64/long and expect a random number. In some cases, you just need to change the declared type in their so...

  • 4 kudos
3 More Replies
Mr__E
by Contributor II
  • 3040 Views
  • 3 replies
  • 3 kudos

Resolved! Importing MongoDB with field names containing spaces

I am currently using a Python notebook with a defined schema to import fairly unstructured documents from MongoDB. Some of these documents have spaces in their field names. I define the schema for the MongoDB PySpark connector like the following: Struct...

Latest Reply
Mr__E
Contributor II
  • 3 kudos

Solution: it turns out the issue is not the schema being read in, but the fact that I am writing to Delta tables, which do not currently support spaces in column names. So, I need to transform them prior to dumping. I've been following a pattern of reading in raw data,...
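A minimal sketch of that transform step in PySpark, assuming raw_df is the DataFrame read from MongoDB and that the target table name is hypothetical:

def sanitize_columns(df):
    # Replace spaces in column names so the result can be written to Delta
    return df.select([df[c].alias(c.replace(" ", "_")) for c in df.columns])

clean_df = sanitize_columns(raw_df)
clean_df.write.format("delta").mode("append").saveAsTable("bronze_mongo_events")

Note that this only renames top-level columns; nested struct fields with spaces would need additional handling.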

  • 3 kudos
2 More Replies
