Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Sagar1
by New Contributor III
  • 3608 Views
  • 3 replies
  • 4 kudos

How to identify or determine how many jobs will be performed if I submit code

I'm not able to find a source that explains how to determine how many jobs a given piece of PySpark code will trigger. Can you please help me here? As for stages, I know that the number of shuffles equals the number of stages.
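A rough rule of thumb, for anyone landing here: transformations are lazy, each action (count, collect, write, display) triggers at least one job, and each wide transformation (shuffle) adds a stage boundary inside that job. A minimal PySpark sketch (illustrative only; exact job counts can vary, e.g. under Adaptive Query Execution):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = spark.range(1_000_000)               # no job yet: transformations are lazy
    grouped = df.groupBy(df.id % 10).count()  # wide transformation: adds a shuffle/stage boundary

    grouped.count()    # action 1 -> triggers a job (two stages: before/after the shuffle)
    grouped.collect()  # action 2 -> triggers another job

    # The Spark UI (or the status tracker) shows the jobs that actually ran:
    print(spark.sparkContext.statusTracker().getJobIdsForGroup())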

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @sagar Varma, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks...

2 More Replies
Kash
by Contributor III
  • 1365 Views
  • 2 replies
  • 6 kudos

Will VACUUM delete previous folders of data if we Z-ORDER by as_of_date each day?

Hi there, I've had horrible experiences vacuuming tables in the past and losing tons of data, so I wanted to confirm a few things about VACUUM and Z-ORDER. Background: each day we run an ETL job that appends data to a table and stores the data in S3 b...
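For context on why this is usually safe: VACUUM only removes files that are no longer referenced by the current table version and are older than the retention threshold, while OPTIMIZE ... ZORDER BY rewrites data into new files, so the superseded files become vacuum-eligible once they age out. A cautious pattern is to dry-run first; a sketch with a placeholder table name:

    # DRY RUN lists the files VACUUM would delete, without deleting anything.
    spark.sql("VACUUM my_db.events RETAIN 168 HOURS DRY RUN").show(truncate=False)

    # Only after reviewing the dry-run output (and keeping the retention window
    # at or above the default 7 days unless you fully understand time travel):
    # spark.sql("VACUUM my_db.events RETAIN 168 HOURS")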

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Avkash Kana, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks...

1 More Reply
Sweta
by New Contributor II
  • 3792 Views
  • 6 replies
  • 7 kudos

Can Delta Lake completely host a data warehouse and replace Redshift?

Our use case is simple: to store our PB-scale data, transform it, and use it for BI, reporting, and analytics. As my title says, I am trying to eliminate expenditure on Redshift, as we are starting greenfield. I know I have designed/used just Delta lak...

Latest Reply
Anonymous
Not applicable
  • 7 kudos

Hi @Swetha Marakani, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Th...

5 More Replies
Carlton
by Contributor
  • 4810 Views
  • 5 replies
  • 14 kudos

I would like to know why CROSS JOIN fails to recognize columns

Whenever I apply a CROSS JOIN to my Databricks SQL query, I get a message letting me know that a column does not exist, but I'm not sure if the issue is with the CROSS JOIN. For example, the code should identify characters such as http, https, ://, / ...

Latest Reply
Shalabh007
Honored Contributor
  • 14 kudos

@CARLTON PATTERSON Since you have given the alias "tt" to your table "basecrmcbreport.organizations", you will have to access the corresponding columns in the format tt.<column_name>. In your code at line #4, try accessing the column 'homepage_u...
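To make the aliasing point concrete, a small sketch (the table name is taken from the thread; the column names and the joined subquery are hypothetical):

    # Once a table has an alias, qualify its columns with the alias, not the table name.
    df = spark.sql("""
        SELECT tt.homepage_url,
               SUBSTRING_INDEX(tt.homepage_url, '://', -1) AS stripped_url
        FROM basecrmcbreport.organizations AS tt
        CROSS JOIN (SELECT 1 AS one) AS d
    """)
    df.show()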

4 More Replies
RyanD-AgCountry
by Contributor
  • 3549 Views
  • 5 replies
  • 7 kudos

Resolved! Azure Create Metastore button not available

With Unity Catalog now GA on Azure, we are working through initial tests for setup within Databricks and Azure. However, we are not seeing the "Create Metastore" button available as indicated in the documentation. We're also not seeing any additional pr...

Latest Reply
Addi1
New Contributor II
  • 7 kudos

I'm facing the same issues listed above; the "Create Metastore" button is unavailable for me as well.

4 More Replies
andreiten
by New Contributor II
  • 4302 Views
  • 1 reply
  • 3 kudos

Is there any example or guideline on how to pass JSON parameters to a pipeline in a Databricks workflow?

I used this source https://docs.databricks.com/workflows/jobs/jobs.html#:~:text=You%20can%20use%20Run%20Now,different%20values%20for%20existing%20parameters.&text=next%20to%20Run%20Now%20and,on%20the%20type%20of%20task. But there is no example of how...

Latest Reply
UmaMahesh1
Honored Contributor III
  • 3 kudos

Hi @Andre Ten, that's exactly how you specify the JSON parameters in a Databricks workflow. I have been doing it in the same format and it works for me. I removed the parameters as they are a bit sensitive, but I hope you get the point. Cheers.
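For future readers, since the actual parameters were redacted: notebook task parameters are passed as a flat JSON object of key/value strings and read inside the notebook via widgets. A sketch with hypothetical parameter names:

    # In "Run now with different parameters", supply e.g.:
    # {
    #   "ingest_date": "2023-01-15",
    #   "source_path": "s3://my-bucket/raw/"
    # }

    # Inside the notebook, read each parameter by name:
    ingest_date = dbutils.widgets.get("ingest_date")
    source_path = dbutils.widgets.get("source_path")
    print(ingest_date, source_path)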

PaulP
by New Contributor II
  • 2367 Views
  • 3 replies
  • 6 kudos

What is the best expected starting time for a cluster when using a pool?

Hi! I'm doing some tests to get an idea of how much time could be saved starting a cluster by using a pool, and was wondering if the results I get are what should be expected. We're using AWS Databricks and used i3.xlarge as the instance type (if that matt...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Paul Pelletier, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Tha...

2 More Replies
marcus1
by New Contributor III
  • 1744 Views
  • 2 replies
  • 1 kudos

Running jobs as a non-job owner

We have enabled Cluster, Pool and Job access control, and non-job-owners cannot run a job even though they are administrators. This prevents users from creating cluster resources. When a non-owner of a job attempts to run it, they get a permission-denied error. My un...
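One way this is commonly handled (a sketch, not a definitive fix for the access-control behavior described above) is to grant non-owners the CAN_MANAGE_RUN permission level on the job via the Permissions REST API; the workspace URL, token, job ID, and user below are placeholders:

    import requests

    host = "https://<workspace-url>"     # placeholder
    token = "<personal-access-token>"    # placeholder
    job_id = "<job-id>"                  # placeholder

    resp = requests.patch(
        f"{host}/api/2.0/permissions/jobs/{job_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"access_control_list": [
            {"user_name": "someone@example.com",    # placeholder user
             "permission_level": "CAN_MANAGE_RUN"}  # can trigger runs without owning the job
        ]},
    )
    resp.raise_for_status()
    print(resp.json())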

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Marcus Simonsen, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Th...

1 More Reply
ConSpooky
by New Contributor II
  • 1998 Views
  • 3 replies
  • 4 kudos

Best practice for creating queries for data transformation?

My apologies in advance for sounding like a newbie. This is really just a curiosity question I have as an outsider observing my team clash with our client. Please ask any questions you have, and I will try my best to answer them. Currently, we are stori...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hi @Nick Connors, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks...

2 More Replies
Gerhard
by New Contributor III
  • 1380 Views
  • 0 replies
  • 1 kudos

Read proprietary files and transform contents to a table - error resilient process needed

We do have data stored in HDF5 files in a "proprietary" way. This data needs to be read, converted, and transformed before it can be inserted into a Delta table. All of this transformation is done in a custom Python function that takes the HDF5 file an...
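Since this one never got a reply, a common pattern for error-resilient ingestion is to wrap the per-file parse in try/except, quarantine the failures, and append the successes. A rough sketch; parse_hdf5, the paths, and the table names are hypothetical stand-ins for what the post describes:

    # parse_hdf5(path) stands in for the custom function (HDF5 file in, rows out).
    failed = []

    for f in dbutils.fs.ls("/mnt/raw/hdf5/"):      # hypothetical location
        try:
            rows = parse_hdf5(f.path)              # may raise on malformed files
            (spark.createDataFrame(rows)
                 .write.format("delta").mode("append")
                 .saveAsTable("my_db.sensor_data"))           # hypothetical table
        except Exception as e:
            failed.append((f.path, str(e)))        # quarantine, don't fail the job

    if failed:
        (spark.createDataFrame(failed, "path string, error string")
             .write.format("delta").mode("append")
             .saveAsTable("my_db.sensor_data_failures"))      # hypothetical table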

tassiodahora
by New Contributor III
  • 57035 Views
  • 2 replies
  • 7 kudos

Resolved! Failed to merge incompatible data types LongType and StringType

Guys, good morning! I am writing the results of a JSON into a Delta table, but the JSON structure is not always the same; if a field is not present in the JSON, it generates a type incompatibility when I append (dfbrzagend.write .format("delta") .mode("ap...

Latest Reply
Anonymous
Not applicable
  • 7 kudos

Hi @Tássio Santos, the Delta table performs schema validation of every column, and the source dataframe column data types must match the column data types in the target table. If they don't match, an exception is raised. For reference: https://docs.dat...
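Two common ways out in practice: cast the drifting column to the target type before appending, or opt in to schema evolution for added columns. A hedged sketch; dfbrzagend is from the post, while the column and table names are hypothetical:

    from pyspark.sql import functions as F

    # Option 1: align the source type with the target table before appending.
    df_fixed = dfbrzagend.withColumn("agend_id", F.col("agend_id").cast("long"))  # hypothetical column

    # Option 2: mergeSchema lets Delta add new columns on append. Note it does
    # not reconcile genuinely incompatible types (LongType vs. StringType still
    # fails), so a cast as in Option 1 is the fix for that case.
    (df_fixed.write
        .format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .saveAsTable("brz_agenda"))   # hypothetical table name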

1 More Reply
Geeta1
by Valued Contributor
  • 1592 Views
  • 1 reply
  • 8 kudos

Activate gift certificate in Databricks Community Reward store

Hello all, can anyone let me know about the "Activate Gift Certificate" option in the Databricks Community Reward Store? What is its purpose, and how can we use it?

Latest Reply
yogu
Honored Contributor III
  • 8 kudos

You earn points through forum interaction. Those points can be exchanged for 'credits', and with those credits you can buy Databricks swag. Your lifetime points (the cumulative amount of points) are not affected by this.

Manjusha
by New Contributor II
  • 2073 Views
  • 1 reply
  • 1 kudos

SocketTimeout exception when running a display command on spark dataframe

I am using runtime 9.1 LTS. I have an R notebook that reads a CSV into an R dataframe and does some transformations, and the result is finally converted to a Spark dataframe using the createDataFrame function. After that, when I call the display function on this Spark da...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Manjusha Unnikrishnan, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first; otherwise, Bricksters will get back to you soon. Thanks.

MBV3
by New Contributor III
  • 1704 Views
  • 1 reply
  • 2 kudos

Delete a file from GCS folder

What is the best way to delete files from a GCP bucket inside a Spark job?

Latest Reply
Unforgiven
Valued Contributor III
  • 2 kudos

@M Baig Yes, you just need to create a service account for Databricks and then assign the Storage Admin role on the bucket. After that you can mount GCS the standard way: bucket_name = "<bucket-name>" mount_name = "<mount-name>" dbutils.fs.mount("gs://%s" % bucket_na...
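Completing the truncated snippet above, and adding the delete step the question actually asked about (bucket, mount, and file paths are placeholders):

    bucket_name = "<bucket-name>"
    mount_name = "<mount-name>"

    # Mount the bucket (requires a service account with Storage Admin on it).
    dbutils.fs.mount("gs://%s" % bucket_name, "/mnt/%s" % mount_name)

    # Delete a single file, or a folder recursively, from within the job.
    dbutils.fs.rm("/mnt/%s/path/to/file.parquet" % mount_name)
    dbutils.fs.rm("/mnt/%s/path/to/folder" % mount_name, recurse=True)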


Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.
