Data Engineering

Forum Posts

Orianh
by Valued Contributor II
  • 1413 Views
  • 0 replies
  • 0 kudos

Retrieve a row from indexed spark data frame.

Hello guys, I'm having an issue when trying to get row values from a Spark data frame. I have a DF with an index column, and I need to be able to return a row based on the index as fast as possible. I tried to partitionBy the index column, optimize with zor...

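For context, a minimal sketch of one way to speed up point lookups on a Delta table, assuming a table named indexed_events with a numeric row_index column (both names hypothetical); Z-ordering co-locates nearby index values so a filtered read can skip most files:

# Sketch only: assumes a Delta table (hypothetical name "indexed_events")
# with a numeric "row_index" column. Z-ordering co-locates rows with nearby
# index values so a point lookup can skip most data files.
spark.sql("OPTIMIZE indexed_events ZORDER BY (row_index)")

# Point lookup: the filter is pushed down and benefits from file skipping.
row = (
    spark.table("indexed_events")
    .filter("row_index = 12345")
    .limit(1)
    .collect()
)
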
CrisBerg_65149
by New Contributor III
  • 1677 Views
  • 6 replies
  • 6 kudos

Resolved! SELECT * FROM delta doesn't work on Spark 3.2

Using DBR 10 or later, I'm getting an error when running the following query: SELECT * FROM delta.`s3://some_path` and getting org.apache.spark.SparkException: Unable to fetch tables of db delta. For 3.2.0+ they recommend reading like this: CREATE TEMPORAR...

Latest Reply
CrisBerg_65149
New Contributor III

Got support from Databricks. Unfortunately, someone created a DB called delta, so the query was done against that DB instead. Issue was solved.

5 More Replies
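
For reference, a hedged sketch of the path-based Delta syntax from the question, with a placeholder S3 path; the DataFrame API form sidesteps the database-name collision described in the reply:

# Sketch: path-based Delta read with a placeholder location. The backtick
# syntax resolves against the "delta" keyword, so a database literally named
# "delta" shadows it, which is what happened in this thread.
df = spark.sql("SELECT * FROM delta.`s3://some-bucket/some-prefix`")

# Equivalent DataFrame API call, which avoids the naming collision entirely.
df = spark.read.format("delta").load("s3://some-bucket/some-prefix")
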
Development
by New Contributor III
  • 2460 Views
  • 8 replies
  • 5 kudos

Delta Table with 130 columns taking time

Hi All, we are facing one unusual issue while loading data into a Delta table using Spark SQL. We have one Delta table which has around 135 columns and is also PARTITIONED BY. For this we are trying to load around 15 million records, but it's not loading ...

Latest Reply
Development
New Contributor III

@Kaniz Fatma @Parker Temple I found the root cause: it's because of serialization. We are using a UDF to derive a column on the dataframe, and when we try to load data into the Delta table or write data into a Parquet file we face a serialization issue ...

7 More Replies
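
To illustrate the serialization point, a hedged sketch contrasting a Python UDF with an equivalent built-in expression; the dataframe df and the name column are hypothetical:

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Hypothetical example: deriving a column with a Python UDF forces every row
# to be serialized between the JVM and the Python workers.
to_upper = F.udf(lambda s: s.upper() if s else None, StringType())
df_with_udf = df.withColumn("name_upper", to_upper("name"))

# Where a built-in expression exists, it runs entirely in the JVM and avoids
# that serialization cost.
df_builtin = df.withColumn("name_upper", F.upper(F.col("name")))
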
laus
by New Contributor III
  • 12053 Views
  • 5 replies
  • 2 kudos

Resolved! get a "Py4JJavaError: An error occurred while calling o5082.csv." when trying to save to csv file.

Hi, I'm trying to save a dataframe to CSV with the code below: output.coalesce(1).write.mode('overwrite').option('header', 'true').csv(tmp_file_path) But I get a "Py4JJavaError: An error occurred while calling o5082.csv." error. Any idea how to solve...

Latest Reply
Kaniz
Community Manager

Hi @Laura Blancarte, looks like you want to save your dataframe as CSV. Did you try to download the preview?

4 More Replies
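
For reference, a runnable version of the write from the question, assuming output is the dataframe being saved and using a placeholder path:

# Runnable form of the snippet in the question; "output" is the dataframe
# being saved and the path is a placeholder. coalesce(1) funnels the whole
# write through a single task, so very large frames can fail here; the real
# cause is usually in the Java stack trace printed below the Py4JJavaError.
tmp_file_path = "dbfs:/tmp/output_csv"  # placeholder path

(
    output.coalesce(1)
    .write.mode("overwrite")
    .option("header", "true")
    .csv(tmp_file_path)
)
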
Vee
by New Contributor
  • 2156 Views
  • 3 replies
  • 0 kudos

Tips for resolving the following errors related to AWS S3 read / write

Job aborted due to stage failure: Task 0 in stage 3084.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3084.0 (TID...., ip..., executor 0): org.apache.spark.SparkException: Task failed while writing rows. Job aborted due to stage failure:...

Latest Reply
Kaniz
Community Manager

Hi @Vetrivel Senthil, are you still facing the problem? Were you able to resolve it yourself, or do you still need help? Please let us know.

2 More Replies
Gerhard
by New Contributor III
  • 2635 Views
  • 9 replies
  • 5 kudos

Overall security/access rights concept needed (combine Table Access Control and Credential Passthrough), how to allow users the benefits of both worlds

What we have: Databricks Workspace Premium on Azure, ADLS Gen2 storage for raw data, processed data (tables) and files like CSV, models, etc. What we want to do: We have users that want to work on Databricks to create and work with Python algorithms. We d...

Latest Reply
Gerhard
New Contributor III

Hey @Vartika Nain, we are still in the same situation as described above. The Hive Metastore is a weak point. I would love to have the functionality that a mount can be dedicated to a given cluster. Regards, Gerhard

8 More Replies
Reza
by New Contributor III
  • 3929 Views
  • 8 replies
  • 8 kudos

Datepicker widget

There are textbox and dropdown list widgets in Databricks. Is there any datepicker widget? If not, is there any plan to add it?

Latest Reply
Kaniz
Community Manager

Hi @Reza Rajabi, just a friendly follow-up. Do you still need help, or did my response help you find the solution? Please let us know.

7 More Replies
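
As of this thread there is no dedicated datepicker widget; a common workaround (sketch only, run inside a Databricks notebook where dbutils is available, widget name and default illustrative) is a text widget carrying an ISO date:

import datetime

# Workaround sketch, to be run in a Databricks notebook where dbutils exists.
# The widget name, default value and label are illustrative only.
dbutils.widgets.text("run_date", datetime.date.today().isoformat(), "Run date (YYYY-MM-DD)")

run_date = datetime.date.fromisoformat(dbutils.widgets.get("run_date"))
print(run_date)
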
Rahul_Samant
by Contributor
  • 6701 Views
  • 5 replies
  • 3 kudos

Resolved! Bucketing on Delta Tables

Getting the error below while creating buckets on a Delta table: Error in SQL statement: AnalysisException: Delta bucketed tables are not supported. Have had to fall back to a Parquet table for some use cases because of this. Is there any alternative for this? I have...

Latest Reply
Anonymous
Not applicable

Hi @Rahul Samant, we checked internally on this; due to certain limitations, bucketing is not supported on Delta tables. The only alternative to bucketing is to leverage Z-ordering; below is the link for reference: https://docs.databricks.com/de...

4 More Replies
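
To illustrate the suggested alternative, a hedged sketch of Z-ordering in place of bucketing; table and column names are hypothetical:

# Sketch: Delta rejects CLUSTERED BY ... INTO n BUCKETS, so the usual
# substitute is Z-ordering on the would-be bucketing key. All names here
# are hypothetical.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_delta (
        customer_id BIGINT,
        amount      DOUBLE,
        sale_date   DATE
    ) USING DELTA
""")

spark.sql("OPTIMIZE sales_delta ZORDER BY (customer_id)")
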
reedzhang
by New Contributor III
  • 1895 Views
  • 6 replies
  • 5 kudos

Resolved! uninstalled libraries continue to get installed on cluster startup

We have been trying to update some library versions by uninstalling the old versions and installing new ones. However, the old libraries continue to get installed on cluster startup despite not showing up in the "libraries" tab of the cluster page. W...

Latest Reply
reedzhang
New Contributor III

The issue seemed to go away on its own. At some point the libraries page started showing what was getting installed to the cluster, and removing libraries from the page caused them to stop getting installed on cluster startup. I'm guessing there was ...

5 More Replies
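
If this resurfaces, one way to see exactly which libraries the cluster thinks it should install is the Libraries API cluster-status endpoint; the workspace URL, token, and cluster id below are placeholders:

import requests

# Placeholders: workspace URL, a personal access token, and the cluster id.
host = "https://<your-workspace>.cloud.databricks.com"
token = "<personal-access-token>"
cluster_id = "<cluster-id>"

resp = requests.get(
    f"{host}/api/2.0/libraries/cluster-status",
    headers={"Authorization": f"Bearer {token}"},
    params={"cluster_id": cluster_id},
)
resp.raise_for_status()

# Each entry reports a library spec and its install status on the cluster.
for entry in resp.json().get("library_statuses", []):
    print(entry["library"], entry["status"])
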
818674
by New Contributor III
  • 3885 Views
  • 13 replies
  • 8 kudos

Resolved! How to perform a cross-check for data in multiple columns in same table?

I am trying to check whether a certain datapoint exists in multiple locations. This is what my table looks like: I am checking whether the same datapoint is in two locations. The idea is that this datapoint should exist in BOTH locations, and be counte...

[Table: Examples of Results for Cross-Checking]
Latest Reply
818674
New Contributor III

Hi, thank you very much for following up. I no longer need assistance with this issue.

12 More Replies
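
Since the example table image is no longer available, this is only a hedged sketch of one interpretation: checking whether a given datapoint value appears in both of two location columns (all names hypothetical):

from pyspark.sql import functions as F

# Assumptions: the table has two location columns, location_a and location_b
# (hypothetical names), and we want to know whether a given datapoint value
# appears somewhere in BOTH of them.
value = "X123"  # hypothetical datapoint

flags = df.agg(
    F.max(F.when(F.col("location_a") == value, 1).otherwise(0)).alias("in_a"),
    F.max(F.when(F.col("location_b") == value, 1).otherwise(0)).alias("in_b"),
).first()

exists_in_both = bool(flags["in_a"]) and bool(flags["in_b"])
print(exists_in_both)
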
Michael_Galli
by Contributor II
  • 2560 Views
  • 3 replies
  • 2 kudos

Resolved! Spark Streaming - only process new files in streaming path?

In our streaming jobs, we currently run streaming (cloudFiles format) on a directory with sales transactions coming every 5 minutes. In this directory, the transactions are ordered in the following format: <streaming-checkpoint-root>/<transaction_date>...

Latest Reply
Michael_Galli
Contributor II

Update: Seems that maxFileAge was not a good idea. The following, with the option "includeExistingFiles" = False, solved my problem: streaming_df = ( spark.readStream.format("cloudFiles") .option("cloudFiles.format", extension) .option("...

2 More Replies
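
A completed sketch of the truncated snippet above, with placeholder paths and format; the full option name is cloudFiles.includeExistingFiles:

# Completed sketch of the reply above; the source directory, schema location
# and file format are placeholders.
streaming_df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.includeExistingFiles", "false")  # only files arriving after the stream starts
    .option("cloudFiles.schemaLocation", "dbfs:/tmp/sales/_schema")  # placeholder
    .load("dbfs:/tmp/sales-transactions/")  # placeholder source directory
)
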
LightUp
by New Contributor III
  • 3789 Views
  • 2 replies
  • 4 kudos

Converting SQL Code to SQL Databricks

I am new to Databricks. Please excuse my ignorance. My requirement is to convert the SQL query below into Databricks SQL. The query reads from the EventLog table and the output of the query goes into EventSummary. These queries can be found here: CREATE TABL...

Latest Reply
LightUp
New Contributor III

Thank you @Joseph Kambourakis. The part that is not clear to me is how to rework the part circled in the image above. Even this part of the code does not work in Databricks: DATEADD(month, DATEDIFF(month, 0, DATEADD(month, 1, EventStartDateTi...

1 More Replies
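
For reference, the T-SQL idiom being converted rounds a timestamp up to the first day of the following month; a hedged Databricks SQL equivalent, with hypothetical table and column names, uses add_months plus date_trunc:

# The T-SQL pattern
#   DATEADD(month, DATEDIFF(month, 0, DATEADD(month, 1, some_ts)), 0)
# rounds some_ts up to the first day of the following month. A Databricks SQL
# equivalent, with hypothetical table and column names:
spark.sql("""
    SELECT
        event_ts,
        date_trunc('MONTH', add_months(event_ts, 1)) AS first_day_of_next_month
    FROM event_log
""").show()
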
colette_chavali
by New Contributor III
  • 617 Views
  • 1 replies
  • 6 kudos

Resolved! Nominations are OPEN for the Databricks Data Team Awards!

Databricks customers - nominate your data team and leaders for one (or more) of the six Data Team Award categories: Data Team Transformation Award, Data Team for Good Award, Data Team Disruptor Award, Data Team Democratization Award, Data Team Visionary Awar...

Latest Reply
Kaniz
Community Manager

Cool!

AvijitDey
by New Contributor III
  • 2730 Views
  • 3 replies
  • 4 kudos

Resolved! Azure Databricks SQL bulk insert to AZ SQL

Env: Azure Databricks, version 9.1 LTS (includes Apache Spark 3.1.2, Scala 2.12). Worker type: 56 GB memory, 2-8 nodes (Standard D13_v2). No. of rows: 2,470,350 with 115 columns. Size: 2.2 GB. Time taken: approx. 9 min with Python code. What will be the best approach for...

Latest Reply
AvijitDey
New Contributor III

Any further suggestions?

2 More Replies
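
One hedged approach for the bulk load is the plain Spark JDBC writer (connection details below are placeholders; the SQL Server JDBC driver must be available on the cluster); batchsize and numPartitions are the usual throughput knobs:

# Sketch: plain Spark JDBC write to Azure SQL. Connection details are
# placeholders, and df is the dataframe to be loaded. batchsize and
# numPartitions control how many rows go per round trip and how many
# parallel connections write at once.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"

(
    df.write.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.target_table")  # placeholder target table
    .option("user", "<sql-user>")
    .option("password", "<sql-password>")
    .option("batchsize", 10000)
    .option("numPartitions", 8)
    .mode("append")
    .save()
)
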