Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

data_boy_2022
by New Contributor III
  • 3600 Views
  • 2 replies
  • 0 kudos

Resolved! Writing transformed DataFrame to a persistent table is unbearably slow

I want to transform a DF with a simple UDF. Afterwards I want to store the resulting DF in a new table (see code below): key = "test_key"; schema = StructType([StructField("***", StringType(), True), StructField("yyy", StringType(), True), StructF...
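A minimal sketch of the pattern in question, with hypothetical column and table names: a row-at-a-time Python UDF like this is usually the slow part, and an equivalent built-in function keeps the work in the JVM.

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Hypothetical input DataFrame with one string column
df = spark.createDataFrame([("a",), ("b",)], ["yyy"])

# Python UDF: every row is shipped to a Python worker and back
to_upper = F.udf(lambda s: s.upper() if s is not None else None, StringType())
transformed = df.withColumn("yyy_upper", to_upper("yyy"))

# Built-in equivalent that avoids the Python round-trip:
# transformed = df.withColumn("yyy_upper", F.upper("yyy"))

# Persist the result as a table (table name is hypothetical)
transformed.write.mode("overwrite").saveAsTable("test_table")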

Latest Reply
Vidula
Honored Contributor
  • 0 kudos

Hello @Jan R, hope all is well! Just wanted to check in: were you able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

1 More Reply
komplex
by New Contributor
  • 1588 Views
  • 2 replies
  • 1 kudos

I need help finding the right mode for my course

How do I find the Databricks Community Edition?

Latest Reply
Vidula
Honored Contributor
  • 1 kudos

Hi @Kester Truman, hope all is well! Just wanted to check in: were you able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

1 More Reply
Jessevds
by New Contributor II
  • 3825 Views
  • 2 replies
  • 2 kudos

Create a dropdown list in Markdown

In the first cell of my notebooks, I record a changelog for all changes done in the notebook in Markdown. However, as this list becomes longer and longer, I want to implement a dropdown list. Is there any way to do this in Markdown in Databricks? For t...
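One possible workaround, sketched under the assumption that the notebook renders inline HTML: the standard HTML details/summary element gives a collapsible section, shown here via displayHTML with hypothetical changelog entries (whether a plain %md cell renders the same HTML is environment-dependent).

# Render a collapsible changelog from a Python cell (entries are hypothetical)
changelog_entries = [
    "2022-08-01: initial version",
    "2022-08-15: added transformation step",
]
items = "".join(f"<li>{entry}</li>" for entry in changelog_entries)
displayHTML(f"<details><summary>Changelog</summary><ul>{items}</ul></details>")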

Latest Reply
Vidula
Honored Contributor
  • 2 kudos

Hi @Jesse vd S, hope all is well! Just wanted to check in: were you able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

1 More Reply
mghildiy
by New Contributor
  • 1443 Views
  • 1 reply
  • 0 kudos

A basic DataFrame transformation query

I want to know how DataFrame transformations work. Suppose I have a DataFrame instance df1. I apply some operation on it, say a filter. As every operation gives a new DataFrame, let's say we now have df2. So we have two DataFrame instances now, df1 ...
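A short illustration of the underlying behaviour: a transformation such as filter only builds a new query plan, so df1 and df2 are lightweight plan objects and no second copy of the data is materialized until an action runs.

# Hypothetical DataFrame
df1 = spark.range(1_000_000).withColumnRenamed("id", "value")

# filter() is a transformation: it returns a new DataFrame wrapping a new plan,
# but nothing is computed or copied yet
df2 = df1.filter(df1.value % 2 == 0)

# Only an action (count, collect, write, ...) triggers execution
print(df2.count())

# The plan shows df2 is just df1 plus a Filter node
df2.explain()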

Latest Reply
Vidula
Honored Contributor
  • 0 kudos

Hi @mghildiy, does the response from @Kaniz Fatma answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? We'd love to hear from you. Thanks!

Erik
by Valued Contributor III
  • 2592 Views
  • 2 replies
  • 2 kudos

Resolved! Where is Databricks Tunnel (and is Databricks Connect cool again?)

Two related questions: 1: There have been several mentions in this forum of "Databricks Tunnel", which should allow us to connect from our local IDE to a remote Databricks cluster and develop stuff locally. The rumors said early 2022; is there some...

Latest Reply
Vidula
Honored Contributor
  • 2 kudos

Hi there @Erik Parmann, does the response from @Youssef Mrini answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? We'd love to hear from you. Thanks

1 More Reply
dimsh
by Contributor
  • 2018 Views
  • 3 replies
  • 1 kudos

Any plans to provide Databricks SQL / Alerts API

Hi, Databricks! You are my favorite Big Data tool, but I've recently faced an issue I didn't expect to have. For our agriculture customers, we're trying to use Databricks SQL Platform to keep our data accurate all day. We use Alerts to validate our d...

Latest Reply
Vidula
Honored Contributor
  • 1 kudos

Hi @Dmytro Imshenetskyi, does the response from @Hubert Dudek answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? We'd love to hear from you. Thanks!

2 More Replies
ronaldolopes
by New Contributor
  • 3987 Views
  • 2 replies
  • 1 kudos

Resolved! Error deleting a table

I'm trying to delete a table that was created from a CSV, and because the underlying file has been deleted, I can't execute the deletion; it fails with the following error. I'm new to Databricks and I don't know how to fix this. Some help?

[Screenshot: 2022-08-29 14-35-04]
Latest Reply
AmanSehgal
Honored Contributor III
  • 1 kudos

When deleting the table, it looks for the underlying Delta log file, and because the file doesn't exist, it throws that error. Just drop the table: DROP TABLE <table_name>
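A minimal sketch of that fix from a notebook cell, with a hypothetical table name; IF EXISTS just makes the statement safe to re-run.

# Drop the table metadata even though the underlying files are gone
spark.sql("DROP TABLE IF EXISTS my_broken_table")  # table name is hypothetical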

1 More Reply
RohitKulkarni
by Contributor II
  • 2787 Views
  • 4 replies
  • 7 kudos

Resolved! Azure Databricks Delta tables issue

Hello Team, I have written a Spark SQL query in Databricks: DROP TABLE IF EXISTS Salesforce.Location; CREATE EXTERNAL TABLE Salesforce.Location (Id STRING, OwnerId STRING, IsDeleted bigint, Name STRING, CurrencyIsoCode STRING, CreatedDate bigint, CreatedById ...

Latest Reply
AmanSehgal
Honored Contributor III
  • 7 kudos

You need to provide one of the following values for 'data_source': TEXT, AVRO, CSV, JSON, PARQUET, ORC, DELTA. E.g.: USING PARQUET. If you skip the USING clause, then the default data source is DELTA. https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-create-t...
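A minimal sketch of what the answer describes, run through spark.sql; the column list follows the fragment quoted in the question and the storage path is hypothetical.

# Recreate the external table with an explicit data source (USING ...);
# omitting the USING clause would default the data source to DELTA.
spark.sql("DROP TABLE IF EXISTS Salesforce.Location")
spark.sql("""
    CREATE TABLE Salesforce.Location (
        Id STRING,
        OwnerId STRING,
        IsDeleted BIGINT,
        Name STRING
    )
    USING PARQUET
    LOCATION 'abfss://container@account.dfs.core.windows.net/salesforce/location'
""")  # LOCATION path is hypothetical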

3 More Replies
LearnerShahid
by New Contributor II
  • 7862 Views
  • 6 replies
  • 4 kudos

Resolved! Lesson 6.1 of Data Engineering. Error when reading stream - java.lang.UnsupportedOperationException: com.databricks.backend.daemon.data.client.DBFSV1.resolvePathOnPhysicalStorage(path: Path)

The function below executes fine: def autoload_to_table(data_source, source_format, table_name, checkpoint_directory): query = (spark.readStream.format("cloudFiles").option("cloudFiles.format", source_format).option("cloudFile...

I have verified that source data exists.
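For reference, a minimal sketch of the Auto Loader pattern that the lesson's function wraps, with hypothetical paths and table name; per the accepted answer below, this is not expected to work on Community Edition.

# Hypothetical source and checkpoint locations
data_source = "/mnt/raw/events"
checkpoint_directory = "/tmp/checkpoints/events"

query = (spark.readStream
         .format("cloudFiles")                               # Auto Loader source
         .option("cloudFiles.format", "json")                # incoming file format
         .option("cloudFiles.schemaLocation", checkpoint_directory)
         .load(data_source)
         .writeStream
         .option("checkpointLocation", checkpoint_directory)
         .toTable("target_table"))                           # hypothetical target table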
Latest Reply
Anonymous
Not applicable
  • 4 kudos

Auto Loader is not supported on Community Edition.

5 More Replies
BenLambert
by Contributor
  • 3128 Views
  • 2 replies
  • 2 kudos

Resolved! Delta Live Tables not inferring table schema properly.

I have a Delta Live Tables pipeline that is loading and transforming data. Currently I am having a problem where the schema inferred by DLT does not match the actual schema of the table. The table is generated via a groupby.pivot operation as follows:...

Latest Reply
BenLambert
Contributor
  • 2 kudos

I was able to get around this by specifying the table schema in the table decorator.
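A minimal sketch of that workaround, assuming a Python DLT pipeline; the dataset names, pivot columns, and schema string are hypothetical, and the explicit schema stops DLT from having to infer the pivoted columns.

import dlt
from pyspark.sql import functions as F

@dlt.table(
    name="pivoted_metrics",
    # Explicit schema so the pivoted columns are not inferred
    schema="device_id STRING, temperature DOUBLE, humidity DOUBLE"
)
def pivoted_metrics():
    return (
        dlt.read("silver_readings")                    # hypothetical upstream dataset
           .groupBy("device_id")
           .pivot("metric", ["temperature", "humidity"])
           .agg(F.avg("value"))
    )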

1 More Reply
mick042
by New Contributor III
  • 1459 Views
  • 1 reply
  • 0 kudos

Optimal approach when using external script/executable for processing data

I need to process a number of files where I manipulate file text utilising an external executable that operates on stdin/stdout. I am quite new to Spark. What I am attempting is to use rdd.pipe as in the following: exe_path = "/usr/local/bin/external...
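A minimal sketch of the rdd.pipe approach being attempted, with a hypothetical executable and input/output paths; each partition's lines are streamed to the command's stdin and its stdout lines come back as a new RDD.

# Hypothetical external executable that reads stdin and writes stdout
exe_path = "/usr/local/bin/external_tool"

lines = sc.textFile("/mnt/raw/input-files/*.txt")      # hypothetical input path

# Pipe each partition through the executable; its output lines form a new RDD
processed = lines.pipe(exe_path)

processed.saveAsTextFile("/mnt/processed/output")      # hypothetical output path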

Latest Reply
User16753725469
Contributor II
  • 0 kudos

Hi @Michael Lennon, can you please elaborate on the use case: what is the external app at exe_path doing?

Leszek
by Contributor
  • 3239 Views
  • 2 replies
  • 4 kudos

How to set up partitions on the streaming Delta Table?

Let's assume that we have 3 streaming Delta tables: Bronze, Silver, Gold. My aim is to add partitioning to the Silver table (for example by Date). As a result, the Gold table will throw an error that the source table has been updated, and I would need to set 'ignoreC...

Latest Reply
-werners-
Esteemed Contributor III
  • 4 kudos

Is the change data feed functionality (of your Silver table) an option, combined with a MERGE into your Gold table? https://docs.microsoft.com/en-us/azure/databricks/delta/delta-change-data-feed
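A minimal sketch of that suggestion, with hypothetical table names: enable change data feed on the Silver table and stream only its row-level changes downstream, instead of relying on 'ignoreChanges'.

# Enable change data feed on the (hypothetical) Silver table
spark.sql("ALTER TABLE silver SET TBLPROPERTIES (delta.enableChangeDataFeed = true)")

# Stream only Silver's inserts/updates/deletes rather than re-reading the
# whole re-partitioned table
silver_changes = (spark.readStream
                  .option("readChangeFeed", "true")
                  .table("silver"))

# These changes can then be applied to the Gold table, e.g. with a MERGE
# inside a foreachBatch handler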

1 More Reply
Mohit_m
by Valued Contributor II
  • 10073 Views
  • 1 reply
  • 1 kudos

Resolved! Databricks - python error when importing wheel distribution package

In previous days, all notebooks containing 'import anomalydetection' worked just fine. There was no change in any configuration of the cluster, the notebook, or our imported library. However, recently the notebooks just crashed with the error below. The same happens also...

Latest Reply
Mohit_m
Valued Contributor II
  • 1 kudos

Solution: This is due to the latest version of the protobuf library. Please try to downgrade the library, which should solve the issue: pip install protobuf==3.20.*. Protobuf library versions which work: 3.20.1; if that does not work, then try 3.18.1.
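A minimal sketch of applying that pin at notebook scope, assuming a standard Databricks notebook; the exact version (3.20.1 or 3.18.1) follows the reply above.

# In one notebook cell: pin the library at notebook scope
%pip install protobuf==3.20.1

# In the next cell: restart the Python process so the pinned version is picked up
dbutils.library.restartPython()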

noimeta
by Contributor III
  • 1844 Views
  • 0 replies
  • 0 kudos

How to use Terraform to add Git provider credentials to a workspace in order to use service principal for CI/CD

Hi, I'm very new to Terraform. Currently, I'm trying to automate the service principal setup process using Terraform. Following this example, I successfully created a service principal and an access token. However, when I tried adding databricks_git_cr...

