Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

William_Scardua
by Valued Contributor
  • 3284 Views
  • 6 replies
  • 2 kudos

How to avoid reprocessing old files without Delta?

Hi guys, consider this case: Company ACME (a hypothetical company). This company does not use Delta, but uses open-source Spark to process raw data into .parquet. We have a 'sales' process which consists of receiving a new dataset (.csv) every hour within th...

Latest Reply
William_Scardua
Valued Contributor
  • 2 kudos

Hi @Jose Gonzalez, I agree the best option is to use Auto Loader, but in some cases you don't have the Databricks platform and don't use Delta; in those cases you need to build a way to process the new raw files.

5 More Replies
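One approach consistent with this thread's premise (no Delta, no Auto Loader) is to keep a small manifest of already-processed files and diff it against the current directory listing on each hourly run. A minimal pure-Python sketch; the directory layout, manifest path, and .csv pattern are hypothetical:

```python
import json
from pathlib import Path


def find_new_files(input_dir: str, manifest_path: str) -> list:
    """Return the .csv files in input_dir not yet recorded in the manifest."""
    manifest = Path(manifest_path)
    processed = set(json.loads(manifest.read_text())) if manifest.exists() else set()
    current = {p.name for p in Path(input_dir).glob("*.csv")}
    return sorted(current - processed)


def mark_processed(new_files: list, manifest_path: str) -> None:
    """Record newly processed files so the next run skips them."""
    manifest = Path(manifest_path)
    processed = set(json.loads(manifest.read_text())) if manifest.exists() else set()
    processed.update(new_files)
    manifest.write_text(json.dumps(sorted(processed)))
```

Each hourly Spark job would call `find_new_files`, read only those paths, and call `mark_processed` after a successful write; the manifest plays the role Delta's transaction log would otherwise play.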
kaslan
by New Contributor II
  • 7812 Views
  • 5 replies
  • 0 kudos

How to filter files in Databricks Autoloader stream

I want to set up an S3 stream using Databricks Auto Loader. I have managed to set up the stream, but my S3 bucket contains different types of JSON files. I want to filter them out, preferably in the stream itself rather than using a filter operation. A...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

According to the docs you linked, the glob filter on the input path only works on directories, not on the files themselves. So if you want to filter on certain files in the directories concerned, you can include an additional filter through the pathGlobFilter o...

4 More Replies
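The `pathGlobFilter` option matches against the file-name portion of each path using glob patterns. Its matching semantics can be illustrated in plain Python with `fnmatch`; the file names below are hypothetical stand-ins for the mixed JSON files in the bucket:

```python
from fnmatch import fnmatch

# Hypothetical listing of mixed JSON files under one S3 prefix.
files = ["orders_001.json", "orders_002.json", "audit_001.json"]

# Equivalent in spirit to .option("pathGlobFilter", "orders_*.json")
# on the Auto Loader (cloudFiles) stream reader.
matched = [f for f in files if fnmatch(f, "orders_*.json")]
```

In the stream itself, the same pattern goes into the reader options, so only matching files ever enter the stream rather than being dropped by a downstream filter.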
HamzaJosh
by New Contributor II
  • 14736 Views
  • 6 replies
  • 3 kudos

I want to use Databricks workers to run a function in parallel on the worker nodes

I have a function making API calls. I want to run this function in parallel so I can use the workers in Databricks clusters to run it in parallel. I have tried with ThreadPoolExecutor() as executor: results = executor.map(getspeeddata, alist) to run m...

Latest Reply
HamzaJosh
New Contributor II
  • 3 kudos

You guys are not getting the point: I am making API calls in a function and want to store the results in a dataframe. I want multiple processes to run this task in parallel. How do I create a UDF and use it in a dataframe when the task is calling an ...

5 More Replies
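One pattern raised in this thread: fan the I/O-bound API calls out with a thread pool on the driver, collect the results as plain rows, and only then build a DataFrame. A sketch with a stand-in for the real call (`get_speed_data` and its fields are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor


def get_speed_data(item: str) -> dict:
    # Stand-in for the real API call; replace the body with the HTTP request.
    return {"item": item, "speed": len(item)}


def fetch_all(items, max_workers=8):
    """Run get_speed_data concurrently and gather the rows in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(get_speed_data, items))


rows = fetch_all(["a", "bb", "ccc"])
# On Databricks: spark.createDataFrame(rows) turns the rows into a DataFrame.
```

Driver-side threads suit I/O-bound API calls well; to spread the calls across worker nodes instead, the usual route is a UDF or `mapPartitions` over a DataFrame of inputs, accepting that each executor then opens its own connections.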
sarosh
by New Contributor
  • 8721 Views
  • 2 replies
  • 1 kudos

ModuleNotFoundError / SerializationError when executing over databricks-connect

I am running into the following error when I run a model-fitting process over databricks-connect. It looks like worker nodes are unable to access modules from the project's parent directory. Note that the program runs successfully up to this point; n...

Latest Reply
Manjunath
Databricks Employee
  • 1 kudos

@Sarosh Ahmad, could you try adding a zip of the module via addPyFile, like below: spark.sparkContext.addPyFile("src.zip")

1 More Replies
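The reason shipping a zip works: Python can import modules directly from a zip archive, which is what the workers do after `addPyFile` distributes the file. A self-contained illustration of that mechanism; the module name and zip name are hypothetical:

```python
import sys
import tempfile
import zipfile
from pathlib import Path

# Build a tiny module and zip it, mimicking `zip -r src.zip src/`.
workdir = Path(tempfile.mkdtemp())
(workdir / "mymodule.py").write_text("def greet():\n    return 'hello'\n")
zip_path = workdir / "src.zip"
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.write(workdir / "mymodule.py", arcname="mymodule.py")

# Importing from the zip, as each worker can once the archive is on its path.
sys.path.insert(0, str(zip_path))
import mymodule
```

On the cluster, `spark.sparkContext.addPyFile("src.zip")` performs the distribution and path setup for you; the key point is that the archive must contain the modules at the paths your imports expect.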
Tankala_Harika
by New Contributor II
  • 604 Views
  • 0 replies
  • 0 kudos

Hi Juliet Wu, I completed my Databricks Apache Spark Associate Developer exam on 7/10/2021, and after completion of my exam I got my badge t...

Hi Juliet Wu, I completed my Databricks Apache Spark Associate Developer exam on 7/10/2021. After completion of the exam I got my badge in my WebAssessor mail immediately, one day after the exam, on 8/10/2021, but I didn't receive my...

Mihai1
by New Contributor III
  • 2446 Views
  • 1 reply
  • 2 kudos

Resolved! How to source control a Dashboard?

Is it possible to source control the dashboard along with the notebook code? When source-controlling a Python notebook it gets converted to *.py, and it looks like the resulting *.py file loses the information about the dashboard cells. Thus, if this *.py ...

Latest Reply
Dan_Z
Databricks Employee
  • 2 kudos

No, you will need to save it in another format, like DBC Archive, to replicate the notebook features.

krishnakash
by New Contributor II
  • 3833 Views
  • 1 reply
  • 1 kudos

How to provide custom class extending SparkPlugin/ExecutorPlugin in Databricks 7.3?

How do I properly configure the jar containing the class and the Spark plugin in Databricks? During DBR 7.3 cluster creation, I tried setting the spark.plugins, spark.driver.extraClassPath and spark.executor.extraClassPath Spark configs by copying the ja...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hello @Krishna Kashiv - I don't know if we've met yet. My name is Piper and I'm a community moderator here. Thank you for your new question. It looks thorough! Let's give it a while to see what our members have to say. Otherwise, we will circle back...

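A plugin class set via `spark.plugins` must be on the JVM classpath at startup, which is earlier than some classpath mechanisms take effect on Databricks. A commonly suggested workaround, sketched here under the assumption that the jar has been uploaded to DBFS (the jar name and path are hypothetical), is a cluster-scoped init script that copies the jar into `/databricks/jars` before Spark starts:

```shell
#!/bin/bash
# Cluster-scoped init script: make the plugin jar visible to the
# driver and executor JVMs before Spark starts.
cp /dbfs/FileStore/jars/my-spark-plugin.jar /databricks/jars/
```

With the jar in place, `spark.plugins` can then name the plugin class (e.g. a hypothetical `com.example.MyPlugin`) in the cluster's Spark config.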
sriwin
by New Contributor
  • 2952 Views
  • 1 reply
  • 0 kudos

Create gpg file and save to AWS s3 storage in scala

Hi - could you please help me create a Scala notebook to perform the tasks below? Encrypt a text file using gpg; upload the file to Amazon S3 storage; verify the file exists in Amazon S3; decrypt the encrypted file to verify no issues. Apprec...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hello! My name is Piper and I'm a community moderator for Databricks. Thanks for your question. Let's give it a bit more time to see what our members have to say. If not, we'll circle back around.

cconnell
by Contributor II
  • 6960 Views
  • 11 replies
  • 7 kudos

Resolved! What is the proper way to import the new pyspark.pandas library?

I am moving an existing, working pandas program into Databricks. I want to use the new pyspark.pandas library and change my code as little as possible. It appears that I should do the following: 1) Add from pyspark import pandas as ps at the top. 2) Ch...

Latest Reply
Anonymous
Not applicable
  • 7 kudos

Make sure to use the 10.0 runtime, which includes Spark 3.2.

10 More Replies
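The "change my code as little as possible" goal can be served by a conditional import, so the same script runs both on a Spark 3.2 runtime and on a plain pandas machine. A generic sketch of the fallback-import pattern; in the real case the candidate names would be `pyspark.pandas` and `pandas`:

```python
import importlib


def first_importable(candidates):
    """Return the first module in candidates that imports cleanly."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (candidates,))


# On Databricks 10.0+ this would resolve to pyspark.pandas, elsewhere pandas:
# ps = first_importable(["pyspark.pandas", "pandas"])
```

The rest of the program then uses `ps` uniformly, which is exactly the minimal-diff migration the thread is after.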
IgnacioCastinei
by New Contributor III
  • 10752 Views
  • 6 replies
  • 2 kudos

CLI Command <databricks fs cp> Not Uploading Files to DBFS

Hi all, So far I have been successfully using the CLI interface to upload files from my local machine to DBFS/FileStore/tables. Specifically, I have been using my terminal and the following command: databricks fs cp -r <MyLocalDataset> dbfs:/FileStor...

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

Hi @Ignacio Castineiras, if Arjun.kr's reply fully answered your question, would you be happy to mark their answer as best so that others can quickly find the solution? Please let us know if you are still having this issue.

5 More Replies
ExtreemTactical
by New Contributor
  • 541 Views
  • 0 replies
  • 0 kudos

1. DIFFERENT TYPES OF TACTICAL GEAR: HARDWARE. Optical hardware, for instance, cuffs, laser sights, optics, and night vision goggles accompany a hug...

1. DIFFERENT TYPES OF TACTICAL GEAR: HARDWARE. Optical hardware, for instance, cuffs, laser sights, optics, and night vision goggles accompany a huge group of features and capacities. Packs and pockets are made of climate-safe material planned to ke...

Adrien
by New Contributor
  • 2062 Views
  • 1 reply
  • 0 kudos

Creating a table like in SQL with Spark

Hi! I'm working on a project at my company on Databricks using Scala and Spark. I'm new to Spark and Databricks, so I would like to know how to create a table at a specific location (in my company's Delta Lake). In SQL plus some Delta features, I ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Hi @Adrien MERAT, I would like to share the following documentation, which provides examples of how to create Delta tables: Create Delta table link; Delta data types link.

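For reference, a Delta table pinned to an explicit path is created with `CREATE TABLE ... USING DELTA LOCATION ...`, and the statement can be rendered and run through `spark.sql` from Scala or Python alike. A small helper sketch; the table name, path, and columns are hypothetical:

```python
def create_delta_table_ddl(table, location, columns):
    """Render a CREATE TABLE statement for a Delta table at a fixed path.

    columns is a dict mapping column name to SQL type.
    """
    cols = ", ".join("%s %s" % (name, dtype) for name, dtype in columns.items())
    return (
        "CREATE TABLE IF NOT EXISTS %s (%s) USING DELTA LOCATION '%s'"
        % (table, cols, location)
    )


ddl = create_delta_table_ddl(
    "sales", "/mnt/delta/sales", {"id": "BIGINT", "amount": "DOUBLE"}
)
# On Databricks: spark.sql(ddl)
```

Because the location is explicit, the table is "external" in the classic sense: dropping it leaves the Delta files at the path intact.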
vasu_sethia
by New Contributor II
  • 3168 Views
  • 8 replies
  • 0 kudos

Spark adding NUL

Hi, I have a DF which contains a JSON string, so the value is like {"key": value, "anotherKey": anotherValue}. When I try to write the DF containing this string to CSV, Spark is adding a NUL character at the front of the line and at the end,...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Hard to tell without seeing the code, but could it be the separator for the CSV? You do have commas in the string, and the comma is the default separator for CSV.

7 More Replies
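The point about embedded commas can be checked in plain Python: with proper quoting, a JSON string survives the CSV round trip intact. A minimal sketch (the payload is hypothetical):

```python
import csv
import io

payload = '{"key": 1, "anotherKey": 2}'  # commas inside the field

# Write with quoting so the embedded commas are not treated as separators.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(["id1", payload])

# Read it back: the JSON string comes out unchanged.
buf.seek(0)
row = next(csv.reader(buf))
```

In Spark the analogous knobs are the `quote` and `escape` options on the CSV writer; NUL characters at the start and end of lines, on the other hand, often point to an encoding mismatch (e.g. the file being read as UTF-8 when it was written as UTF-16) rather than to the data itself.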
afshinR
by New Contributor III
  • 3768 Views
  • 4 replies
  • 3 kudos

Hi, I'd like to create a web form with displayHTML in a notebook cell, and when the user presses the post button, I'd like to write the content of my text...

Hi, I'd like to create a web form with displayHTML in a notebook cell, and when the user presses the post button, I'd like to write the content of the text area of my form back into the code cell of the notebook. Example: displayHTML ("""<form><textarea> u...

Latest Reply
jose_gonzalez
Databricks Employee
  • 3 kudos

Hi @afshin riahi, did Dan's response help you solve your question? If it did, can you mark it as the best answer? I will help move the post to the top so others can quickly find the solution.

3 More Replies
cig0
by New Contributor II
  • 5005 Views
  • 5 replies
  • 2 kudos

Resolved! AWS VPC peering connection: can't make Databricks VPC reach our services on the accepter VPC

Hi, we followed this document (https://docs.databricks.com/administration-guide/cloud-configurations/aws/vpc-peering.html) describing how to establish a connection between two (or more) VPCs in AWS, but so far we haven't been able to communicate with t...

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

Hi @Martin Cigorraga, if Huaming's reply fully answered your question, would you be happy to mark their answer as best so that others can quickly find the solution?

4 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group