Data Engineering

Forum Posts

MartinB
by Contributor III
  • 6094 Views
  • 4 replies
  • 3 kudos

Resolved! Interoperability Spark ↔ Pandas: can't convert Spark dataframe to Pandas dataframe via df.toPandas() when it contains datetime value in distant future

Hi, I have multiple datasets in my data lake that feature valid_from and valid_to columns indicating the validity of rows. If a row is currently valid, this is indicated by valid_to = 9999-12-31 00:00:00. Example: Loading this into a Spark dataframe works fine...

Example_SCD2
Latest Reply
shan_chandra
Honored Contributor III
  • 3 kudos

Currently, out-of-bound timestamps are not supported in PyArrow/pandas. Please refer to the associated JIRA issue below. https://issues.apache.org/jira/browse/ARROW-5359?focusedCommentId=17104355&page=com.atlassian.jira.plugin.system.issuetabpanels%3...

3 More Replies
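Until Arrow supports out-of-range values, the usual workaround is to cap the open-ended sentinel before calling df.toPandas(). A minimal pure-Python sketch of the idea, assuming the valid_to sentinel from the question; the cap value 2262-04-11 is the date portion of pandas' nanosecond Timestamp.max:

```python
from datetime import datetime

# pandas stores timestamps as int64 nanoseconds, so anything past
# 2262-04-11 23:47:16 overflows; cap the SCD2 "open-ended" sentinel first.
SENTINEL = datetime(9999, 12, 31)          # valid_to for currently valid rows
PANDAS_MAX_DATE = datetime(2262, 4, 11)    # safely inside pandas' range

def cap_valid_to(ts):
    """Replace out-of-range timestamps with a pandas-representable cap."""
    return min(ts, PANDAS_MAX_DATE)

print(cap_valid_to(SENTINEL))              # capped to 2262-04-11
print(cap_valid_to(datetime(2020, 1, 1)))  # in-range values pass through
```

On the Spark side the same idea would be a conditional column expression applied before conversion, e.g. replacing values greater than the cap with the cap.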
User16783852686
by New Contributor II
  • 1612 Views
  • 5 replies
  • 2 kudos

Resolved! Slow first time run, jar based jobs

When running a jar-based job, I've noticed that the first run always takes extra time to complete and subsequent runs finish faster. This behavior is reproducible on an interactive cluster. What's causing this? Is this e...

Latest Reply
User16783852686
New Contributor II
  • 2 kudos

@Sandeep Katta, this is a fat jar that does read-transform-write. @DD Sharma's response matches @Werner Stinckens' and my intuition that the second job was more efficient because the jar was already loaded. I would not have noticed this had the job run...

4 More Replies
BorislavBlagoev
by Valued Contributor III
  • 1255 Views
  • 4 replies
  • 7 kudos

Resolved! Visualization of Structured Streaming in job.

Does Databricks have a feature or a good pattern for visualizing data from Structured Streaming? Something like display in the notebook.

Latest Reply
BorislavBlagoev
Valued Contributor III
  • 7 kudos

I didn't know about that. Thanks!

3 More Replies
User16752246002
by New Contributor II
  • 1133 Views
  • 2 replies
  • 6 kudos

Resolved! New Bill Inmon Book, What are your thoughts?

Have you checked out the new Bill Inmon book, Building the Data Lakehouse? https://dbricks.co/3uxCXjO What were your thoughts if you read it?

Latest Reply
-werners-
Esteemed Contributor III
  • 6 kudos

The quality of the book depends on the audience, IMO. For people with no background in data warehousing it will be interesting to read. For others the book is too general and descriptive; the 'HOW do you do x' is missing.

1 More Replies
FMendez
by New Contributor III
  • 9177 Views
  • 4 replies
  • 7 kudos

Resolved! How can you mount an Azure Data Lake (gen2) using abfss and Shared Key?

I wanted to mount an ADLS Gen2 on Databricks and take advantage of the abfss driver, which should be better for large analytical workloads (is that even true in the context of DB?). Setting up OAuth is a bit of a pain, so I wanted to take the simpler approac...

Latest Reply
User16753724663
Valued Contributor
  • 7 kudos

Hi @Fernando Mendez, the document below will help you mount ADLS Gen2 using abfss:
https://docs.databricks.com/data/data-sources/azure/adls-gen2/azure-datalake-gen2-get-started.html
Could you please check if this helps?

3 More Replies
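For reference, the Shared Key route the question asks about reduces to one Spark configuration entry plus the abfss URI scheme. A sketch of the shape only; every bracketed value below is a placeholder, none of them come from the thread:

```
# Spark conf (cluster or notebook scope):
fs.azure.account.key.<storage-account>.dfs.core.windows.net = <access-key>

# Paths then use the abfss scheme:
abfss://<container>@<storage-account>.dfs.core.windows.net/<path>
```

Note that this grants full account-key access, which is why the linked guide steers toward OAuth/service principals for anything shared.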
del1000
by New Contributor III
  • 15090 Views
  • 6 replies
  • 3 kudos

Resolved! Is it possible to passthrough job's parameters to variable?

Scenario: I tried to run notebook_primary as a job with the same parameter map. This notebook is the orchestrator for notebooks_sec_1, notebooks_sec_2, notebooks_sec_3, and so on. I run them with the dbutils.notebook.run(path, timeout, arguments) function. So ho...

Latest Reply
del1000
New Contributor III
  • 3 kudos

@Balbir Singh, I'm a newbie in Databricks, but the manual says you can use a Python cell and transfer variables to a Scala cell via temp tables. https://docs.databricks.com/notebooks/notebook-workflows.html#pass-structured-data

5 More Replies
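The pass-structured-data pattern in that doc serializes values to JSON strings at the notebook boundary. A stdlib-only sketch of the round trip; dbutils itself exists only inside a Databricks notebook, so here it is only named in the comments, and all values are illustrative:

```python
import json

# What the orchestrator would hand to dbutils.notebook.run(path, timeout, arguments):
arguments = {"env": "dev", "run_date": "2021-11-01"}  # illustrative values

# Inside the child notebook, structured results come back the same way,
# via dbutils.notebook.exit(json.dumps(...)) returning a single string.
payload = json.dumps({"status": "ok", "rows_written": 1234})

# The orchestrator parses the child's exit value back into a dict:
result = json.loads(payload)
print(result["status"])  # → ok
```

The key point is that only strings cross the notebook boundary, so anything structured is encoded on one side and decoded on the other.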
User16789201666
by Contributor II
  • 842 Views
  • 2 replies
  • 0 kudos

What are some guidelines for migrating to DBR 7/Spark 3?

What are some guidelines for migrating to DBR 7/Spark 3?

Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

Please refer to the references below for switching to DBR 7.x. We have extended our DBR 6.4 support until December 2021.
DBR 6.4 extended support release notes: https://docs.databricks.com/release-notes/runtime/6.4x.html
Migration guide to DBR 7.x: htt...

1 More Replies
MGH1
by New Contributor III
  • 3080 Views
  • 8 replies
  • 3 kudos

Resolved! how to log the KerasClassifier model in a sklearn pipeline in mlflow?

I have a set of pre-processing stages in a sklearn `Pipeline` and an estimator which is a `KerasClassifier` (`from tensorflow.keras.wrappers.scikit_learn import KerasClassifier`). My overall goal is to tune and log the whole sklearn pipeline in `mlflo...

Latest Reply
shan_chandra
Honored Contributor III
  • 3 kudos

could you please share the full error stack trace?

7 More Replies
brij
by New Contributor III
  • 2996 Views
  • 8 replies
  • 3 kudos

Resolved! Databricks snowflake dataframe.toPandas() taking more space and time

I have two tables that are exactly the same (rows and schema). One table resides in an Azure SQL Server database and the other in a Snowflake database. We have some existing code that we want to migrate from Azure SQL to Snowflake, but when we try to create a panda...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

@Brijan Elwadhi​ - That's wonderful. Thank you for sharing your solution.

7 More Replies
krishnachaitany
by New Contributor II
  • 565 Views
  • 1 replies
  • 2 kudos

Spot Instances in Azure Databricks

The above screenshot is from an AWS Databricks cluster. Similarly, in Azure Databricks: is there a specific way to determine how many worker nodes are using spot instances and how many are using on-demand instances while a job is running or after it has completed? Likewise, ...

Compute level spot instances and on demand instances
Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hello! My name is Piper and I'm one of the community moderators. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first. Otherwise I will follow up with the team. Thanks for your p...

Databricks2005
by New Contributor II
  • 1380 Views
  • 4 replies
  • 3 kudos

Resolved! Cosine similarity between all rows pairwise on a dataset of 100million rows

Hello everyone, I am facing a performance issue while calculating cosine similarity in PySpark on a dataframe with around 100 million records. I am trying to do a cross self-join on the dataframe to calculate it. The executors all have the same number ...

Latest Reply
Sonal
New Contributor II
  • 3 kudos

Is there a way to hash the record attributes so that the cartesian join can be avoided? I work on record similarity and fuzzy matching, and we use a learning-based blocking algorithm which hashes the records into smaller buckets, and then the hashes are ...

3 More Replies
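The blocking idea from that reply can be sketched with plain hashing: bucket records by a cheap key so pairwise comparison happens only within buckets instead of across the full cross join. The first-character bucketing key below is a toy illustration, not the learned blocking the reply describes:

```python
from collections import defaultdict
from itertools import combinations

def block_by_key(records, key):
    """Group records into buckets; only intra-bucket pairs get compared."""
    buckets = defaultdict(list)
    for r in records:
        buckets[key(r)].append(r)
    return buckets

def candidate_pairs(buckets):
    # Emit pairs bucket by bucket; cross-bucket pairs are never generated.
    for bucket in buckets.values():
        yield from combinations(bucket, 2)

# Example: block strings by their first character (a toy blocking key).
records = ["apple", "apricot", "banana", "berry", "cherry"]
pairs = list(candidate_pairs(block_by_key(records, key=lambda s: s[0])))
print(pairs)  # [('apple', 'apricot'), ('banana', 'berry')]
```

Here 5 records yield 2 candidate pairs instead of the 10 a full cross join would produce; the same shrinkage is what makes the approach viable at 100M rows, provided the blocking key rarely separates true matches.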
Quan
by New Contributor III
  • 10180 Views
  • 9 replies
  • 6 kudos

Resolved! How to properly load Unicode (UTF-8) characters from table over JDBC connection using Simba Spark Driver

Hello all, I'm trying to pull table data from Databricks tables that contain foreign language characters in UTF-8 into an ETL tool using a JDBC connection. I'm using the latest Simba Spark JDBC driver available from the Databricks website. The issue i...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Can you try setting UseUnicodeSqlCharacterTypes=1 in the driver, and also make sure 'file.encoding' is set to UTF-8 in the JVM, and see if the issue still persists?

8 More Replies
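The driver property from that reply can go straight onto the JDBC URL. A sketch of the shape only; host, HTTP path, and token are placeholders, not values from the thread:

```
jdbc:spark://<host>:443/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>;UseUnicodeSqlCharacterTypes=1

# JVM side, so 'file.encoding' is UTF-8:
java -Dfile.encoding=UTF-8 -jar <etl-tool>.jar
```

With UseUnicodeSqlCharacterTypes=1 the Simba driver reports SQL_WVARCHAR/SQL_WCHAR instead of the narrow character types, which is what lets a downstream ETL tool keep the characters intact.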
Abhendu
by New Contributor II
  • 856 Views
  • 3 replies
  • 0 kudos

Resolved! CICD Databricks

Hi team, I was wondering if there is a document or step-by-step process to promote code in CI/CD across various environments of a code repository (Git/GitHub/Bitbucket/GitLab) with DBx support? [Without involving the code repository's merging capability of the ...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Please refer to this related thread on CI/CD in Databricks: https://community.databricks.com/s/question/0D53f00001GHVhMCAX/what-are-some-best-practices-for-cicd

2 More Replies
Kaniz
by Community Manager
  • 715 Views
  • 1 replies
  • 0 kudos
Latest Reply
Kaniz
Community Manager
  • 0 kudos

The differences are as follows: Pig operates on the client side of a cluster, whereas Hive operates on the server side. Pig uses the Pig Latin language, whereas Hive uses the HiveQL language. Pig is a procedural data-flow language, whereas Hive is a ...

Kaniz
by Community Manager
  • 700 Views
  • 1 replies
  • 0 kudos
Latest Reply
Kaniz
Community Manager
  • 0 kudos

To export all collections:
mongodump -d database_name -o directory_to_store_dumps
To restore them:
mongorestore -d database_name directory_backup_where_mongodb_tobe_restored
