Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

HarshaK
by New Contributor III
  • 20053 Views
  • 4 replies
  • 6 kudos

Resolved! partitionBy() on Delta Files

Hi All, I am trying to use partitionBy() on a Delta file in PySpark, using the command: df.write.format("delta").mode("overwrite").option("overwriteSchema","true").partitionBy("Partition Column").save("Partition file path") -- it doesn't seem to w...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hey @Harsha kriplani​ Hope you are well. Thank you for posting here. It is awesome that you found a solution. Would you like to mark Hubert's answer as best? It would be really helpful for the other members too. Cheers!

3 More Replies
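The accepted answer is not quoted in this listing, but the pattern from the question can be sketched in PySpark. Note that the argument to partitionBy must be the name of a real column in the DataFrame; the quoted "Partition Column" and path were placeholders, as are the names below:

```python
def write_partitioned_delta(df, path, partition_col="report_date"):
    """Sketch: write a DataFrame as a partitioned Delta table.
    The DataFrame, target path, and partition column are placeholders;
    partition_col must match an actual column in df."""
    (df.write
       .format("delta")
       .mode("overwrite")
       .option("overwriteSchema", "true")   # allow schema replacement on overwrite
       .partitionBy(partition_col)          # a real column name, not a label
       .save(path))
```

Called as, e.g., `write_partitioned_delta(df, "/mnt/datalake/traffic", "ReportDate")` on a cluster with Delta Lake available.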
Manoj
by Contributor II
  • 2657 Views
  • 2 replies
  • 5 kudos

Resolved! Do job clusters help jobs that are fighting for resources on an all-purpose cluster?

Hi Team, do job clusters help jobs that are fighting for resources on an all-purpose cluster? With a job cluster, the drawback I see is the creation of a cluster every time the job starts; it's taking 2 minutes to spin up the cluster. Instead of...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 5 kudos

@Manoj Kumar Rayalla​, you can set the job to use an all-purpose cluster (that feature was added recently). You can also use a pool to limit job cluster start time (though it can still take a moment).

1 More Replies
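The two options Hubert mentions can be sketched as Jobs API-style task specs. The field names follow the public Databricks Jobs API; the cluster/pool IDs, job name, and notebook path are placeholders:

```python
import json

# Option 1 (placeholder IDs): run the task on a running all-purpose cluster,
# avoiding the job-cluster spin-up delay entirely.
job_on_existing_cluster = {
    "name": "example-job",
    "tasks": [{
        "task_key": "main",
        "notebook_task": {"notebook_path": "/Repos/example/nb"},
        "existing_cluster_id": "1234-567890-abcde123",
    }],
}

# Option 2 (placeholder IDs): a job cluster that draws instances from a pool,
# which shortens (but does not eliminate) start-up time.
job_on_pooled_job_cluster = {
    "name": "example-job",
    "tasks": [{
        "task_key": "main",
        "notebook_task": {"notebook_path": "/Repos/example/nb"},
        "new_cluster": {
            "spark_version": "9.1.x-scala2.12",
            "instance_pool_id": "pool-0123456789abcdef",
            "num_workers": 2,
        },
    }],
}

print(json.dumps(job_on_existing_cluster, indent=2))
```

Either spec would be sent as the body of a job create/update call; the trade-off is isolation (job cluster) versus start-up latency (all-purpose cluster).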
LorenRD
by Contributor
  • 14338 Views
  • 9 replies
  • 13 kudos

Resolved! Is it possible to connect Databricks SQL with AWS Redshift DB?

I would like to know if it's possible to connect the Databricks SQL module not only with the internal metastore DB and tables from the Data Science and Engineering module, but also with an AWS Redshift DB, to run queries and create alerts.

Latest Reply
LorenRD
Contributor
  • 13 kudos

Hi @Kaniz Fatma​, I contacted customer support about this issue; they told me that this feature is not implemented yet but is on the roadmap with no ETA. It would be great if you could ping me back when it's possible to access Redshift tables from SQ...

8 More Replies
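While Databricks SQL could not query Redshift directly at the time of this thread, a Data Science & Engineering cluster can read a Redshift table over JDBC. A minimal sketch, assuming the Redshift JDBC driver is installed on the cluster; the host, database, table, and credentials are placeholders:

```python
def read_redshift_table(spark, table="public.sales"):
    """Sketch: read a Redshift table into a Spark DataFrame over JDBC.
    All connection details below are placeholders; the Redshift JDBC
    driver must be attached to the cluster."""
    jdbc_url = ("jdbc:redshift://example-cluster.abc123"
                ".us-east-1.redshift.amazonaws.com:5439/dev")
    return (spark.read
            .format("jdbc")
            .option("url", jdbc_url)
            .option("dbtable", table)       # table or a subquery alias
            .option("user", "example_user")
            .option("password", "example_password")
            .load())
```

The resulting DataFrame can then be saved as a Delta table, which Databricks SQL can query and alert on.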
gazzyjuruj
by Contributor II
  • 2620 Views
  • 1 replies
  • 4 kudos

Resolved! databricks_error_message: time out placing nodes

Hi, today I'm receiving this error: databricks_error_message: Timed out while placing nodes. What should be done to fix it?

Latest Reply
User16764241763
Databricks Employee
  • 4 kudos

Hello @Ghazanfar Uruj​, this can happen for a number of reasons. Could you please file a support case with details if the issue still persists?

AmanSehgal
by Honored Contributor III
  • 5175 Views
  • 2 replies
  • 10 kudos

Migrating data from delta lake to RDS MySQL and ElasticSearch

There are mechanisms (like DMS) to get data from RDS to the delta lake and store it in parquet format, but is it possible to do the reverse of this in AWS? I want to send data from the data lake to MySQL RDS tables in batch mode. And the next step is to send th...

Latest Reply
AmanSehgal
Honored Contributor III
  • 10 kudos

@Kaniz Fatma​ and @Hubert Dudek​ - writing to MySQL RDS is relatively simple. I'm still looking for ways to export data into Elasticsearch.

1 More Replies
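Both directions the poster asks about can be sketched from a Spark DataFrame. The MySQL write uses the standard JDBC sink; the Elasticsearch write assumes the elasticsearch-hadoop connector is attached to the cluster. All hosts, credentials, and names are placeholders:

```python
def write_to_mysql(df, table="example_table"):
    """Sketch: batch-write a DataFrame to a MySQL RDS table over JDBC.
    Connection details are placeholders."""
    (df.write
       .format("jdbc")
       .option("url", "jdbc:mysql://example-rds-host:3306/exampledb")
       .option("dbtable", table)
       .option("user", "example_user")
       .option("password", "example_password")
       .mode("append")
       .save())

def write_to_elasticsearch(df, index="example-index"):
    """Sketch: write a DataFrame to Elasticsearch via the
    elasticsearch-hadoop connector (must be installed)."""
    (df.write
       .format("org.elasticsearch.spark.sql")
       .option("es.nodes", "example-es-host")
       .option("es.port", "9200")
       .mode("append")
       .save(index))
```

Running these in batch (e.g. from a scheduled job that reads the Delta table first) would cover the "reverse of DMS" flow described in the question.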
kjoth
by Contributor II
  • 1798 Views
  • 0 replies
  • 0 kudos

Unmanaged Table - Newly added data directories are not reflected in the table

We have created an unmanaged table with partitions on the DBFS location, using SQL. Example: %sql CREATE TABLE EnterpriseDailyTrafficSummarytest (EnterpriseID String, ServiceLocationID String, ReportDate String) USING parquet PARTITIONED BY (ReportDate)...

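This thread has no replies, but a common cause is that the metastore does not learn about partition directories added outside of SQL. A hedged sketch of the usual fix (the table name comes from the post; the partition value is a placeholder):

```python
def refresh_partitions(spark, table="EnterpriseDailyTrafficSummarytest"):
    """Sketch: make newly added partition directories visible to an
    unmanaged, partitioned table by repairing the metastore's view."""
    # Scans the table location and registers any partitions it finds
    spark.sql(f"MSCK REPAIR TABLE {table}")

    # Alternative: register a single partition explicitly
    # (the ReportDate value here is a placeholder)
    # spark.sql(
    #     f"ALTER TABLE {table} ADD IF NOT EXISTS "
    #     "PARTITION (ReportDate='2022-01-01')"
    # )
```

After the repair, the new directories should appear in query results without recreating the table.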
Daba
by New Contributor III
  • 7229 Views
  • 3 replies
  • 5 kudos

Resolved! DLT + Auto Loader: where are the schema and checkpoint hidden?

Hi, I'm exploring the DLT with Auto Loader feature and wondering where the schema and checkpoint are hidden. I want to wipe these two to reset/reinitialize the flow, but unlike the "regular" Auto Loader, the checkpoint and schema folders are not there. Thank...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 5 kudos

@Alexander Plepler​, there is a storage option in pipeline settings - a path to a DBFS directory for storing checkpoints and tables created by the pipeline. Additionally, the Delta table is registered in the metastore, so the table schema is there.

2 More Replies
Karthik1
by New Contributor II
  • 3791 Views
  • 2 replies
  • 0 kudos

Datab

Hi Databricks Team, I took the Databricks Certified Spark Developer - Python exam on 15th April '22 and passed with an 81.66% score, but I still haven't received my certificate or badge. I need to submit my badge to my employer. Kindly release my badge. T...

sannycse
by New Contributor II
  • 2786 Views
  • 2 replies
  • 3 kudos

Resolved! Display password as shown in the example, using Spark Scala

The table has the following columns: First_Name, Last_Name, Department_Id, Contact_No, Hire_Date. Display the employee First_Name, the count of characters in the first name, and a password. The password should be the first 4 letters of the first name in lower case and the date and ...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 3 kudos

@SANJEEV BANDRU​, SELECT CONCAT(substring(First_Name, 0, 2), substring(Hire_Date, 0, 2), substring(Hire_Date, 3, 2)) AS password FROM table; If Hire_Date is a timestamp you may need to add date_format().

1 More Replies
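The rule as stated in the question (first four letters of the first name in lower case, followed by parts of the hire date) can also be sketched in plain Python. This assumes a 'dd-mm-yyyy' hire-date string, which the truncated post does not confirm:

```python
def make_password(first_name: str, hire_date: str) -> str:
    """Sketch of the password rule from the thread: first four letters of
    the first name in lower case, plus day and month of the hire date.
    Assumes hire_date is 'dd-mm-yyyy'; adjust the slicing otherwise."""
    day, month, _year = hire_date.split("-")
    return first_name[:4].lower() + day + month

print(make_password("Sanjeev", "15-04-2022"))  # sanj1504
```

The same logic maps onto Spark as a column expression with lower(), substring(), and concat() (or date_format() if Hire_Date is a timestamp, as the reply notes).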
Syed1
by New Contributor III
  • 28762 Views
  • 7 replies
  • 13 kudos

Resolved! Python Graph not showing

Hi, I have run this code: import matplotlib.pyplot as plt; import numpy as np; plt.style.use('bmh'); %matplotlib inline; x = np.array([5,7,8,7,2,17,2,9,4,11,12,9,6]); y = np.array([99,86,87,88,111,86,103,87,94,78,77,85,86]); p = plt.scatter(x, y). The display command r...

Latest Reply
User16725394280
Databricks Employee
  • 13 kudos

@Syed Ubaid​, I tried with 7.3 LTS and it works fine.

6 More Replies
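In a Databricks notebook, a matplotlib figure typically renders when the cell returns it or when it is passed to display(); assigning the scatter to a variable (as in the question's `p = plt.scatter(...)`) suppresses the automatic rendering in some setups. A sketch of the thread's example, restructured around an explicit Figure object:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts; unnecessary in a notebook
import matplotlib.pyplot as plt
import numpy as np

plt.style.use("bmh")
x = np.array([5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6])
y = np.array([99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86])

fig, ax = plt.subplots()
ax.scatter(x, y)
# In Databricks: display(fig)  -- or simply end the cell with `fig`
# In plain Python / Jupyter: plt.show()
```

Keeping a handle on `fig` also makes it easy to save the plot with `fig.savefig(...)`.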
Anonymous
by Not applicable
  • 12554 Views
  • 12 replies
  • 13 kudos

Resolved! Not able to run notebook even when cluster is running and databases/tables are not visible in "data" tab.

We are using Databricks on AWS. I am not able to run a notebook even when the cluster is running. When I run a cell, it returns "cancel". When I check the event log for the cluster, it shows "Metastore is down". I couldn't see any databases or tables that i...

Latest Reply
User16753725182
Databricks Employee
  • 13 kudos

This means the network is fine, but something in the Spark config is amiss. What are the DBR version and the Hive version? Please check if you are using a compatible version. If you don't specify any version, it will take 1.3 and you wouldn't have to us...

11 More Replies
p42af
by New Contributor
  • 8065 Views
  • 4 replies
  • 1 kudos

Resolved! rdd.foreachPartition() does nothing?

I expected the code below to print "hello" for each partition, and "world" for each record. But when I ran it, the code completed with no printouts of any kind. No errors either. What is happening here? %scala val rdd = spark.sparkContext.parallelize(S...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 1 kudos

It is lazily evaluated, so you need to trigger an action, I guess.

3 More Replies
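Another possible explanation, not mentioned in the reply: foreachPartition is itself an action, but the print() calls inside it run on the executors, so their output goes to the executor logs rather than the notebook. A PySpark sketch of both behaviors (the SparkSession is assumed to exist):

```python
def show_partition_contents(spark):
    """Sketch: contrast executor-side printing (invisible in the notebook)
    with driver-side printing after collecting the partitions."""
    rdd = spark.sparkContext.parallelize(["a", "b", "c", "d"], 2)

    # Runs on the executors: output lands in executor stdout logs,
    # not in the notebook, even though foreachPartition is an action.
    rdd.foreachPartition(
        lambda part: [print("world", rec) for rec in part]
    )

    # Runs on the driver: glom() groups each partition into a list,
    # collect() brings them back, so print() output appears here.
    for i, part in enumerate(rdd.glom().collect()):
        print("hello partition", i)
        for rec in part:
            print("world", rec)
```

The same applies to the Scala version in the question; checking the executor stdout in the Spark UI would show the "missing" lines.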
KC_1205
by Databricks Partner
  • 5123 Views
  • 2 replies
  • 3 kudos

Resolved! NumPy update 1.18-1.21

Hi all, I am planning to update the DBR to 9.1 LTS from 7.3 LTS; the corresponding NumPy version will be 1.19, and later I would like to update to 1.21 in the notebooks. At the cluster level I have the Spark version related to 9.1 LTS, which will support 1.19, and notebook ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 3 kudos

Hi @Kiran Chalasani​, according to the docs, DBR 7.3 LTS comes with NumPy 1.18.1 (https://docs.databricks.com/release-notes/runtime/7.3.html) and DBR 9.1 LTS comes with NumPy 1.19.2 (https://docs.databricks.com/release-notes/runtime/9.1.html). If you need t...

1 More Replies
RKNutalapati
by Valued Contributor
  • 6756 Views
  • 4 replies
  • 3 kudos

Resolved! Copy CDF enabled delta table from one location to another by retaining history

I am currently doing some use-case testing. I have to CLONE a Delta table with CDF enabled to a different S3 bucket. Deep clone doesn't meet the requirement, so I tried to copy the files using dbutils.fs.cp; it copies all the versions, but the tim...
