Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Vadim1, New Contributor III
  • 2604 Views
  • 3 replies
  • 1 kudos

Resolved! Connect from Databricks to an HBase HDInsight cluster.

Hi, I have a Databricks installation in Azure. I want to run a job that connects to HBase in a separate HDInsight cluster. What I tried: created a peering between the HBase cluster and Databricks vNets. I can ping the IPs of the HBase ZooKeeper nodes but I cannot acce...

Latest Reply
User16764241763
Honored Contributor

Vadim, Thank you for the response. Appreciate it.

  • 1 kudos
2 More Replies
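
A note on diagnosing this class of issue: ping succeeding while the client fails usually means routing across the peering works but the service port is blocked by an NSG or firewall. A minimal sketch, runnable from a Databricks notebook, that tests TCP reachability to ZooKeeper; the host IP is a placeholder and 2181 is only ZooKeeper's default client port:

```python
import socket

ZK_HOSTS = ["10.0.0.4"]  # placeholder: your HDInsight ZooKeeper node IPs
ZK_PORT = 2181           # default ZooKeeper client port

for host in ZK_HOSTS:
    try:
        # Unlike ICMP ping, a successful TCP connect proves the
        # NSG/firewall actually allows traffic on the service port.
        with socket.create_connection((host, ZK_PORT), timeout=5):
            print(f"{host}:{ZK_PORT} is reachable")
    except OSError as err:
        print(f"{host}:{ZK_PORT} is NOT reachable: {err}")
```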
by lizou, Contributor II
  • 1451 Views
  • 2 replies
  • 2 kudos

Merge into and data loss

I have a Delta table with 20M rows. The table is being updated dozens of times per day. MERGE INTO is used, and the merge worked fine for a year. But recently I began noticing that some data is deleted by the MERGE INTO without a delete being specified. Mer...

Latest Reply
lizou
Contributor II

I can't reproduce the issue anymore. For now, I am going to limit the number of MERGE INTO commands, as the intermediate data transformation does not need versioning history. I am going to try to use combined views for each step, and do a one-time merge i...

  • 2 kudos
1 More Replies
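
A sketch of the consolidation the reply describes: express the intermediate steps as views and run a single MERGE at the end. All table and view names here are hypothetical, and note that a MERGE with no DELETE clause should not remove target rows, which is what made the original behavior surprising:

```python
# Hypothetical names; the pattern is: stage all transformations in views,
# then apply one MERGE instead of many intermediate ones.
spark.sql("""
    CREATE OR REPLACE TEMP VIEW staged_changes AS
    SELECT s1.id, s2.payload
    FROM step1_view s1
    JOIN step2_view s2 USING (id)
""")

spark.sql("""
    MERGE INTO target_table t
    USING staged_changes s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```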
by shan_chandra, Databricks Employee
  • 5059 Views
  • 1 reply
  • 1 kudos

Resolved! Insert query fails with error "The query is not executed because it tries to launch ***** tasks in a single stage, while the maximum allowed tasks one query can launch is 100000;"

Py4JJavaError: An error occurred while calling o236.sql. : org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:201) at org.apache.spark.sql.execution.datasources.I...

Latest Reply
shan_chandra
Databricks Employee

Could you please increase the below config (at the cluster level) to a higher value, or set it to zero: spark.databricks.queryWatchdog.maxQueryTasks 0. Setting this Spark config alleviates the issue.

  • 1 kudos
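
For reference, the setting can also be applied from a notebook rather than the cluster UI. A minimal sketch; "0" disables the limit entirely, while a large finite value keeps some guard against runaway queries on shared clusters:

```python
# Raise the query watchdog's per-stage task ceiling, or set "0" to disable it.
spark.conf.set("spark.databricks.queryWatchdog.maxQueryTasks", "0")
```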
by PradeepRavi, New Contributor III
  • 34278 Views
  • 6 replies
  • 10 kudos

How do I prevent _success and _committed files in my write output?

Is there a way to prevent the _SUCCESS and _committed files in my output? It's a tedious task to navigate to all the partitions and delete the files. Note: the final output is stored in Azure ADLS.

Latest Reply
shan_chandra
Databricks Employee

Please find the below steps to remove the _SUCCESS, _committed, and _started files: spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false") to remove the success file; run the vacuum command multiple times until the _committed and _started files...

  • 10 kudos
5 More Replies
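
A sketch of those steps in notebook form. The output path is a placeholder, and VACUUM only removes files older than the retention window, which is consistent with the reply's note that several runs may be needed:

```python
# Stop writing _SUCCESS marker files on future writes.
spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false")

# For a Delta output location, VACUUM cleans up files that are no longer
# referenced by the transaction log (placeholder path below).
spark.sql("VACUUM delta.`abfss://container@account.dfs.core.windows.net/output/path`")
```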
by auser85, New Contributor III
  • 2223 Views
  • 3 replies
  • 1 kudos

dbutils.notebook.run() fails with job aborted but running the notebook individually works

I have a notebook that runs many notebooks in order, along the lines of: ```%python notebook_list = ['Notebook1', 'Notebook2'] for notebook in notebook_list: print(f"Now on Notebook: {notebook}") try: dbutils.notebook.run(f'{notebook}', 3600) e...

Latest Reply
auser85
New Contributor III

I found the problem. Even if a notebook creates and specifies a widget fully, the notebook run process, e.g., dbutils.notebook.run('notebook'), will not know how to use it. If I replace my widget with a non-widget provided value, the process works fine...

  • 1 kudos
2 More Replies
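
A runnable sketch of the loop from the excerpt, with the fix the poster landed on made explicit: pass values to the child notebook through the arguments map instead of relying on widget defaults. The notebook names and the run_date parameter are examples:

```python
notebook_list = ["Notebook1", "Notebook2"]

for notebook in notebook_list:
    print(f"Now on Notebook: {notebook}")
    try:
        # The third argument passes parameters explicitly to the child
        # notebook, which reads them with dbutils.widgets.get("run_date").
        dbutils.notebook.run(notebook, 3600, {"run_date": "2022-01-03"})
    except Exception as err:
        print(f"{notebook} failed: {err}")
```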
by pieseautoford, New Contributor
  • 527 Views
  • 0 replies
  • 0 kudos

www.pieseford.ro

Hi, my name is Jerry Maguire and I'm an automotive engineer at Piese Ford. Original Ford Fiesta parts 2008-2012.

by jwilliam, Contributor
  • 4548 Views
  • 4 replies
  • 2 kudos

Resolved! How to view the SQL Query History of traditional Databricks cluster (not Databricks SQL)?

I tried using the Spark Cluster UI, but the queries are truncated.

Latest Reply
walkermaster12
New Contributor II

In Apache Spark prior to 2.1, once a SQL query was run, there was no way to re-run it; all history was lost. Spark SQL introduced the "replay" functionality in Spark 2.1.0, enabling users to re-run any query they have already run. You can run a query...

  • 2 kudos
3 More Replies
by Phani1, Valued Contributor II
  • 3321 Views
  • 2 replies
  • 3 kudos

Resolved! Terminated with exception: Could not initialize class org.rocksdb.Options

Problem Statement: When running Delta Live Tables, it gives this error. Error Message: Could not initialize class org.rocksdb.Options org.apache.spark.sql.streaming.StreamingQueryException: Query cpicpg_us_tgt_amz_bronze [id = a42eec82-0ee8-41b4-9...

Latest Reply
Phani1
Valued Contributor II

Hi Team, thanks for your response. I faced this issue while executing the Delta Live Tables pipeline. Initially I chose the product edition Core and attached 4 notebooks to the pipeline, and each notebook has Bronze and Silver table creation. Duri...

  • 3 kudos
1 More Replies
by Phani1, Valued Contributor II
  • 5358 Views
  • 1 reply
  • 0 kudos

Execute tasks in parallel to process multiple files

Hi all, if we have multiple tasks under a job, how do we invoke a specific task under that job? Do we have an API to invoke a job's specific tasks instead of the whole job? Use case: When we receive multiple messages from the event hub, each underlying task in ...

Latest Reply
Phani1
Valued Contributor II

Thanks for your response. My question is: if we have multiple tasks in a job, how can we invoke a specific task? I can see an API to invoke the job but not a particular task in it. Kindly find the attachment for your reference.

  • 0 kudos
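
For context, the Jobs API exposes run-now at the job level; at the time of this thread there was no endpoint to trigger a single task inside a multi-task job. A sketch of triggering the whole job; the host, token, job ID, and parameters are all placeholders:

```python
import requests

HOST = "https://<workspace>.azuredatabricks.net"   # placeholder workspace URL
TOKEN = "<personal-access-token>"                  # placeholder token

resp = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 123, "notebook_params": {"message_type": "orders"}},
)
resp.raise_for_status()
print(resp.json())  # includes run_id, which can be polled for task status
```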
by klllmmm, New Contributor II
  • 4362 Views
  • 3 replies
  • 1 kudos

"No such file" error when reading a CSV file using pandas

I'm trying to read a CSV file saved in DBFS using the pandas read_csv function, but it gives a "No such file" error. %fs ls /FileStore/tables/ df = pd.read_csv('/dbfs/FileStore/tables/CREDIT_1.CSV') df = pd.read_csv('/dbfs:/FileStore/tables/CREDIT_1.CSV')...

Latest Reply
klllmmm
New Contributor II

Thanks to @Werner Stinckens for the answer. I understood that I have to use Spark to read data from clusters.

  • 1 kudos
2 More Replies
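
The fix behind the accepted answer: on a cluster, DBFS is exposed through a FUSE mount under /dbfs, so pandas needs the POSIX-style path with no dbfs: scheme, or the file can be read with Spark directly. A short sketch using the file name from the question:

```python
import pandas as pd

# pandas goes through the /dbfs FUSE mount (note: no "dbfs:" scheme).
df = pd.read_csv("/dbfs/FileStore/tables/CREDIT_1.CSV")

# Or read it with Spark, as the accepted answer suggests.
sdf = spark.read.csv("dbfs:/FileStore/tables/CREDIT_1.CSV",
                     header=True, inferSchema=True)
```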
by yopbibo, Contributor II
  • 5243 Views
  • 3 replies
  • 4 kudos

Resolved! Column name, starting with a number

Hi, I see it is possible to start a column name with a number, like `123_test`, and store it in a Hive table with a location in Delta. In that documentation https://www.stitchdata.com/docs/destinations/databricks-delta/reference#transformations--column-nami...

Latest Reply
yopbibo
Contributor II

Ha ha, yes, I am trying to find that page in the Databricks documentation again. If you have it, please share.

  • 4 kudos
2 More Replies
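
For readers landing here: Spark SQL accepts identifiers that start with a digit as long as they are backtick-quoted. A minimal sketch; the table name is hypothetical:

```python
# Backticks quote identifiers the parser would otherwise reject.
spark.sql("CREATE TABLE IF NOT EXISTS demo_tbl (`123_test` STRING) USING DELTA")
spark.sql("SELECT `123_test` FROM demo_tbl").show()
```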
by auser85, New Contributor III
  • 2497 Views
  • 2 replies
  • 4 kudos

Resolved! Cache Select on Temp Table?

How might I cache a temp table? The documentation suggests it is possible: https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-cache.html Consider the following on DBR 10.5 and Spark 3.2.1: ```%python df.createOrReplaceTempView("chan...

Latest Reply
auser85
New Contributor III

Thank you! The newer documentation does indeed work for me.

  • 4 kudos
1 More Replies
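
A sketch of one way this works on recent runtimes; the DataFrame and view name are placeholders (the name in the excerpt is truncated):

```python
df = spark.range(10)                     # stand-in for the thread's DataFrame
df.createOrReplaceTempView("tmp_view")   # placeholder view name

# Spark-level caching of the temp view (eager by default).
spark.sql("CACHE TABLE tmp_view")
spark.sql("SELECT count(*) FROM tmp_view").show()
```

The page linked in the post covers the disk (Delta) cache, which accelerates reads of Parquet/Delta files; CACHE TABLE above is the separate Spark cache, which is what applies to a temp view.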
by Vibhor, Contributor
  • 5903 Views
  • 5 replies
  • 2 kudos

Get current date as a string in Databricks using Scala

I want to get the current date in Scala as a string. For example, today's date is 3rd Jan, and I want to store it dynamically in a new variable as below; how do I get it? val currdate : String = "20220103". When I am using val currdate = Calendar.getInstance.ge...

Latest Reply
Anonymous
Not applicable

Hey @Vibhor Sethi, hope you are well! Thank you for posting your question and letting us know that you were able to resolve the issue. Would you be happy to mark it as the best solution? It would be really helpful for the other members too. Cheers!

  • 2 kudos
4 More Replies
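
The thread asks for Scala; one common approach (an assumption here, not necessarily the thread's accepted answer) is java.time: val currdate: String = java.time.LocalDate.now.format(java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd")). For consistency with the other sketches on this page, the Python equivalent:

```python
from datetime import datetime

# Format today's date as a yyyyMMdd string, e.g. "20220103".
currdate = datetime.now().strftime("%Y%m%d")
print(currdate)
```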
by SailajaB, Valued Contributor III
  • 3075 Views
  • 2 replies
  • 5 kudos

An error occurred while calling o303.mount: Operation failed: "This request is not authorized to perform this operation

Hi Team, we are unable to mount a storage container in the below scenario: we created a Gen2 storage account using a VNet and added firewall restrictions (i.e., allow trusted sources), and deployed the Databricks workspace without VNet injection. Is it possible to add Databricks pub...

Latest Reply
Anonymous
Not applicable

Hey @Sailaja B, hope everything is great! Does Hubert's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? Thanks!

  • 5 kudos
1 More Replies
by sheree, New Contributor III
  • 2745 Views
  • 3 replies
  • 1 kudos

Resolved! I can't access my account.

I can't access my account. This account was created today (not Community; after a 14-day trial it will be chargeable). When I try to access my account it gives me "Invalid email address or password. Note: Emails/usernames are case-sensitive". I tried to reset ...

Latest Reply
sheree
New Contributor III

I got a reset link from the community. Actually, the problem was with my username: it did not recognize a character within my username, which was my email ID.

  • 1 kudos
2 More Replies
