Data Engineering

Forum Posts

erigaud
by Honored Contributor
  • 0 Views
  • 0 replies
  • 0 kudos

Pass Dataframe to child job in "Run Job" task

Hello, I have a Job A that runs a Job B. Job A defines a globalTempView and I would like to somehow access it in the child job. Is that in any way possible? Can the same cluster be used for both jobs? If it is not possible, does someone know of a...
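
A minimal sketch of the handoff, assuming both jobs run on the same shared, non-serverless cluster (a global temp view lives in the cluster-scoped global_temp database and vanishes when that cluster terminates; all names below are illustrative):

# Job A (parent): publish the dataframe as a global temp view
df = spark.table("main.default.source_table")   # illustrative source
df.createOrReplaceGlobalTempView("shared_df")

# Job B (child, same cluster): read it back from global_temp
shared_df = spark.table("global_temp.shared_df")
shared_df.show()

If the child task runs on its own job cluster, the view will not be visible there; a more robust handoff is to write a Delta table in the parent and pass its name to the child as a job parameter.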

navneetkaur
by Visitor
  • 9 Views
  • 0 replies
  • 0 kudos

www.mca.gov.in- Ministry Of Corporate Affairs

Today, the entire private limited company registration process and other regulatory filings are paperless; documents are filed electronically through the MCA website and are processed at the Central Registration Centre (CRC). The Online Private Limite...

Hubert-Dudek
by Esteemed Contributor III
  • 4319 Views
  • 10 replies
  • 6 kudos

Databricks now supports event-driven workloads, especially for loading cloud files from external locations. This means you can save costs and resources by triggering your Databricks jobs only when new files arrive in your cloud storage instead of mou...
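
One way to pair the file-arrival trigger with Auto Loader so each triggered run processes only the newly arrived files and then stops (paths and table names are placeholders):

# Runs when the file-arrival trigger fires; Auto Loader tracks which
# files in the external location are new.
(spark.readStream
    .format("cloudFiles")                                  # Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://landing/_schemas/events")
    .load("s3://landing/events/")
    .writeStream
    .option("checkpointLocation", "s3://landing/_checkpoints/events")
    .trigger(availableNow=True)        # drain the backlog, then stop
    .toTable("main.bronze.events"))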

Latest Reply
adriennn
Contributor
  • 6 kudos

@daniel_sahal I get your point, but if for a scheduled trigger you can get all kinds of attributes of the trigger time (arguably, this is available for all the triggers), then why wouldn't the most important attribute of a file event be available ...

9 More Replies
Rene
by New Contributor
  • 109 Views
  • 2 replies
  • 1 kudos

Can we build IOT data trading platform by using Databricks?

I have an idea for sharing and trading IoT data streamed from many data sources on an incentive platform. I would appreciate it if you would discuss the idea with me. Thank you

Latest Reply
betty4920taylor
New Contributor
  • 1 kudos

Hello @Rene, Building an IoT data trading platform using Databricks is indeed a feasible and innovative idea. Databricks provides a unified analytics platform that can handle massive amounts of data processing and advanced analytics, which is essentia...

1 More Replies
Fresher
by New Contributor
  • 27 Views
  • 0 replies
  • 0 kudos

Query is taking too long to run

I have two clusters: cluster A (a Spark cluster) and cluster B (a SQL warehouse). Whenever I run a particular query using cluster B it works fine, but whenever I run the same query using cluster A it takes a long time and never shows the output.
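
A first diagnostic step rather than a fix: compare the physical plans the two engines produce for the same statement, and check the Spark UI on cluster A for skew or spill (the query text below is a placeholder):

query = "SELECT ..."                         # the slow statement
spark.sql(query).explain(mode="formatted")   # run on both clusters and compare

Differences such as a missing filter pushdown, a huge shuffle, or an accidental cross join on the Spark cluster usually narrow down where the time goes.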

stevenayers-bge
by New Contributor II
  • 41 Views
  • 1 reply
  • 1 kudos

Bug with enabling UniForm Data Format?

In the documentation for enabling Iceberg compatibility on Delta tables, it states that the minReaderVersion for IcebergCompatV1 and IcebergCompatV2 is 2 (https://docs.databricks.com/en/delta/uniform.html#requirements). However, when you run the REORG...
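
For reproduction, a sketch of the upgrade-and-inspect steps, run from Python for convenience (the table name is a placeholder):

# Enable UniForm via REORG, as in the linked docs
spark.sql("""
  REORG TABLE main.default.my_table
  APPLY (UPGRADE UNIFORM (ICEBERG_COMPAT_VERSION = 2))
""")

# Check which protocol versions the table actually ended up with
(spark.sql("DESCRIBE DETAIL main.default.my_table")
    .select("minReaderVersion", "minWriterVersion")
    .show())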

Latest Reply
daniel_sahal
Esteemed Contributor
  • 1 kudos

@stevenayers-bge I've just checked the Delta source code and you're right - the documentation states that minReaderVersion should be >= 2, but the source code upgrades it to 3: https://github.com/delta-io/delta/blob/78970abd96dfc0278e21c04cda442bb05ccde4...

angel_ba
by New Contributor II
  • 49 Views
  • 1 reply
  • 0 kudos

unity catalog system.access.audit lag

Hello, we have a Unity Catalog enabled workspace. To get the completion time of a pipeline that runs multiple times a day, I am checking the system.access.audit table. Comparing the completion time of the pipeline to the other pipelines' times, I am creat...
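
A sketch of the kind of lookup involved; the service and action names are assumptions to verify against your own audit log, and as the reply below notes, the table is not real-time:

completions = spark.sql("""
    SELECT event_time, request_params
    FROM system.access.audit
    WHERE service_name = 'jobs'
      AND action_name IN ('runSucceeded', 'runFailed')  -- assumed names
      AND event_date >= date_sub(current_date(), 1)
    ORDER BY event_time DESC
""")
completions.show(truncate=False)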

Latest Reply
daniel_sahal
Esteemed Contributor
  • 0 kudos

@angel_ba System tables are still in public preview, so there are some limitations, and one of them is a blocker for your use case: there is currently no support for real-time monitoring. Data is updated throughout the day. If you don't see a log for a recent eve...

Hubert-Dudek
by Esteemed Contributor III
  • 68 Views
  • 0 replies
  • 1 kudos

How much USD are you spending on Databricks?

Join two system tables and get exactly how much USD you are spending. The short version of the query: SELECT u.usage_date, u.sku_name, SUM(u.usage_quantity * p.pricing.default) AS total_spent, p.currency_code FROM system.billing....
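
A hedged reconstruction of the truncated query, since the join and grouping are cut off above; the price-validity condition is my reading of the system-tables schema, not necessarily the author's exact SQL:

spend = spark.sql("""
    SELECT u.usage_date,
           u.sku_name,
           SUM(u.usage_quantity * p.pricing.default) AS total_spent,
           p.currency_code
    FROM system.billing.usage AS u
    JOIN system.billing.list_prices AS p
      ON u.sku_name = p.sku_name
     AND u.usage_start_time >= p.price_start_time
     AND (p.price_end_time IS NULL OR u.usage_start_time < p.price_end_time)
    GROUP BY u.usage_date, u.sku_name, p.currency_code
""")
spend.display()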

John_Rotenstein
by New Contributor II
  • 3448 Views
  • 3 replies
  • 2 kudos

Retrieve job-level parameters in Python

Parameters can be passed to Tasks and the values can be retrieved with dbutils.widgets.get("parameter_name"). More recently, we have been given the ability to add parameters to Jobs. However, the parameters cannot be retrieved like Task parameters. Quest...

Latest Reply
cbern
Visitor
  • 2 kudos

@Kaniz This method works for Task parameters. Is there a way to access Job parameters that apply to the entire workflow, set under the Job parameters heading in the UI? I am able to read Job parameters in a different way from Task parameters using dynamic v...
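
A sketch of the dynamic-value-reference route the reply seems to describe; "my_param" is an illustrative job-parameter name:

# In the task configuration, point a task parameter at the job parameter:
#     name:  my_param
#     value: {{job.parameters.my_param}}
#
# The reference is resolved at run time, so the notebook reads the
# job-level value the same way as any task parameter:
value = dbutils.widgets.get("my_param")
print(value)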

2 More Replies
sasi2
by New Contributor II
  • 185 Views
  • 0 replies
  • 0 kudos

Connecting to MuleSoft from Databricks

Hi, is there any connectivity pipeline already established to access MuleSoft or Anypoint Exchange data using Databricks? I have seen many options to access Databricks data in MuleSoft, but can we read data from MuleSoft into Databricks? Please gi...
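
There is no built-in MuleSoft connector as far as I know; a common pattern is to call a REST API exposed by the Mule application and land the payload as a table. Everything below (endpoint, auth, names) is purely illustrative:

import requests

resp = requests.get(
    "https://my-mule-app.example.com/api/orders",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()

# Assumes the endpoint returns a JSON array of flat records
df = spark.createDataFrame(resp.json())
df.write.mode("append").saveAsTable("main.bronze.mulesoft_orders")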

jenshumrich
by New Contributor III
  • 274 Views
  • 2 replies
  • 0 kudos

Filter not using partition

I have the following code:

spark.sparkContext.setCheckpointDir("dbfs:/mnt/lifestrategy-blob/checkpoints")
result_df.repartitionByRange(200, "IdStation")
result_df_checked = result_df.checkpoint(eager=True)
unique_stations = result_df.select("IdStation...

Latest Reply
jenshumrich
New Contributor III
  • 0 kudos

Thanks a lot for your response. It seems the Filter is not pushed down, no?

station_df.explain()
== Physical Plan ==
*(1) Filter (isnotnull(IdStation#2678) AND (IdStation#2678 = 1119844))
+- *(1) Scan ExistingRDD[Date#2718,WindSpeed#2675,Tower_Accele...
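
The "Scan ExistingRDD" node is the checkpoint: checkpointing materializes the data as a plain RDD with no file or partition metadata, so there is nothing for Spark to prune and the filter runs over every row. Two ways around it, sketched with the thread's column name (everything else is illustrative); note also that repartitionByRange returns a new DataFrame, which the snippet above never assigns:

# Option 1: filter before cutting lineage, so only the needed rows
# are checkpointed
result_df = result_df.repartitionByRange(200, "IdStation")
station_df = result_df.filter("IdStation = 1119844").checkpoint(eager=True)

# Option 2: persist partitioned by the filter column; a predicate on
# IdStation then prunes partitions instead of scanning an RDD
(result_df.write
    .partitionBy("IdStation")
    .mode("overwrite")
    .saveAsTable("main.default.stations"))
spark.table("main.default.stations").filter("IdStation = 1119844").explain()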

1 More Replies
israelst
by New Contributor II
  • 307 Views
  • 2 replies
  • 0 kudos

DLT can't authenticate with kinesis using instance profile

When running my notebook using personal compute with an instance profile, I am indeed able to readStream from Kinesis. But adding it as a DLT pipeline with UC, while specifying the same instance profile in the DLT pipeline settings, causes a "MissingAuthenticatio...
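
One thing worth trying, sketched below: pass the role explicitly to the Kinesis source via its roleArn option instead of relying on the cluster-attached instance profile being picked up by the UC-managed DLT cluster. Stream name, region, and ARN are placeholders:

import dlt

@dlt.table
def kinesis_bronze():
    return (spark.readStream
        .format("kinesis")
        .option("streamName", "my-stream")          # placeholder
        .option("region", "eu-west-1")              # placeholder
        .option("initialPosition", "latest")
        .option("roleArn", "arn:aws:iam::123456789012:role/my-kinesis-role")
        .load())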

Data Engineering
Delta Live Tables
Unity Catalog
Latest Reply
Mathias_Peters
New Contributor II
  • 0 kudos

Hi, were you able to solve this problem? If so, what was the solution?

1 More Replies
nikhilkumawat
by New Contributor III
  • 4888 Views
  • 6 replies
  • 3 kudos

Resolved! Get file information while using "Trigger jobs when new files arrive" https://docs.databricks.com/workflows/jobs/file-arrival-triggers.html

I am currently trying to use the "Trigger jobs when new files arrive" feature in one of my projects. I have an S3 bucket in which files arrive on random days. So I created a job and set the trigger to the "file arrival" type. And within the no...

Latest Reply
adriennn
Contributor
  • 3 kudos

Looks like a major oversight not to be able to get the information on what file(s) have triggered the job. Anyway, the above explanations given by Anon read like the replies of ChatGPT, especially the scenario where a dataframe is passed to a trigger...
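
Since the trigger itself does not expose the triggering file names, a common workaround is to let Auto Loader track what is new and read each row's source path from the reserved _metadata column (paths are placeholders):

from pyspark.sql.functions import col

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/landing")
    .load("s3://my-bucket/landing/")
    .select("*", col("_metadata.file_path").alias("source_file"))
    .writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/landing")
    .trigger(availableNow=True)
    .toTable("main.bronze.landing"))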

5 More Replies
BerkerKozan
by New Contributor III
  • 46 Views
  • 0 replies
  • 0 kudos

Using AAD Spn on AWS Databricks

I use AWS Databricks, which has an SSO & SCIM integration with AAD. I generated an SPN in AAD, synced it to Databricks, and want to use this SPN with AAD client secrets to use the Databricks SDK. But it doesn't work. I don't want to generate another tok...
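
AAD client-secret authentication is only honored by Azure Databricks; on AWS, the SDK authenticates service principals with Databricks-native OAuth (M2M), which requires an OAuth secret generated for the synced service principal in Databricks. A sketch with placeholder host and credentials:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient(
    host="https://my-workspace.cloud.databricks.com",
    client_id="<service-principal-application-id>",
    client_secret="<databricks-oauth-secret>",   # not the AAD secret
)
print([c.cluster_name for c in w.clusters.list()])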
