Data Engineering
Forum Posts

by JensH (New Contributor III)
  • 2666 Views
  • 3 replies
  • 2 kudos

Resolved! How to pass parameters to a "Job as Task" from code?

Hi, I would like to use the new "Job as Task" feature, but I'm having trouble passing values. Scenario: I have a workflow job which contains 2 tasks. Task_A (type "Notebook"): read data from a table and, based on the contents, decide whether the workflow in ...

Data Engineering
job
parameters
workflow
Latest Reply
Walter_C
Valued Contributor II
  • 2 kudos

I found the following information: value is the value for this task value's key. This command must be able to represent the value internally in JSON format. The size of the JSON representation of the value cannot exceed 48 KiB. You can refer to https...
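
For reference, a minimal sketch of the dbutils.jobs.taskValues API that reply points at; the key name "decision" and the downstream read are illustrative, not from the thread:

    # In Task_A (Notebook task): publish a JSON-serializable value (JSON form <= 48 KiB)
    dbutils.jobs.taskValues.set(key="decision", value={"run_job": True})

    # In a downstream task of the same job run: read the value back.
    # debugValue is returned when the notebook runs interactively, outside a job.
    decision = dbutils.jobs.taskValues.get(
        taskKey="Task_A",
        key="decision",
        default=None,
        debugValue={"run_job": False},
    )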

2 More Replies
by Alessandro (New Contributor)
  • 405 Views
  • 1 reply
  • 0 kudos

Update jobs parameter, when running, from API

Hi, when a job is running, I would like to change its parameters with an API call. I know that I can set parameter values from the API when I start a job, and that I can update the default values if the job isn't running, but I didn't find an API c...

Latest Reply
Walter_C
Valued Contributor II
  • 0 kudos

No, there is currently no option to change parameters while the job is running. From the UI you will be able to modify them, but it won't affect the current run; it will be applied to the new job runs you trigger.
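
To illustrate the reply: new parameter values can only take effect on a run you trigger, for example via the Jobs 2.1 run-now endpoint. A minimal sketch, assuming a job that defines job-level parameters; host, token, and IDs are placeholders:

    import requests

    # Trigger a fresh run with new parameters; an in-flight run cannot be changed.
    resp = requests.post(
        "https://<workspace-host>/api/2.1/jobs/run-now",
        headers={"Authorization": "Bearer <token>"},
        json={"job_id": 123, "job_parameters": {"env": "prod"}},  # placeholders
    )
    resp.raise_for_status()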

by User16826992185 (New Contributor II)
  • 5713 Views
  • 2 replies
  • 3 kudos

Databricks Auto-Loader vs. Delta Live Tables

What is the difference between Databricks Auto Loader and Delta Live Tables? Both seem to manage ETL for you, but I'm confused about where to use one vs. the other.

Latest Reply
SteveL
New Contributor II
  • 3 kudos

You say "...__would__ be a piece..." and "...DLT __would__ pick up...". Is DLT built upon AL?
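
On that question: DLT is not literally built on Auto Loader, but the two compose. Auto Loader (cloudFiles) handles incremental file discovery, and DLT manages the pipeline around it. A minimal sketch, with an illustrative path and table name:

    import dlt

    @dlt.table
    def raw_events():
        # Auto Loader as the ingestion source inside a DLT pipeline
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/landing/events")  # illustrative path
        )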

1 More Replies
by Shivam_Pawar (New Contributor III)
  • 8138 Views
  • 11 replies
  • 4 kudos

Databricks Lakehouse Fundamentals Badge

I have successfully passed the test after completing the course with 95%. But I haven't received any badge from your side as promised. I have been provided with a certificate which looks fake by itself. I need to post my credentials on LinkedIn wi...

Latest Reply
Shruti_Prajapat
New Contributor II
  • 4 kudos

I'm facing a similar issue. I have completed the training and the quiz successfully and am able to download a course completion certificate. The certificate doesn't have any ID and looks very generic and fake. I have signed up for the https://credentials.d...

10 More Replies
by Maxi1693 (New Contributor II)
  • 1034 Views
  • 1 reply
  • 0 kudos

Resolved! Error java.lang.NullPointerException using Autoloader

Hi! I am pulling data from Blob storage into Databricks using Auto Loader. This process is working well for almost 10 resources, but for a specific one I am getting this error: java.lang.NullPointerException. It looks like this issue occurs when I connect to th...

Latest Reply
shan_chandra
Honored Contributor III
  • 0 kudos

@Maxi1693 - The value for schemaEvolutionMode should be a string. Could you please try changing .option("cloudFiles.schemaEvolutionMode", None) to .option("cloudFiles.schemaEvolutionMode", "none") and let us know? Refe...
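
A sketch of the suggested fix in context; the paths are illustrative, and the point is that the option value must be the string "none", not Python's None:

    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/checkpoints/schemas")  # illustrative
        .option("cloudFiles.schemaEvolutionMode", "none")  # a string, not None
        .load("/mnt/blob/source")  # illustrative
    )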

by FurqanAmin (New Contributor II)
  • 913 Views
  • 5 replies
  • 1 kudos

Logs not coming up in the UI - while being written to DBFS

I have a few spark-submit jobs that are being run via Databricks Workflows. I have configured logging in DBFS and specified a location in my GCS bucket. The logs are present in that GCS bucket for the latest run, but whenever I try to view them from th...

Data Engineering
logging
LOGS
ui
Latest Reply
Lakshay
Esteemed Contributor
  • 1 kudos

Yes, I meant to set it to None. Is the issue specific to any particular cluster? Or do you see the issue with all the clusters in your workspace?

4 More Replies
by Noman_Q (New Contributor II)
  • 589 Views
  • 2 replies
  • 1 kudos

Error Running Delta Live Pipeline.

Hi guys, I am new to Delta pipelines. I have created a pipeline, and now when I try to run it I get the error message "PERMISSION_DENIED: You are not authorized to create clusters. Please contact your administrator", even though I can crea...

Latest Reply
Noman_Q
New Contributor II
  • 1 kudos

Thank you for responding, @Palash01, and thanks for giving me the direction. To get around it, I had to get permission for "unrestricted cluster creation".

1 More Replies
by rt-slowth (Contributor)
  • 556 Views
  • 3 replies
  • 0 kudos

Why is the userIdentity anonymous?

Do you know why the userIdentity is anonymous in AWS CloudTrail's logs even though I have specified an instance profile?

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question? This...

2 More Replies
by joeyslaptop (New Contributor II)
  • 2021 Views
  • 5 replies
  • 2 kudos

How to add a column to a new table containing the original source filenames in Databricks.

If this isn't the right spot to post this, please move it or refer me to the right area. I recently learned about "_metadata.file_name". It's not quite what I need. I'm creating a new table in Databricks and want to add a USR_File_Name column cont...

Data Engineering
Databricks
filename
import
SharePoint
Upload
Latest Reply
Debayan
Esteemed Contributor III
  • 2 kudos

Hi, Could you please elaborate more on the expectation here? 
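
For what the poster seems to be after, a hedged sketch using the _metadata.file_name column mentioned in the question; the source directory and target table are illustrative:

    from pyspark.sql.functions import col

    df = (
        spark.read.format("csv")
        .option("header", "true")
        .load("/uploads/")  # illustrative source directory
        .withColumn("USR_File_Name", col("_metadata.file_name"))
    )
    df.write.saveAsTable("my_new_table")  # illustrative target table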

4 More Replies
by William_Scardua (Valued Contributor)
  • 242 Views
  • 1 reply
  • 0 kudos

Cluster types pricing

Hi guys, how can I get the pricing of cluster types (standard_D*, standard_E*, standard_F*, etc.)? I'm doing a study to decrease the price of my current cluster. Any ideas? Thank you!

Latest Reply
Lakshay
Esteemed Contributor
  • 0 kudos

Hey, you can use the pricing calculator here: https://www.databricks.com/product/pricing/product-pricing/instance-types

by JJ_LVS1 (New Contributor III)
  • 1306 Views
  • 4 replies
  • 1 kudos

FiscalYear Start Period Is not Correct

Hi, I'm trying to create a calendar dimension including a fiscal year with a fiscal start of April 1. I'm using the fiscalyear library and am setting the start to month 4, but it insists on setting April to month 7. Runtime: 12.1. My code snippet is: start_...

Latest Reply
DataEnginner
New Contributor II
  • 1 kudos

    import fiscalyear
    import datetime

    def get_fiscal_date(year, month, day):
        fiscalyear.setup_fiscal_calendar(start_month=4)
        v_fiscal_month = fiscalyear.FiscalDateTime(year, month, day).fiscal_month  # to get the fiscal month
        v_fiscal_quarter = fiscalyea...
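
A quick check of the idea above, assuming fiscal_month behaves as in the snippet: with start_month=4, April should report as fiscal month 1 rather than 7:

    import fiscalyear

    fiscalyear.setup_fiscal_calendar(start_month=4)
    print(fiscalyear.FiscalDateTime(2024, 4, 1).fiscal_month)  # expected: 1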

3 More Replies
by harlemmuniz (New Contributor II)
  • 455 Views
  • 2 replies
  • 1 kudos

Issue with Job Versioning with "Run Job" tasks and Deployments between environments

Hello, I am writing to bring to your attention an issue that we have encountered while working with Databricks, and to seek your assistance in resolving it. When running a Workflow job with a "Run Job" task and clicking "View YAML/JSON," we have ob...

Latest Reply
harlemmuniz
New Contributor II
  • 1 kudos

Hi @Kaniz, thank you for your fast response. However, the versioned JSON or YAML (via Databricks Asset Bundle) in the Job UI should also include the job_name, or we have to change it manually by replacing the job_id with the job_name. For this reason,...

1 More Replies
by 442027 (New Contributor II)
  • 443 Views
  • 1 reply
  • 0 kudos

Default delta log retention interval is different from the documentation?

The documentation here notes that the default delta log retention interval is 30 days. However, when I create checkpoints in the delta log to trigger the cleanup, historical records older than 30 days aren't removed; i.e. the current-day checkpoint is a...

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

You need to set TBLPROPERTIES ('delta.checkpointRetentionDuration' = '30 days') on the table.
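
Spelled out as a runnable statement; the table name is a placeholder, and delta.logRetentionDuration (default 30 days) is the property that governs how long delta log entries are kept, alongside the checkpoint retention the reply names:

    spark.sql("""
        ALTER TABLE my_schema.my_table  -- placeholder table name
        SET TBLPROPERTIES (
            'delta.checkpointRetentionDuration' = '30 days',
            'delta.logRetentionDuration' = '30 days'
        )
    """)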

by Mrk (New Contributor II)
  • 3753 Views
  • 4 replies
  • 3 kudos

Resolved! Insert or merge into a table with GENERATED IDENTITY

Hi, when I create an identity column using the GENERATED ALWAYS AS IDENTITY statement and I try to INSERT or MERGE data into that table, I keep getting the following error message: Cannot write to 'table', not enough data columns; target table has x col...

Latest Reply
Aboladebaba
New Contributor II
  • 3 kudos

You can run the INSERT by passing the subset of columns you want to provide values for... For example, your insert statement would be something like: INSERT INTO target_table_with_identity_col (<list-of-col-names-without-the-identity-column>) SELECT (<lis...
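
A concrete sketch of that shape, with illustrative column and source names; the identity column is simply omitted so Delta generates it:

    spark.sql("""
        INSERT INTO target_table_with_identity_col (col_a, col_b)
        SELECT col_a, col_b
        FROM source_table  -- illustrative source
    """)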

3 More Replies
by ilarsen (Contributor)
  • 947 Views
  • 3 replies
  • 1 kudos

Structured Streaming Auto Loader UnknownFieldsException and Workflow Retries

Hi. I am using Structured Streaming and Auto Loader to read JSON files, and it is automated by a Workflow. I am having difficulties with the job failing as schema changes are detected, but not retrying. Hopefully someone can point me in the right dir...

Latest Reply
ilarsen
Contributor
  • 1 kudos

Another point I have realised is that the task and the parent notebook (which then calls the child notebook that runs the Auto Loader part) do not fail if the schema-changed failure occurs during the Auto Loader process. It's the child notebook a...
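
One common workaround, sketched under stated assumptions (paths, table, and retry bound are illustrative, and the schema-change failure is matched by exception text rather than class): Auto Loader intentionally stops the stream when it detects new columns, so a driver-side loop restarts it a bounded number of times:

    # Restart the stream when Auto Loader fails on a detected schema change.
    for attempt in range(3):  # illustrative retry bound
        try:
            query = (
                spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "json")
                .option("cloudFiles.schemaLocation", "/checkpoints/schemas")  # illustrative
                .load("/landing/json")  # illustrative
                .writeStream
                .option("checkpointLocation", "/checkpoints/stream")  # illustrative
                .trigger(availableNow=True)
                .toTable("bronze.events")  # illustrative
            )
            query.awaitTermination()
            break  # stream finished cleanly
        except Exception as err:
            if "UnknownFieldsException" not in str(err):
                raise  # unrelated failure: surface it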

2 More Replies