Data Engineering
Forum Posts

joeyslaptop (New Contributor II)
  • 2057 Views
  • 5 replies
  • 2 kudos

How to add a column to a new table containing the original source filenames in Databricks.

If this isn't the right spot to post this, please move it or refer me to the right area. I recently learned about "_metadata.file_name", but it's not quite what I need. I'm creating a new table in Databricks and want to add a USR_File_Name column cont...

Labels: Data Engineering, Databricks, filename, import, SharePoint, Upload
Latest Reply from Debayan (Esteemed Contributor III)

Hi, could you please elaborate more on the expectation here?

4 More Replies
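A minimal PySpark sketch of what the original post appears to be after, assuming the files are read with Spark (the path, format, and table name here are hypothetical): the hidden _metadata column exposes file_name for file-based sources, and it can be persisted as an ordinary column such as USR_File_Name.

    import pyspark.sql.functions as F

    # Read the source files and capture each row's originating file name
    df = (spark.read
          .format("csv")
          .option("header", "true")
          .load("/Volumes/main/default/uploads/")
          .withColumn("USR_File_Name", F.col("_metadata.file_name")))

    # Persist the result with the file name stored as a regular column
    df.write.mode("append").saveAsTable("main.default.new_table")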
William_Scardua (Valued Contributor)
  • 244 Views
  • 1 reply
  • 0 kudos

Cluster types pricing

Hi guys, how can I get the pricing of cluster types (standard_D*, standard_E*, standard_F*, etc.)? I'm doing a study to reduce the cost of my current cluster. Any ideas? Thank you.

Latest Reply from Lakshay (Esteemed Contributor)

Hey, you can use the pricing calculator here: https://www.databricks.com/product/pricing/product-pricing/instance-types

JJ_LVS1 (New Contributor III)
  • 1342 Views
  • 4 replies
  • 1 kudos

FiscalYear Start Period Is not Correct

Hi, I'm trying to create a calendar dimension including a fiscal year with a fiscal start of April 1. I'm using the fiscalyear library and am setting the start to month 4, but it insists on setting April to month 7. Runtime 12.1. My code snippet is: start_...

Latest Reply from DataEnginner (New Contributor II)

import fiscalyear
import datetime

def get_fiscal_date(year, month, day):
    fiscalyear.setup_fiscal_calendar(start_month=4)
    v_fiscal_month = fiscalyear.FiscalDateTime(year, month, day).fiscal_month  # To get the Fiscal Month
    v_fiscal_quarter = fiscalyea...

3 More Replies
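For reference, a minimal self-contained check of the behaviour discussed in this thread, assuming the fiscalyear package (0.4 or later, where FiscalDateTime exposes fiscal_month):

    import fiscalyear

    # Fiscal year starts on April 1
    fiscalyear.setup_fiscal_calendar(start_month=4)

    d = fiscalyear.FiscalDateTime(2023, 4, 1)
    print(d.fiscal_month)    # expected 1: April is the first fiscal month
    print(d.fiscal_quarter)  # expected 1: April to June is the first fiscal quarter
    # With the library default of start_month=10, April would be fiscal month 7,
    # which is likely what the original post observed.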
harlemmuniz (New Contributor II)
  • 479 Views
  • 2 replies
  • 1 kudos

Issue with Job Versioning with “Run Job” tasks and Deployments between environments

Hello, I am writing to bring to your attention an issue that we have encountered while working with Databricks, and to seek your assistance in resolving it. When running a Workflow job with the task "Run Job" and clicking on "View YAML/JSON," we have ob...

Latest Reply from harlemmuniz (New Contributor II)

Hi @Kaniz, thank you for your fast response. However, the versioned JSON or YAML (via Databricks Asset Bundles) in the Job UI should also include the job_name, or we have to change it manually by replacing the job_id with the job_name. For this reason,...

1 More Reply
442027 (New Contributor II)
  • 450 Views
  • 1 reply
  • 0 kudos

Default delta log retention interval is different from the documentation?

It notes in the documentation here that the default delta log retention interval is 30 days. However, when I create checkpoints in the delta log to trigger the cleanup, historical records from 30 days aren't removed; i.e. the current day checkpoint is a...

Latest Reply from jose_gonzalez (Moderator)

You need to set the table property, e.g.: ALTER TABLE <table_name> SET TBLPROPERTIES ('delta.checkpointRetentionDuration' = '30 days')

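As a side note, a hedged sketch of the two retention-related Delta table properties (the table name is hypothetical; per the Delta Lake documentation, delta.logRetentionDuration controls how long commit history is kept and defaults to interval 30 days, while delta.checkpointRetentionDuration controls how long checkpoint files are kept):

    # Set both retention windows explicitly on an existing Delta table
    spark.sql("""
        ALTER TABLE main.default.my_table SET TBLPROPERTIES (
            'delta.logRetentionDuration' = 'interval 30 days',
            'delta.checkpointRetentionDuration' = 'interval 30 days'
        )
    """)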
Mrk (New Contributor II)
  • 3820 Views
  • 4 replies
  • 3 kudos

Resolved! Insert or merge into a table with GENERATED IDENTITY

Hi, when I create an identity column using the GENERATED ALWAYS AS IDENTITY statement and I try to INSERT or MERGE data into that table, I keep getting the following error message: Cannot write to 'table', not enough data columns; target table has x col...

Latest Reply from Aboladebaba (New Contributor II)

You can run the INSERT by passing the subset of columns you want to provide values for... for example, your insert statement would be something like: INSERT INTO target_table_with_identity_col (<list-of-column-names-without-the-identity-column>) SELECT (<lis...

3 More Replies
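A small runnable sketch of the pattern the reply describes (table and column names are hypothetical): the identity column is omitted from the INSERT column list, so Databricks generates its values.

    # Target table with an identity column
    spark.sql("""
        CREATE TABLE IF NOT EXISTS target_table_with_identity_col (
            id BIGINT GENERATED ALWAYS AS IDENTITY,
            name STRING,
            amount DOUBLE
        )
    """)

    # Insert only the non-identity columns; id is generated automatically
    spark.sql("""
        INSERT INTO target_table_with_identity_col (name, amount)
        SELECT name, amount FROM source_table
    """)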
ilarsen (Contributor)
  • 975 Views
  • 3 replies
  • 1 kudos

Structured Streaming Auto Loader UnknownFieldsException and Workflow Retries

Hi. I am using structured streaming and auto loader to read json files, and it is automated by Workflow.  I am having difficulties with the job failing as schema changes are detected, but not retrying.  Hopefully someone can point me in the right dir...

Latest Reply from ilarsen (Contributor)

Another point I have realised is that the task and the parent notebook (which then calls the child notebook that runs the auto loader part) do not fail if the schema-changed failure occurs during the auto loader process. It's the child notebook a...

2 More Replies
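For context, a hedged sketch of the Auto Loader setup being discussed (paths are hypothetical): with addNewColumns, the stream stops when new fields are detected and only picks up the evolved schema on the next start, so the task or job needs a retry policy to restart it.

    # Auto Loader with schema tracking and evolution on new columns
    df = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/Volumes/main/default/checkpoints/schema")
          .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
          .load("/Volumes/main/default/landing/json"))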
Aidonis (New Contributor III)
  • 7048 Views
  • 3 replies
  • 3 kudos

Copilot Databricks integration

Given Copilot has now been released as a paid-for product, do we have a timeline for when it will be integrated into Databricks? Our team uses VS Code a lot for Copilot and we think it would be super awesome to have it in our Databricks environment. Ou...

Latest Reply from prasad_vaze (New Contributor III)

@Vartika no, josephk didn't answer Aidan's question. It's about comparing Copilot with Databricks Assistant, and whether Copilot can be used in the Databricks workspace.

2 More Replies
xneg (Contributor)
  • 6922 Views
  • 12 replies
  • 9 kudos

PyPI library sometimes doesn't install during workflow execution

I have a workflow that runs on a job cluster and contains a task that requires the prophet library from PyPI: { "task_key": "my_task", "depends_on": [ { "task_key": "<...>...

Latest Reply from Vartika (Moderator)

Hey @Eugene Bikkinin, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your feed...

11 More Replies
Michael_Galli (Contributor II)
  • 630 Views
  • 1 reply
  • 0 kudos

How to add a Workflow File Arrival trigger on a file in a Unity Catalog Volume in Azure Databricks

I have a UC volume with XLSX files, and would like to run a workflow when a new file arrives in the Volume. I was thinking of a workflow file arrival trigger. But that does not work when I add the physical ADLS location of the root folder: External locat...

Latest Reply from Michael_Galli (Contributor II)

Worked it out with Microsoft: file arrival triggers only work with external volumes, not managed ones. https://learn.microsoft.com/en-us/azure/databricks/workflows/jobs/file-arrival-triggers

Bram (New Contributor II)
  • 2562 Views
  • 7 replies
  • 0 kudos

Configuration spark.sql.sources.partitionOverwriteMode is not available.

Dear all, in our current setup we are using dbt as a modeling tool for our data lakehouse. For a specific use case, we want to use the insert_overwrite strategy, where dbt will replace all data for a specific partition: Databricks configurations | dbt Dev...

Latest Reply from nad__ (New Contributor II)

Hi! I have the same issue with insert_overwrite on Databricks with a SQL Warehouse. Do you have any solution or updates? Or is it still not supported by Databricks?

6 More Replies
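For reference, a hedged sketch of dynamic partition overwrite on a regular job or all-purpose cluster (the table name is hypothetical); on SQL Warehouses this Spark configuration cannot be set, which is what the thread runs into.

    # Only the partitions present in df are replaced; other partitions are left untouched
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

    (df.write
       .mode("overwrite")
       .insertInto("main.default.sales_partitioned"))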
ShlomoSQM (New Contributor)
  • 452 Views
  • 2 replies
  • 0 kudos

Autoloader, toTable

"In autoloader there is the option ".toTable(catalog.volume.table_name)", I have an autoloder script that reads all the files from a source volume in unity catalog, inside the source I have two different files with two different schemas.I want to sen...

Latest Reply from Palash01 (Contributor III)

Hey @ShlomoSQM, looks like @shan_chandra suggested a feasible solution. Just to add a little more context, this is how you can achieve the same if you have a column that can help you identify what is type 1 and type 2: file_type1_stream = readStream.opti...

1 More Reply
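A hedged sketch of the approach the reply outlines (the discriminator column, paths, and table names are hypothetical): one Auto Loader source filtered into two streaming writes, each with its own checkpoint and target table.

    raw = (spark.readStream
           .format("cloudFiles")
           .option("cloudFiles.format", "json")
           .option("cloudFiles.schemaLocation", "/Volumes/main/default/chk/schema")
           .load("/Volumes/main/default/landing"))

    # Route rows to separate tables based on a discriminator column
    (raw.filter("file_type = 'type1'")
        .writeStream
        .option("checkpointLocation", "/Volumes/main/default/chk/type1")
        .toTable("main.default.type1_table"))

    (raw.filter("file_type = 'type2'")
        .writeStream
        .option("checkpointLocation", "/Volumes/main/default/chk/type2")
        .toTable("main.default.type2_table"))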
dbdude (New Contributor II)
  • 4481 Views
  • 3 replies
  • 0 kudos

AWS Secrets Works In One Cluster But Not Another

Why can I use boto3 to go to Secrets Manager to retrieve a secret with a personal cluster, but I get an error with a shared cluster? NoCredentialsError: Unable to locate credentials

Latest Reply from drii_cavalcanti (New Contributor III)

Hey @Szpila, have you found a solution for it? I am currently encountering the same issue.

2 More Replies
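For context, a minimal boto3 sketch of the call that fails (the secret name and region are hypothetical); on shared access mode clusters the instance profile credentials are typically not exposed to user code, which would be consistent with the NoCredentialsError above.

    import boto3

    client = boto3.client("secretsmanager", region_name="us-east-1")
    secret = client.get_secret_value(SecretId="my/app/secret")
    print(secret["SecretString"])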
Data_Engineeri7 (New Contributor)
  • 617 Views
  • 3 replies
  • 0 kudos

Global or environment parameters.

Hi all, I need help creating a utility file that can be used in a PySpark notebook. The utility file contains variables like database and schema names, and I need to pass these variables to other notebooks wherever I use the database and schema. Thanks

Latest Reply from KSI (New Contributor II)

You can use ${param_catalog}.schema.tablename. Pass the actual value into the notebook through a job parameter "param_catalog" or through a text widget called "param_catalog".

2 More Replies
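A hedged sketch of one common way to do this (the notebook layout and names are hypothetical): a small config notebook defines the shared values, consuming notebooks pull it in with %run, and the catalog name comes from a widget or job parameter.

    # --- config notebook (e.g. ./config) ---
    dbutils.widgets.text("param_catalog", "dev_catalog")  # can be overridden by a job parameter
    catalog = dbutils.widgets.get("param_catalog")
    schema = "sales"

    # --- consuming notebook, after running: %run ./config ---
    df = spark.table(f"{catalog}.{schema}.orders")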
ilarsen (Contributor)
  • 1460 Views
  • 2 replies
  • 1 kudos

Resolved! Schema inference with auto loader (non-DLT and DLT)

Hi. Another question, this time about schema inference and column types.  I have dabbled with DLT and structured streaming with auto loader (as in, not DLT).  My data source use case is json files, which contain nested structures. I noticed that in t...

Latest Reply from Kaniz (Community Manager)

Hi @ilarsen, certainly! Let's delve into the nuances of schema inference and column types in the context of Delta Live Tables (DLT) and structured streaming with auto loader. DLT vs. Structured Streaming: DLT (Delta Live Tables) is a managed servi...

1 More Reply
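As a footnote to this thread, a hedged sketch (paths are hypothetical): by default Auto Loader infers JSON columns as strings, and cloudFiles.inferColumnTypes asks it to infer numeric and nested struct types instead, which is usually the setting behind column-type differences between DLT and non-DLT pipelines like the one described here.

    # Auto Loader with column type inference enabled for nested JSON
    df = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/Volumes/main/default/chk/nested_schema")
          .option("cloudFiles.inferColumnTypes", "true")
          .load("/Volumes/main/default/landing/nested_json"))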