Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

malla_aayush
by Databricks Partner
  • 880 Views
  • 2 replies
  • 1 kudos

Resolved! Not able to find lab for Data Engineering Learning Path

I am not able to find the Data Engineering learning path. I opened the partner Databricks Academy lab, which redirected to Uplimit, where I also enrolled in an instructor-led course, but I am not able to see any labs.

Latest Reply
junaid-databrix
New Contributor III
  • 1 kudos

You are right, the self-paced e-learning courses do not include any labs. However, they are available in the instructor-led courses on Uplimit. I recently enrolled in one, and here is how it worked for me: 1. On the Uplimit portal, enroll for an upc...

1 More Replies
susanne
by Databricks Partner
  • 1741 Views
  • 3 replies
  • 0 kudos

Resolved! Authentication failure Lakeflow SQL Server Ingestion

Hi all, I am trying to create a Lakeflow Ingestion Pipeline for SQL Server, but I am running into the following authentication error when using my Databricks database user for the connection: Gateway is stopping. Authentication failure while obtaining ...

Latest Reply
susanne
Databricks Partner
  • 0 kudos

Hi @szymon_dybczak, thanks a lot, that did the trick!

2 More Replies
Alena
by New Contributor II
  • 795 Views
  • 1 reply
  • 0 kudos

Programmatically set minimum workers for a job cluster based on file size?

I’m running an ingestion pipeline with a Databricks job: a file lands in S3, a Lambda is triggered, and the Lambda runs a Databricks job. The incoming files vary a lot in size, which makes processing times vary as well. My job cluster has autoscaling enabled, b...

Latest Reply
kerem
Contributor
  • 0 kudos

Hi Alena, the Jobs API has update functionality to do that: https://docs.databricks.com/api/workspace/jobs_21/update. If for some reason you can’t update your pipeline before you trigger it, you can also consider creating a new job with desired c...

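The update call kerem points at can be sketched as follows. This is a minimal sketch, assuming the Lambda maps file size to a worker count before triggering the run; the thresholds and `job_cluster_key` are illustrative, and since `new_settings.job_clusters` replaces the existing array, a real payload must include the full cluster spec (spark_version, node_type_id, ...), elided here.

```python
def min_workers_for(file_size_bytes):
    """Map incoming file size to a minimum worker count.
    Thresholds are illustrative; tune them for your workload."""
    if file_size_bytes < 100 * 1024**2:   # under 100 MiB
        return 2
    if file_size_bytes < 1024**3:         # under 1 GiB
        return 4
    return 8

def build_update_payload(job_id, cluster_key, file_size_bytes, max_workers=16):
    """Body for POST /api/2.1/jobs/update. Only the fields present in
    new_settings are changed; everything else on the job is kept."""
    return {
        "job_id": job_id,
        "new_settings": {
            "job_clusters": [{
                "job_cluster_key": cluster_key,
                "new_cluster": {
                    # Real payloads need the full cluster spec here.
                    "autoscale": {
                        "min_workers": min_workers_for(file_size_bytes),
                        "max_workers": max_workers,
                    },
                },
            }],
        },
    }
```

The Lambda would POST this payload to the workspace's `/api/2.1/jobs/update` endpoint (or send it via the SDK) before calling run-now.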
Nick_Pacey
by New Contributor III
  • 956 Views
  • 2 replies
  • 0 kudos

Question on best method to deliver Azure SQL Server data into Databricks Bronze and Silver.

Hi, we have an Azure SQL Server (replicating from an on-prem SQL Server) whose data is required in Databricks bronze and beyond. This database has 100s of tables that are all required. Size of tables will vary from very small up to the biggest tables 1...

Latest Reply
kerem
Contributor
  • 0 kudos

Hey Nick, have you tried the SQL Server connector with Lakeflow Connect? It should provide a native connection to your SQL Server, potentially allowing for incremental updates and CDC setup. https://learn.microsoft.com/en-us/azure/databricks/ingestion...

1 More Replies
yit
by Databricks Partner
  • 604 Views
  • 1 reply
  • 0 kudos

Unable to Upcast DECIMAL Field in Autoloader

I’m using Autoloader to read Parquet files and write them to a Delta table. I want to enforce a schema in which Column1 is defined as DECIMAL(10,2). However, in the Parquet files being ingested, Column1 is defined as DECIMAL(8,2). When Autoloader read...

Latest Reply
kerem
Contributor
  • 0 kudos

Hi Yit, to potentially simplify your issue, why not read this column as a String in your stream and then cast it to DECIMAL(10, 2) afterwards? That should eliminate the rescue behaviour. Kerem Durak

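kerem's workaround can be sketched like this (the column name, source path, and reader options are taken as illustrative): hint the column as STRING so Auto Loader does not rescue it on the precision mismatch, then cast to the wider decimal.

```python
# Schema hint forces Column1 to STRING on read, sidestepping the
# DECIMAL(8,2) vs DECIMAL(10,2) mismatch and the rescue behaviour.
schema_hint = "Column1 STRING"
target_type = "decimal(10,2)"

# In a notebook this would look like (not executed here):
# from pyspark.sql.functions import col
# df = (spark.readStream.format("cloudFiles")
#         .option("cloudFiles.format", "parquet")
#         .option("cloudFiles.schemaHints", schema_hint)
#         .load("/path/to/parquet")
#         .withColumn("Column1", col("Column1").cast(target_type)))
```

The cast from DECIMAL(8,2) values stored as strings up to DECIMAL(10,2) is lossless, since the target precision strictly contains the source.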
ManojkMohan
by Honored Contributor II
  • 690 Views
  • 2 replies
  • 0 kudos

Resolved! Compute kind SERVERLESS_REPL_VM is not allowed to use cluster scoped libraries.

I have an S3 URI 's3://salesforcedatabricksorders/orders_data.xlsx'. I have created a connector between Databricks and Salesforce. I am first getting the orders_data.xlsx into the Databricks layer to perform basic transformations on it and then send it to Sales...

ManojkMohan_0-1754430186158.png
Latest Reply
kerem
Contributor
  • 0 kudos

Hello, I’ve come across the same issue reading an Excel file into a PySpark dataframe via serverless compute. As the error states, with serverless you cannot install a cluster-scoped library, so you have to use notebook-scoped libraries (%pip install…)...

1 More Replies
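The notebook-scoped alternative kerem mentions could look like the sketch below; the library names and the volume path are assumptions for an .xlsx read, not from the thread.

```python
# On serverless compute, cluster-scoped libraries are rejected, so the
# install happens per-notebook via a %pip cell at the top of the notebook:
pip_cell = "%pip install openpyxl pandas"

# After the %pip cell (not executed here):
# import pandas as pd
# pdf = pd.read_excel("/Volumes/main/default/files/orders_data.xlsx")
# df = spark.createDataFrame(pdf)
```

The `%pip` magic scopes the install to the notebook's Python environment, which is the supported pattern on serverless.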
Pratikmsbsvm
by Contributor
  • 1397 Views
  • 1 reply
  • 1 kudos

Resolved! How to Create Metadata driven Data Pipeline in Databricks

I am creating a Data Pipeline as shown below. 1. Files from multiple input sources arrive in their respective folders in the bronze layer. 2. Using Databricks to perform transformation and load the transformed data to Azure SQL, and also to ADLS Gen2 silver (not shown ...

Pratikmsbsvm_0-1754408926145.png
Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @Pratikmsbsvm, it's a totally realistic requirement. In fact, you can find many articles that suggest approaches to designing such a control table. Take for example the following article: https://medium.com/dbsql-sme-engineering/a-primer-for-metadat...

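A minimal sketch of the control-table idea: a table of per-source metadata rows drives the pipeline instead of hard-coded notebooks. The layout, column names, and values here are illustrative assumptions, not from the thread.

```python
# Control table rows; in practice this would be a Delta table queried
# at the start of each run, not a Python literal.
CONTROL_TABLE = [
    {"source_path": "/bronze/sales/", "target": "silver.sales",
     "load_type": "incremental", "enabled": True},
    {"source_path": "/bronze/hr/", "target": "silver.hr",
     "load_type": "full", "enabled": False},
]

def plan_loads(control_rows):
    """Return (source, target, load_type) triples for every enabled row.
    The driver notebook would loop over these and dispatch each load."""
    return [
        (r["source_path"], r["target"], r["load_type"])
        for r in control_rows
        if r["enabled"]
    ]
```

Adding a new source then becomes a row insert into the control table rather than a code change, which is the main payoff of the metadata-driven pattern.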
Sainath368
by Contributor
  • 1019 Views
  • 1 reply
  • 1 kudos

Resolved! How to Retrieve the spark.statistics.createdAt When Statistics Were Last Updated in Databricks?

Hi everyone, I regularly (once a week) run the ANALYZE TABLE COMPUTE STATISTICS command on all my tables in Databricks to keep statistics up to date for query optimization. In the Spark table UI catalog, I can see some statistics metadata like spark.st...

Sainath368_0-1754309683688.png
Latest Reply
Advika
Community Manager
  • 1 kudos

Hello @Sainath368! sql.statistics.createdAt reflects the epoch time when statistics were created. Unfortunately, there's no direct command available to check when the statistics were last updated. As a workaround, you can manually set the current tim...

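The workaround Advika describes (stamping the time yourself right after ANALYZE) could look like this; the custom property name is an assumption.

```python
import time

def stamp_stats_time_sql(table_name):
    """Build an ALTER TABLE statement recording, as epoch millis, when
    statistics were last computed (property name 'stats.last_analyzed'
    is a made-up convention, not a Spark-reserved key)."""
    ts = int(time.time() * 1000)
    return (f"ALTER TABLE {table_name} SET TBLPROPERTIES "
            f"('stats.last_analyzed' = '{ts}')")

# In the weekly job (not executed here):
# spark.sql("ANALYZE TABLE my_catalog.my_schema.my_table COMPUTE STATISTICS")
# spark.sql(stamp_stats_time_sql("my_catalog.my_schema.my_table"))
```

The property then shows up alongside the other table properties in DESCRIBE EXTENDED, giving a queryable "last analyzed" marker.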
Itai_Sharon
by New Contributor II
  • 1505 Views
  • 3 replies
  • 1 kudos

dbutils.notebook.run() returns a general error instead of the specific one

Hi, in a Python file I'm running a specific notebook using dbutils.notebook.run(). The notebook is failing, but I'm getting a general error log instead of the real, specific log. When I run the notebook directly, I get the specific error log. gen...

Latest Reply
Itai_Sharon
New Contributor II
  • 1 kudos

@Vinay_M_R BTW, when trying to run a job using the Databricks API, I encounter the same issue (a general "FAILED: Workload failed"): from databricks.sdk import WorkspaceClient; client = WorkspaceClient(); run = client.jobs.run_now(job_id). Error message: state_...

2 More Replies
Sadam97
by New Contributor III
  • 876 Views
  • 2 replies
  • 1 kudos

databricks job cancel does not wait for termination of streaming tasks

We have created Databricks jobs, and each has multiple tasks. Each task is a 24/7 running stream with checkpointing enabled. We want it to be stateful when we cancel and rerun the job, but it seems that when we cancel the job run, it kills the parent process a...

Latest Reply
Vidhi_Khaitan
Databricks Employee
  • 1 kudos

If the “reporting” layer is essentially micro-batching over bounded backlogs, run it with availableNow (or a scheduled batch job) so each run is naturally bounded and exits cleanly on its own, no manual cancel. This greatly reduces chances of partial...

1 More Replies
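The availableNow pattern Vidhi_Khaitan suggests, sketched below (the table name and checkpoint path are illustrative): the query drains the backlog available at start, commits to the same checkpoint, and exits on its own instead of needing a cancel.

```python
# trigger(availableNow=True) bounds the run: it processes everything
# available when the query starts and then stops cleanly, unlike a
# 24/7 processingTime trigger that must be cancelled.
trigger_kwargs = {"availableNow": True}

# In a notebook (not executed here):
# (df.writeStream
#    .format("delta")
#    .option("checkpointLocation", "/chk/reporting")
#    .trigger(availableNow=True)
#    .toTable("reporting.daily"))
```

Because the checkpoint is reused across runs, each scheduled run picks up exactly where the previous one finished, preserving statefulness without a long-lived stream.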
Srajole
by New Contributor
  • 940 Views
  • 1 reply
  • 1 kudos

Write data issue

My Databricks job is completing successfully, but my data is not written into the target table. The source path is correct and everything else is correct, but I am not sure why the data is not written into the Delta table.

Latest Reply
Vidhi_Khaitan
Databricks Employee
  • 1 kudos

Hi @Srajole, there are a number of possibilities as to why the data is not being written into the table: you’re writing to a path different from the table’s storage location, or using a write mode that doesn’t replace data as expected. spark.sql("DESCR...

dbr_data_engg
by New Contributor III
  • 2262 Views
  • 2 replies
  • 0 kudos

Using Databricks Bladebridge or Lakebridge for SQL Migration

Getting a transpile error while executing the command for Databricks Bladebridge or Lakebridge: databricks labs lakebridge transpile --source-dialect mssql --input-source "<Path>/sample.sql" --output-folder "<Path>\output". Error: TranspileError(code=FAILURE, ...

Latest Reply
Abhimanyu
Databricks Partner
  • 0 kudos

Did you find a solution?

1 More Replies
juanjomendez96
by Contributor
  • 1507 Views
  • 2 replies
  • 3 kudos

Resolved! Best practices for compute usage

Hello there! I am writing this open message to learn how you are using compute in your work cases. Currently, in my company, we have multiple compute instances that can be differentiated into two main types: clusters with a large instance for b...

Latest Reply
radothede
Valued Contributor II
  • 3 kudos

Hello @juanjomendez96, to my best knowledge and experience, an autoscaled shared cluster (using smaller instances) works well for most second-case scenarios (clusters for ad-hoc/development team usage). This approach allows you to reuse the resources across t...

1 More Replies
VicS
by Databricks Partner
  • 1695 Views
  • 1 reply
  • 1 kudos

Resolved! How to install SAP JDBC on job cluster via asset bundles

I'm trying to use the SAP JDBC driver to read data in my Spark application, which I deploy via asset bundles with job computes. I was able to install the SAP JDBC driver on a general-purpose cluster by adding the jar (com.sap.cloud.db.jdbc:ngdbc:2.25.9...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @VicS, to add a Maven package to a job task definition, in libraries, specify a maven mapping for each Maven package to be installed. For each mapping, specify the following: resources: jobs: my_job: # ... tasks: - task_...

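The truncated snippet in the reply expands to roughly this bundle fragment (job, task, and cluster key names are illustrative; the driver version is left as a placeholder since it is truncated in the post — the essential part is the maven mapping under libraries):

```yaml
resources:
  jobs:
    my_job:
      tasks:
        - task_key: read_sap
          job_cluster_key: main
          libraries:
            - maven:
                coordinates: "com.sap.cloud.db.jdbc:ngdbc:<version>"
```

On deploy, the bundle installs the Maven artifact onto the job cluster for that task, mirroring what the library UI does on a general-purpose cluster.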