Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

zmsoft
by New Contributor III
  • 365 Views
  • 5 replies
  • 6 kudos

Azure Synapse vs Databricks

Hi there, I would like to know the difference between Azure Databricks and Azure Synapse. For which use cases is Databricks appropriate, and for which is Synapse appropriate? What are the differences in their functions? What are the differences in thei...

Latest Reply
thelogicplus
  • 6 kudos

Share your use case and I will suggest the technology differences and which could be beneficial for you. I love Databricks due to the many awesome features that help SQL developers and programmers (Python/Scala) solve their use cases on Databricks. But if you ...

4 More Replies
ayush19
by New Contributor III
  • 8 Views
  • 0 replies
  • 0 kudos

Running a jar on Databricks shared cluster using Airflow

Hello, I have a requirement to run a jar already installed on a Databricks cluster. It needs to be orchestrated using Apache Airflow. I followed the docs for the operator which can be used to do so: https://airflow.apache.org/docs/apache-airflow-provid...

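For readers with the same requirement, here is a minimal, hedged sketch of triggering a JAR's main class on an existing cluster with the provider's DatabricksSubmitRunOperator. The cluster ID, main class, and connection ID are placeholders, and the JAR is assumed to be installed on the cluster as a library; shared access-mode clusters may impose extra restrictions on JAR tasks.

from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

# Placeholder cluster ID, class name, and connection; adjust for your workspace.
with DAG("run_jar_on_databricks", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    run_jar = DatabricksSubmitRunOperator(
        task_id="run_jar",
        databricks_conn_id="databricks_default",
        json={
            "run_name": "airflow-jar-run",
            "existing_cluster_id": "1234-567890-abcdefgh",
            # Assumes the jar is already installed on the cluster as a library.
            "spark_jar_task": {"main_class_name": "com.example.Main"},
        },
    )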
sanjay
by Valued Contributor II
  • 16384 Views
  • 21 replies
  • 18 kudos

Resolved! How to limit number of files in each batch in streaming batch processing

Hi, I am running a batch job which processes incoming files. I am trying to limit the number of files in each batch, so I added the maxFilesPerTrigger option. But it's not working; it processes all incoming files at once. (spark.readStream.format("delta").lo...

Latest Reply
mjedy7
Visitor
  • 18 kudos

Hi @Sandeep, can we use spark.readStream.format("delta").option("maxBytesPerTrigger", "50G").load(silver_path).writeStream.option("checkpointLocation", gold_checkpoint_path).trigger(availableNow=True).foreachBatch(foreachBatchFunction).start()

20 More Replies
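For context, a minimal sketch of the read-side option under discussion, using the thread's own placeholder names (silver_path, gold_checkpoint_path, foreachBatchFunction). maxFilesPerTrigger must be set on the stream reader, and with trigger(availableNow=True) the backlog is split into batches of at most that many files; Trigger.Once ignores the cap.

(spark.readStream.format("delta")
    .option("maxFilesPerTrigger", 100)   # cap files per micro-batch
    .load(silver_path)
    .writeStream
    .option("checkpointLocation", gold_checkpoint_path)
    .trigger(availableNow=True)          # honors the cap; .trigger(once=True) does not
    .foreachBatch(foreachBatchFunction)
    .start())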
Jefke
by New Contributor II
  • 89 Views
  • 5 replies
  • 0 kudos

Resolved! Cloud_files function

Hi, I'm fairly new to Databricks, and in some examples, blogs, ... I see the cloud_files() function being used. But I'm always unable to find any documentation on it. Is there any reason for this? And what is the exact use case for the function? Most...

Latest Reply
JissMathew
New Contributor II
  • 0 kudos

Hi @Jefke, the cloud_files() function in Databricks is part of Databricks Auto Loader, a tool used for incremental data ingestion from cloud storage like Azure Blob Storage, Amazon S3, or Google Cloud Storage. This function is specifically optimi...

4 More Replies
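cloud_files() is the SQL-side entry point to Auto Loader (used in DLT SQL); in Python the same functionality is exposed as the cloudFiles stream source. A minimal hedged sketch, with the paths, source format, and table name all placeholder assumptions:

# Auto Loader sketch: incrementally ingest new files from cloud storage.
(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                  # source file format
    .option("cloudFiles.schemaLocation", "/mnt/_schemas/orders")
    .load("abfss://raw@myaccount.dfs.core.windows.net/orders")
    .writeStream
    .option("checkpointLocation", "/mnt/_checkpoints/orders")
    .trigger(availableNow=True)
    .toTable("bronze.orders"))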
Skully
by New Contributor
  • 138 Views
  • 1 reply
  • 0 kudos

Workflow Fail safe query

I have a large SQL query that includes multiple Common Table Expressions (CTEs) and joins across various tables, totaling approximately 2,500 lines. I want to ensure that if any part of the query or a specific CTE fails—due to a missing table or colu...

Latest Reply
LingeshK
Databricks Employee
  • 0 kudos

There are a few options you can try. Based on the information shared, I am assuming a skeleton for your complicated query as follows:
WITH cte_one AS (
  SELECT *
  FROM view_one
),
-- Other CTEs...
-- Your main query logic
SELECT
FROM cte_one
-- Joins and other cl...

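One pattern that keeps a huge query from failing mid-run is to validate its dependencies up front and wrap execution for controlled failure handling. A hedged PySpark sketch; the table list, the big_query variable, and the target table are placeholder assumptions:

# Fail fast if any referenced table is missing, then run the query guarded.
required_tables = ["cat.schema.view_one", "cat.schema.orders"]
missing = [t for t in required_tables if not spark.catalog.tableExists(t)]
if missing:
    raise ValueError(f"Aborting: missing tables {missing}")

try:
    df = spark.sql(big_query)  # big_query holds the full 2,500-line CTE query text
    df.write.mode("overwrite").saveAsTable("cat.schema.report")
except Exception as e:
    print(f"Query failed: {e}")  # or route to your alerting of choice
    raise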
Krizofe
by New Contributor II
  • 2198 Views
  • 6 replies
  • 3 kudos

Resolved! Migrating data from synapse to databricks

Hello team, I have a requirement to move all the tables from Azure Synapse (dedicated SQL pool) to Databricks. We have data coming from source to Azure Data Lake frequently. We have Azure Data Factory to load data (a data flow does the basic transfo...

Latest Reply
thelogicplus
  • 3 kudos

Hi @Krizofe, I just went through your details and thought of our similar experience with an Azure Synapse to Databricks migration. We faced a similar situation and were initially hesitant. One of my colleagues recommended using the Travinto Technologies acc...

5 More Replies
Vetrivel
by New Contributor III
  • 420 Views
  • 2 replies
  • 1 kudos

Resolved! SSIS packages migration to Databricks Workflows

We are doing a POC to migrate SSIS packages to Databricks Workflows as part of our effort to build the analytics layer, including dimension and fact tables. How can we accelerate or automate the SSIS package migration to the Databricks environment?

Latest Reply
thelogicplus
  • 1 kudos

Hi Vetrivel, there are many companies with accelerators that can help you migrate SSIS to Databricks; check with travinto.com. We are using their accelerator with services from Travinto and have migrated 200+ as of today, 24-Nov-2024. These guys are ...

1 More Replies
EDDatabricks
by Contributor
  • 1580 Views
  • 1 reply
  • 1 kudos

Multiple DLT pipelines same target table

Is it possible to have multiple DLT pipelines write data concurrently and in append mode to the same Delta table? Because of different data sources, with different data volumes and required processing, we would like to have different pipelines stream...

Data Engineering
Delta tables
DLT pipeline
Latest Reply
claudiayuan
  • 1 kudos

Hello! Did you get an answer?

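Since the thread never got a substantive reply: within DLT, a target table is generally managed by a single pipeline, but outside DLT multiple Structured Streaming writers can append to one Delta table concurrently, provided each stream has its own checkpoint. A hedged sketch under that assumption; all paths and names are placeholders:

# Two independent append-mode streams into the same Delta table,
# each with a separate checkpoint location.
for name, source_path in [("src_a", "/mnt/raw/a"), ("src_b", "/mnt/raw/b")]:
    (spark.readStream.format("delta").load(source_path)
        .writeStream
        .outputMode("append")
        .option("checkpointLocation", f"/mnt/_checkpoints/events_{name}")
        .toTable("analytics.events"))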
somedeveloper
by New Contributor
  • 74 Views
  • 3 replies
  • 0 kudos

Databricks Setting Dynamic Local Configuration Properties

It seems that Databricks is somehow setting the properties of local spark configurations for each notebook. Can someone point me to exactly how and where this is being done? I would like to set the scheduler to utilize a certain pool by default, but ...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

You will need to leverage cluster-level Spark configurations or global init scripts. This will allow you to set the "spark.scheduler.pool" property automatically for all workloads on the cluster. You can try navigating to "Compute", select the cluster y...

2 More Replies
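For reference, the pool can also be pinned per notebook at runtime via Spark's documented local property; a minimal sketch with a placeholder pool name:

# Route this notebook's jobs to a specific fair-scheduler pool.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "my_pool")

# Cluster-wide alternative (hedged): enable fair scheduling under
# Compute > (cluster) > Advanced options > Spark config, e.g.
#   spark.scheduler.mode FAIR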
oakhill
by New Contributor III
  • 113 Views
  • 8 replies
  • 1 kudos

Is Delta Live Tables not supported anymore? How do I use it in Python?

Hi! Any time I try to import "dlt" in a notebook session to develop pipelines, I get an error message saying DLT is not supported on Spark Connect clusters. These are very generic clusters; I've tried runtime 14, 15 and the latest 16, using shared clu...

Latest Reply
BigRoux
Databricks Employee
  • 1 kudos

Oakhill, we do provide free onboarding training. You might be interested in the "Get Started with Data Engineering on Databricks" session. You can register here: https://www.databricks.com/training/catalog. When you are searching the catalog of traini...

7 More Replies
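A note on the actual error: import dlt only resolves when the notebook executes as part of a DLT pipeline run, not interactively on a standard or Spark Connect cluster. A minimal hedged sketch of what such a pipeline notebook looks like; the source path and format are placeholder assumptions:

import dlt

@dlt.table(name="bronze_events", comment="Raw events ingested with Auto Loader")
def bronze_events():
    # Runs only when attached to a DLT pipeline; path and format are assumptions.
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/events"))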
Sega2
by New Contributor III
  • 447 Views
  • 1 reply
  • 0 kudos

cannot import name 'Buffer' from 'typing_extensions' (/databricks/python/lib/python3.10/site-package

I am trying to add messages to an Azure Service Bus from a notebook, but I get the error from the title. Any suggestions how to solve this?
import asyncio
from azure.servicebus.aio import ServiceBusClient
from azure.servicebus import ServiceBusMessage
from azure...

Latest Reply
VZLA
Databricks Employee
  • 0 kudos

@Sega2 it sounds like the error occurs because the typing_extensions library version in your Databricks environment is outdated and does not include the Buffer class, which is being imported by one of the Azure libraries. Can you first try: %pip inst...

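The reply is truncated, but the usual fix for this class of error is to upgrade the package in the notebook scope and restart Python so the Azure SDK picks up the new version; a hedged sketch:

%pip install --upgrade typing_extensions

# Then, in a separate cell, restart the Python process:
dbutils.library.restartPython()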
kalebkemp
by New Contributor
  • 166 Views
  • 1 reply
  • 0 kudos

FileReadException error when creating materialized view reading two schemas

Hi all. I'm getting an error `com.databricks.sql.io.FileReadException` when attempting to create a materialized view which reads tables from two different schemas in the same catalog. Is this just a limitation in Databricks, or do I potentially have s...

  • 166 Views
  • 1 replies
  • 0 kudos
Latest Reply
VZLA
Databricks Employee
  • 0 kudos

@kalebkemp Can you please check whether this is an access issue?
SHOW GRANTS ON SCHEMA my_catalog.my_other_schema;
Also test whether you can successfully run a query that accesses data from both schemas:
SELECT * FROM my_catalog.my_schema.some_table JOIN m...

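To make the suggested isolation steps concrete, a hedged PySpark sketch using the reply's placeholder names (the second table and join key are assumptions). If the direct join succeeds but the materialized view still fails, the problem likely lies in the view definition rather than permissions:

# Check grants on the second schema, then try a direct cross-schema join.
spark.sql("SHOW GRANTS ON SCHEMA my_catalog.my_other_schema").show()

df = spark.sql("""
    SELECT a.*
    FROM my_catalog.my_schema.some_table a
    JOIN my_catalog.my_other_schema.other_table b
      ON a.id = b.id
""")
df.limit(10).show()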
