Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Dayaa (New Contributor II)
  • 2327 Views
  • 3 replies
  • 4 kudos

Resolved! Load data into Azure SQL Database from Azure Databricks (restricted tables, not all workspace tables)

Hi, I want to share a limited set of tables in my Databricks workspace; users will connect to my Databricks workspace through Azure Data Factory and load data into Azure SQL. Is this possible using Delta Sharing, or is there another method or tool?
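
If Delta Sharing fits, a minimal sketch of exposing only selected tables might look like the following (share, catalog, schema, and table names are hypothetical; assumes Unity Catalog is enabled and the SQL is run from a notebook):

```python
# Hypothetical sketch: create a share that exposes only selected tables,
# rather than the whole workspace. All names are placeholders.
spark.sql("CREATE SHARE IF NOT EXISTS limited_share")
spark.sql("ALTER SHARE limited_share ADD TABLE main.sales.orders")
spark.sql("ALTER SHARE limited_share ADD TABLE main.sales.customers")
```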

Latest Reply
Anonymous (Not applicable)
  • 4 kudos

Hi @Dayananthan Marimuthu, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your...

2 More Replies
by JacintoArias (New Contributor III)
  • 6667 Views
  • 5 replies
  • 1 kudos

Spark predicate pushdown on Parquet files when using limit

Hi, while developing an ETL for a large dataset I want to get a sample of the top rows to check that the pipeline "just runs", so I add a limit clause when reading the dataset. I'm surprised to see that instead of creating a single task as in a sho...

Latest Reply
JacekLaskowski (New Contributor III)
  • 1 kudos

It's been a while since the question was asked, and in the meantime Delta Lake 2.2.0 hit the shelves with the exact feature the OP asked about, i.e. LIMIT pushdown: "LIMIT pushdown into Delta scan. Improve the performance of queries containing LIMIT cl...
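
For illustration, a minimal sketch of the pattern this helps with (Delta Lake 2.2.0 or later; the table path is a hypothetical placeholder):

```python
# Hypothetical sketch: with LIMIT pushdown (Delta Lake 2.2.0+), this sample
# read scans only enough files to satisfy the limit, not the full table.
sample = spark.read.format("delta").load("/tmp/delta/large_dataset").limit(10)
sample.show()
```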

4 More Replies
by lizou (Contributor II)
  • 3272 Views
  • 3 replies
  • 0 kudos

Bug: Add Data UI: missing leading zeros when loading CSV

Using the Add Data UI and adding a CSV manually, even with the data type set to string, the leading zeros go missing. Example CSV:

val1,val2
0012345,abc

After loading, the value is stored in the table without the leading zeros (12345,abc).

Latest Reply
lizou (Contributor II)
  • 0 kudos

There are no issues using spark.read in notebooks; the issue is specific to using the Add Data user interface and adding a CSV file manually.
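
For comparison, a minimal sketch of the notebook path that keeps the zeros (file path and column names are assumptions based on the example above):

```python
# Hypothetical sketch: read the CSV with an explicit string schema so
# leading zeros survive. Path and column names are placeholders.
df = spark.read.csv(
    "/FileStore/tables/example.csv",
    header=True,
    schema="val1 STRING, val2 STRING",
)
df.show()  # val1 keeps its leading zeros, e.g. 0012345
```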

2 More Replies
by Anonymous (Not applicable)
  • 1265 Views
  • 2 replies
  • 0 kudos
Latest Reply
sajith_appukutt (Honored Contributor II)
  • 0 kudos

If you are looking to incrementally load data from Azure SQL, check out one of our technology partners that support change data capture, or set up Debezium for SQL Server. These solutions can land data in a streaming fashion to Kafka/Kinesis/even...
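
A minimal Structured Streaming sketch of that landing pattern, assuming Debezium publishes change events to a Kafka topic (broker address, topic name, and paths are hypothetical):

```python
# Hypothetical sketch: consume Debezium CDC events from Kafka and land them
# in a raw Delta table. Broker, topic, and paths are placeholders.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sqlserver.dbo.orders")
    .load()
    .selectExpr("CAST(value AS STRING) AS json_value")  # Kafka values are binary
)

(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .start("/tmp/delta/orders_raw")
)
```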

1 More Reply
by User16137833804 (Databricks Employee)
  • 1389 Views
  • 1 reply
  • 1 kudos
Latest Reply
sajith_appukutt (Honored Contributor II)
  • 1 kudos

You could have the single-node cluster where the proxy is installed monitored by a tool like CloudWatch, Azure Monitor, or Datadog, and have it configured to send alerts on node failure.
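
As one example of that setup, a sketch of a CloudWatch alarm on the proxy node's EC2 status check (instance ID, alarm name, and SNS topic ARN are hypothetical):

```python
# Hypothetical sketch: alert on EC2 status-check failures for the proxy node.
# Instance ID and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="proxy-node-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```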

by User16826994223 (Honored Contributor III)
  • 5625 Views
  • 1 reply
  • 0 kudos

What does it mean that Delta Lake supports multi-cluster writes?

What does it mean that Delta Lake supports multi-cluster writes? Please explain. Can we write to the same Delta table from multiple clusters?

Latest Reply
User16826994223 (Honored Contributor III)
  • 0 kudos

It means that Delta Lake does locking to make sure that queries writing to a table from multiple clusters at the same time won’t corrupt the table. However, it does not mean that if there is a write conflict (for example, update and delete the same t...
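
A sketch of the usual retry pattern for such conflicts (exception class from the delta-spark Python package; the table path and write mode are hypothetical):

```python
# Hypothetical sketch: retry a Delta write when a concurrent writer from
# another cluster commits a conflicting change first. Path is a placeholder;
# other exceptions in delta.exceptions can be handled the same way.
from delta.exceptions import ConcurrentAppendException

def write_with_retry(df, path, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            df.write.format("delta").mode("append").save(path)
            return
        except ConcurrentAppendException:
            if attempt == max_attempts - 1:
                raise  # give up after repeated conflicts
```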

by Anonymous (Not applicable)
  • 883 Views
  • 0 replies
  • 0 kudos

Append subset of columns to target Snowflake table

I'm using the databricks-snowflake connector to load data into a Snowflake table. Can someone point me to an example of how to append only a subset of columns to a target Snowflake table (for example, some columns in the target snowflake table ar...
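
One approach that may work, as a sketch: select just the matching columns and let the connector map them by name (connection options, table name, and column names below are hypothetical):

```python
# Hypothetical sketch: append a subset of columns to an existing Snowflake
# table with the Spark-Snowflake connector. All options are placeholders.
sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",
    "sfUser": "user",
    "sfPassword": "password",
    "sfDatabase": "MY_DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "MY_WH",
}

(
    df.select("id", "amount")          # only the columns the source provides
    .write.format("snowflake")
    .options(**sf_options)
    .option("dbtable", "TARGET_TABLE")
    .option("column_mapping", "name")  # match target columns by name
    .mode("append")
    .save()
)
```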
