Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Aj2
by New Contributor III
  • 10030 Views
  • 4 replies
  • 1 kudos

Resolved! How to connect to DB2-AS400?

What are the steps needed to connect to a DB2-AS400 source to pull data into the lake using Databricks? I believe it requires establishing a JDBC connection, but I could not find many details online.
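For anyone landing on this thread later, a minimal sketch of what such a JDBC read could look like, assuming the IBM jt400 (Toolbox for Java) driver has been attached to the cluster as a library; the host, library/table, and secret names below are placeholders, not values from this thread:

```python
# Sketch only: assumes the jt400 driver (com.ibm.as400.access.AS400JDBCDriver)
# is installed on the cluster. Host, schema/table, and secret names are placeholders.
jdbc_url = "jdbc:as400://my-as400-host;prompt=false"

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("driver", "com.ibm.as400.access.AS400JDBCDriver")
    .option("dbtable", "MYLIB.MYTABLE")
    .option("user", dbutils.secrets.get("my-scope", "as400-user"))
    .option("password", dbutils.secrets.get("my-scope", "as400-password"))
    .load()
)

# Land the extract in the lake as a Delta table.
df.write.format("delta").mode("overwrite").saveAsTable("bronze.as400_mytable")
```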

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Ajay Menon Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

3 More Replies
Anonymous
by Not applicable
  • 2182 Views
  • 3 replies
  • 2 kudos

www.dbdemos.ai

Getting started with Databricks is now very easy. Presenting dbdemos. If you're looking to get started with Databricks, there's good news: dbdemos makes it easier than ever. This platform offers a range of demos that you can install directl...
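For reference, the typical install flow from a notebook looks roughly like the sketch below; the demo name is only an example from the dbdemos catalog, and as the reply notes it may not work on Community Edition:

```python
# In a Databricks notebook, first install the package in its own cell:
#   %pip install dbdemos
import dbdemos

dbdemos.list_demos()                       # browse the current demo catalog
dbdemos.install("lakehouse-retail-c360")   # example demo name; installs notebooks and sample data
```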

Latest Reply
FJ
Contributor III
  • 2 kudos

That's a great share, Suteja. Is that supposed to work with a Databricks Community Edition account? I had a strange error while trying. Any help is appreciated! Thanks, F

2 More Replies
Mr__D
by New Contributor II
  • 6767 Views
  • 2 replies
  • 3 kudos

Do we really need Auto Loader for batch processing?

Hi all, it seems Auto Loader is a good option for event-driven data ingestion, but if my job runs only once, do I still need Auto Loader? I don't want to spend money to spin up a cluster for the whole day. I know we have the RunOnce option available while running a job but...
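For readers with the same question: Auto Loader does not require an always-on cluster. A rough sketch of running it as a run-once-style job with the availableNow trigger (paths and table names are placeholders):

```python
# Sketch: Auto Loader in "batch" mode. availableNow processes everything that has
# arrived since the last run and then stops, so the job cluster can shut down.
# Source path, checkpoint location, and table name are placeholders.
(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events/schema")
    .load("/mnt/raw/events")
    .writeStream
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .trigger(availableNow=True)
    .toTable("bronze.events")
)
```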

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Deepak Bhatt Help us build a vibrant and resourceful community by recognizing and highlighting insightful contributions. Mark the best answers and show your appreciation! Thanks and regards

1 More Replies
Erik_L
by Contributor II
  • 6149 Views
  • 1 reply
  • 0 kudos

How to merge Parquet files with different column types

Problem: I have a directory in S3 with a bunch of data files, like "data-20221101.parquet". They all have the same columns: timestamp, reading_a, reading_b, reading_c. In the earlier files, the readings are floats, but in the later ones they are double...

Latest Reply
mathan_pillai
Databricks Employee
  • 0 kudos

1) Can you let us know what the error message was when you don't set the schema and use mergeSchema? 2) What happens when you define the schema (with FloatType) and use mergeSchema? What error message do you get?
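For readers hitting the same float-vs-double conflict, one common workaround (not necessarily the resolution of this thread) is to read the two generations of files separately, cast the float columns up to double, and union them; the paths below are placeholders:

```python
from pyspark.sql import functions as F

# Placeholder prefixes: in practice you may need to split the file list by date
# or probe each file's schema rather than rely on separate globs.
old_files = "s3://my-bucket/readings/data-2022*.parquet"   # float columns
new_files = "s3://my-bucket/readings/data-2023*.parquet"   # double columns

readings = ["reading_a", "reading_b", "reading_c"]

# Cast the older float columns up to double so both sides share one schema.
old_df = (
    spark.read.parquet(old_files)
    .select("timestamp", *[F.col(c).cast("double").alias(c) for c in readings])
)
new_df = spark.read.parquet(new_files).select("timestamp", *readings)

merged = old_df.unionByName(new_df)
merged.write.format("delta").mode("overwrite").saveAsTable("bronze.readings")
```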

laksh
by New Contributor II
  • 1345 Views
  • 2 replies
  • 0 kudos

Real-time data quality validation (streaming data ingestion)

I was wondering how Unity Catalog would help with data quality validation for real-time (streaming) data ingestion.
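The thread does not record an answer, but the feature usually brought up for this use case is Delta Live Tables expectations, which attach row-level quality rules to streaming tables governed by Unity Catalog. A minimal sketch, with a hypothetical source path and hypothetical rules:

```python
import dlt

# Hypothetical source path and rules, for illustration only; this runs as part of
# a Delta Live Tables pipeline, not as a plain notebook.
@dlt.table(comment="Streaming ingestion with inline quality checks")
@dlt.expect_or_drop("non_null_id", "id IS NOT NULL")
@dlt.expect("positive_amount", "amount >= 0")
def events_validated():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/raw/events")
    )
```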

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @arun laksh Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

1 More Replies
Lu_Wang_SA_DBX
by Databricks Employee
  • 1033 Views
  • 0 replies
  • 2 kudos

We will host the first Databricks Bay Area User Group meeting in the Databricks Mountain View office on March 14, 2:30-5:00pm PT. We'll have Dave Ma...

We will host the first Databricks Bay Area User Group meeting in the Databricks Mountain View office on March 14, 2:30-5:00pm PT. We'll have Dave Mariani - CTO & Founder at AtScale, and Riley Phillips - Enterprise Solution Engineer at Matillion to shar...

User16835756816
by Valued Contributor
  • 7212 Views
  • 1 reply
  • 6 kudos

How can I simplify my data ingestion by processing the data as it arrives in cloud storage?

This post will help you simplify your data ingestion by utilizing Auto Loader, Delta Optimized Writes, Delta Write Jobs, and Delta Live Tables. Pre-Req: You are using JSON data and Delta Writes commands. Step 1: Simplify ingestion with Auto Loader Delt...

Latest Reply
youssefmrini
Databricks Employee
  • 6 kudos

This post will help you simplify your data ingestion by utilizing Auto Loader, Delta Optimized Writes, Delta Write Jobs, and Delta Live Tables. Pre-Req: You are using JSON data and Delta Writes commands. Step 1: Simplify ingestion with Auto Loader Delta...
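Since the post body is truncated above, here is a rough sketch of what that first step typically looks like: Auto Loader reading JSON into a Delta table with optimized writes enabled. All paths and names are placeholders, not code from the original post:

```python
# Placeholder paths/names; a sketch of the Auto Loader + optimized-writes
# combination the post describes, not the post's original code.

# Delta Optimized Writes: coalesce small files at write time (Databricks setting).
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")

(
    spark.readStream.format("cloudFiles")                  # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders/schema")
    .load("/mnt/raw/orders")
    .writeStream
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .toTable("bronze.orders_raw")                          # Delta table target
)
```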

MadelynM
by Databricks Employee
  • 769 Views
  • 0 replies
  • 1 kudos

vimeo.com

Auto Loader provides Python and Scala methods to ingest new data from a folder location into a Delta Lake table by using directory listing or file notifications. Here's a quick video (7:00) on how to use Auto Loader for Databricks on AWS with Databri...
