by
Aj2
• New Contributor III
- 11846 Views
- 4 replies
- 1 kudos
What are the steps needed to connect to a DB2 AS/400 source to pull data into the lake using Databricks? I believe it requires establishing a JDBC connection, but I could not find much detail online.
Latest Reply
Hi @Ajay Menon, hope all is well! Just wanted to check in to see if you were able to resolve your issue. Would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
3 More Replies
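For reference, the JDBC approach the question mentions can be sketched as follows. This is only a sketch, assuming the open-source JTOpen driver (`jt400.jar`, driver class `com.ibm.as400.access.AS400JDBCDriver`) has been attached to the cluster as a library; the host, credentials, and table names are placeholders.

```python
# Sketch: reading a DB2 for i (AS/400) table into a DataFrame over JDBC.
# Assumes the JTOpen jt400 driver is installed on the cluster as a library.

def as400_jdbc_options(host, user, password, table):
    """Build the JDBC options for spark.read (pure helper)."""
    return {
        "url": f"jdbc:as400://{host};prompt=false",
        "driver": "com.ibm.as400.access.AS400JDBCDriver",
        "dbtable": table,
        "user": user,
        "password": password,
    }

def read_as400_table(spark, host, user, password, table):
    """Run inside Databricks, where `spark` is the active SparkSession."""
    return (spark.read
                 .format("jdbc")
                 .options(**as400_jdbc_options(host, user, password, table))
                 .load())

# In a notebook:
# df = read_as400_table(spark, "my-as400-host", "USER", "PWD", "MYLIB.MYTABLE")
# df.write.format("delta").mode("overwrite").save("/mnt/lake/bronze/mytable")
```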
- 2504 Views
- 3 replies
- 2 kudos
Presenting dbdemos. If you're looking to get started with Databricks, there's good news: dbdemos makes it easier than ever. This platform offers a range of demos that you can install directl...
Latest Reply
That's a great share, Suteja. Is that supposed to work with a Databricks Community Edition account? I had a strange error while trying. Any help is appreciated! Thanks, F
2 More Replies
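For anyone landing here, a minimal install sketch. The demo name below is an example from the dbdemos catalog, not from this thread, and dbdemos generally targets full workspaces (which may explain errors on Community Edition).

```python
# Sketch: installing a dbdemos demo from a Databricks notebook.
# dbdemos is installed per-notebook with pip first:  %pip install dbdemos

def install_demo(name):
    # Import inside the function so this file loads even where the
    # package isn't installed (e.g. outside a Databricks notebook).
    import dbdemos
    dbdemos.install(name)

# In a notebook cell, after %pip install dbdemos:
# install_demo("lakehouse-retail-c360")
```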
by
Mr__D
• New Contributor II
- 7358 Views
- 2 replies
- 3 kudos
Hi all, it seems Auto Loader is a good option for event-driven data ingestion, but if my job runs only once, do I still need Auto Loader? I don't want to spend money to spin up a cluster for the whole day. I know we have the RunOnce option available while running a job, but...
Latest Reply
Hi @Deepak Bhatt, help us build a vibrant and resourceful community by recognizing and highlighting insightful contributions. Mark the best answers and show your appreciation! Thanks and regards
1 More Replies
- 7644 Views
- 1 reply
- 0 kudos
Problem: I have a directory in S3 with a bunch of data files, like "data-20221101.parquet". They all have the same columns: timestamp, reading_a, reading_b, reading_c. In the earlier files, the readings are floats, but in the later ones they are double...
Latest Reply
1) Can you let us know what the error message was when you didn't set the schema and used mergeSchema? 2) What happens when you define the schema (with FloatType) and use mergeSchema? What error message do you get?
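One possible workaround (an assumption, not something confirmed in this thread): split the files at the date where the type changed, cast the float-era reading columns up to double, and union the two sets onto one schema. File names like "data-20221101.parquet" sort lexicographically by date, so a string cutoff works.

```python
# Sketch: unify parquet files whose reading columns changed float -> double.

READING_COLS = ["reading_a", "reading_b", "reading_c"]

def partition_files(files, cutoff):
    """Split file names into (float-era, double-era) around a cutoff name."""
    old = sorted(f for f in files if f < cutoff)
    new = sorted(f for f in files if f >= cutoff)
    return old, new

def load_unified(spark, old_files, new_files):
    """Run inside Databricks; casts the old float columns up to double."""
    from pyspark.sql import functions as F
    old_df = spark.read.parquet(*old_files)
    for c in READING_COLS:
        old_df = old_df.withColumn(c, F.col(c).cast("double"))
    new_df = spark.read.parquet(*new_files)
    return old_df.unionByName(new_df)
```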
by
laksh
• New Contributor II
- 1556 Views
- 2 replies
- 0 kudos
I was wondering how Unity Catalog would help with data quality validations for real-time (streaming) data ingestion.
Latest Reply
Hi @arun laksh, hope all is well! Just wanted to check in to see if you were able to resolve your issue. Would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
1 More Replies
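One common pattern worth noting here (an assumption on my part, not from this thread): Unity Catalog itself governs access, lineage, and auditing, while in-pipeline quality checks on streaming data are usually done with Delta Live Tables expectations; the resulting tables are then registered and governed in Unity Catalog. The column names and rules below are placeholders, and the decorated function only runs inside a DLT pipeline.

```python
# Sketch: DLT expectations as a quality gate on a streaming ingest.

EXPECTATIONS = {
    "valid_timestamp": "timestamp IS NOT NULL",
    "nonnegative_amount": "amount >= 0",
}

def define_tables(dlt, spark, source_dir):
    """Call inside a DLT pipeline, passing the `dlt` module and session."""
    @dlt.table(comment="validated streaming events")
    @dlt.expect_all_or_drop(EXPECTATIONS)  # drop rows failing any rule
    def events_clean():
        return (spark.readStream
                     .format("cloudFiles")
                     .option("cloudFiles.format", "json")
                     .load(source_dir))
    return events_clean
```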
- 1186 Views
- 0 replies
- 2 kudos
We will host the first Databricks Bay Area User Group meeting at the Databricks Mountain View office on March 14, 2:30-5:00 pm PT. We'll have Dave Mariani, CTO & Founder at AtScale, and Riley Phillips, Enterprise Solution Engineer at Matillion, to shar...
- 7537 Views
- 1 reply
- 6 kudos
This post will help you simplify your data ingestion by utilizing Auto Loader, Delta Optimized Writes, Delta Write Jobs, and Delta Live Tables. Pre-req: you are using JSON data and Delta write commands. Step 1: Simplify ingestion with Auto Loader. Delt...
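As a concrete illustration of the Delta Optimized Writes step the post mentions, here is a sketch of enabling optimized writes (plus auto compaction) through table properties. The table name and schema are placeholders.

```python
# Sketch: create a Delta table with Optimized Writes and Auto Compaction
# enabled via TBLPROPERTIES, so small-file handling is automatic.

OPTIMIZE_PROPS = {
    "delta.autoOptimize.optimizeWrite": "true",
    "delta.autoOptimize.autoCompact": "true",
}

def create_table_sql(table):
    """Build a CREATE TABLE statement with the optimize properties set."""
    props = ", ".join(f"'{k}' = '{v}'" for k, v in OPTIMIZE_PROPS.items())
    return (f"CREATE TABLE IF NOT EXISTS {table} "
            f"(id BIGINT, payload STRING) USING DELTA "
            f"TBLPROPERTIES ({props})")

# In a notebook: spark.sql(create_table_sql("bronze.events"))
```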
- 926 Views
- 0 replies
- 1 kudos
Auto Loader provides Python and Scala methods to ingest new data from a folder location into a Delta Lake table by using directory listing or file notifications. Here's a quick video (7:00) on how to use Auto Loader for Databricks on AWS with Databri...
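The two discovery modes the post mentions can be sketched like this: directory listing is the default, and setting `cloudFiles.useNotifications` to "true" switches to file-notification mode. Paths and the table name are placeholders.

```python
# Sketch: Auto Loader ingest with a switch between directory listing
# (default) and file-notification mode.

def autoloader_options(fmt, schema_location, use_notifications=False):
    """Pure helper: build the cloudFiles reader options."""
    opts = {
        "cloudFiles.format": fmt,
        "cloudFiles.schemaLocation": schema_location,
    }
    if use_notifications:
        opts["cloudFiles.useNotifications"] = "true"
    return opts

def start_ingest(spark, source_dir, checkpoint_dir, target_table,
                 use_notifications=False):
    """Run inside Databricks, where `spark` is the active SparkSession."""
    reader = spark.readStream.format("cloudFiles")
    for k, v in autoloader_options("json", checkpoint_dir,
                                   use_notifications).items():
        reader = reader.option(k, v)
    return (reader.load(source_dir)
                  .writeStream
                  .option("checkpointLocation", checkpoint_dir)
                  .toTable(target_table))
```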