SantiNath_Dey
New Contributor III
since 03-08-2026
Friday

User Stats

  • 11 Posts
  • 0 Solutions
  • 2 Kudos given
  • 1 Kudos received

User Activity

We are extracting data from SQL Server/Oracle using ADF and storing it in Parquet format. When reading the files in Databricks using spark.read.parquet, decimal values are getting truncated—for example, 1245.1111111189979 becomes 1245.111111118. This...
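The truncation pattern in that snippet (1245.1111111189979 becoming 1245.111111118) is consistent with the value passing through a decimal column whose scale is capped at 9 fractional digits, e.g. decimal(38, 9). A minimal sketch using Python's `decimal` module, assuming a scale-9 column is the cause (the actual scale written by ADF would need to be confirmed from the Parquet file footer):

```python
from decimal import Decimal, ROUND_DOWN

# Source value as extracted from SQL Server/Oracle: 13 fractional digits.
source_value = Decimal("1245.1111111189979")

# Simulate reading through a decimal(38, 9) column: the scale cuts off
# everything beyond 9 fractional digits, reproducing the reported truncation.
as_decimal_38_9 = source_value.quantize(Decimal("1e-9"), rounding=ROUND_DOWN)
print(as_decimal_38_9)   # 1245.111111118

# A column with sufficient scale, e.g. decimal(38, 16), preserves the value.
as_decimal_38_16 = source_value.quantize(Decimal("1e-16"), rounding=ROUND_DOWN)
print(as_decimal_38_16)  # 1245.1111111189979000
```

If this is the cause, the usual remedies are to widen the decimal scale on the ADF sink side, or to supply an explicit schema with an adequately scaled `DecimalType` (or read the column as string) instead of relying on `spark.read.parquet`'s schema from the file.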
Hi, As part of our Databricks modernization program for the customer, we have received a request from the client regarding a Catalog template that they would like to populate to capture and articulate their business benefits. I wanted to check if we ha...
We are currently implementing a Data Quality framework using the DQX framework with a metadata-driven architecture. The solution incorporates various data quality checks such as null checks, duplicate detection, date validation, and numeric validations. Coul...
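The metadata-driven idea in that post can be sketched generically: rules live as metadata records, and a registry dispatches each record to a check function. This is a hypothetical illustration only; the real DQX framework has its own APIs, and the names below (`CHECK_REGISTRY`, `run_checks`, `check_null`) are assumptions, not DQX functions:

```python
from typing import Any, Callable

def check_null(rows: list[dict], column: str) -> list[str]:
    """Flag rows where the configured column is missing or None."""
    return [f"null in {column!r} at row {i}"
            for i, r in enumerate(rows) if r.get(column) is None]

def check_duplicates(rows: list[dict], column: str) -> list[str]:
    """Flag repeated values in a column that should be unique."""
    seen: set[Any] = set()
    errors = []
    for i, r in enumerate(rows):
        v = r.get(column)
        if v in seen:
            errors.append(f"duplicate {v!r} in {column!r} at row {i}")
        seen.add(v)
    return errors

# Registry maps the check name used in metadata to its implementation.
CHECK_REGISTRY: dict[str, Callable[[list, str], list]] = {
    "not_null": check_null,
    "unique": check_duplicates,
}

def run_checks(rows: list[dict], metadata: list[dict]) -> list[str]:
    """Drive all checks purely from metadata records."""
    findings: list[str] = []
    for rule in metadata:
        findings.extend(CHECK_REGISTRY[rule["check"]](rows, rule["column"]))
    return findings

rows = [{"id": 1, "email": "a@x.com"}, {"id": 1, "email": None}]
metadata = [{"check": "not_null", "column": "email"},
            {"check": "unique", "column": "id"}]
print(run_checks(rows, metadata))
```

Adding a new rule then means appending a metadata record (and, for a new check type, one registry entry) rather than changing pipeline code.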
We are implementing an incremental load for semi-structured data (complex nested JSON files) using Auto Loader. To handle schema drift, such as new fields, changes in column order, or data type and precision modifications (e.g., Decimal and Integer)...
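Auto Loader handles much of this natively via its schema evolution settings (e.g. `cloudFiles.schemaEvolutionMode`), so the sketch below is only an illustration of the merge logic such drift handling implies: new fields are unioned in, shared fields may only widen, and column order is irrelevant because columns are matched by name. The widening order is an illustrative assumption, not Spark's actual promotion table:

```python
# Rank types so a drifted column can only widen, never silently narrow.
# (Illustrative ordering -- Spark's real type-promotion rules differ in detail.)
WIDENING_ORDER = ["int", "long", "decimal", "double", "string"]

def merge_schemas(current: dict[str, str], incoming: dict[str, str]) -> dict[str, str]:
    """Union of columns by name; for shared columns keep the wider type."""
    merged = dict(current)
    for col, typ in incoming.items():
        if col not in merged:
            merged[col] = typ  # new field introduced by drift
        else:
            # Type/precision change: pick whichever type is wider.
            merged[col] = max(merged[col], typ, key=WIDENING_ORDER.index)
    return merged

current = {"id": "int", "amount": "decimal"}
incoming = {"amount": "double", "id": "long", "source_ts": "string"}
print(merge_schemas(current, incoming))
# {'id': 'long', 'amount': 'double', 'source_ts': 'string'}
```

Matching by name rather than position is what makes a change in column order a non-event in this scheme.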
Hi, We have multiple complex JSON files and need to flatten them, especially handling array data types. Whenever an array is present, we need to create a new child table and establish a relationship between the master and child tables. This will foll...
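The master/child pattern described there can be sketched as a recursive flattener: scalars and structs stay on the parent row, while every array spawns a child table whose rows carry a surrogate key back to the parent. This is a hypothetical sketch; the table/column naming convention (`<table>_<field>`, `parent_key`) is an assumption for illustration:

```python
import json

def flatten(record: dict, table: str, parent_key, tables: dict, counters: dict) -> None:
    """Flatten one JSON object into `tables`, splitting arrays into child tables."""
    counters[table] = counters.get(table, 0) + 1
    row_key = counters[table]                     # surrogate key for this row
    row = {f"{table}_key": row_key}
    if parent_key is not None:
        row["parent_key"] = parent_key            # master -> child relationship
    for name, value in record.items():
        if isinstance(value, list):
            # Array -> new child table, one row per element.
            for element in value:
                child = element if isinstance(element, dict) else {"value": element}
                flatten(child, f"{table}_{name}", row_key, tables, counters)
        elif isinstance(value, dict):
            for k, v in value.items():            # struct -> prefixed columns
                row[f"{name}_{k}"] = v
        else:
            row[name] = value                     # scalar stays on this row
    tables.setdefault(table, []).append(row)

doc = json.loads('{"order_id": 7, "customer": {"name": "Acme"}, '
                 '"items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}')
tables: dict = {}
flatten(doc, "orders", None, tables, {})
print(tables["orders"])        # master table
print(tables["orders_items"])  # child table, linked by parent_key
```

In a lakehouse setting the same recursion is typically expressed with Spark's `explode` on each array column, writing each child out as its own Delta table with the parent's key carried along.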