Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Autoload files in wide table format, but store it unpivot in Streaming Table

simensma
New Contributor II

Hey, I receive data in wide CSV format, where each sensor has its own column, and I want to store it in a Delta Live streaming table. Because of the varying frequency and number of sensors, the wide format is inefficient both to process and to store. I therefore want to use long format for the bronze raw data table, with ID, SensorID, and Value as columns instead.

Is this possible with Auto Loader and a Delta Live streaming table, using for example the melt function in between?

1 ACCEPTED SOLUTION

Accepted Solutions

Anonymous
Not applicable

@Simen Småriset:

Here's a general outline of the steps you can follow:

  1. Set up your Delta Live Tables pipeline with a streaming table that uses Auto Loader, which automatically loads new data as it arrives in your specified directory.
  2. Create a Databricks notebook or script where you'll perform the transformation, in a language such as Python or Scala.
  3. Use Auto Loader to read the incoming wide-format CSV files into a streaming DataFrame; it will automatically detect new files and load them.
  4. Perform the transformation from wide to long format using the melt (unpivot) function or any suitable logic, unpivoting the sensor columns into rows with ID, SensorID, and Value columns.
  5. Write the transformed DataFrame into a Delta table, which will serve as your bronze raw data table. This table will have the desired long-format structure with ID, SensorID, and Value as columns.
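The steps above can be sketched as a DLT pipeline. This is a minimal sketch, not a drop-in solution: it assumes Spark 3.4+ (where `DataFrame.unpivot`/`melt` is available), and the landing path, the `ID` and `timestamp` column names, and the table name are all hypothetical placeholders you would replace with your own.

```python
# Minimal DLT sketch: Auto Loader ingests wide CSVs, unpivot stores them long.
# Assumptions: Spark 3.4+, landing path and ID/timestamp columns are placeholders.
import dlt
from pyspark.sql import functions as F


@dlt.table(
    name="bronze_sensor_long",  # hypothetical table name
    comment="Sensor readings unpivoted to long format (ID, SensorID, Value).",
)
def bronze_sensor_long():
    wide_df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.inferColumnTypes", "true")
        .load("/path/to/landing")  # hypothetical landing directory
    )
    # Every column that is not an identifier is treated as a sensor column.
    id_cols = ["ID", "timestamp"]  # adjust to your schema
    sensor_cols = [c for c in wide_df.columns if c not in id_cols]
    # unpivot (alias: melt) reshapes one row per (ID, SensorID, Value),
    # which also drops the need to store nulls for sensors absent in a file.
    return wide_df.unpivot(
        ids=id_cols,
        values=sensor_cols,
        variableColumnName="SensorID",
        valueColumnName="Value",
    ).filter(F.col("Value").isNotNull())
```

The `filter` on null values is optional but usually what you want in long format: a wide row only populates the sensors that actually reported, so dropping nulls is where the storage saving comes from.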


3 REPLIES 3


Vartika
Moderator

Hi @Simen Småriset​,

Hope everything is going great.

Just wanted to check in to see if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please let us know so we can help you.

Cheers!

simensma
New Contributor II

Yes, it was resolved by loading the data in long format instead of wide format.

Thanks for the answers.
