
Autoload files in wide table format, but store it unpivot in Streaming Table

simensma
New Contributor II

Hey, I receive data in wide CSV format, where each sensor has its own column. I want to store it in a Delta Live Tables streaming table. But because the number of sensors and their sampling frequencies vary, the wide layout is inefficient to process and store, so I want to transform it into long format for the bronze raw-data table, with ID, SensorID, and Value as columns instead.

Is this possible with Auto Loader and a Delta Live Tables streaming table, for example by applying a melt in between?

1 ACCEPTED SOLUTION

Accepted Solutions

Anonymous
Not applicable

@Simen Småriset​ :

Here's a general outline of the steps you can follow:

  1. Set up your Delta Live Tables pipeline with Auto Loader. This automatically loads new data as it arrives in your specified directory.
  2. Create a Databricks notebook or script where you'll perform the transformation, using Python or Scala.
  3. Use Auto Loader to read the incoming wide-format CSV files into a streaming DataFrame; it detects new files and loads them automatically.
  4. Transform from wide to long format using a melt (unpivot) or any suitable logic. The melt reshapes the DataFrame by turning the sensor columns into rows, with columns for ID, SensorID, and Value.
  5. Write the transformed DataFrame to a Delta table, which will serve as your bronze raw-data table. This table will have the desired long-format structure with ID, SensorID, and Value as columns.
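The steps above can be sketched roughly as follows. The `stack_expr` helper is plain Python; the DLT/Auto Loader wiring is shown as a comment sketch, since `dlt` and `spark` only exist inside a running pipeline. The column names and the input path are hypothetical examples, not from the original post.

```python
# Build a Spark SQL stack() expression that unpivots the given sensor
# columns into (SensorID, Value) rows -- i.e. a melt.
def stack_expr(sensor_cols):
    pairs = ", ".join(f"'{c}', `{c}`" for c in sensor_cols)
    return f"stack({len(sensor_cols)}, {pairs}) AS (SensorID, Value)"

# Inside a Delta Live Tables notebook you would then use it roughly like this
# (shown as comments because dlt/spark are pipeline-only):
#
# import dlt
#
# @dlt.table(name="bronze_sensor_long", comment="Unpivoted raw sensor data")
# def bronze_sensor_long():
#     wide = (
#         spark.readStream.format("cloudFiles")            # Auto Loader
#         .option("cloudFiles.format", "csv")
#         .option("cloudFiles.inferColumnTypes", "true")
#         .load("/mnt/raw/sensors/")                       # hypothetical path
#     )
#     return wide.selectExpr("ID", stack_expr(["sensor_a", "sensor_b"]))

print(stack_expr(["sensor_a", "sensor_b"]))
# stack(2, 'sensor_a', `sensor_a`, 'sensor_b', `sensor_b`) AS (SensorID, Value)
```

On Spark 3.4+ (recent Databricks runtimes) you can skip the SQL expression entirely and use the built-in `DataFrame.unpivot` (alias `melt`), e.g. `wide.unpivot("ID", sensor_cols, "SensorID", "Value")`.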

View solution in original post

3 REPLIES

Vartika
Moderator

Hi @Simen Småriset​,

Hope everything is going great.

Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we can help you. 

Cheers!

simensma
New Contributor II

Yes, it was resolved by loading the data in long format instead of wide format.

Thanks for the answers.
