Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to read a fixed length file in Spark using DataFrame API and SCALA

alexraj84
New Contributor

I have a fixed-length file (a sample is shown below) and I want to read it using the DataFrames API in Spark with Scala (not Python or Java). With the DataFrames API there are ways to read a text file, a JSON file, and so on, but I am not sure whether there is a way to read a fixed-length file. I searched the internet for this and found a GitHub link that suggested downloading

spark-fixedwidth-assembly-1.0.jar

for this purpose, but I was unable to find the jar anywhere. I am completely lost here and need your suggestions and help.

Here is the file

    56 apple     TRUE 0.56
    45 pear      FALSE1.34
    34 raspberry TRUE 2.43
    34 plum      TRUE 1.31
    53 cherry    TRUE 1.4 
    23 orange    FALSE2.34
    56 persimmon FALSE23.2 

The fixed widths of the columns are 3, 10, 5, and 4.

Please share your suggestions.

2 REPLIES

User16789201666
Databricks Employee

Nagendra
New Contributor II

Below is a solution you can use.

Let us say this is the data in the file:

EMP ID   First Name              Last Name
        1Chris                   M
        2John                    R
        3Amit                    C

EMP ID: starts at position 1, 10 characters wide

First Name: starts at position 11, 25 characters wide

Last Name: starts at position 36, 25 characters wide

DOB: starts at position 61, 30 characters wide

Step 1:

Load the file into a DataFrame with the options below. This will create a DataFrame with a single column containing every row:

DFSchema = "single_col string"

Df = (spark.read
        .option("header", False)
        .schema(DFSchema)
        .csv(file_path)
     )
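Since the original question asked for Scala, a rough Scala equivalent of this step might look as follows (just a sketch; filePath is a placeholder for wherever the fixed-width file lives, and the single-column DDL schema mirrors the PySpark code above):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("FixedWidthExample").getOrCreate()

// Placeholder path for the fixed-width file
val filePath = "/path/to/fixed_width_file.txt"

// Read each raw line into a single string column named "single_col", as in Step 1
val df = spark.read
  .option("header", false)
  .schema("single_col string")
  .csv(filePath)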

Step 2:

Now the single column has to be split into three columns. Follow one of the two options below.

Option 1: Create the columns manually, one by one

Df = (Df.withColumn("EMP_ID", Df.single_col.substr(1, 10))
        .withColumn("First Name", Df.single_col.substr(11, 25))
        .withColumn("Last Name", Df.single_col.substr(36, 25))
     )
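A rough Scala equivalent of Option 1, assuming the single-column DataFrame df from the Scala sketch above (Column.substr(startPos, len) carves out each fixed-width field):

import org.apache.spark.sql.functions.col

// Manually add one column per fixed-width field, using the offsets from the layout above
val withCols = df
  .withColumn("EMP_ID", col("single_col").substr(1, 10))
  .withColumn("First Name", col("single_col").substr(11, 25))
  .withColumn("Last Name", col("single_col").substr(36, 25))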

Option 2: Create the columns programmatically

Define the column length details as (name, (start, length)) tuples:

Col_Details = [("EMP_ID", (1, 10)), ("First Name", (11, 25)), ("Last Name", (36, 25))]

Using a loop, create the columns programmatically:

for col_info in Col_Details:
    Df = Df.withColumn(col_info[0], Df.single_col.substr(col_info[1][0], col_info[1][1]))
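And a Scala sketch of Option 2, driving the same idea from a list of (name, start, length) tuples with foldLeft instead of a Python loop; the identical pattern would handle the file in the original question by swapping in its widths of 3, 10, 5 and 4:

import org.apache.spark.sql.functions.col

// (column name, start position, length) for each fixed-width field
val colDetails = Seq(
  ("EMP_ID", 1, 10),
  ("First Name", 11, 25),
  ("Last Name", 36, 25)
)

// Fold over the field definitions, adding one column per field, then drop the raw line
val parsed = colDetails.foldLeft(df) { case (acc, (name, start, len)) =>
  acc.withColumn(name, col("single_col").substr(start, len))
}.drop("single_col")

parsed.show()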
