How to read a fixed-length file in Spark using the DataFrame API and Scala

alexraj84
New Contributor

I have a fixed-length file (a sample is shown below) and I want to read it using the DataFrames API in Spark with Scala (not Python or Java). The DataFrames API has methods to read a text file, a JSON file, and so on, but I am not sure whether there is a way to read a fixed-length file. While searching the internet for this I found a GitHub link that suggested downloading

spark-fixedwidth-assembly-1.0.jar

for this purpose, but I was unable to find the jar anywhere. I am completely lost here and need your suggestions and help.

Here is the file

    56 apple     TRUE 0.56
    45 pear      FALSE1.34
    34 raspberry TRUE 2.43
    34 plum      TRUE 1.31
    53 cherry    TRUE 1.4 
    23 orange    FALSE2.34
    56 persimmon FALSE23.2 

The fixed widths of the columns are 3, 10, 5, and 4.

Please share your suggestions.

2 REPLIES

User16789201666
Contributor II

Nagendra
New Contributor II

Below is a solution that can be used.

Let us assume this is the data in the file:

    EMP ID    First Name               Last Name
             1Chris                    M
             2John                     R
             3Amit                     C

EMP ID: starts at position 1, 10 characters long
First Name: starts at position 11, 25 characters long
Last Name: starts at position 36, 25 characters long
DOB: starts at position 61, 30 characters long (DOB is not shown in the sample above)

Step 1:

Load the file into a DataFrame with the options below; this creates a DataFrame with a single column containing every row:

DFSchema = "single_col string"

df = (spark.read
          .option("header", False)
          .schema(DFSchema)
          .csv(file_path)   # the data contains no delimiter, so each whole line lands in single_col
     )
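Since the original question asked for Scala, here is the same step in the Scala DataFrame API; a minimal sketch, assuming a SparkSession named spark and a path filePath (both placeholders). spark.read.text loads each line into a single string column named "value":

    // Scala: load the fixed-width file; every line becomes one row
    // in a single string column named "value"
    val raw = spark.read.text(filePath)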

Step 2:

Now the single column has to be split into three columns. Either of the two options below will work.

Option 1: Create the columns one by one, manually

df = (df.withColumn("EMP_ID", df.single_col.substr(1, 10))
        .withColumn("First Name", df.single_col.substr(11, 25))
        .withColumn("Last Name", df.single_col.substr(36, 25))
     )
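The equivalent of Option 1 in Scala, using the same start positions and lengths (a sketch; raw is the single-column DataFrame from the Scala snippet above):

    import org.apache.spark.sql.functions.col

    // Scala: slice the "value" column with 1-based substr(startPos, length)
    val parsed = raw
      .withColumn("EMP_ID", col("value").substr(1, 10))
      .withColumn("First Name", col("value").substr(11, 25))
      .withColumn("Last Name", col("value").substr(36, 25))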

Option 2: Create the columns programmatically

Define the column length details:

Col_Details = [("EMP_ID", (1, 10)), ("First Name", (11, 25)), ("Last Name", (36, 25))]

Using a loop, create the columns programmatically:

for col_info in Col_Details:
    # col_info is (column_name, (start_position, length))
    df = df.withColumn(col_info[0], df.single_col.substr(col_info[1][0], col_info[1][1]))
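Finally, here is a complete Scala sketch of the programmatic option, applied to the fruit file from the original question (column widths 3, 10, 5, 4). The file name fruits.txt and the column names are made up for illustration:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, trim}

    object FixedWidthReader {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("fixed-width").getOrCreate()

        val widths = Seq(3, 10, 5, 4)
        val names  = Seq("id", "fruit", "flag", "price")  // assumed column names

        // substr() is 1-based, so the running offsets give starts 1, 4, 14, 19
        val starts = widths.scanLeft(1)(_ + _).init

        val raw = spark.read.text("fruits.txt")           // assumed file name

        // Carve one column per (name, (start, width)) pair, then drop the raw line
        val parsed = names.zip(starts.zip(widths)).foldLeft(raw) {
          case (df, (name, (start, len))) =>
            df.withColumn(name, trim(col("value").substr(start, len)))
        }.drop("value")

        parsed.show()
      }
    }

trim strips the padding spaces; after that the columns can be cast to their real types, for example col("price").cast("double").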
