How to read a fixed length file in Spark using DataFrame API and SCALA
08-04-2016 10:52 AM
I have a fixed-length file (a sample is shown below) and I want to read it using the DataFrames API in Spark with Scala (not Python or Java). The DataFrames API has ways to read a text file, a JSON file and so on, but I am not sure whether there is a way to read a fixed-length file. I searched the internet for this and found a GitHub link which said to download
spark-fixedwidth-assembly-1.0.jar
for this purpose, but I was unable to find that jar anywhere. I am completely lost here and need your suggestions and help.
Here is the file
56 apple     TRUE 0.56
45 pear      FALSE1.34
34 raspberry TRUE 2.43
34 plum      TRUE 1.31
53 cherry    TRUE 1.4
23 orange    FALSE2.34
56 persimmon FALSE23.2
The fixed widths of the columns are 3, 10, 5 and 4.
Please share your suggestions.
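While waiting for a Spark-native answer, the fixed-width logic itself is just string slicing at the stated widths 3, 10, 5, 4. A minimal sketch in plain Python (the sample line here is padded out to those widths, which is an assumption, since forum formatting may have collapsed the original spacing):

```python
# Plain-Python sketch of fixed-width parsing for the column
# widths 3, 10, 5, 4 given above.
def parse_fixed_width(line, widths):
    """Cut a line into fields at the given widths and strip padding."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return fields

# Sample line padded to the stated widths (an assumption).
line = "56 apple     TRUE 0.56"
print(parse_fixed_width(line, [3, 10, 5, 4]))
# → ['56', 'apple', 'TRUE', '0.56']
```

The same slicing can then be applied per row inside Spark, e.g. via `substr` on a column, as shown in the answers below in this thread.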
08-05-2016 09:58 AM
I think you got the answer on Stack Overflow.
10-29-2021 04:50 AM
Please find below a solution that can be used.
Let us consider the following data in the file:
EMP ID    First Name               Last Name
1         Chris                    M
2         John                     R
3         Amit                     C
EMP ID: starts at position 1, length 10 characters
First Name: starts at position 11, length 25 characters
Last Name: starts at position 36, length 25 characters
DOB: starts at position 61, length 30 characters
Step 1:
Load the file into a DataFrame with the options below; this creates a DataFrame with a single column containing each full line as a string:
DFSchema = "single_col string"
Df = (spark.read
.option("header",False)
.schema(DFSchema)
.csv(file_path)
)
Step 2:
Now the single column has to be split into three columns, using either of the two options below.
Option 1: Create column by column manually
Df = (Df.withColumn("EMP_ID", Df.single_col.substr(1, 10))
    .withColumn("First Name", Df.single_col.substr(11, 25))
    .withColumn("Last Name", Df.single_col.substr(36, 25))
)
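Note that `substr` here takes a 1-based start position and a length, so `substr(start, length)` corresponds to the Python slice `s[start-1:start-1+length]`. A quick plain-Python check of that mapping against the layout above (the padded sample row is an assumption based on the stated positions):

```python
# PySpark's Column.substr(startPos, length) is 1-based:
# substr(start, length) == s[start - 1 : start - 1 + length] in plain Python.
def substr(s, start, length):
    return s[start - 1:start - 1 + length]

# Row padded to the stated layout: EMP ID at 1 (len 10),
# First Name at 11 (len 25), Last Name at 36 (len 25). An assumption.
row = "1" + " " * 9 + "Chris" + " " * 20 + "M"
print(substr(row, 1, 10).strip())   # → 1
print(substr(row, 11, 25).strip())  # → Chris
print(substr(row, 36, 25).strip())  # → M
```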
Option 2: Create columns programmatically
Define the column length details:
Col_Details = [('EMP_ID', (1, 10)), ('First Name', (11, 25)), ('Last Name', (36, 25))]
Using a loop, create the columns programmatically:
for col_info in Col_Details:
    Df = Df.withColumn(col_info[0], Df.single_col.substr(col_info[1][0], col_info[1][1]))
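The same spec-driven idea can be sketched in plain Python, driving the slices from a list of (name, (start, length)) entries like Col_Details (the padded sample row is an assumption based on the stated layout):

```python
# Spec-driven fixed-width parsing: each entry is
# (column name, (1-based start position, length)).
Col_Details = [('EMP_ID', (1, 10)), ('First Name', (11, 25)), ('Last Name', (36, 25))]

def parse_row(line, col_details):
    """Build a dict of column name -> stripped fixed-width field."""
    return {name: line[start - 1:start - 1 + length].strip()
            for name, (start, length) in col_details}

# Row padded to the stated layout (an assumption).
line = "2" + " " * 9 + "John" + " " * 21 + "R"
print(parse_row(line, Col_Details))
# → {'EMP_ID': '2', 'First Name': 'John', 'Last Name': 'R'}
```

Keeping the layout in one spec list means adding a column (e.g. DOB at position 61) only requires a new entry, not another `withColumn` line.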

