How to read a file in PySpark with a "]|[" delimiter

lambarc
New Contributor II

The data looks like this:

pageId]|[page]|[Position]|[sysId]|[carId
0005]|[bmw]|[south]|[AD6]|[OP4

There are at least 50 columns and millions of rows.

I tried to read it with the code below:

dff = sqlContext.read.format("com.databricks.spark.csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("delimiter", "]|[") \
    .load(trainingdata + "part-00000")

It gives me the following error:

IllegalArgumentException: u'Delimiter cannot be more than one character: ]|['

7 REPLIES

SamKlingner
New Contributor III

Sorry if this notifies everyone again - struggling with the text editor here!

I'm not sure of a workaround for splitting directly on more than one character. There are two ways around this that I can see:

The first approach would be to split using "|" and then strip the square brackets:

dff = sqlContext.read.format("com.databricks.spark.csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("delimiter", "|") \
    .load(trainingdata + "part-00000")

from pyspark.sql.functions import regexp_replace

dff = dff.withColumn('pageId', regexp_replace('[pageId]', r'[\[\]]', ''))
# ...and so on for the other columns...

Given that you have 50 columns, you might want to loop through them:

dffs_headers = dff.dtypes
for i in dffs_headers:
    newColumnLabel = i[0].replace('[', '').replace(']', '')
    dff = dff.withColumn(newColumnLabel, regexp_replace(i[0], r'[\[\]]', '')).drop(i[0])
You'll still need to build a function or add cases to this loop to correctly cast each column, though. The second way would be to go via an RDD:
dff = sc.textFile(trainingdata + "part-00000") \
    .map(lambda x: x.replace('[', '').replace(']', '').split('|')) \
    .toDF()

But that still leaves you with the casting problem. Hope that helps!
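For the casting itself, here is a minimal sketch, assuming a hypothetical mapping of cleaned column names to target types; adjust it to your real schema:

from pyspark.sql.functions import col

# Hypothetical name-to-type mapping; extend it to cover all 50 columns.
target_types = {'pageId': 'int', 'page': 'string', 'Position': 'string', 'sysId': 'string', 'carId': 'string'}
for name, dtype in target_types.items():
    dff = dff.withColumn(name, col(name).cast(dtype))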

The code block button just isn't playing nice, sorry.

sher
Valued Contributor II

val df = spark.read.format("csv")
  .option("header", true)
  .option("sep", "]|[")
  .load("file load")

display(df)

Try this.
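The same idea in PySpark, as a sketch: it assumes Spark 3.0 or later, where the CSV reader accepts multi-character separators, and uses a placeholder path.

# Requires Spark 3.0+; earlier versions reject separators longer than one character.
df = (spark.read.format("csv")
      .option("header", True)
      .option("sep", "]|[")
      .load("/path/to/input"))  # placeholder path
display(df)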

Meghala
Valued Contributor II

Might be useful.

rohit199912
New Contributor II

You might also try the options below.

1) Use a different file format: you can switch to a format that doesn't depend on a single-character delimiter, such as JSON.

2) Use a custom Row class: you can write your own parsing for the multi-character delimiter, then use the spark.read.text API to read the file as text and apply that parsing to each line to extract the values (a sketch follows this list).
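A minimal sketch of the second option, using spark.read.text plus a regex split rather than a literal Row class; the column names are taken from the sample in the question, and the header handling is illustrative:

from pyspark.sql.functions import split, col

# Each line arrives as a single string column named "value".
raw = spark.read.text(trainingdata + "part-00000")

# Split on the literal "]|[" (escaped, since ], | and [ are regex metacharacters).
parts = raw.select(split(col("value"), r"\]\|\[").alias("parts"))

# Hypothetical: promote the array elements to named columns; note the header
# line is still a data row here and must be filtered out or used for the names.
names = ["pageId", "page", "Position", "sysId", "carId"]
df = parts.select(*[col("parts")[i].alias(n) for i, n in enumerate(names)])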

This one works. Thanks.

Manoj12421
Valued Contributor II

Yes, this one is useful, but what if we need to use the CSV format only? Is there any other approach you can share? @ROHIT AGARWAL
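One way to stay with CSV, as a hedged sketch: rewrite the raw text first, replacing "]|[" with a single character that is assumed not to occur in the data, then parse the result with the ordinary CSV reader. The paths here are placeholders.

from pyspark.sql.functions import regexp_replace

# Rewrite the multi-character delimiter to a single control character.
lines = spark.read.text("/path/to/input")  # placeholder path
fixed = lines.withColumn("value", regexp_replace("value", r"\]\|\[", "\x01"))
fixed.write.mode("overwrite").text("/tmp/fixed-delimiter")  # placeholder path

# A single-character separator works with the CSV reader on any Spark version.
df = (spark.read.format("csv")
      .option("header", True)
      .option("sep", "\x01")
      .load("/tmp/fixed-delimiter"))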
