01-18-2017 01:14 PM
The data looks like this:
pageId]|[page]|[Position]|[sysId]|[carId 0005]|[bmw]|[south]|[AD6]|[OP4
There are at least 50 columns and millions of rows.
I tried using the code below to read it:
dff = (sqlContext.read.format("com.databricks.spark.csv")
       .option("header", "true")
       .option("inferSchema", "true")
       .option("delimiter", "]|[")
       .load(trainingdata + "part-00000"))
It gives me the following error:
IllegalArgumentException: u'Delimiter cannot be more than one character: ]|['
01-19-2017 09:25 AM
Sorry if this notifies everyone again - struggling with the text editor here!
I'm not sure of a workaround for splitting directly on more than one character, but there are two ways around this that I can see:
The first approach would be to split using "|" and then strip the square brackets:
dff = (sqlContext.read.format("com.databricks.spark.csv")
       .option("header", "true")
       .option("inferSchema", "true")
       .option("delimiter", "|")
       .load(trainingdata + "part-00000"))

from pyspark.sql.functions import regexp_replace

# Strip the leftover square brackets from the values of each column
dff = dff.withColumn('pageId', regexp_replace('[pageId]', r'[\[\]]', ''))
# ...other columns...
Given that you have 50 columns, you might want to loop through them:
dffs_headers = dff.dtypes
for i in dffs_headers:
    # Build a bracket-free column name, clean the values, then drop the old column
    newColumnLabel = i[0].replace('[', '').replace(']', '')
    dff = dff.withColumn(newColumnLabel, regexp_replace(i[0], r'[\[\]]', '')).drop(i[0])
You'll still need to build a function or add cases to this loop to correctly cast each column though. The second way would be to go via RDD:
dff = sc.textFile(trainingdata + "part-00000").map(lambda x: x.replace('[', '').replace(']', '').split('|')).toDF()
But that still leaves you with the casting problem. Hope that helps!
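For the casting, here is a minimal sketch, assuming the bracket-free column names produced by the loop above and guessing that pageId and Position should be integers (adjust the list to your real schema):

from pyspark.sql.functions import col

# Hypothetical list of columns to cast to integers - edit to match your data
int_columns = ['pageId', 'Position']
for c in int_columns:
    dff = dff.withColumn(c, col(c).cast('int'))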
01-19-2017 09:26 AM
The code block button just isn't playing nice, sorry.
01-11-2023 09:39 AM
val df = spark.read.format("csv")
.option("header",true)
.option("sep","||")
.load("file load")
display(df)
Try this.
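On Spark 3.0 and later the CSV reader also accepts a multi-character sep, so in PySpark something like this should handle the original ]|[ delimiter directly (a sketch reusing the trainingdata path from the question):

# Multi-character separators are supported by the CSV reader from Spark 3.0 onwards
df = (spark.read.format("csv")
      .option("header", "true")
      .option("sep", "]|[")
      .load(trainingdata + "part-00000"))
display(df)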
02-04-2023 04:37 AM
Might be useful.
01-31-2023 10:59 PM
You might also try the below options.
1). Use a different file format: You can try a different file format that avoids the multi-character delimiter problem, such as text or JSON.
2). Use a custom Row class: You can read the file as plain text with the spark.read.text API and parse the multi-character delimiter yourself, applying a custom Row class (or a simple split) to each line to extract the values (see the sketch below).
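A minimal sketch of option 2, assuming the ]|[ delimiter from the original question and a hypothetical file path (raw_path); it reads the file as text and splits each line on the escaped delimiter:

from pyspark.sql.functions import split, col

raw_path = "/path/to/part-00000"  # hypothetical path - replace with your file

lines = spark.read.text(raw_path)

# Split each line on the literal ]|[ delimiter (escaped for the regex engine)
parts = lines.select(split(col("value"), r"\]\|\[").alias("cols"))

# Pull out individual columns by position and name them;
# note that a header line, if present, will also appear as a data row
df = parts.select(
    col("cols").getItem(0).alias("pageId"),
    col("cols").getItem(1).alias("page"),
    col("cols").getItem(2).alias("Position"),
)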
02-03-2023 03:39 AM
This one works, thanks.
02-04-2023 06:12 AM
Yes, this one is useful, but what if we need to use the CSV format only? Is there another query you can share? @ROHIT AGARWAL