05-16-2019 01:49 AM
05-16-2019 05:15 AM
Hi @THIAM HUAT TAN
I don't think there is a way to specify that when reading the file. However, after reading it you can create a monotonically increasing id (a new column) and then filter for the ids that are greater than 4.
Alternatively, you can apply take(4), create an RDD out of the result, and then apply a subtract transformation between the original RDD and that small RDD. A rough sketch of both ideas follows.
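Here is a minimal sketch of both approaches (a hedged example: the SparkSession name spark and the file name sample.csv are assumptions, and the row counts mirror the numbers in this reply):

from pyspark.sql import functions as F

# Idea 1: tag every row with a monotonically increasing id, then filter.
# Caveat: the ids are increasing but not contiguous across partitions, so this
# matches physical line order reliably only when the leading rows sit in the
# first partition (e.g. a single small file).
df = spark.read.csv("sample.csv")
df = df.withColumn("row_id", F.monotonically_increasing_id())
df = df.filter(F.col("row_id") > 4).drop("row_id")

# Idea 2: take the first rows into a small RDD, then subtract them.
# Caveat: subtract also removes any later rows that are byte-identical to the
# taken ones, and it does not preserve row order.
rdd = spark.sparkContext.textFile("sample.csv")
head = spark.sparkContext.parallelize(rdd.take(4))
data_rdd = rdd.subtract(head)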
Please let us know whether it works for you.
Thanks
05-16-2019 08:47 PM
My sample data is as above, and I need the data from Row 6 onwards, with Row 6 as the header. Rows 1 to 5 are redundant. I am not sure how to implement your suggestion. Thanks.
04-20-2020 04:39 AM
I also have the same issue. Has it been resolved?
What is the resolution? Please advise. Thanks.
06-03-2020 09:26 AM
any resolution?
06-21-2020 08:59 AM
I resolved it using the function monotonically_increasing_id and a little logic to set the column names.
Java 1.8 is required for this, because collect() raises an error on Java 11.
from pyspark.sql import functions as F

df = df.withColumn('index', F.monotonically_increasing_id())
cols = df.columns
values = df.filter('index = 0').collect()  # collect the row whose values become the header; adjust the index for the lines you skip
for i in range(len(cols)):
    if cols[i] != 'index':
        df = df.withColumnRenamed(cols[i], values[0][i])
df = df.filter('index > 0').drop('index')  # drop the consumed header row and the helper column
05-11-2021 05:23 AM
According to the docs of spark.read.csv(...), the path argument can be an RDD of strings:

path : str or list
    string, or list of strings, for input path(s), or RDD of Strings storing CSV rows.

With that, you may use spark.sparkContext.textFile(...) in combination with zipWithIndex(...) to perform the necessary row filtering. Putting things together, this may look as follows:
n_skip_rows = ?  # set to the number of leading rows to skip

row_rdd = spark.sparkContext \
    .textFile(your_csv_file) \
    .zipWithIndex() \
    .filter(lambda row: row[1] >= n_skip_rows) \
    .map(lambda row: row[0])
df = spark.read.csv(row_rdd, ...)
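A small follow-up, since the question above wants row 6 as the header (this variant is an assumption, not part of the original answer): skip the first five lines and let the CSV reader treat the first remaining line as the header.

n_skip_rows = 5                            # rows 1-5 are redundant
df = spark.read.csv(row_rdd, header=True)  # the first remaining line (old row 6) supplies the column names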
Hope that helps.
11-28-2022 07:54 AM
You can provide the `skipRows` option while reading.
spark.read.format("csv").option("skipRows", 4).load("<filepath>")
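For the original question (header on row 6), a hedged variant combines this with the header option (the path is a placeholder, and skipRows appears to be a Databricks-specific option rather than a documented open-source Spark one, as the next reply notes):

df = (spark.read.format("csv")
      .option("skipRows", 5)      # skip the five redundant lines
      .option("header", "true")   # row 6 then supplies the column names
      .load("<filepath>"))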
09-21-2023 12:48 AM
The option .option("skipRows", <number of rows to skip>) works for me as well. However, I am surprised that the official Spark doc does not list it as a CSV Data Source Option: https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option
@User16844409535 Did you find documentation on that somewhere else?