How does a filter condition work in a Spark DataFrame?
01-16-2017 06:42 AM
I have a table in HBase with 1 billion records. I want to filter the records based on a certain condition (by date).
For example:
Dataframe.filter(col("date") === todayDate)
Will the filter be applied after all records from the table are loaded into memory, or will I get only the filtered records?
12-19-2018 02:11 AM
Hello @senthil kumar
To pass external values to the filter (or where) transformations, you can use the "lit" function in the following way:
Dataframe.filter(col("date") == lit(todayDate))
I don't know if that helps. Be careful with the schema inferred for the DataFrame: if your column is of string type, pass a string; if you are working with timestamps, make "todayDate" a timestamp, and so on.
You should import the "lit" function in the same way as you import the "col" function:
from pyspark.sql.functions import lit, col
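Putting it together, here is a minimal self-contained sketch (the column name "date" and the sample rows are made up for illustration):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit

spark = SparkSession.builder.appName("filter-example").getOrCreate()

# Toy DataFrame standing in for the real table
df = spark.createDataFrame(
    [("2017-01-15", 1), ("2017-01-16", 2)],
    ["date", "value"],
)

todayDate = "2017-01-16"
# Keep only the rows whose "date" equals the external value
df.filter(col("date") == lit(todayDate)).show()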
This works in Python; I can't say whether the same works for Scala. The variable todayDate could be the changing variable of a loop. Let's say:
dates_list = ["25-03-1990", "25-04-1990", "25-05-1990"]
for todayDate in dates_list:
    filtered = Dataframe.filter(col("date") == lit(todayDate))
    ## transformations or actions you want to do ##
I think there is a better way to do it with Spark functions, but I didn't have the chance to look into it.
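One such way, just as a sketch: a single filter with isin() avoids the loop entirely (again assuming the column is named "date"):

from pyspark.sql.functions import col

dates_list = ["25-03-1990", "25-04-1990", "25-05-1990"]
# One filter over all the dates instead of one filter per iteration
filtered = Dataframe.filter(col("date").isin(dates_list))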
Will the filter be applied after all records from the table are loaded into memory, or will I get only the filtered records?
I guess the file the data is read from is already associated with the DataFrame "Dataframe". Apache Spark does not modify the data eagerly; it just keeps track of the transformations and actions you want to apply and then performs only the computations needed for the output you have chosen. This works through lazy evaluation, with the work distributed across partitions and executors. I can't find the "Gentle Introduction to Apache Spark", which helps to understand those concepts. This link could help: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3463...
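If you want to check this yourself, you can inspect the physical plan with explain(). A sketch, assuming a hypothetical Parquet source (the path and column name are made up):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/path/to/table")
df.filter(col("date") == lit("2017-01-16")).explain()
# For sources that support predicate pushdown (Parquet does; whether an
# HBase connector does depends on the connector), the plan shows a
# PushedFilters entry, meaning Spark asks the source to filter rows
# before they are loaded into memory.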
Good luck!! 🙂
muk!!

