Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How does a filter condition work on a Spark DataFrame?

senthilkumar
New Contributor

I have a table in HBase with 1 billion records. I want to filter the records based on a certain condition (by date).

For example:

Dataframe.filter(col("date") === todayDate)

Will the filter be applied only after all the records from the table are loaded into memory, or will I get just the filtered records?

1 REPLY

muk1
New Contributor II

Hello @senthilkumar

To pass external values to the filter (or where) transformations, you can use the "lit" function in the following way:

Dataframe.filter(col("date") == lit(todayDate))

I don't know if that helps. Be careful with the schema inferred for the DataFrame: if your column is of string type, pass a string; if you are working with timestamps, make "todayDate" a timestamp, and so on.

You should import the "lit" function in the same way as you import the "col" function:

from pyspark.sql.functions import lit, col
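
Putting those pieces together, here is a minimal self-contained sketch (the DataFrame name "df", the column name "date", and the format string are placeholders for your own):

from pyspark.sql.functions import col, lit, to_date

todayDate = "25-03-1990"

# If the "date" column is a string, compare against a string literal:
filtered = df.filter(col("date") == lit(todayDate))

# If the column is a real date/timestamp, convert the literal to match the type:
filtered = df.filter(col("date") == to_date(lit(todayDate), "dd-MM-yyyy"))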

This works in Python; I cannot say whether it works for Scala too. The variable todayDate could be the changing variable of a loop. Let's say:

dates_list = ["25-03-1990", "25-04-1990", "25-05-1990"]
for todayDate in dates_list:
    filtered = Dataframe.filter(col("date") == lit(todayDate))
    # ... transformations or actions you want to run on filtered ...

I think there is a better way to do it with Spark functions, but I didn't have the chance to look into it.
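
One such option is Column.isin, which handles all the dates in a single filter instead of looping; a minimal sketch (still assuming a string "date" column):

from pyspark.sql.functions import col

dates_list = ["25-03-1990", "25-04-1990", "25-05-1990"]

# One pass over the data instead of one filter per date as in the loop above.
filtered = Dataframe.filter(col("date").isin(dates_list))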

Will the filter be applied only after all the records from the table are loaded into memory, or will I get just the filtered records?

I guess the source the data is read from is already associated with the DataFrame "Dataframe". Apache Spark does not modify the data; it just keeps track of the transformations and actions you want to run over it and then performs only the computations needed for the output you have chosen. This is done with partitions (and executors) and lazy evaluation. I can't find the "Gentle introduction to Apache Spark", which helps to understand those concepts. This link could help: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3463...
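
A small sketch of what lazy evaluation looks like in practice (whether the filter is actually pushed down to the source depends on the connector you read HBase with):

from pyspark.sql.functions import col, lit

# filter is a transformation: this line returns immediately, nothing is read yet.
filtered = Dataframe.filter(col("date") == lit(todayDate))

# explain() prints the physical plan; for sources that support predicate
# pushdown, the condition shows up under "PushedFilters".
filtered.explain()

# Only an action such as count() actually triggers reading and filtering.
print(filtered.count())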

Good luck!! 🙂

muk!!
