Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Loop through Dataframe in Python

FernandoBenedet
New Contributor

Hello,

Imagine you have a dataframe with columns A, B, and C. I want to add a column D based on some calculation over columns B and C of the previous record of the dataframe. What is the best way of doing this? I am trying to avoid looping through the dataframe. I am using Python.

Thanks.

Fernando.

2 REPLIES

ColbyCarrillo
New Contributor II

I would probably use a window function within pyspark.

Link to Databricks blog: https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html

Another option is to use the lag or lead functions to help you capture the data in the relative position you need.

You can find them here in the SQL functions list: https://docs.databricks.com/spark/latest/spark-sql/language-manual/functions.html
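
To illustrate the window-function route, here is a minimal PySpark sketch. It assumes column A can serve as the ordering key that defines the "previous record", and the calculation D = previous B + previous C is only a placeholder; swap in your own logic.

# Minimal sketch: derive column D from the previous row's B and C using a window + lag.
# Assumes column A defines the row order; replace the calculation as needed.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # already available as `spark` in a Databricks notebook

df = spark.createDataFrame(
    [(1, 10.0, 100.0), (2, 20.0, 200.0), (3, 30.0, 300.0)],
    ["A", "B", "C"],
)

w = Window.orderBy("A")  # lag() looks one row back in this ordering
df_with_d = df.withColumn(
    "D",
    F.lag("B").over(w) + F.lag("C").over(w),  # placeholder calculation; first row gets null
)

df_with_d.show()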

quincybatten
New Contributor II

Iterating through pandas DataFrame objects is generally slow. Iterating row by row defeats the whole purpose of using a DataFrame: it is an anti-pattern and something you should only do when you have exhausted every other option. It is better to look for a list comprehension, a vectorized solution, or the DataFrame.apply() method.

Pandas DataFrame loop using a list comprehension:

# Build a list of (Name, Promoted, Grade) tuples without calling iterrows()
result = [(x, y, z) for x, y, z in zip(df['Name'], df['Promoted'], df['Grade'])]
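
To tie this back to the original question, a vectorized pandas sketch using shift() avoids the loop entirely. Column names A, B, C come from the question; the calculation D = previous B + previous C is only an assumed example.

# Minimal sketch: DataFrame.shift() gives the previous row's values without a Python loop.
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [10.0, 20.0, 30.0], "C": [100.0, 200.0, 300.0]})

# Example calculation: D = previous row's B + previous row's C (NaN for the first row).
df["D"] = df["B"].shift(1) + df["C"].shift(1)
print(df)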
