10-17-2022 03:11 PM
Hi,
I have a few questions about "Pandas API on Spark". Thanks for taking the time to read them.
1) Is the input to these functions a pandas DataFrame or a PySpark DataFrame?
2) When I use a pandas function (such as isna, size, apply, or where), does it run on a single node or across multiple nodes?
Thanks.
10-24-2022 11:49 AM
I would like to share the following information, which might help you.
Pandas API on Spark bridges the gap between pandas and Spark by providing pandas-equivalent APIs that work on Apache Spark. Pandas API on Spark is useful not only for pandas users but also for PySpark users, because it supports many tasks that are difficult to do with PySpark, for example plotting data directly from a PySpark DataFrame. Doc: https://docs.databricks.com/_static/notebooks/pandas-to-pandas-api-on-spark-in-10-minutes.html
10-18-2022 05:46 AM
Hi @Mohammad Saber,
A pandas dataset lives on a single machine and is naturally iterable locally. A pandas-on-Spark dataset, however, lives across multiple machines and is computed in a distributed manner. It is difficult to iterate over locally, and users can easily end up collecting the entire dataset onto the client side without realizing it. Therefore, it is best to stick to the pandas-on-Spark APIs.
Please refer to:
https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html
https://docs.databricks.com/languages/pandas-spark.html
Please let us know if you need further clarification; we are more than happy to assist you further.
10-18-2022 02:21 PM
@Debayan Mukherjee
Thanks for your help.
I have a question about the terms "pandas dataset" and "pandas-on-Spark dataset".
When you say "dataset", do you mean "DataFrame"?
If I create a pandas-on-Spark dataset, can I apply pandas functions to it, or should I convert it to a pandas dataset before such a computation?
If I need to convert it to a pandas dataset, I assume the computation will then run on a single node. Is that correct?
10-25-2022 02:02 AM
Thanks for your reply.
I just want to confirm that Pandas API on Spark uses Spark's parallelism, i.e. computations run across multiple nodes.