Hi @dave_d, the "Columnar To Row" node in your query execution plan comes from the Apache Spark™ SQL execution engine.
Spark SQL processes data in a columnar, vectorized format in memory (for example, when scanning Parquet files), which improves cache locality and lets operators work on batches of values at a time, often speeding up queries significantly. However, not every operator can consume columnar batches. Some operators, such as certain types of joins or aggregations, need their input one row at a time. When the plan reaches such an operator, Spark inserts a conversion step from columnar batches to rows, and that is exactly what the "Columnar To Row" node in your execution plan is doing.
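To make the conversion concrete, here is a toy sketch in plain Python (not Spark internals, and not a Spark API): a columnar batch stores one array per column, while row-based operators need one tuple per record, so the batch has to be "pivoted" before they can consume it. The function and variable names below are purely illustrative.

```python
def columnar_to_row(batch: dict[str, list]) -> list[tuple]:
    """Convert a column-oriented batch {column_name: values} into a list of rows."""
    columns = list(batch.values())
    # zip(*columns) walks all columns in lockstep, yielding one row per step
    return list(zip(*columns))

# A small columnar batch: each key holds an entire column's values.
batch = {
    "id": [1, 2, 3],
    "name": ["a", "b", "c"],
}

rows = columnar_to_row(batch)
print(rows)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```

Spark's real implementation operates on `ColumnarBatch` objects and generates code for this transposition, but the data movement it performs is conceptually the same, which is why the node shows up as an explicit step in the plan.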
In your case, the join in your query is the likely trigger for the conversion: even though you're only writing the result back to another table, the join operator may need its input in row format for processing. You can confirm where the conversion happens by inspecting the physical plan with `explain()` (or the SQL tab in the Spark UI) and checking which operator sits directly above the "Columnar To Row" node.