Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to get the number of rows inserted after performing an INSERT operation into a table

HariharaSam
Contributor

Consider we have two tables, A and B.

qry = """
INSERT INTO A
SELECT * FROM B WHERE Id IS NULL
"""

spark.sql(qry)

I need to get the number of records inserted after running this in databricks.

1 ACCEPTED SOLUTION


Hubert-Dudek
Esteemed Contributor III

@@ROWCOUNT is a T-SQL function, not Spark SQL. I haven't found anything like it in the documentation, but there is another way: every INSERT returns num_affected_rows and num_inserted_rows fields anyway.

So you can for example use

df.first()['num_inserted_rows'] 

or use a subquery and SELECT in SQL syntax.
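A minimal Python sketch of the approach above, wrapped in a helper. It assumes, per this answer, that on Databricks `spark.sql("INSERT ...")` returns a DataFrame exposing a `num_inserted_rows` column (plain open-source Spark may not return these metric columns); the function name `count_inserted` is made up for illustration:

```python
def count_inserted(spark, insert_sql):
    """Run an INSERT statement and return the number of rows inserted.

    Assumes the DataFrame returned by spark.sql() for an INSERT carries
    a `num_inserted_rows` column, as Databricks does for Delta tables.
    """
    result = spark.sql(insert_sql)       # DataFrame with metric columns
    row = result.first()                 # single row of INSERT metrics
    return row["num_inserted_rows"]
```

For example: `count_inserted(spark, "INSERT INTO A SELECT * FROM B WHERE Id IS NULL")`.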

I am including example screenshots.



10 REPLIES

Kaniz_Fatma
Community Manager

Hi @Hariharan Sambath​ , you can use @@ROWCOUNT just after your insert statement.

Hi,

I am getting a syntax error when I run @@ROWCOUNT after the insert statement.

I am running the code in Databricks.

Hi @Hariharan Sambath​ ,

use @@ROWCOUNT just after your insert statement:

qry = """
INSERT INTO A
SELECT * FROM B WHERE Id IS NULL
SELECT @@ROWCOUNT
"""
spark.sql(qry)

Hi @Kaniz Fatma​ ,

I have tried the way you mentioned, but it still throws an error.

Hi,

My requirement is that I will be creating a Python function to perform an insert operation into a Delta table; that is why I am running it in a Python cell.

I will pass a table name to that function, and I need to get the number of records inserted into the table once the function executes.

So any solution to achieve this?


Hi @Hubert Dudek​ 

Your approach is working for me.

Thank you.

Hubert-Dudek
Esteemed Contributor III

Great! When you can, please select it as the best answer.

Tim3
New Contributor II

@Hubert Dudek​, when I execute a similar piece of code in VSCode through databricks-connect, the DataFrame contains one row with no columns, which is a problem. Executing the same code in a notebook on the same cluster works as you stated. Is this possibly a bug in databricks-connect?

GRCL
New Contributor III

Almost the same advice as Hubert's; I use the history of the Delta table:

df_history.select(F.col('operationMetrics')).collect()[0].operationMetrics['numOutputRows']

You can also find other 'operationMetrics' values, like 'numTargetRowsDeleted'.
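A minimal sketch of this history-based approach. It assumes the history has already been collected (e.g. via `DeltaTable.forName(spark, "A").history().collect()`, newest entry first) and that `operationMetrics` is a map of string values, so the counter needs an int conversion; the helper name `last_output_rows` is made up for illustration:

```python
def last_output_rows(history_rows):
    """Return numOutputRows from the most recent Delta history entry.

    `history_rows` is a collected table history, newest-first; each
    entry's operationMetrics maps metric names to string values.
    """
    metrics = history_rows[0]["operationMetrics"]   # latest operation
    return int(metrics["numOutputRows"])            # values are strings
```

The same pattern reads other counters, e.g. `metrics["numTargetRowsDeleted"]` when present.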
