Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

PYODBC very slow - 30 minutes to write 6000 rows

New Contributor

Along with several other issues I'm encountering, I am finding pandas DataFrame.to_sql to be very slow.

I am writing to an Azure SQL database and performance is woeful. This is a test database on the S3 tier (100 DTU) with a single user: me. It's quick enough for most other tasks, but terrible for this.

I don't see any issues in the database monitoring to suggest I've exceeded the DTU limit or anything similar.

This is my connection configuration, with appropriate redactions:

from sqlalchemy import create_engine
from sqlalchemy.engine import URL

connection_string = (
    "Driver={ODBC Driver 18 for SQL Server};"
    ";"  # server and credential entries redacted
    "Database=datawarehousename;TrustServerCertificate=yes;"
)
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
engine = create_engine(connection_url, fast_executemany=True)

This is my DataFrame.to_sql write to the database:

dfoutDB.to_sql("ResourceLogRpt", schema="reporting", con=engine, if_exists='append', chunksize=10000, index=False)

Changing chunksize isn't making a difference.
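One way to rule out pandas itself is to write a frame of the same size to a local in-memory SQLite database: if that completes almost instantly while the Azure write takes 30 minutes, the time is going into the per-batch ODBC round-trips, not into the DataFrame machinery. A minimal sketch with a stand-in frame (the column names are made up; they are not from the original post):

```python
import pandas as pd
from sqlalchemy import create_engine

# Stand-in for dfoutDB: 6000 rows, comparable to the problematic write
df = pd.DataFrame({"url": ["/a/b?x=1"] * 6000, "hits": range(6000)})

# Same to_sql call shape as above, but against in-memory SQLite;
# this write of 6000 rows should finish in well under a second
local = create_engine("sqlite://")
df.to_sql("ResourceLogRpt", con=local, if_exists="append", chunksize=10000, index=False)

count = pd.read_sql("SELECT COUNT(*) AS n FROM ResourceLogRpt", local)["n"][0]
print(count)
```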

Between reading the data and writing it, I've manipulated URL data to extract directory elements and query parameters.
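For reference, that kind of directory-and-parameter extraction can be done with the standard library's urllib.parse; a sketch with a made-up URL (the real column values are not shown in the post):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URL of the kind being split into directory elements and parameters
raw = "/logs/app1/resource.json?region=eu&day=3"
parsed = urlparse(raw)

directories = parsed.path.strip("/").split("/")  # path segments as a list
params = parse_qs(parsed.query)                  # query string as dict of lists
print(directories, params)
```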

Is there a way to verify that fast_executemany=True is actually doing anything?
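One way to check this (my own suggestion, not from the thread) is to attach a SQLAlchemy before_cursor_execute listener and record the DBAPI cursor's fast_executemany attribute whenever an executemany batch runs. The demo below uses in-memory SQLite, whose cursor has no such attribute; against the mssql+pyodbc engine above, the log entries would be True or False instead:

```python
from sqlalchemy import create_engine, event

def probe_fast_executemany(engine, log):
    """Record the DBAPI cursor's fast_executemany flag on every executemany batch."""
    @event.listens_for(engine, "before_cursor_execute")
    def _probe(conn, cursor, statement, parameters, context, executemany):
        if executemany:
            log.append(getattr(cursor, "fast_executemany", "not supported by this driver"))

# Demo with in-memory SQLite (no pyodbc needed here); with the
# mssql+pyodbc engine, a True entry confirms the flag is active.
engine = create_engine("sqlite://")
log = []
probe_fast_executemany(engine, log)
with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE t (x INTEGER)")
    conn.exec_driver_sql("INSERT INTO t (x) VALUES (?)", [(1,), (2,), (3,)])
print(log)
```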

Any other thoughts on diagnosing what is causing this to be slow?

My cluster is 56 GB with 8 cores, DBR 1.4 with Scala 3.2.1. I would have thought that enough, as all other DataFrame manipulation is very fast, but if there's an impact I need to understand, I'd appreciate being told what I've misunderstood.




Esteemed Contributor III

Hi, thanks for reaching out to the Databricks Community.

Could you please run a profiler trace on the SQL Server side to determine what the issue is?

Also, please make sure the fast_executemany feature is enabled; it is off by default in pyodbc, and enabling it helps speed up bulk inserts.

Honored Contributor

Hi @Peter McLarty​ 

Does @Debayan Mukherjee​  response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly?

We'd love to hear from you.

