JDBC Connection query row limit
a week ago
Anyone know how to increase the number of rows returned by a JDBC query? Currently we're receiving 1000 rows per query.
I've tried adding LIMIT 5000 to the end of the query, but no luck.
a week ago
Hi @glevin ,
By default, the limit for JDBC queries is 1000 rows. You can try setting the maxRows parameter in the JDBC URL to 5000. Example below:
jdbc:spark://<databricks-host>:443/default;httpPath=<http-path>;AuthMech=3;maxRows=5000;
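If it helps to rule out the client tool, here's a minimal Java sketch exercising that URL directly. It assumes the Databricks/Simba JDBC driver is on the classpath; the host, path, credentials, and table name are placeholders, not values from this thread:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxRowsCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders as in the URL above; with AuthMech=3 the user is
        // typically the literal "token" and the password a personal access
        // token, but verify against your driver's documentation.
        String url = "jdbc:spark://<databricks-host>:443/default;"
                + "httpPath=<http-path>;AuthMech=3;maxRows=5000;";
        try (Connection conn = DriverManager.getConnection(url, "token", "<personal-access-token>");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM my_table")) {
            int count = 0;
            while (rs.next()) count++;
            // If this prints 5000 but your client tool still shows 1000,
            // the cap is in the tool rather than the driver.
            System.out.println("Rows returned: " + count);
        }
    }
}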
Hope this helps!
a week ago
Thanks for your response! I just tried your suggestion, but it's still returning 1000 rows.
Here's my full connection string:
jdbc:databricks://adb-xxxx.xx.azuredatabricks.net:443;httpPath=sql/protocolv1/o/xxxx/xxxx;AuthMech=11;Auth_Flow=1;OAuth2ClientId=xxxx;OAuth2Secret=xxxx;ConnCatalog=ndp_uat;ConnSchema=appian;EnableArrow=0;maxRows=5000;
a week ago
Hi @glevin , no problem, we can try a different approach. Can you please try the fetchsize option, as below?
jdbc:databricks://adb-xxxx.xx.azuredatabricks.net:443;httpPath=sql/protocolv1/o/xxxx/xxxx;AuthMech=11;Auth_Flow=1;OAuth2ClientId=xxxx;OAuth2Secret=xxxx;ConnCatalog=ndp_uat;ConnSchema=appian;EnableArrow=0;fetchsize=5000;
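If the URL key isn't recognized, the standard JDBC API exposes the same knobs programmatically: Statement.setFetchSize(int) hints how many rows are pulled per network round trip (it should not cap the total), and Statement.setMaxRows(0) clears any client-side cap on the total. A minimal sketch under those assumptions; the table name is illustrative:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeCheck {
    public static void main(String[] args) throws Exception {
        // Same connection string as above, minus the fetchsize key.
        String url = "jdbc:databricks://adb-xxxx.xx.azuredatabricks.net:443;"
                + "httpPath=sql/protocolv1/o/xxxx/xxxx;AuthMech=11;Auth_Flow=1;"
                + "OAuth2ClientId=xxxx;OAuth2Secret=xxxx;"
                + "ConnCatalog=ndp_uat;ConnSchema=appian;EnableArrow=0;";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(5000); // rows per round trip, a hint only
            stmt.setMaxRows(0);      // 0 = no client-side cap on total rows
            int count = 0;
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM my_table")) {
                while (rs.next()) count++;
            }
            System.out.println("Rows returned: " + count);
        }
    }
}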
a week ago
Just tried it; same issue, it still returns 1000 rows.
a week ago
@glevin , that is unfortunate. Here are a few other options you could try:
1. Make sure you are using the latest version of the JDBC driver.
2. Try setting the fetch size before the query executes, instead of in the URL, as below:
SET spark.databricks.jdbc.fetchsize = 5000;
SELECT * FROM my_table;
3. If you are open to using the REST API, it can return all the records; see the sketch after the links below.
Reference links for the API: https://docs.databricks.com/api/workspace/introduction
https://docs.databricks.com/api/workspace/statementexecution/executestatement
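For option 3, a rough sketch of calling the Statement Execution API from Java. The endpoint path follows the docs linked above; the request fields shown (warehouse_id, statement, wait_timeout) are my reading of those docs and worth verifying there, and the host, warehouse ID, and table name are placeholders:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatementExecutionSketch {
    public static void main(String[] args) throws Exception {
        String host = "https://adb-xxxx.xx.azuredatabricks.net";
        String token = System.getenv("DATABRICKS_TOKEN"); // personal access token
        String body = """
                {
                  "warehouse_id": "<warehouse-id>",
                  "statement": "SELECT * FROM ndp_uat.appian.my_table",
                  "wait_timeout": "30s"
                }""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(host + "/api/2.0/sql/statements"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Large results come back in chunks; follow the chunk links in the
        // response JSON to page through every row, so there is no fixed
        // 1000-row ceiling here.
        System.out.println(response.body());
    }
}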
Saturday
Hi @glevin ,
Can I check what your data source is (Azure SQL, Oracle, MySQL, etc.)? Maybe the driver for your source is just picking up its default fetch value.
Can you share profiler output showing that it's returning 1000 records rather than the desired 5000?
20 hours ago
Thanks all for your help.
Looks like the bottleneck is the tool I'm using to make the connection (Appian). It limits JDBC responses to 1000 rows.

