11 hours ago
Dear Experts,
I have a requirement to implement PostgreSQL CDC using Databricks Lakeflow Connect. While setting up the tables, I am unable to see the list of available tables, even though the connection settings appear to be correct.
Could you please suggest what might be causing this issue or what I should verify?
With regards,
Hari
9 hours ago
Hi @harisrinivasay ,
Try adding the database name first, then select a schema. The tables should then become visible. 🙂
5 hours ago
Hi @harisrinivasay,
@szymon_dybczak is correct. You must enter the database name. Lakeflow Connect connects to a single database, so it can only list schemas and tables once you provide the correct name. If the name is wrong, or if you don't click the "+" button after entering it, the list will remain empty.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.
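If the database name is correct but the list is still empty, it can also help to confirm from the PostgreSQL side what the connection user can actually see. The queries below are a quick sanity check run through any SQL client connected as the Lakeflow Connect user (database and object names in your environment will differ):

```sql
-- Confirm the database name you typed into Lakeflow Connect actually exists
SELECT datname FROM pg_database WHERE datistemplate = false;

-- List the schemas and tables visible to the connection user;
-- anything missing here will also be missing in the Lakeflow Connect picker
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
ORDER BY table_schema, table_name;

-- CDC prerequisite: logical replication must be enabled on the source
SHOW wal_level;  -- expected: logical
```

If `wal_level` is not `logical`, or the user cannot see the tables here, that needs fixing on the Postgres side before the connector can list anything.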
5 hours ago
The current behavior is a bit disappointing. Unfortunately, this functionality works only with Classic Compute. When I select Serverless, no databases are displayed at all.
Additionally, there is a major performance concern. In my environment, I have around 10 databases, each containing 1,000+ tables. The system attempts to fetch the complete list across all databases, which takes more than 10 minutes. This experience is far from ideal; waiting that long every single time just to add a new table is not acceptable.
The Databricks team should seriously consider improving the performance and user experience in this area.
4 hours ago
Hi @harisrinivasay,
Your feedback is valid and aligned with areas the product team is still actively evolving. Since this is a Public Preview connector, Databricks explicitly expects scalability and UX feedback like yours to shape GA behaviour.
Concretely, you'll get the most traction by opening a support ticket that covers both points: table discovery should work when serverless compute is selected in the UI, and the latency you're seeing when enumerating ~10k tables across your databases makes adding tables without repeated long waits a hard requirement. Also, if you have a Databricks account team for your company, you can loop them in and point them at that ticket and your environment's scale.
Check this page for full details on the other limitations. It notes the following:
Databricks recommends ingesting 250 or fewer tables per pipeline. However, there is no limit on the number of rows or columns that are supported within these tables.
In a big environment like yours, the UI's "browse and select tables" pattern becomes painful because it tries to enumerate a very large metadata surface. As a workaround, you may want to define the pipeline declaratively via the API or asset bundles instead of selecting tables in the UI.
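As a sketch of that non-UI route: a managed ingestion pipeline can be declared in a Databricks Asset Bundle so that only the tables you name are ingested, skipping UI enumeration entirely. The field names below follow the ingestion pipeline spec as I understand it during the Public Preview, so please verify them against the current docs; the connection, catalog, and schema names are placeholders:

```yaml
# databricks.yml resource sketch (assumed field names; check current docs)
resources:
  pipelines:
    pg_cdc_pipeline:
      name: pg-cdc-orders
      ingestion_definition:
        connection_name: my_postgres_connection   # existing UC connection
        objects:
          - table:
              source_catalog: sales               # source Postgres database
              source_schema: public
              source_table: orders
              destination_catalog: main
              destination_schema: ingest
```

Because the tables are named explicitly, adding a new one is a one-line edit and a redeploy rather than another multi-minute browse in the UI.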
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.