<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: &quot;Something went wrong, please try again later.&quot; On Sync tables for PostgreSQL in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140351#M51394</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/28304"&gt;@Etyr&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;This is interesting. I think you are encountering a situation where recreating a synced table with the same name leaves the UC UI holding cached or stale metadata for the prior table object, even though the new pipeline and Postgres table are fine.&lt;/P&gt;
&lt;P&gt;To avoid this, can you try the following: after recreating, force a catalog refresh in the UI (attach a running SQL warehouse and click “Refresh catalog”), or hard-refresh your browser. This helps the UI invalidate cached metadata for the prior object and pick up the new one.&lt;/P&gt;</description>
    <pubDate>Tue, 25 Nov 2025 23:20:04 GMT</pubDate>
    <dc:creator>stbjelcevic</dc:creator>
    <dc:date>2025-11-25T23:20:04Z</dc:date>
    <item>
      <title>"Something went wrong, please try again later." On Sync tables for PostgreSQL</title>
      <link>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140079#M51344</link>
      <description>&lt;P&gt;I'm using the Sync feature to load a Snowflake view from a catalog into PostgreSQL (to expose data to APIs for faster response times).&lt;/P&gt;&lt;P&gt;I've been scripting the creation of the sync, and when I create, delete, and recreate the same sync/pipeline with the same name and DB, I get this error when trying to access the synced table in my catalog.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Etyr_0-1763982491712.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/21923iBBA5525F625D0D17/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Etyr_0-1763982491712.png" alt="Etyr_0-1763982491712.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;The pipelines succeed and the data is accessible in PostgreSQL, but I have this "display" error. It only happens when I delete a synced table (plus the Postgres table and the pipeline) and recreate it with the same name/information; changing the DB or table name doesn't cause this issue.&lt;/P&gt;</description>
      <pubDate>Mon, 24 Nov 2025 11:12:26 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140079#M51344</guid>
      <dc:creator>Etyr</dc:creator>
      <dc:date>2025-11-24T11:12:26Z</dc:date>
    </item>
    <item>
      <title>Re: "Something went wrong, please try again later." On Sync tables for PostgreSQL</title>
      <link>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140351#M51394</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/28304"&gt;@Etyr&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;This is interesting. I think you are encountering a situation where recreating a synced table with the same name leaves the UC UI holding cached or stale metadata for the prior table object, even though the new pipeline and Postgres table are fine.&lt;/P&gt;
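&lt;P&gt;If the UI error persists while the data itself is fine, one workaround is to bypass the UI and poll the table through the API until its metadata resolves. A minimal retry sketch, purely illustrative (the `fetch` callable, attempt count, and delay are placeholder choices, not an official recipe):&lt;/P&gt;

```python
import time

def poll_until_ready(fetch, attempts=5, delay=2.0):
    """Call `fetch` until it stops raising; return its result, or re-raise the last error."""
    last_err = None
    for _ in range(attempts):
        try:
            return fetch()
        except Exception as err:  # e.g. NotFound while metadata is still stale
            last_err = err
            time.sleep(delay)
    raise last_err

# Usage sketch, assuming an authenticated WorkspaceClient `w` and a fully
# qualified three-level name for the recreated synced table:
# table = poll_until_ready(lambda: w.tables.get(full_name="catalog.schema.table"))
```

&lt;P&gt;If `w.tables.get` still errors after recreation, that would point at the backing metadata rather than browser cache.&lt;/P&gt;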
&lt;P&gt;To avoid this, can you try the following: after recreating, force a catalog refresh in the UI (attach a running SQL warehouse and click “Refresh catalog”), or hard-refresh your browser. This helps the UI invalidate cached metadata for the prior object and pick up the new one.&lt;/P&gt;</description>
      <pubDate>Tue, 25 Nov 2025 23:20:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140351#M51394</guid>
      <dc:creator>stbjelcevic</dc:creator>
      <dc:date>2025-11-25T23:20:04Z</dc:date>
    </item>
    <item>
      <title>Re: "Something went wrong, please try again later." On Sync tables for PostgreSQL</title>
      <link>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140384#M51407</link>
      <description>&lt;P&gt;Thank you for your response&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/167034"&gt;@stbjelcevic&lt;/a&gt;&amp;nbsp;,&lt;BR /&gt;&lt;BR /&gt;So I tried to refresh the catalog and the schema after the table was deleted in Postgres and Unity Catalog (the synced one) and the pipeline was removed:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;from databricks.sdk import WorkspaceClient
from databricks.sdk.service.database import (
    NewPipelineSpec,
    SyncedDatabaseTable,
    SyncedTableSchedulingPolicy,
    SyncedTableSpec,
)

from my_project.data_accessor import data_handler
from my_project.settings import settings

# Databricks workspace client (host anonymised)
w = WorkspaceClient(
    host="https://adb-XXXXXXXXXXXXXXX.azuredatabricks.net/",
    auth_type="azure-cli",
)

# Environments anonymised
env = "dev"
read_env = "prod"
settings.subscription_env = env

# Primary key definitions (table names anonymised as examples)
datas = {
    "table_asset_xref": [
        "ID",
        "CONTEXT_ID",
        "ASSET_ID",
        "ASSET_TYPE",
    ],
}

for table_name, pk_columns in datas.items():

    # Refresh foreign catalog/schema names anonymised
    data_handler.fetch_all(
        f"REFRESH FOREIGN CATALOG catalog_project_standard_{env}_region"
    )
    print(f"Refreshed foreign catalog for environment: {env}")

    data_handler.fetch_all(
        f"REFRESH FOREIGN SCHEMA catalog_project_standard_{env}_region.sync_schema"
    )
    print(f"Refreshed foreign schema for environment: {env}")

    # Create synced table
    synced_table = w.database.create_synced_database_table(
        SyncedDatabaseTable(
            # Target table in PostgreSQL (names anonymised)
            name=(
                f"catalog_project_standard_{env}_region.sync_schema.{table_name}"
            ),
            # Matches Databricks DB connection configuration (anonymised)
            database_instance_name="db_instance",
            logical_database_name=f"catalog_project_foreign_{read_env}_region",
            spec=SyncedTableSpec(
                # Source table full name (anonymised)
                source_table_full_name=(
                    f"catalog_project_foreign_{read_env}_region.db.{table_name}"
                ),
                primary_key_columns=pk_columns,
                scheduling_policy=SyncedTableSchedulingPolicy.SNAPSHOT,
                create_database_objects_if_missing=True,
                new_pipeline_spec=NewPipelineSpec(
                    storage_catalog=f"catalog_project_standard_{env}_region",
                    storage_schema="sync_schema",
                ),
            ),
        )
    )

    print(f"Created synced table: {synced_table.name}")

    # Retrieve pipeline ID and update configuration
    pipeline_id = synced_table.data_synchronization_status.pipeline_id
    w.pipelines.update(
        pipeline_id=pipeline_id,
        budget_policy_id="00000000-0000-0000-0000-000000000000",  # anonymised
        name=f"Sync to PostgreSQL {table_name}",
        catalog=f"catalog_project_standard_{env}_region",
        schema=f"db_schema_{env}",
        tags={
            "DOMAIN": "DATA",
            "PROJECT": "DATA_PLATFORM",
            "PROCESS": "SYNC_PIPELINE",
            "TOOLS": "DATABRICKS",
            "TARGET": "POSTGRESQL",
        },
    )

    print(f"Updated pipeline: {pipeline_id}")&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;Here is the sample code I'm using. The `data_handler` object is connected to the SQL warehouse of the same workspace; it's a custom package that simplifies configuration for the "env" we select and, under the hood, executes the SQL commands against the warehouse. I don't get errors on the SQL commands.&lt;BR /&gt;&lt;BR /&gt;But sadly the issue persists. I also changed my web browser, thinking it could be browser cache, but it's the same.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Nov 2025 08:52:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/quot-something-went-wrong-please-try-again-later-quot-on-sync/m-p/140384#M51407</guid>
      <dc:creator>Etyr</dc:creator>
      <dc:date>2025-11-26T08:52:05Z</dc:date>
    </item>
  </channel>
</rss>

