This behavior is expected given how Databricks handles Iceberg tables in Unity Catalog.
Iceberg tables in Unity Catalog do not support a LOCATION clause. Databricks requires Iceberg tables to be created as managed tables (with UC controlling the storage location) or registered via a foreign catalog; specifying LOCATION 's3://...' with USING iceberg is not supported.
Ref Doc - https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-syntax-ddl-create-table-using

The error mentions REPLACE because Databricks' CREATE TABLE semantics include an internal "replace" capability for Delta and Iceberg; combining an external path (LOCATION) with USING iceberg surfaces as an unsupported operation on managed Iceberg, even though you didn't explicitly use REPLACE in your SQL.
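For reference, the pattern that hits this limitation looks roughly like the following (table name and the LOCATION path are placeholders based on your example):

CREATE TABLE <catalog>.<schema>.nation2_iceberg
USING iceberg
LOCATION 's3://xyz/sf/nation2_iceberg/'   -- user-specified path: not supported for managed Iceberg
AS
SELECT *
FROM parquet.`s3://xyz/sf/nation.RV6Vad/data/56`;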
You can do two things here:
1) Run the CTAS without LOCATION, and fully qualify the table name with a Unity Catalog catalog and schema:
CREATE TABLE <catalog>.<schema>.nation2_iceberg
USING iceberg
AS
SELECT *
FROM parquet.`s3://xyz/sf/nation.RV6Vad/data/56`;
Managed tables are the recommended table type, since Unity Catalog manages the storage location and lifecycle for you.
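If you want to confirm where Unity Catalog placed the data for the managed table, inspecting the table metadata should show the UC-managed storage path in the Location field of the detailed output (table name matches the example above):

DESCRIBE TABLE EXTENDED <catalog>.<schema>.nation2_iceberg;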
2) If you need a specific S3 path for the table
Placing managed Iceberg table data at a user-specified path via LOCATION isn't supported. If the table is already managed by another Iceberg catalog (for example, Glue or Snowflake), register it as a foreign Iceberg table via Lakehouse Federation rather than pointing at a path. Foreign Iceberg tables are read-only in Databricks; writes are governed by the external catalog.
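If you go the federation route, the general shape is a connection plus a foreign catalog. A minimal sketch only; the connection type and option names below are placeholders you'd need to confirm against the Lakehouse Federation docs for your external catalog:

-- Connection to the external Iceberg catalog (type and options are placeholders to verify)
CREATE CONNECTION ext_iceberg_conn TYPE <connection_type>
OPTIONS (
  -- credentials / region / endpoint options per the federation docs for your catalog
);

-- Foreign catalog that mirrors the external catalog inside Unity Catalog
CREATE FOREIGN CATALOG ext_iceberg_cat
USING CONNECTION ext_iceberg_conn;

-- Tables then appear under ext_iceberg_cat.<schema>.<table> and are read-only from Databricks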
Alternatives:
Use a managed Iceberg table in UC and let external Iceberg clients read/write to it through the Iceberg REST catalog API (credential vending supported for some clients). This preserves Iceberg semantics while UC manages storage and governance.
If controlling the path is a hard requirement for writes from Databricks, use an external Delta table with LOCATION, which Delta does support (sketch below), or create the Iceberg table in an external catalog, write to it outside Databricks, and read it in Databricks as a foreign table.
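For the external Delta option, a CTAS with LOCATION along these lines works, provided the path falls under a Unity Catalog external location you're allowed to write to (names and path are illustrative):

-- External Delta table at a user-chosen S3 path
CREATE TABLE <catalog>.<schema>.nation2_delta
USING delta
LOCATION 's3://xyz/sf/nation2_delta/'
AS
SELECT *
FROM parquet.`s3://xyz/sf/nation.RV6Vad/data/56`;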