Hello @HoussemBL,
You can use the code example below. Note that the decorator for defining a table is @dlt.table; dlt.create_streaming_table is a plain function call, not a decorator:
import dlt

# spark is provided by the DLT pipeline runtime.
# path controls the table's storage location for Hive metastore pipelines;
# Unity Catalog pipelines resolve storage from the schema/catalog settings.
@dlt.table(
    name="your_table_name",
    path="s3://your-bucket/your-path/",
    schema="schema-definition"
)
def your_table_function():
    return (
        spark.readStream
            .format("your_format")
            .option("your_option_key", "your_option_value")
            .load("your_source_path")
    )
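If you specifically want dlt.create_streaming_table, it is called as a function and then fed by one or more append flows. A minimal sketch; the table, flow, format, and path names are placeholders:

import dlt

# Declare the streaming table as a function call (not a decorator).
dlt.create_streaming_table(
    name="your_table_name",
    path="s3://your-bucket/your-path/"
)

# An append flow writes the streaming source into the declared table.
@dlt.append_flow(target="your_table_name")
def your_flow():
    return spark.readStream.format("your_format").load("your_source_path")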
When using Unity Catalog with DLT pipelines, tables are stored in the storage location specified for the target schema. If the schema has no storage location, tables fall back to the catalog's storage location, and if neither is set, to the metastore's root storage location. This could be why your tables end up in non-readable locations when the storage paths are not explicitly defined.
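To keep managed tables in a bucket you control, you can set a managed location on the target schema before running the pipeline. A minimal sketch, assuming a Unity Catalog external location already covers this S3 path; the catalog and schema names are placeholders:

# Create (or confirm) the schema with an explicit managed location so
# DLT-managed tables land under this path instead of the metastore root.
spark.sql("""
    CREATE SCHEMA IF NOT EXISTS your_catalog.your_schema
    MANAGED LOCATION 's3://your-bucket/your-path/'
""")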