Hi @Subhas1729 ,
The "Default location for data assets" section of the pipeline configuration UI sets the default catalog and schema for a pipeline. That default catalog and schema are applied to all dataset definitions and table reads unless overridden within the query, so even if you don't specify them in your code, the pipeline will still work.

If you just want to know what catalog/schema is currently active in your session, query it directly:
current_catalog = spark.sql("SELECT current_catalog()").collect()[0][0]
current_schema = spark.sql("SELECT current_schema()").collect()[0][0]
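Once you have those two values, a common use is to build a fully qualified table reference. A minimal sketch (the table name `my_table` and the helper `qualified_name` are hypothetical, just for illustration):

```python
# Sketch: build a fully qualified three-level name from catalog/schema values.
# `catalog` and `schema` stand in for the results of the current_catalog()
# and current_schema() queries above; "my_table" is a hypothetical table name.
def qualified_name(catalog: str, schema: str, table: str) -> str:
    return f"{catalog}.{schema}.{table}"

full_name = qualified_name("main", "default", "my_table")
# → "main.default.my_table"
```

The resulting string can then be passed to spark.read.table(full_name) or used in a SQL query.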
The approach you used would work if you set pipeline configurations. You can then read the key/value pairs defined there using the following code:
spark.conf.get("your_key")
So in your case, you can define the catalog and schema as pipeline configuration key/value pairs and read them with the code above.

If the answer was helpful, please consider marking it as the solution.