I am testing Databricks with non-AWS, S3-compatible object storage. I can access the bucket by setting these parameters:
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "XXXXXXXXXXXXXXXXXXXX")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "XXXXXXXXXXXXXXXXXXXXXXXXXXXX")
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "XXXXXXXXXXXX.com")
I can read the CSV files in the bucket:
spark.read.format("csv").option("inferschema","true").option("header","true").option("sep","|").load("s3://deltalake/10g_csv/reason.csv")
When I try to create an external table from this CSV, I get an AWS Security Token Service (STS) invalid token error. Since I am not using an AWS S3 bucket, is there a way to skip this check?
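For reference, this is roughly the shape of the create step; the table name and target location are hypothetical placeholders, not my real names:

# Hypothetical sketch of the external Delta table creation that hits the STS error,
# using the df from the read above; table name and location are placeholders.
(df.write.format("delta")
   .mode("overwrite")
   .option("path", "s3://deltalake/delta/reason")   # external location in the same bucket
   .saveAsTable("default.reason_ext"))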
I can see that Databricks created a Parquet file and a _delta_log folder in the external bucket location, but it did not complete the Delta table creation: it never wrote 00000000000000000000.crc and 00000000000000000000.json into the _delta_log folder.
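This is how I am checking the log folder (path is a placeholder matching the sketch above):

# Inspect the _delta_log folder after the failed create: the Parquet data files and
# the folder exist, but no 00000000000000000000.json / .crc commit files appear.
display(dbutils.fs.ls("s3://deltalake/delta/reason/_delta_log/"))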

Any suggestions on how to bypass the AWS security token check, given that I am not using an AWS S3 bucket? When I test with Databricks Community Edition, the external tables are created successfully in the same non-AWS S3 bucket. Both the Databricks on AWS compute and the Community Edition compute are on the same Databricks version.
Both are at 14.0 (Scala 2.12 and Spark 3.5.0).
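For what it's worth, spark.version confirms the Spark side matches on both:

print(spark.version)   # 3.5.0 on both the Databricks-on-AWS cluster and Community Edition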