@shiva charan velichala :
It's possible that the parquet files you exported from the Postgres snapshot are encrypted, or compressed at the file level (e.g. the whole file gzipped) rather than with parquet's built-in codecs, which Spark handles transparently. If that's the case, you'll need to decrypt and/or decompress the files before Databricks can read them.
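One quick sanity check: a plain parquet file starts and ends with the 4-byte magic PAR1, while a gzipped file starts with the bytes 1f 8b. Here's a minimal sketch (the path is a placeholder; on Databricks the driver can reach DBFS files via /dbfs/... local paths):

# Inspect the first and last 4 bytes of one exported file.
# Replace the placeholder path with one of your actual files.
path = "/dbfs/path/to/parquet/files/part-00000.parquet"

with open(path, "rb") as f:
    head = f.read(4)   # a valid parquet file starts with b"PAR1"
    f.seek(-4, 2)      # jump to the last 4 bytes of the file
    tail = f.read(4)   # ...and it ends with b"PAR1" as well

if head == b"PAR1" and tail == b"PAR1":
    print("Looks like plain parquet")
elif head[:2] == b"\x1f\x8b":
    print("Gzipped at the file level -- decompress before reading")
else:
    print(f"Unrecognized header {head!r} -- possibly encrypted or not parquet")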
Additionally, if the schema is not being inferred correctly, you can specify it manually by chaining the schema method onto the reader in Databricks. For example:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Define the schema explicitly; the third argument marks the field as nullable
my_schema = StructType([
    StructField("column1", StringType(), True),
    StructField("column2", IntegerType(), True),
    # add the remaining fields here
])

df = spark.read.schema(my_schema).parquet("/path/to/parquet/files")
Replace column1, column2, etc. with the actual column names and types from your data.
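Once the read succeeds, a quick check confirms the schema was applied as declared:

df.printSchema()
df.show(5)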
If you're still having issues, you may want to try reading the parquet files with another tool, such as the PyArrow library (the Python bindings for Apache Arrow), to see whether you can access them there.
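For instance, here's a minimal sketch with PyArrow, which typically ships with the Databricks runtime (again, the path is a placeholder):

import pyarrow.parquet as pq

# Point this at one of the exported files
pf = pq.ParquetFile("/dbfs/path/to/parquet/files/part-00000.parquet")

print(pf.schema_arrow)   # the schema as Arrow sees it
print(pf.metadata)       # row groups, compression codec, column stats

table = pf.read()        # load the whole file into an Arrow table
print(table.num_rows)

If PyArrow reads the file but Spark doesn't, compare pf.schema_arrow against the schema you expected; if neither can open it, the file is most likely encrypted or not valid parquet.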