Hi @leymariv,
You can inspect the schema of the data in the Delta Sharing table using `df.printSchema()` to better understand the JSON structure, then use the `from_json` function to flatten or normalize the nested data into separate columns.
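A minimal sketch of that flattening step, assuming a Databricks notebook where `spark` is available; the table name (`my_share.my_schema.my_table`), the JSON column name (`payload`), and the fields in the schema are placeholders you would replace with your own:

```python
# Sketch: parse a JSON string column into typed columns with from_json.
# Table, column, and field names are placeholders for your own.
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, LongType

df = spark.read.table("my_share.my_schema.my_table")
df.printSchema()  # inspect the raw structure first

# Define a schema matching your JSON payload
payload_schema = StructType([
    StructField("id", LongType()),
    StructField("name", StringType()),
])

# Parse the JSON string and promote its fields to top-level columns
flat_df = (
    df.withColumn("parsed", from_json(col("payload"), payload_schema))
      .select("parsed.id", "parsed.name")
)
```

This runs inside a Spark/Databricks environment, so treat it as a template rather than a standalone script.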
Additionally, you can see how data is being loaded into the table by using the DESCRIBE HISTORY command. Look for append or merge operations in the `operation` column, and refer to the `operationMetrics` column for row and file counts.
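For example, from a notebook you can run the command through `spark.sql` (the table name is a placeholder):

```python
# Sketch: inspect the load pattern of the table via its history.
history_df = spark.sql("DESCRIBE HISTORY my_share.my_schema.my_table")

# The operation column shows WRITE/MERGE etc.; operationMetrics shows counts
history_df.select("version", "timestamp", "operation", "operationMetrics") \
          .show(truncate=False)
```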

If you notice that data is being loaded incrementally (append or merge) into the Delta Sharing table, you can read the data version by version or timestamp by timestamp with code like the following.
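A sketch of a time-travel read on a shared table, assuming the provider has enabled history sharing on the share; the profile file path, table identifier, version number, and timestamp are all placeholders:

```python
# Sketch: read a shared table as of a specific version (placeholder values).
df_v5 = (
    spark.read.format("deltaSharing")
    .option("versionAsOf", 5)
    .load("<profile-file>#my_share.my_schema.my_table")
)

# Or as of a specific timestamp
df_ts = (
    spark.read.format("deltaSharing")
    .option("timestampAsOf", "2024-01-01 00:00:00")
    .load("<profile-file>#my_share.my_schema.my_table")
)
```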

Alternatively, you can specify a range for the timestamp or version to further narrow down the data read.
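One way to read a bounded range is through the Change Data Feed, assuming the provider has shared the table with CDF enabled; the starting and ending versions and the table identifier are placeholders:

```python
# Sketch: read only the changes between two versions via CDF
# (placeholder versions and table identifier).
changes = (
    spark.read.format("deltaSharing")
    .option("readChangeFeed", "true")
    .option("startingVersion", 3)
    .option("endingVersion", 5)
    .load("<profile-file>#my_share.my_schema.my_table")
)
```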

Further, you can leverage Spark Structured Streaming to read data incrementally from the Delta Sharing table as new versions arrive.
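A streaming sketch, again assuming history sharing is enabled on the share; the profile file, starting version, checkpoint path, and target table are placeholders:

```python
# Sketch: stream new data from a shared table into a local Delta table
# (all paths and names are placeholders).
stream_df = (
    spark.readStream.format("deltaSharing")
    .option("startingVersion", 0)
    .load("<profile-file>#my_share.my_schema.my_table")
)

query = (
    stream_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/shared_table")
    .trigger(availableNow=True)  # process available data, then stop
    .toTable("my_catalog.my_schema.target_table")
)
```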

Regards,
Hari Prasad