I am trying to read a CSV file from a storage location using the spark.read function, and I am explicitly passing the schema. However, the data is not loading into the proper columns of the DataFrame. Here are the code details:
from pyspark.sql.types import StructType, StructField, StringType, DateType, DoubleType
# Define the schema
schema = StructType([
    StructField('TRANSACTION', StringType(), True),
    StructField('FROM', StringType(), True),
    StructField('TO', StringType(), True),
    StructField('DA_RATE', DateType(), True),
    StructField('CURNCY_F', StringType(), True),
    StructField('CURNCY_T', StringType(), True)
])
# Read the CSV file with the specified schema
df = spark.read.format("csv") \
    .option("header", "true") \
    .option("delimiter", "|") \
    .schema(schema) \
    .load("abfss://xyz@abc.dfs.core.windows.net/my/2024-04-10/abc_2*.csv")
**Data in the CSV file**
DA_RATE|CURNCY_F|CURNCY_T
2024-02-26|AAA|MMM
2024-02-26|AAA|NNN
2024-02-26|BBB|YYY
2024-02-26|CCC|KKK
2024-02-27|DDD|SSS
**Output I am getting**
TRANSACTION FROM TO DA_RATE CURNCY_F CURNCY_T
2024-02-26 AAA MMM null null null
2024-02-26 AAA NNN null null null
2024-02-26 BBB YYY null null null
2024-02-26 CCC KKK null null null
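For what it's worth, the mapping I am seeing looks positional rather than by header name: the three values in each row land in the first three schema columns and the rest come back null. A stdlib-only sketch of that reading of the behaviour (no Spark involved, purely illustrative; the field names are taken from my schema above):

```python
import csv
import io

# Field names from my schema, in declaration order
schema_fields = ["TRANSACTION", "FROM", "TO", "DA_RATE", "CURNCY_F", "CURNCY_T"]

# Sample rows from the file (header line already stripped)
raw = "2024-02-26|AAA|MMM\n2024-02-26|AAA|NNN\n2024-02-26|BBB|YYY\n"

rows = []
for values in csv.reader(io.StringIO(raw), delimiter="|"):
    # Pad each 3-value row out to the 6-field schema, then pair positionally
    padded = values + [None] * (len(schema_fields) - len(values))
    rows.append(dict(zip(schema_fields, padded)))

print(rows[0])
# {'TRANSACTION': '2024-02-26', 'FROM': 'AAA', 'TO': 'MMM',
#  'DA_RATE': None, 'CURNCY_F': None, 'CURNCY_T': None}
```

This reproduces exactly the output I am getting, which is why I suspect the header row is being ignored when an explicit schema is supplied.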
**Output I am expecting**
TRANSACTION FROM TO DA_RATE CURNCY_F CURNCY_T
null null null 2024-02-26 AAA MMM
null null null 2024-02-26 AAA NNN
null null null 2024-02-26 BBB YYY