So basically, you cannot create a partitioned table from a single CSV file simply by using SQL CREATE TABLE ... PARTITIONED BY (...) LOCATION 'pathToCsv', because the single CSV file does not have the partitioned directory structure at that location?
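To make the question concrete, here is a sketch of the kind of DDL that registers fine but finds no partitions (table name, column names, and types are assumptions for illustration). Spark expects partition subdirectories such as country=US/ under the location, which a flat CSV file does not have:

```scala
// Assumed names: mytable, id, name, country. This uses Spark's
// datasource CREATE TABLE syntax, where the partition column is
// declared in the column list and referenced in PARTITIONED BY.
spark.sql("""
  CREATE TABLE mytable (id INT, name STRING, country STRING)
  USING CSV
  PARTITIONED BY (country)
  LOCATION 'pathToCsv'
""")
// The table is created, but querying it returns no rows: partition
// discovery looks for directories like pathToCsv/country=US/ and the
// single flat CSV file does not match that layout.
```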
I understand the LOCATION clause means the table is external and the real data is stored there. It is confusing sometimes, because the location here is actually where the CSV file is located, and after you create a table from this CSV file, nothing changes at that location. If you use Scala Spark to read this CSV into a DataFrame and then write it back to the same location, keeping the CSV format and adding a partition, like spark.read.format("csv").load("pathToCsv").write.partitionBy("partitionColumn").option("path", "pathToCsv").mode("overwrite").format("csv").saveAsTable("mytable"), you will get new partitioned files at that location.
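Expanded into a full sketch (path, column name, and the header/schema options are assumptions carried over from the question):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("repartitionCsv").getOrCreate()

// Read the flat CSV into a DataFrame; header/inferSchema are assumed.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("pathToCsv")

// Write it back partitioned. mode("overwrite") replaces the flat file
// with subdirectories named partitionColumn=<value>/, and saveAsTable
// registers "mytable" in the metastore; because an explicit path is
// given, it is created as an external table at that location.
df.write
  .partitionBy("partitionColumn")
  .option("path", "pathToCsv")
  .mode("overwrite")
  .format("csv")
  .saveAsTable("mytable")
```

One caveat worth knowing: overwriting the same path you are reading from can be risky in Spark, since the overwrite may delete the source before the read fully materializes. Writing to a new path first (or calling df.cache() and forcing materialization before the write) is the safer pattern.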