Community Discussions
Show Existing Header From CSV In External Table

Frantz
New Contributor II

Hello, is there a way to load CSV data into an external table without the _c0, _c1 columns showing?

1 ACCEPTED SOLUTION


Frantz
New Contributor II

My question was answered in a separate thread here.



Frantz
New Contributor II

The Databricks community discussion post creation sucks. I've been trying for the past 20 minutes to post a question, but I keep getting a "Correct highlighted errors" message. When I correct the "errors", the message still does not go through. I've resorted to posting test messages just to see what goes through.

Kaniz
Community Manager

Hi @Frantz, I'm sorry you're experiencing difficulties posting on the Databricks community discussion. Hang in there, and hopefully, we can get this sorted out soon so you can participate fully in the community discussions.

Kaniz
Community Manager

Hi @Frantz! When using PySpark to load CSV data into an external table, setting the header option to true avoids the preset default column names (_c0, _c1, etc.) by treating the first row of your CSV file as the column names. It can be done as follows:

# Assuming you have a CSV file named "file.csv"
dff = spark.read.format("csv") \
    .option("delimiter", ",") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .load("file.csv")

This code snippet uses the header=True setting to ensure that the first row of the CSV file is recognized as column names. Additionally, the inferSchema=True setting allows for automatic inference of column data types. As a result, the DataFrame dff will contain the same column names as the original CSV file rather than the generic _c0, _c1, etc.
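Outside Spark, the same first-row-as-header idea can be illustrated with Python's built-in csv module; the sample data below is made up purely for illustration:

```python
import csv
import io

# Sample CSV content; the first row holds the column names
data = "id,name,score\n1,Alice,90\n2,Bob,85\n"

# Plain reader: every row is a positional list, so columns are only
# addressable by index (the analogue of Spark's _c0, _c1, ...)
rows = list(csv.reader(io.StringIO(data)))
print(rows[1][1])  # -> Alice (column 1 of the second row)

# DictReader consumes the first row as field names, like header=true
records = list(csv.DictReader(io.StringIO(data)))
print(records[0]["name"])  # -> Alice, accessed by column name
```

Spark's header option plays the same role as DictReader here: it promotes the first data row to column names instead of leaving columns positional.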


