Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

User16765131552
by Contributor III
  • 3473 Views
  • 3 replies
  • 0 kudos

COPY INTO: How to add partitioning?

The command COPY INTO from Databricks provides idempotent file ingestion into a Delta table, see here. From the docs, an example command looks like this: COPY INTO delta.`target_path` FROM (SELECT key, index, textData, 'constant_value' FROM 'sour...

Latest Reply
Mooune_DBU
Valued Contributor
  • 0 kudos

If you're looking to partition your `target_path` table, then it's recommended to define the partition keys prior to the COPY INTO command (at the DDL level). E.g.: // Drop table if it already exists without the partition key defined (OPTIONAL) DROP TAB...
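A minimal sketch of that approach in PySpark via spark.sql (the table name, partition column, and FILEFORMAT are assumptions; only the COPY INTO shape comes from the docs quoted above):

# Declare the partition key at the DDL level first (names are hypothetical).
spark.sql("""
    CREATE TABLE IF NOT EXISTS target_table (
        key STRING, index INT, textData STRING, part_col STRING)
    USING DELTA
    PARTITIONED BY (part_col)
""")

# COPY INTO then ingests idempotently into the pre-partitioned table.
spark.sql("""
    COPY INTO target_table
    FROM (SELECT key, index, textData, 'constant_value' AS part_col FROM 'source_path')
    FILEFORMAT = PARQUET
""")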

2 More Replies
User15787040559
by Databricks Employee
  • 3838 Views
  • 1 reply
  • 0 kudos

How many records does Spark use to infer the schema? The entire file, or just the first "X" records?

It depends. If you specify the schema, it will be zero; otherwise it will do a full file scan, which doesn't work well when processing Big Data at scale. CSV files DataFrame Reader: https://spark.apache.org/docs/latest/api/python/reference/api/pyspark...

Latest Reply
aladda
Databricks Employee
  • 0 kudos

As indicated, there are ways to manage the amount of data being sampled for inferring the schema. However, as a best practice for production workloads, it's always best to define the schema explicitly for consistency, repeatability, and robustness of the pipe...
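A short sketch of both options (the path and columns are hypothetical): define the schema explicitly so inference scans nothing, or cap the sample with samplingRatio:

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Explicit schema: zero records are read for inference.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
])
df = spark.read.csv("/mnt/data/example.csv", header=True, schema=schema)

# Or bound the inference scan to a fraction of the rows.
df_sampled = spark.read.csv("/mnt/data/example.csv", header=True,
                            inferSchema=True, samplingRatio=0.1)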

j_weaver
by New Contributor III
  • 1247 Views
  • 1 reply
  • 0 kudos
Latest Reply
User16752246141
New Contributor III
  • 0 kudos

Pandas works for single-machine computations, so any pandas code you write on Databricks will run on the driver of the cluster. PySpark and Koalas are both distributed frameworks for when you have large datasets. You can use PySpark and Koalas inte...
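A minimal illustration of moving between the two (the Koalas import reflects its standalone packaging; on newer runtimes the same API ships as pyspark.pandas):

import pandas as pd

pdf = pd.DataFrame({"x": range(10)})  # pandas: driver-local only
sdf = spark.createDataFrame(pdf)      # Spark: distributed across the cluster
pdf_back = sdf.toPandas()             # collect back into driver memory

import databricks.koalas as ks
kdf = ks.from_pandas(pdf)             # pandas API over a distributed frame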

User16790091296
by Contributor II
  • 1039 Views
  • 0 replies
  • 0 kudos

docs.databricks.com

What is a Databricks Database? A Databricks database is a collection of tables. A Databricks table is a collection of structured data. You can cache, filter, and perform any operations supported by Apache Spark DataFrames on Databricks tables. You can q...

Jasam
by New Contributor
  • 10152 Views
  • 3 replies
  • 0 kudos

How to infer CSV schema with all columns as string by default using spark-csv?

I am using the spark-csv utility, but I need all columns to be read as string columns by default when the schema is inferred. Thanks in advance.

Latest Reply
jhoop2002
New Contributor II
  • 0 kudos

@peyman what if I don't want to manually specify the schema? For example, I have a vendor that can't build a valid .csv file. I just need to import it somewhere so I can explore the data and find the errors. Just like the original author's question?...
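For exactly that exploratory case, a minimal sketch (the path is hypothetical): leave inferSchema off and every column arrives as a string, so nothing fails on malformed values:

# inferSchema=False (the default) reads all columns as StringType.
df = spark.read.csv("/mnt/vendor/raw.csv", header=True, inferSchema=False)
df.printSchema()  # every field: string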

2 More Replies
rlgarris
by Databricks Employee
  • 16247 Views
  • 12 replies
  • 0 kudos

Resolved! How do I create a single CSV file from multiple partitions in Databricks / Spark?

Using spark-csv to write data to DBFS, which I plan to move to my laptop via standard S3 copy commands. The default for spark-csv is to write output into partitions. I can force it to a single partition, but would really like to know if there is a ge...

Latest Reply
ChristianHomber
New Contributor II
  • 0 kudos

Without access to bash, it would be highly appreciated if an option within Databricks (e.g. via dbfsutils) existed.
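A bash-free sketch using only the DataFrame API and dbutils (paths are hypothetical):

# One partition means Spark writes exactly one part-*.csv file.
df.coalesce(1).write.mode("overwrite").option("header", True).csv("/tmp/out_dir")

# Find that part file and copy it to the final single-file location.
part = [f.path for f in dbutils.fs.ls("/tmp/out_dir") if f.name.startswith("part-")][0]
dbutils.fs.cp(part, "/tmp/result.csv")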

11 More Replies
Nik
by New Contributor III
  • 13477 Views
  • 19 replies
  • 0 kudos

Write from a DataFrame to a CSV file, CSV file is blank

Hi, I am reading from a text file in a blob: val sparkDF = spark.read.format(file_type).option("header", "true").option("inferSchema", "true").option("delimiter", file_delimiter).load(wasbs_string + "/" + PR_FileName) Then I test my Datafra...

Latest Reply
nl09
New Contributor II
  • 0 kudos

Create a temp folder inside the output folder, copy the part-00000* file with the desired file name to the output folder, then delete the temp folder. Python code snippet to do the same:
fpath = output + '/' + 'temp'
def file_exists(path):
    try:
        dbutils.fs.ls(path)
        return...

18 More Replies
AnandJ_Kadhi
by New Contributor II
  • 5957 Views
  • 2 replies
  • 1 kudos

Handle comma inside cell of CSV

We are using spark-csv_2.10 version 1.5.0 and reading a CSV file with a column that contains a comma "," as one of its characters. The problem we are facing is that it treats the rest of the line after the comma as a new column, and the data is not interpre...

Latest Reply
User16857282152
Contributor
  • 1 kudos

Take a look here for options: http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=dataframereader#pyspark.sql.DataFrameReader.csv If a CSV file has commas, then the convention is to quote the string that contains the comma. In ...
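A minimal read sketch, assuming the producer quotes fields that contain the delimiter (the path is hypothetical; quote='"' is also the default, shown explicitly):

# A field like "Smith, John" parses as one column because it is quoted.
df = spark.read.csv("/mnt/data/quoted.csv", header=True, sep=",", quote='"')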

1 More Reply
tripplehay777
by New Contributor
  • 14590 Views
  • 1 reply
  • 0 kudos

How can I create a Table from a CSV file with first column with data in dictionary format (JSON like)?

I have a CSV file with the first column containing data in dictionary form (key: value). [see below] I tried to create a table by uploading the CSV file directly to Databricks, but the file can't be read. Is there a way for me to flatten or conver...

Latest Reply
MaxStruever
New Contributor II
  • 0 kudos

This is apparently a known issue; Databricks has their own CSV format handler which can handle this: https://github.com/databricks/spark-csv SQL API: the CSV data source for Spark can infer data types: CREATE TABLE cars USING com.databricks.spark.csv OP...
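If the file itself loads, one hedged way to flatten such a column in PySpark (the column and key names are hypothetical) is from_json:

from pyspark.sql.functions import from_json, col
from pyspark.sql.types import MapType, StringType

# Read the dictionary-like column as a raw string, then parse it as a JSON map.
raw = spark.read.csv("/mnt/data/dicts.csv", header=True, quote='"', escape='"')
parsed = raw.withColumn("attrs",
                        from_json(col("dict_col"), MapType(StringType(), StringType())))
flat = parsed.select(col("attrs")["some_key"].alias("some_key"))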

EmilianoParizz1
by New Contributor
  • 8638 Views
  • 4 replies
  • 0 kudos

How to set the timestamp format when reading CSV

I have a Databricks 5.3 cluster on Azure which runs Apache Spark 2.4.0 and Scala 2.11. I'm trying to parse a CSV file with a custom timestamp format, but I don't know which datetime pattern format Spark uses. My CSV looks like this: Timestamp, Name, Va...

Latest Reply
wellington72019
New Contributor II
  • 0 kudos

# in python: explicitly define the schema, read in CSV data using the schema and a defined timestamp format....
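A fuller sketch of that comment (the third column name is hypothetical, since the question truncates it; on Spark 2.4 the pattern syntax follows java.text.SimpleDateFormat):

from pyspark.sql.types import (StructType, StructField, TimestampType,
                               StringType, DoubleType)

schema = StructType([
    StructField("Timestamp", TimestampType(), True),
    StructField("Name", StringType(), True),
    StructField("Val", DoubleType(), True),  # hypothetical third column
])

# timestampFormat must match the literal layout of the values in the file.
df = spark.read.csv("/mnt/data/readings.csv", header=True, schema=schema,
                    timestampFormat="yyyy-MM-dd HH:mm:ss")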

3 More Replies
DominicRobinson
by New Contributor II
  • 13217 Views
  • 4 replies
  • 0 kudos

Issues with UTF-16 files and unicode characters

Can someone please offer some insight - I've spent days trying to solve this issue. We have the task of loading in hundreds of tab-separated text files encoded in UTF-16 little endian with a tab delimiter. Our organisation is an international one and...

Latest Reply
User16817872376
New Contributor III
  • 0 kudos

You can also always read in the file as a textFile, and then run a UTF-16 decoder/encoder library as a UDF on the text.
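Alongside that UDF route, a minimal sketch of the more direct option (the path is hypothetical): the CSV reader accepts an explicit charset for non-UTF-8 files:

# Tab-delimited UTF-16LE files can be decoded by naming the encoding.
df = spark.read.csv("/mnt/data/export.txt", sep="\t", header=True,
                    encoding="UTF-16LE")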

3 More Replies
mlm
by New Contributor
  • 12089 Views
  • 5 replies
  • 0 kudos

How to prevent spark-csv from adding quotes to JSON string in dataframe

I have a SQL dataframe with a column that has a JSON string in it (e.g. {"key":"value"}). When I use spark-csv to save the dataframe, it changes the field values to be "{""key"":""value""}". Is there a way to turn that off?

Latest Reply
AshleyPan
New Contributor II
  • 0 kudos

Do the quote or escape options only work with "write" instead of "read"? Our source files contain double quotes. We'd like to add a backslash (escape) in front of each double quote before converting the values from our dataframes to JSON outputs.
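They are available on both sides; a minimal sketch (paths are hypothetical, and the escape character is a choice):

# On read, escape says which character precedes an embedded quote;
# on write, it is placed before quotes inside an already-quoted value.
df = spark.read.csv("/mnt/in/source.csv", header=True, quote='"', escape='\\')
df.write.option("quote", '"').option("escape", "\\").csv("/mnt/out/escaped")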

4 More Replies
ArvindShyamsund
by New Contributor II
  • 9575 Views
  • 12 replies
  • 0 kudos

Resolved! Custom line separator

I see that https://github.com/apache/spark/pull/18581 will enable defining custom Line Separators for many sources, including CSV. Apart from waiting on this PR to make it into the main Databricks runtime, is there any other alternative to support d...

Latest Reply
DanielTomes
New Contributor II
  • 0 kudos

You can use newAPIHadoopFile (Scala):
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.io.Text
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
val conf = new Configuration
conf.s...
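The same idea is reachable from PySpark; a minimal sketch (the path and the "||" delimiter are hypothetical):

# Custom record delimiter via Hadoop's TextInputFormat configuration.
rdd = sc.newAPIHadoopFile(
    "/mnt/data/custom_sep.csv",
    "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
    "org.apache.hadoop.io.LongWritable",
    "org.apache.hadoop.io.Text",
    conf={"textinputformat.record.delimiter": "||"},
)
records = rdd.map(lambda kv: kv[1])  # drop the byte-offset key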

11 More Replies
XinZodl
by New Contributor III
  • 14035 Views
  • 3 replies
  • 1 kudos

Resolved! How to parse a file with newline character, escaped with \ and not quoted

Hi! I am facing an issue when reading and parsing a CSV file. Some records have a newline symbol, "escaped" by a \, and that record is not quoted. The file might look like this:
Line1field1;Line1field2.1 \
Line1field2.2;Line1field3;
Line2FIeld1;...

Latest Reply
XinZodl
New Contributor III
  • 1 kudos

The solution is sparkContext.wholeTextFiles.
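A minimal sketch of why that works (the path is hypothetical; joining with a space is a choice): each file arrives whole, so the backslash-escaped newlines can be unescaped before splitting into records:

rdd = sc.wholeTextFiles("/mnt/data/escaped/")
records = rdd.flatMap(lambda pc: pc[1].replace("\\\n", " ").split("\n"))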

2 More Replies