Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

mashaye
by New Contributor
  • 26985 Views
  • 6 replies
  • 2 kudos

How can I call a stored procedure in Spark SQL?

I have seen the following code: val url = "jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword" val df = sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").load() But I ...

Latest Reply
j500sut
New Contributor III
  • 2 kudos

This doesn't seem to be supported. There is an alternative, but it requires using pyodbc and adding it to your init script. Details can be found here: https://datathirst.net/blog/2018/10/12/executing-sql-server-stored-procedures-on-databricks-pyspark I hav...

5 More Replies
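For reference, here is a minimal sketch of the pyodbc approach described in the reply above, assuming the ODBC driver and pyodbc were already installed by a cluster init script; the server, database, credentials, and procedure name are placeholders.

```python
# Minimal sketch: call a SQL Server stored procedure from a Databricks notebook
# via pyodbc. Assumes the ODBC driver and pyodbc were installed by an init script;
# all connection details and the procedure name below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server.database.windows.net;"
    "DATABASE=your_db;UID=your_user;PWD=your_password"
)
cursor = conn.cursor()
cursor.execute("EXEC dbo.your_stored_procedure ?", ("some_argument",))
conn.commit()   # needed if the procedure modifies data
cursor.close()
conn.close()
```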
tripplehay777
by New Contributor
  • 19065 Views
  • 1 replies
  • 0 kudos

How can I create a table from a CSV file whose first column contains data in dictionary (JSON-like) format?

I have a CSV file with the first column containing data in dictionary form (keys: value). [see below] I tried to create a table by uploading the CSV file directly to Databricks, but the file can't be read. Is there a way for me to flatten or conver...

Latest Reply
MaxStruever
New Contributor II
  • 0 kudos

This is apparently a known issue; Databricks has its own CSV format handler which can handle this: https://github.com/databricks/spark-csv The SQL API CSV data source for Spark can infer data types: CREATE TABLE cars USING com.databricks.spark.csv OP...

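As an alternative on newer runtimes, here is a hedged sketch that reads such a file with Spark's built-in CSV reader and parses the dictionary-like column with from_json; the file path, column names, and JSON schema are assumptions.

```python
# Sketch: read a CSV whose first column holds JSON-like dictionaries, then
# flatten that column with from_json. Path, column names, and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()

raw = spark.read.option("header", "true").csv("/FileStore/tables/sample.csv")

attrs_schema = StructType([
    StructField("key1", StringType()),
    StructField("key2", StringType()),
])

flattened = (raw
             .withColumn("attrs", from_json(col("dict_col"), attrs_schema))
             .select("attrs.*", "other_col"))
flattened.show()
```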
tonyp
by New Contributor II
  • 17962 Views
  • 1 replies
  • 1 kudos

How to pass Python variables to a shell script?

How can I pass Python variables to a shell script in a Databricks notebook? Can the Python parameters be passed from the first cmd to the next %sh cmd?

Latest Reply
erikvisser1
New Contributor II
  • 1 kudos

I found the answer here: https://stackoverflow.com/questions/54662605/how-to-pass-a-python-variables-to-shell-script-in-azure-databricks-notebookbles Basically: %python import os; l = ['A','B','C','D']; os.environ['LIST'] = ' '.join(l); print(os.getenv('L...

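To make that pattern concrete, here is a small sketch of the approach from the linked Stack Overflow answer: export the value as an environment variable in a Python cell, then read it from the shell. The variable name LIST follows the answer; everything else is illustrative.

```python
# Python cell: expose a Python value to later %sh cells via an environment variable.
import os

items = ['A', 'B', 'C', 'D']
os.environ['LIST'] = ' '.join(items)
print(os.getenv('LIST'))

# In a following notebook cell you would then use the shell magic, e.g.:
#   %sh
#   echo $LIST
```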
EmilianoParizz1
by New Contributor
  • 11382 Views
  • 4 replies
  • 0 kudos

How to set the timestamp format when reading CSV

I have a Databricks 5.3 cluster on Azure which runs Apache Spark 2.4.0 and Scala 2.11. I'm trying to parse a CSV file with a custom timestamp format, but I don't know which datetime pattern format Spark uses. My CSV looks like this: Timestamp, Name, Va...

Latest Reply
wellington72019
New Contributor II
  • 0 kudos

# In Python: explicitly define the schema, then read in the CSV data using the schema and a defined timestamp format....

3 More Replies
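Here is a minimal sketch of the schema-plus-timestampFormat approach the reply outlines, on Spark 2.4; the column names, pattern, and path are assumptions.

```python
# Sketch: read a CSV with a custom timestamp format by supplying an explicit
# schema and the timestampFormat option (Spark 2.4 uses java.text.SimpleDateFormat
# patterns). Column names, pattern, and path are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, TimestampType, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("Timestamp", TimestampType()),
    StructField("Name", StringType()),
    StructField("Value", DoubleType()),
])

df = (spark.read
      .option("header", "true")
      .option("timestampFormat", "yyyy/MM/dd HH:mm:ss")
      .schema(schema)
      .csv("/FileStore/tables/data.csv"))
df.printSchema()
```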
martinch
by New Contributor II
  • 22519 Views
  • 4 replies
  • 0 kudos

DROP TABLE IF EXISTS does not work

When I try to run the command spark.sql("DROP TABLE IF EXISTS table_to_drop") and the table does not exist, I get the following error: AnalysisException: "Table or view 'table_to_drop' not found in database 'null';;\nDropTableCommand `table_to_drop...

Latest Reply
StevenWilliams
New Contributor II
  • 0 kudos

I agree about this being a usability bug. The documentation clearly states that if the optional flag "IF EXISTS" is provided, the statement will do nothing. https://docs.databricks.com/spark/latest/spark-sql/language-manual/drop-table.html Drop Table ...

3 More Replies
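Until that behaviour changes, one hedged workaround sketch: qualify the table with a database (the error above shows the database resolving to 'null'), or guard the call yourself. The database name here is a placeholder.

```python
# Workaround sketch for the 'database null' case: qualify the table name with a
# database, or catch the AnalysisException yourself. 'default' is a placeholder.
from pyspark.sql import SparkSession
from pyspark.sql.utils import AnalysisException

spark = SparkSession.builder.getOrCreate()

try:
    spark.sql("DROP TABLE IF EXISTS default.table_to_drop")
except AnalysisException as e:
    # Fall back to ignoring the error when the table genuinely does not exist.
    print(f"Ignoring drop failure: {e}")
```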
Dee
by New Contributor
  • 12581 Views
  • 2 replies
  • 0 kudos

Resolved! How to change the schema of a Spark SQL DataFrame

I am new to Spark and just started an online PySpark tutorial. I uploaded the JSON data in Databricks and wrote the commands as follows: df = sqlContext.sql("SELECT * FROM people_json") df.printSchema() from pyspark.sql.types import * data_schema =...

Latest Reply
bhanu2448
New Contributor II
  • 0 kudos

http://www.bigdatainterview.com/

1 More Replies
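Since the linked reply gives no detail, here is a minimal sketch of the usual approach: define a StructType and apply it while reading the JSON, rather than altering the DataFrame afterwards. The field names and path are assumptions.

```python
# Sketch: define an explicit schema and apply it while reading the JSON data,
# instead of relying on the inferred schema. Field names and path are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()

data_schema = StructType([
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True),
])

df = spark.read.schema(data_schema).json("/FileStore/tables/people.json")
df.printSchema()
```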
GuidoPereyra_
by New Contributor II
  • 7984 Views
  • 2 replies
  • 0 kudos

Databricks Delta - UPDATE error

Hi, We got the following error when we tried to UPDATE a delta table running concurrent notebooks that all end with an update to the same table. " com.databricks.sql.transaction.tahoe.ConcurrentAppendException: Files were added matching 'true' by a ...

Latest Reply
GuidoPereyra_
New Contributor II
  • 0 kudos

Hi @matt@direction.consulting I just found the following doc: https://docs.azuredatabricks.net/delta/isolation-level.html#set-the-isolation-level. In my case, I could fix it by partitioning the table, and I think that is the only way for concurrent updates in t...

1 More Replies
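To make the partitioning fix concrete, a hedged sketch: partition the Delta table on the key that separates the concurrent writers, and restrict each UPDATE to its own partition so the transactions no longer conflict. The table, column, and values are placeholders.

```python
# Sketch: avoid ConcurrentAppendException by partitioning the Delta table and
# scoping each concurrent UPDATE to a disjoint partition. Names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One-time setup: rewrite the table partitioned by the column the writers disagree on.
(spark.table("events")
 .write
 .format("delta")
 .mode("overwrite")
 .partitionBy("region")
 .saveAsTable("events_partitioned"))

# Each concurrent notebook then updates only its own partition.
spark.sql("""
  UPDATE events_partitioned
  SET status = 'processed'
  WHERE region = 'EU' AND status = 'pending'
""")
```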
kali_tummala
by New Contributor II
  • 11087 Views
  • 5 replies
  • 0 kudos

Why is Databricks Spark faster than AWS EMR Spark?

https://databricks.com/blog/2017/07/12/benchmarking-big-data-sql-platforms-in-the-cloud.html Hi all, just wondering why Databricks Spark is a lot faster on S3 compared with AWS EMR Spark. Both systems are on Spark version 2.4. Does Databricks have ...

Latest Reply
RafiKurlansik
Databricks Employee
  • 0 kudos

I think you can get some pretty good insight into the optimizations on Databricks here: https://docs.databricks.com/delta/delta-on-databricks.html Specifically, check out the sections on caching, z-ordering, and join optimization. There's also a grea...

4 More Replies
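For illustration, a brief sketch of trying two of the Databricks-specific features the reply mentions (Delta cache and Z-ordering) from a notebook; the table and column names are placeholders, and these commands only work on Databricks.

```python
# Sketch: exercise two Databricks-specific optimizations the reply mentions.
# Table and column names are placeholders; these commands are Databricks-only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Co-locate related records in the same files to speed up selective queries.
spark.sql("OPTIMIZE events ZORDER BY (eventType)")

# Pull hot data into the Delta (disk) cache on the cluster's local SSDs.
spark.sql("CACHE SELECT * FROM events WHERE eventDate >= '2019-01-01'")
```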
DanielAnderson
by New Contributor
  • 7082 Views
  • 1 replies
  • 0 kudos

"AmazonS3Exception: The bucket is in this region" error

I have read access to an S3 bucket in an AWS account that is not mine. For more than a year I've had a job successfully reading from that bucket using dbutils.fs.mount(...) and sqlContext.read.json(...). Recently the job started failing with the exc...

Latest Reply
Chandan
New Contributor II
  • 0 kudos

@andersource Looks like the bucket is in us-east-1 but you've configured your Amazon S3 cloud platform with us-west-2. Can you try configuring the client to use us-east-1? I hope it will work for you. Thank you.

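If the bucket really is in a different region than the client assumes, one hedged option is to point the S3A connector at the bucket's region explicitly before reading; the bucket name and endpoint here are assumptions, and this sketch reads directly rather than mounting.

```python
# Sketch: point the S3A connector at the bucket's actual region (us-east-1)
# before reading. Bucket name and endpoint are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.us-east-1.amazonaws.com")

df = spark.read.json("s3a://your-bucket/path/to/data/")
df.show()
```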
User16301465121
by New Contributor
  • 11420 Views
  • 3 replies
  • 0 kudos

How can I exit from a Notebook which is used as a job?

How can I quit from a notebook in the middle of an execution based on some condition?

Latest Reply
SamsonXia
New Contributor II
  • 0 kudos

exit(value: String): void. Calling dbutils.notebook.exit in a job causes the notebook to complete successfully. If you want to cause the job to fail, throw an exception.

2 More Replies
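A small sketch of that pattern in a Databricks notebook, where dbutils is predefined; the condition and return value are placeholders.

```python
# Sketch (Databricks notebook, where `dbutils` is predefined): stop the notebook
# early on some condition, or raise to make the surrounding job fail.
no_new_files = True  # placeholder condition

if no_new_files:
    dbutils.notebook.exit("stopped early: nothing to process")  # job still succeeds

# ...the rest of the notebook only runs when the condition is False...

# To make the job FAIL instead, raise an exception:
# raise Exception("stopping the job with a failure")
```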
_not_provid1755
by New Contributor
  • 7486 Views
  • 3 replies
  • 0 kudos

Write an empty DataFrame to CSV

I'm writing my output (entity) DataFrame to a CSV file. The statement below works well when the DataFrame is non-empty: entity.repartition(1).write.mode(SaveMode.Overwrite).format("csv").option("header", "true").save(tempLocation) It's not working wh...

Latest Reply
mrnov
New Contributor II
  • 0 kudos

The same problem here (similar code and the same behavior with Spark 2.4.0, running with spark-submit on Windows and on Linux): dataset.coalesce(1).write().option("charset", "UTF-8").option("header", "true").mode(SaveMod...

2 More Replies
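One common hedged workaround is to check whether the DataFrame is empty first and, if so, write only the header (or skip the write entirely); the path and stand-in schema below are assumptions.

```python
# Sketch: guard the CSV write so an empty DataFrame doesn't fail; optionally
# write a header-only file instead. The path and schema are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

spark = SparkSession.builder.getOrCreate()

schema = StructType([StructField("id", IntegerType()), StructField("name", StringType())])
entity = spark.createDataFrame([], schema)  # stand-in (empty) DataFrame

temp_location = "/tmp/entity_csv"

if entity.head(1):  # non-empty
    (entity.repartition(1)
     .write.mode("overwrite")
     .option("header", "true")
     .csv(temp_location))
else:
    # Header-only fallback: write just the column names as a single text line.
    header = ",".join(entity.columns)
    line_schema = StructType([StructField("line", StringType())])
    (spark.createDataFrame([(header,)], line_schema)
     .coalesce(1).write.mode("overwrite").text(temp_location))
```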
rishigc
by New Contributor
  • 18533 Views
  • 1 replies
  • 0 kudos

Split a row into multiple rows based on a column value in Spark SQL

Hi, I am trying to split a record in a table to 2 records based on a column value. Please refer to the sample below. The input table displays the 3 types of Product and their price. Notice that for a specific Product (row) only its corresponding col...

Latest Reply
mathan_pillai
Databricks Employee
  • 0 kudos

Hi @rishigc, you can use something like the below: SELECT explode(arrays_zip(split(Product, '+'), split(Price, '+'))) AS product_and_price FROM df or df.withColumn("product_and_price", explode(arrays_zip(split(Product, '+'), split(Price, '+')))).select( ...

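A hedged, runnable version of that idea in PySpark follows; the sample data is made up, and note that split() takes a regex, so the '+' delimiter is escaped.

```python
# Sketch: split delimited Product/Price strings and explode them into one row
# per pair. Sample data is made up; split() takes a regex, so '+' is escaped.
from pyspark.sql import SparkSession
from pyspark.sql.functions import split, explode, arrays_zip, col

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([("A+B", "10+20")], ["Product", "Price"])

result = (df
          .withColumn("products", split(col("Product"), r"\+"))
          .withColumn("prices", split(col("Price"), r"\+"))
          .withColumn("pp", explode(arrays_zip(col("products"), col("prices"))))
          .select(col("pp.products").alias("Product"), col("pp.prices").alias("Price")))
result.show()
```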
siddhu308
by New Contributor II
  • 7034 Views
  • 2 replies
  • 0 kudos

Column-wise sum in a PySpark DataFrame

I have a DataFrame of 18,000,000 rows and 1,322 columns with '0' and '1' values. I want to find how many '1's are in every column. Below is the dataset: se_00001 se_00007 se_00036 se_00100 se_0010p se_00250

Latest Reply
mathan_pillai
Databricks Employee
  • 0 kudos

Hi Siddhu, you can use df.select(sum("col1"), sum("col2"), sum("col3")) where col1, col2, col3 are the column names for which you would like to find the sum. Please let us know if it answers your question. Thanks

1 More Replies
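With 1,322 columns, spelling out every sum by hand is impractical, so here is a hedged sketch that builds the aggregation list programmatically; the sample data is made up.

```python
# Sketch: sum every column of a wide 0/1 DataFrame without listing columns by hand.
# The sample data is made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, 0, 1), (0, 0, 1), (1, 1, 1)],
    ["se_00001", "se_00007", "se_00036"],
)

counts = df.select([F.sum(F.col(c)).alias(c) for c in df.columns])
counts.show()
```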
Pascalvan_Belle
by New Contributor
  • 9463 Views
  • 1 replies
  • 0 kudos

How to create a surrogate key sequence which I can use in SCD cases?

Hi Community I would like to know if there is an option to create an integer sequence which persists even if the cluster is shut down. My target is to use this integer value as a surrogate key to join different tables or do Slowly changing dimensio...

Latest Reply
girivaratharaja
New Contributor III
  • 0 kudos

Hi @pascalvanbellen, there is no concept of FK, PK, or SK in Spark, but Databricks Delta automatically takes care of SCD-type scenarios. https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html#slowly-changing-data-scd-type-2 ...

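As one hedged pattern (not a true database sequence), the dimension can be persisted as a Delta table and new surrogate keys assigned as max(existing key) plus a row number over the incoming rows. The table and column names below are placeholders, and dim_customer is assumed to already exist.

```python
# Sketch: generate surrogate keys that survive cluster restarts by persisting the
# dimension table and offsetting new keys from the current maximum. Names are
# placeholders; dim_customer is assumed to already exist as a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

new_rows = spark.createDataFrame([("cust-42", "Alice")], ["business_key", "name"])

# Current maximum key from the persisted dimension (falls back to 0 when empty).
max_key = (spark.table("dim_customer")
           .agg(F.coalesce(F.max("surrogate_key"), F.lit(0)))
           .first()[0])

w = Window.orderBy("business_key")
keyed = new_rows.withColumn("surrogate_key", F.lit(max_key) + F.row_number().over(w))

(keyed.select("surrogate_key", "business_key", "name")
 .write.format("delta").mode("append").saveAsTable("dim_customer"))
```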
srchella
by New Contributor
  • 4010 Views
  • 1 replies
  • 0 kudos

How to take the distinct of multiple columns (more than 2 columns) in a PySpark DataFrame?

I have 10+ columns and want to take distinct rows, taking multiple columns into consideration. How can I achieve this using PySpark DataFrame functions?

Latest Reply
Sandeep
Contributor III
  • 0 kudos

You can use dropDuplicates https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=distinct#pyspark.sql.DataFrame.dropDuplicates

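A quick hedged illustration of dropDuplicates with an explicit column subset; the column names and data are made up.

```python
# Sketch: keep one row per combination of several columns using dropDuplicates.
# Column names and data are made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(1, "a", "x", 10), (1, "a", "x", 99), (2, "b", "y", 10)],
    ["id", "cat", "grp", "value"],
)

deduped = df.dropDuplicates(["id", "cat", "grp"])
deduped.show()
```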
