Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Raghav2
by New Contributor II
  • 10029 Views
  • 1 reply
  • 0 kudos

AnalysisException: [COLUMN_ALREADY_EXISTS] The column `<col>` already exists. Consider to choose another name or rename the existing column

Hey guys, I'm facing this exception while trying to read a public S3 bucket: "AnalysisException: [COLUMN_ALREADY_EXISTS] The column `<column name>` already exists. Consider to choose another name or rename the existing column." Also, the thing is I...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

You can use dbutils to read the file: %fs head <s3 path>
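For reference, a minimal sketch of that suggestion in a Python notebook cell; the S3 path is a placeholder, and duplicate header names are one common cause of [COLUMN_ALREADY_EXISTS] on read:

```python
# Peek at the first bytes of the file to spot duplicate column headers.
# The path below is hypothetical; dbutils is provided in Databricks notebooks.
path = "s3://some-public-bucket/data/file.csv"
print(dbutils.fs.head(path, 500))  # print the first 500 bytes of the file
```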

kll
by New Contributor III
  • 19167 Views
  • 4 replies
  • 0 kudos

PythonException: TypeError: float() argument must be a string or a number, not 'NoneType'

I get a PythonException: float() argument must be a string or a number, not 'NoneType' when attempting to save a DataFrame as a Delta table. Here's the line of code that I am running: ```df.write.format("delta").saveAsTable("schema1.df_table", mode="...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Even though the code throws the error during the write, the issue can lie in code that runs earlier, since Spark is lazily evaluated. The error "TypeError: float() argument must be a string or a number, not 'NoneType'" generally comes when we pass a variable to float...
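To illustrate the failure mode described above, here is a hedged sketch (the column name and small sample data are hypothetical; the table name follows the question): a Python UDF that calls float() only runs when the write materializes the plan, and a None guard avoids the TypeError.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType

@F.udf(returnType=DoubleType())
def to_float(v):
    # Guard against None so float() never receives a NoneType.
    return float(v) if v is not None else None

df = spark.createDataFrame([("1.5",), (None,)], ["raw"])
df = df.withColumn("value", to_float("raw"))

# Without the guard in the UDF, the TypeError would surface here, because
# Spark evaluates the UDF only when the write forces the plan to run.
df.write.format("delta").mode("overwrite").saveAsTable("schema1.df_table")
```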

3 More Replies
erigaud
by Honored Contributor
  • 11127 Views
  • 4 replies
  • 6 kudos

Resolved! Save to parquet with fixed size

I have a large DataFrame (>1TB) that I have to save in Parquet format (not Delta for this use case). When I save the DataFrame using .format("parquet") it results in several parquet files. I want these files to be a specific size (i.e. not larger than 500 MB...

Latest Reply
Lakshay
Databricks Employee
  • 6 kudos

In addition to the solutions provided above, we can also control the behavior by specifying the maximum records per file, if we have a rough estimate of how many records should be written to a file to reach 500 MB in size: df.write.option("maxRecordsPerFile",...
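Completing the truncated snippet, a sketch assuming roughly 100,000 records per 500 MB file; the right number depends entirely on your data's bytes per record, and the output path is hypothetical:

```python
(df.write  # df is the >1TB DataFrame from the question
    .format("parquet")
    .option("maxRecordsPerFile", 100000)  # cap rows written per output file
    .mode("overwrite")
    .save("/mnt/output/large_dataset"))
```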

3 More Replies
kll
by New Contributor III
  • 10155 Views
  • 5 replies
  • 0 kudos

AnalysisException when attempting to save a Spark DataFrame as a Delta table

I get an `AnalysisException: Failed to merge incompatible data types LongType and StringType` when attempting to run the below command: `df.write.format("delta").saveAsTable("schema.k_adhoc.df", mode="overwrite")`. I am casting the column before saving:...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

The issue seems to be that the job is trying to merge columns with different schemas. Could you please make sure that the schemas match for the columns?
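A hedged sketch of one way to resolve it, assuming the existing table stores the conflicting column as a string (the column name is hypothetical; the table name follows the question):

```python
from pyspark.sql import functions as F

# Cast the LongType column to string so it merges with the existing schema.
df = df.withColumn("some_col", F.col("some_col").cast("string"))
df.write.format("delta").mode("overwrite").saveAsTable("schema.k_adhoc.df")
```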

4 More Replies
alexisjohnson
by New Contributor III
  • 18553 Views
  • 5 replies
  • 7 kudos

Resolved! Window function using last/last_value with PARTITION BY/ORDER BY has unexpected results

Hi, I'm wondering if this is the expected behavior when using last or last_value in a window function? I've written a query like this: select col1, col2, last_value(col2) over (partition by col1 order by col2) as column2_last from values ...

Latest Reply
Carv
New Contributor II
  • 7 kudos

For those stumbling across this: it seems LAST_VALUE emulates the same functionality as it does in SQL Server, which does not, in most people's minds, have a proper row/range frame for the window. You can adjust it with the below syntax. I understand l...
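A sketch of that frame adjustment on illustrative data: with an ORDER BY, the default window frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so last_value returns the current row; widening the frame yields the true last value per partition.

```python
spark.sql("""
    SELECT col1,
           col2,
           last_value(col2) OVER (
               PARTITION BY col1
               ORDER BY col2
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
           ) AS column2_last
    FROM VALUES (1, 10), (1, 20), (2, 30) AS t(col1, col2)
""").show()
```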

4 More Replies
Enzo_Bahrami
by New Contributor III
  • 1003 Views
  • 0 replies
  • 0 kudos

Connect File Arrival Trigger to on-prem file server

Hello everyone! I was wondering if there is any way to connect a File Arrival Trigger to an on-prem file server. Can I use JDBC or ODBC? Will those connect to an on-prem file server (not a SQL Server)? Thank you

Labels: Data Engineering, File Arrival Trigger
Volkan_Gumuskay
by New Contributor III
  • 12414 Views
  • 6 replies
  • 3 kudos

Resolved! Is there a way to run a single line or selected lines in a notebook?

Assume we have a given cell: print('A') print('B') print('C'). I want to run only the line print('B'). Obviously, I can separate the cell into three and run the one I want, but this is time-consuming. This is a feature I use so often (e.g. in PyCharm) and wo...

Latest Reply
Tharun-Kumar
Databricks Employee
  • 3 kudos

@Volkan_Gumuskay This is also available as an option in the notebook run options.

5 More Replies
Hemant
by Valued Contributor II
  • 4755 Views
  • 2 replies
  • 3 kudos

Row_Num function in spark-sql

I have a doubt: row_num with ORDER BY in Spark SQL gives a different result (non-deterministic output) every time I execute it. Is it due to parallelism in Spark? Any approach to tackle it? I order by a date column and an integer column and take...

Latest Reply
Tharun-Kumar
Databricks Employee
  • 3 kudos

@Hemant If the ORDER BY clause provided yields a unique ordering, then we get deterministic output. For example: if we create a rowID for this dataset with CustomerID used in the ORDER BY clause, then depending upon the runtime we may get non-deterministi...
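A small sketch of that point on hypothetical data: a repeated value in the ORDER BY makes row_number ambiguous, and adding a unique tie-breaker column restores determinism.

```python
from pyspark.sql import functions as F, Window

df = spark.createDataFrame(
    [("2023-01-01", 2), ("2023-01-01", 1), ("2023-01-02", 3)],
    ["event_date", "id"],
)
# event_date alone is ambiguous for the duplicate date; adding the unique
# id column as a tie-breaker makes the numbering stable across runs.
w = Window.orderBy("event_date", "id")
df.withColumn("row_num", F.row_number().over(w)).show()
```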

1 More Replies
alexiswl
by Contributor
  • 13000 Views
  • 3 replies
  • 0 kudos

Resolved! Merge Schema Error Message despite setting option to true

Has anyone come across this error before? ```A schema mismatch detected when writing to the Delta table (Table ID: d4b9c839-af0b-4b62-aab5-1072d3a0fa9d). To enable schema migration using DataFrameWriter or DataStreamWriter, please set: '.option("merge...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @alexiswl, share the wisdom! By marking the best answers, you help others in our community find valuable information quickly and efficiently. Thanks!
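For anyone landing here later, a hedged sketch of the option the error message names (the table name is hypothetical); note that mergeSchema handles additive column changes, not incompatible type changes:

```python
(df.write  # df is the DataFrame whose schema gained new columns
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")  # allow new columns to be added to the table
    .saveAsTable("my_schema.my_table"))
```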

2 More Replies
Yogybricks
by New Contributor II
  • 2877 Views
  • 2 replies
  • 0 kudos
Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Yogybricks, hope you are well. Just wanted to see if you were able to find an answer to your question, and would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!

1 More Replies
zsucic1
by New Contributor III
  • 4811 Views
  • 2 replies
  • 0 kudos

Resolved! Trigger file_arrival of job on Delta Lake table change

Is there a way to avoid having to create an external data location simply to trigger a job when new data arrives in a specific Delta Lake table?

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @zsucic1, hope you are well. Just wanted to see if you were able to find an answer to your question, and would you like to mark an answer as best? It would be really helpful for the other members too. Cheers!

1 More Replies
KalingaSena
by New Contributor II
  • 5558 Views
  • 3 replies
  • 0 kudos

Not able to execute below SQL query in Databricks notebook because of parse error

Hi Team, I am unable to run the below command and it is giving me a parse error. Can anyone point out the issue with the code?

Latest Reply
BkP
Contributor
  • 0 kudos

Hi, from the error it looks like there is no space between the brackets and the "in" keyword after the WHERE clause. Can you please try again and see if you are facing the same error?

2 More Replies
apiury
by New Contributor III
  • 3320 Views
  • 2 replies
  • 1 kudos

Consume gold data layer from web application

Hello! We are developing a web application in .NET and need to consume data from the gold layer (as if we had a relational database). How can we do it? Should we export data from the gold layer to SQL Server?

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @apiury, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your ...

1 More Replies
NithinTiruveedh
by New Contributor II
  • 30542 Views
  • 12 replies
  • 0 kudos

How can I split a Spark DataFrame into n equal DataFrames (by rows)? I tried to add a Row ID column to achieve this but was unsuccessful.

I have a dataframe that has 5M rows. I need to split it up into 5 dataframes of ~1M rows each. This would be easy if I could create a column that contains Row ID. Is that possible?
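One hedged way to realize the row-ID approach the question describes, shown on a small stand-in DataFrame; note that an un-partitioned window pulls all rows into a single task, which is costly at 5M rows:

```python
from pyspark.sql import functions as F, Window

df = spark.range(100).toDF("value")  # stand-in for the 5M-row DataFrame
n = 5  # number of output DataFrames

w = Window.orderBy("value")  # any stable, unique ordering works
with_id = df.withColumn("row_id", F.row_number().over(w))

chunk = (with_id.count() + n - 1) // n  # ceiling division
parts = [
    with_id.where(
        (F.col("row_id") > i * chunk) & (F.col("row_id") <= (i + 1) * chunk)
    )
    for i in range(n)
]
```

If exactly equal sizes are not required, df.randomSplit([1.0] * n) is a simpler alternative.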

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @NithinTiruveedh, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answ...

11 More Replies
BkP
by Contributor
  • 1283 Views
  • 0 replies
  • 0 kudos

Higher Order Function: AGGREGATE not working in the example notebook mentioned in Documentation

Hi All, I am running a sample notebook from the Databricks Documentation section on higher-order functions on my Community Edition workspace, on DBR 12.2 LTS. Databricks Documentation URL: https://docs.databricks.com/optimizations/higher-o...
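As a quick sanity check that the function itself works on DBR 12.2, a minimal sketch of the aggregate higher-order function from the linked docs:

```python
# Sum an array literal with aggregate(expr, start, merge); prints total = 10.
spark.sql(
    "SELECT aggregate(array(1, 2, 3, 4), 0, (acc, x) -> acc + x) AS total"
).show()
```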

