Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Forum Posts

erigaud
by Honored Contributor
  • 12162 Views
  • 7 replies
  • 5 kudos

Resolved! Autoloader Excel Files

Hello, I looked at the documentation but could not find what I wanted. Is there a way to load Excel files using Auto Loader and, if yes, what options should be given to specify the format, sheet name, etc.? Thank you, friends!

Latest Reply
Hemant
Valued Contributor II
  • 5 kudos

Unfortunately, Databricks Auto Loader doesn't support Excel file types for incrementally loading new files. Link: https://docs.databricks.com/ingestion/auto-loader/options.html. If your Excel file contains a single sheet, then there is a workaround.

6 More Replies
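The single-sheet workaround above usually amounts to tracking which files you have already processed yourself and parsing each new one with `pandas.read_excel`. Below is a minimal sketch of that incremental pattern using only the standard library; `load_new_files` is a hypothetical helper name, and the actual Excel parsing is left to the `process_file` callback you supply.

```python
import os

def load_new_files(input_dir, checkpoint_path, process_file):
    """Process only files not yet recorded in the checkpoint file
    (an Auto Loader-style incremental pattern, done by hand)."""
    seen = set()
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            seen = set(f.read().splitlines())
    # Pick up only the Excel files we have never seen before.
    new_files = sorted(name for name in os.listdir(input_dir)
                       if name.endswith(".xlsx") and name not in seen)
    for name in new_files:
        process_file(os.path.join(input_dir, name))
    # Record the processed files so the next run skips them.
    with open(checkpoint_path, "a") as f:
        for name in new_files:
            f.write(name + "\n")
    return new_files
```

In a real job, `process_file` might call `pandas.read_excel(path, sheet_name=0)` and append the result to a Delta table; that part depends on your cluster's installed libraries.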
sumit23
by New Contributor
  • 1813 Views
  • 0 replies
  • 0 kudos

[Error] [SECRET_FUNCTION_INVALID_LOCATION]: While running secret function with create or replace

Hi, recently we made an upgrade to our Databricks warehouse, transitioning from SQL Classic to SQL Pro. However, we started encountering the following error message when attempting to execute a "CREATE OR REPLACE" table query with the secret functio...

BasavarajAngadi
by Contributor
  • 4844 Views
  • 4 replies
  • 1 kudos

Resolved! Question on transaction logs and versioning in Databricks

Hi experts, no doubt Databricks supports ACID properties. But when it comes to versioning, how many versions does Databricks capture? For example: if I do any DML operations on top of Delta tables, every time I do, it captures the tran...

Latest Reply
stefnhuy
New Contributor III
  • 1 kudos

Hey, as a data enthusiast myself, I find this topic quite intriguing. Databricks indeed does a fantastic job of supporting ACID properties, ensuring data integrity, and allowing for versioning. To address BasavarajAngadi's question, Databricks effici...

3 More Replies
hamzatazib96
by New Contributor III
  • 88235 Views
  • 21 replies
  • 12 kudos

Resolved! Read file from dbfs with pd.read_csv() using databricks-connect

Hello all, as described in the title, here's my problem: 1. I'm using databricks-connect in order to send jobs to a Databricks cluster. 2. The "local" environment is an AWS EC2. 3. I want to read a CSV file that is in DBFS (Databricks) with pd.read_cs...

Latest Reply
so16
New Contributor II
  • 12 kudos

Please, I need your help; I still get the same issue after reading all your comments. I am using databricks-connect (version 13.1) on PyCharm and trying to load files that are on DBFS storage. spark = DatabricksSession.builder.remote(host=host...

20 More Replies
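A recurring source of this confusion: on a cluster, `pd.read_csv("/dbfs/FileStore/...")` works because DBFS is exposed through the local FUSE mount, but with databricks-connect the Python code runs on your own machine, where no such mount exists, so you must either read via `spark.read.csv(...).toPandas()` or copy the file locally first. The small helper below (a sketch; the function name is made up) shows the path translation the FUSE mount implies:

```python
def to_fuse_path(path):
    """Translate a DBFS URI into the local FUSE mount path that
    file-based libraries like pandas can read on a cluster."""
    if path.startswith("dbfs:/"):
        return "/dbfs/" + path[len("dbfs:/"):].lstrip("/")
    if path.startswith("/dbfs/"):
        return path  # already a FUSE path
    if path.startswith("/"):
        return "/dbfs" + path
    raise ValueError(f"not a DBFS path: {path}")
```

Note this translation only helps when the code actually executes on the cluster; from a databricks-connect client the `/dbfs` mount simply is not there.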
dataengineer17
by New Contributor II
  • 24568 Views
  • 5 replies
  • 3 kudos

Databricks execution failed with error state: InternalError, error message: failed to update run

I am receiving this error: "Databricks execution failed with error state: InternalError, error message: failed to update run GlobalRunId(xx, RunId(yy))". This appears as an error message in Azure Data Factory when I use it to schedule a Databricks noteb...

Latest Reply
saipujari_spark
Databricks Employee
  • 3 kudos

@dataengineer17 It could be coming from the internal jobs service. If the issue persists, I would recommend creating a support ticket.

4 More Replies
charry
by New Contributor II
  • 13048 Views
  • 5 replies
  • 9 kudos

Creating a Spark DataFrame from a very large dataset

I am trying to create a DataFrame using Spark but am having some issues with the amount of data I'm using. I made a list with over 1 million entries through several API calls. The list was above the threshold for spark.rpc.message.maxSize, and it was ...

Latest Reply
saipujari_spark
Databricks Employee
  • 9 kudos

Hey @charry, look at this KB article; it should help address the issue: https://kb.databricks.com/execution/spark-serialized-task-is-too-large

4 More Replies
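The KB article linked above boils down to keeping each serialized task small. One common approach when building a DataFrame from a huge driver-side list is to create it in fixed-size slices and union them (or, alternatively, raise `spark.rpc.message.maxSize` in the cluster's Spark config at startup). A minimal sketch of the slicing part, with a hypothetical helper name:

```python
def chunked(items, size):
    """Yield successive fixed-size slices of a list, so that each
    driver-to-executor message stays under the RPC size limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Each chunk could then be turned into a DataFrame with `spark.createDataFrame(chunk, schema)` and accumulated with `unionByName`; the right chunk size depends on your row width and the configured maxSize.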
HariharaSam
by Contributor
  • 130362 Views
  • 6 replies
  • 3 kudos

Resolved! Alter Delta table column datatype

Hi, I have a Delta table that contains data, and I need to alter the datatype of a particular column. For example: consider a table named A with a column named Amount of datatype Decimal(9,4). I need to alter the Amount column's datatype from...

Latest Reply
saipujari_spark
Databricks Employee
  • 3 kudos

Hi @HariharaSam, the following documentation covers how to alter a Delta table schema: https://docs.databricks.com/delta/update-schema.html

5 More Replies
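For a type change like Decimal(9,4) to a wider decimal, `ALTER TABLE ... ALTER COLUMN` alone generally isn't enough (it historically supports only limited changes such as comments and nullability); the usual pattern in the linked docs is to rewrite the table with the column cast to the new type. As a sketch only, the helper below generates that rewrite as SQL; the function name and the explicit column list are assumptions for illustration, and rewriting a table this way replaces its data, so verify against the update-schema documentation first.

```python
def cast_column_sql(table, column, new_type, other_columns):
    """Build a CREATE OR REPLACE TABLE statement that rewrites `table`
    with `column` cast to `new_type`, keeping the other columns as-is."""
    cols = ", ".join(list(other_columns) + [f"CAST({column} AS {new_type}) AS {column}"])
    return f"CREATE OR REPLACE TABLE {table} AS SELECT {cols} FROM {table}"
```

The generated string would then be run with `spark.sql(...)`; an equivalent DataFrame route is reading the table, applying `.cast(...)`, and writing back with `mode("overwrite")` plus `option("overwriteSchema", "true")`.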
Data_Engineer_3
by New Contributor III
  • 22492 Views
  • 12 replies
  • 4 kudos

FileNotFoundError: [Errno 2] No such file or directory: '/FileStore/tables/flight_data.zip' (the file exists at the location mentioned)

I am new to learning Spark and working on some practice. I have uploaded a zip file to the DBFS /FileStore/tables directory and am trying to run Python code to unzip the file. The Python code is: from zipfile import * with ZipFile("/FileStore/tables/fli...

Latest Reply
883022
New Contributor II
  • 4 kudos

What if changing the runtime is not an option? I'm experiencing a similar issue using the following: %pip install -r /dbfs/path/to/file.txt. This worked for a while, but now I'm getting the Errno 2 mentioned above. I am still able to print the same file...

11 More Replies
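The original error is the classic DBFS path mix-up: local-file APIs such as `zipfile` see DBFS only through the FUSE mount, so the path must start with `/dbfs` (i.e. `/dbfs/FileStore/tables/flight_data.zip`), while `/FileStore/tables/...` is a DBFS-relative path that Python's `open` cannot find. A small self-contained sketch of the extraction itself (the helper name is made up):

```python
import zipfile

def unzip(zip_path, dest_dir):
    """Extract a zip archive with the standard library and return its members.
    On Databricks, pass the FUSE form of the path, e.g.
    '/dbfs/FileStore/tables/flight_data.zip', not '/FileStore/tables/...'."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()
```

`from zipfile import *` in the original post also works, but importing the module explicitly keeps the namespace clean.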
Murthy1
by Contributor II
  • 3785 Views
  • 2 replies
  • 0 kudos

Terraform - Install egg file from S3

I am looking to install Python egg files on all my clusters. The egg file is located in an S3 location. I tried the following code, which didn't work: resource "databricks_dbfs_file" "app" { source = "${S3_Path}/foo.egg" path = "/FileStore...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Murthy1, does @Retired_mod's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? We'd love to hear from you. Thanks!

1 More Replies
scvbelle
by New Contributor III
  • 3585 Views
  • 2 replies
  • 0 kudos

Resolved! Optimising the creation of a change log for transactional sources in an ETL pipeline

I have multiple transactional sources feeding into my Azure Databricks environment (MySQL, MSSQL, MySQL data dumps) for a client's company-wide data lake. They are basically all "managed sources" (using different DBMSs, receiving application dumps, etc.), but I d...

Data Engineering
azure
Change Capture
ETL
JDBC
Workflows
Latest Reply
scvbelle
New Contributor III
  • 0 kudos

This is the implementation of the original post, for interest (excluded from the original post to reduce length): # some blacklisting and schema renamings where necessary (due to conflicts and/or illegal characters) SourceName=str SchemaName=str TableN...

1 More Replies
bchaubey
by Contributor II
  • 8080 Views
  • 5 replies
  • 0 kudos

Resolved! How to process all Azure storage files from Databricks

Hi, I want to process all files that are in my Azure storage using Databricks. What is the process?

Latest Reply
bchaubey
Contributor II
  • 0 kudos

Could you please provide me the code for my scenario?

4 More Replies
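The usual shape of "process everything in a container" is: list the paths, then apply one handler per file. On Databricks you would list `abfss://container@account.dfs.core.windows.net/...` paths with `dbutils.fs.ls` (or read them in bulk with `spark.read`); the stdlib sketch below stands in for that listing step with a local directory walk, and the function name is made up for illustration.

```python
import os

def process_all_files(root, handler):
    """Walk a directory tree and apply `handler` to every file, collecting
    the results. A local stand-in for iterating over an Azure container's
    blobs; swap the os.walk listing for dbutils.fs.ls on Databricks."""
    results = []
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            results.append(handler(os.path.join(dirpath, name)))
    return results
```

If every file shares one format, a single `spark.read.format(...).load("abfss://.../*")` is usually simpler and faster than per-file loops.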
Govind_PS
by New Contributor II
  • 3151 Views
  • 3 replies
  • 2 kudos

Resolved! Databricks Certified Data Engineer Associate Certificate or Badge not received

I cleared the Databricks Certified Data Engineer Associate exam on 14th July 2023 at 14:30 hrs. I received the email stating that I cleared the exam but haven't received the certificate and badge yet. Attaching the screenshot here. Thanks...

Latest Reply
APadmanabhan
Databricks Employee
  • 2 kudos

Hi @Govind_PS, could you please let us know if you've received the credential? If not, please share your Webassessor email address.

2 More Replies
AW
by New Contributor III
  • 14578 Views
  • 3 replies
  • 8 kudos

Resolved! Creating a service principal with admin role on account level in Azure Databricks using Terraform

Dear Community, in the GUI I can grant the admin role to a service principal with a simple switch. How can I achieve the same in Terraform? Do you have some code examples?

Latest Reply
AW
New Contributor III
  • 8 kudos

Dear @Pat Sienkiewicz, works perfectly! It would be so easy if the documentation were better... Regards, Adrian

2 More Replies
missyT
by New Contributor III
  • 3170 Views
  • 3 replies
  • 4 kudos

Resolved! AI assistant and machine learning

I am looking to create a basic virtual assistant (AI) that implements machine learning mechanisms. I have some basic knowledge of Python, and I have seen some courses on the internet (YouTube in particular) that look very interesting. But for the moment...

Latest Reply
valeryuaba
New Contributor III
  • 4 kudos

Hey everyone! I'm clearly excited about this topic since I'm a huge fan of AI assistants and machine learning. MissyT, creating a basic virtual assistant with machine learning capabilities is an excellent idea! With your basic knowledge of Python and...

2 More Replies
Data4
by New Contributor II
  • 4492 Views
  • 1 replies
  • 5 kudos

Resolved! Load multiple delta tables at once from Sql server

What's the best way to efficiently move multiple SQL tables in parallel into Delta tables?

Latest Reply
Tharun-Kumar
Databricks Employee
  • 5 kudos

@Data4 To enable parallel read and write operations, the ThreadPool functionality can be leveraged. This process involves specifying a list of tables that need to be read, creating a method for reading these tables from the JDBC source, and saving t...


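The ThreadPool pattern the reply describes can be sketched with the standard library's `concurrent.futures`: one task per table, each running a read-and-write callable concurrently. The helper name is hypothetical, and the per-table work is stubbed; in a real job `copy_one` would be something like `spark.read.jdbc(...).write.format("delta").saveAsTable(...)`.

```python
from concurrent.futures import ThreadPoolExecutor

def copy_tables_in_parallel(tables, copy_one, max_workers=4):
    """Run one copy task per table concurrently and return a
    table-name -> result mapping. Threads suit this workload because
    each task mostly waits on I/O (JDBC reads, Delta writes)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order, so results line up with table names.
        return dict(zip(tables, pool.map(copy_one, tables)))
```

Keep `max_workers` modest: each concurrent JDBC read puts load on the source database as well as the cluster.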