Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

gibbona1
by New Contributor II
  • 4033 Views
  • 2 replies
  • 1 kudos

Resolved! Correct setup and format for calling REST API for image classification

I trained a basic image classification model on MNIST using Tensorflow, logging the experiment run with MLflow.Model: "my_sequential" _________________________________________________________________ Layer (type) Output Shape ...

Latest Reply
Atanu
Databricks Employee
  • 1 kudos

@Anthony Gibbons​ maybe this GitHub issue matches your use case: https://github.com/mlflow/mlflow/issues/1661

1 More Replies
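For reference, a model served through MLflow's scoring server typically accepts a TF-Serving style JSON body. A minimal sketch of building and posting such a payload with the standard library; the endpoint URL, token, and image shape are assumptions, not values from the thread:

```python
import json
import urllib.request

def build_payload(images):
    """Wrap image arrays in the "instances" JSON format that
    TF-Serving style scoring endpoints accept."""
    return json.dumps({"instances": images})

def score(url, token, images):
    """POST the payload to a served-model endpoint (hypothetical URL/token)."""
    req = urllib.request.Request(
        url,
        data=build_payload(images).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example payload for one 28x28 MNIST image (zeros as a stand-in):
sample = [[[0.0] * 28 for _ in range(28)]]
body = build_payload(sample)
```

The key point from the linked issue is matching the input shape the model was logged with; a shape mismatch is a common cause of scoring errors.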
matt_t
by New Contributor
  • 3271 Views
  • 2 replies
  • 1 kudos

Resolved! S3 sync from bucket to a mounted bucket causing a "[Errno 95] Operation not supported" error for some but not all files

Trying to sync one folder from an external S3 bucket to a folder on a mounted S3 bucket, running some simple code on Databricks to accomplish this. The data is a bunch of CSVs and PSVs. The only problem is some of the files are giving this error that t...

Latest Reply
Atanu
Databricks Employee
  • 1 kudos

@Matthew Tribby​ Does the above suggestion work? Please let us know if you need further help on this. Thanks.

1 More Replies
bonjih
by New Contributor
  • 6916 Views
  • 3 replies
  • 3 kudos

Resolved! AttributeError: module 'dbutils' has no attribute 'fs'

Hi, using db in SageMaker to connect EC2 to S3. Following other examples I get "AttributeError: module 'dbutils' has no attribute 'fs'"... I guess I'm missing an import?

Latest Reply
Atanu
Databricks Employee
  • 3 kudos

Agree with @Werner Stinckens​. You may also try importing dbutils - @ben Hamilton​

2 More Replies
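Outside a Databricks notebook (e.g. in SageMaker), `dbutils` is not a plain importable module, which is what produces this AttributeError. A common pattern is to construct it from the active SparkSession; this sketch assumes `pyspark.dbutils` (shipped with databricks-connect) is available in the environment:

```python
def get_dbutils(spark):
    """Return a DBUtils handle for the given SparkSession.

    Inside a Databricks notebook, `dbutils` is injected into the global
    namespace; elsewhere it must be built via pyspark.dbutils."""
    try:
        # Available when databricks-connect / Databricks Runtime pyspark is installed.
        from pyspark.dbutils import DBUtils
        return DBUtils(spark)
    except ImportError:
        # Fallback for a notebook context where dbutils already exists as a global.
        import IPython
        return IPython.get_ipython().user_ns["dbutils"]

# Usage (sketch): dbutils = get_dbutils(spark); dbutils.fs.ls("/mnt")
```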
Jeff1
by Contributor II
  • 2356 Views
  • 3 replies
  • 5 kudos

Resolved! Understanding Spark DataFrames versus R DataFrames

Community, I’ve been struggling with utilizing the R language in Databricks, and after reading “Mastering Spark with R,” I believe my initial problems stemmed from not understanding the difference between Spark DataFrames and R DataFrames within the databric...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 5 kudos

As Spark DataFrames are handled in a distributed way on the workers, it is better to just use Spark DataFrames. Additionally, collect() is executed on the driver and pulls the whole dataset into memory, so it shouldn't be used in production.

2 More Replies
Bhanu1
by New Contributor III
  • 4204 Views
  • 3 replies
  • 6 kudos

Resolved! Is it possible to mount different Azure Storage Accounts for different clusters in the same workspace?

We have a development and a production data lake. Is it possible to have a production or development cluster access only respective mounts using init scripts?

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 6 kudos

Yes, it is possible. Additionally, a mount is permanent and created in DBFS, so it is enough to run it one time. You can have, for example, the following configuration: in Azure you can have 2 Databricks workspaces, and the cluster in every workspace can have an env variable is...

2 More Replies
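The env-variable approach described in the reply could be sketched like this: each cluster sets an environment variable, and the mount code picks the storage account from it. The account names, variable name, and mount points below are placeholders, not real resources:

```python
import os

# Hypothetical mapping from environment name to ADLS storage account.
STORAGE_ACCOUNTS = {"dev": "mydevlake", "prod": "myprodlake"}

def mount_source_for(env):
    """Build the abfss:// URL to mount for the given environment."""
    account = STORAGE_ACCOUNTS[env]
    return f"abfss://data@{account}.dfs.core.windows.net/"

def mount_for_cluster(dbutils):
    """Sketch: read the env var set on the cluster (Spark config UI or
    init script) and mount only the matching data lake."""
    env = os.environ.get("DEPLOY_ENV", "dev")
    dbutils.fs.mount(
        source=mount_source_for(env),
        mount_point=f"/mnt/{env}-lake",
        extra_configs={},  # service-principal / OAuth configs would go here
    )
```

Since mounts are workspace-wide, note that separate workspaces per environment (as the reply suggests) give stronger isolation than per-cluster env variables alone.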
jstatic
by New Contributor II
  • 4473 Views
  • 5 replies
  • 1 kudos

Resolved! Quick way to know whether a delta table is z-ordered

Hello, I created a delta table using SQL, specifying the partitioning and z-order strategy. I then loaded data into it for the first time by doing a write as delta with mode of append and save as table. However, I don’t know of a way to verify...

Latest Reply
User16763506477
Contributor III
  • 1 kudos

If there is no data, then lines 10 and 11 will not have any impact. I am assuming that lines (1-5) are creating an empty table, but the actual load is happening when you do the df.write operation. Also, delta.autoOptimize.autoCompact will not trigger the z-or...

4 More Replies
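One way to verify z-ordering actually ran is to look at `DESCRIBE HISTORY` for an OPTIMIZE entry whose `operationParameters.zOrderBy` is non-empty. A sketch of that check on the returned rows; the row shape below mimics Delta's history output (rows converted via `asDict()`), which is an assumption to keep the example self-contained:

```python
import json

def was_zordered(history_rows, column=None):
    """Return True if any history entry is an OPTIMIZE with a zOrderBy clause.

    `history_rows` is a list of dicts, e.g. from
    [r.asDict() for r in spark.sql("DESCRIBE HISTORY my_table").collect()]."""
    for row in history_rows:
        if row.get("operation") != "OPTIMIZE":
            continue
        # zOrderBy is stored as a JSON-encoded list of column names.
        zorder = json.loads(row.get("operationParameters", {}).get("zOrderBy", "[]"))
        if zorder and (column is None or column in zorder):
            return True
    return False

# Example rows mimicking Delta history output:
rows = [
    {"operation": "WRITE", "operationParameters": {}},
    {"operation": "OPTIMIZE", "operationParameters": {"zOrderBy": '["event_date"]'}},
]
```

Note that specifying z-order in the table definition alone does not apply it; an explicit `OPTIMIZE ... ZORDER BY (...)` run is what produces such a history entry.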
Shehan92
by New Contributor II
  • 3493 Views
  • 2 replies
  • 4 kudos

Resolved! Error in accessing Delta Tables

I'm getting the attached error when accessing delta lake tables in the Databricks workspace. Summary of error: Could not connect to md1n4trqmokgnhr.csnrqwqko4ho.ap-southeast-1.rds.amazonaws.com:3306 : Connection reset. Attached detailed error

Latest Reply
brickster_2018
Databricks Employee
  • 4 kudos

Caused by: java.sql.SQLNonTransientConnectionException: Could not connect to md1n4trqmokgnhr.csnrqwqko4ho.ap-southeast-1.rds.amazonaws.com:3306 : Connection reset at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:...

1 More Replies
hari
by Contributor
  • 5947 Views
  • 8 replies
  • 4 kudos

Resolved! How to write change data from Delta Lake to AWS DynamoDB

Is there a direct way to write data from Delta Lake to AWS DynamoDB? If there is none, is there any other way to do the same?

Latest Reply
jose_gonzalez
Databricks Employee
  • 4 kudos

Hi @Harikrishnan P H​, did @Werner Stinckens​'s reply help you to resolve your issue? If yes, please mark it as best; if not, please let us know.

7 More Replies
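One route for this is Delta Change Data Feed: each changed row carries a `_change_type`, which can be translated into DynamoDB put/delete requests (the actual write would go through boto3's `batch_writer` inside a `foreachBatch` sink). A sketch of just the row-to-request mapping; the key name `id` and the row shapes are hypothetical:

```python
def to_dynamo_action(row):
    """Map a Change Data Feed row (as a dict) to a DynamoDB batch request.

    update_preimage rows are skipped; inserts and update_postimages become
    puts, deletes become key-only delete requests."""
    change = row["_change_type"]
    if change == "update_preimage":
        return None
    # Drop CDF metadata columns (_change_type, _commit_version, ...).
    item = {k: v for k, v in row.items() if not k.startswith("_")}
    if change == "delete":
        return {"DeleteRequest": {"Key": {"id": item["id"]}}}
    return {"PutRequest": {"Item": item}}

# Example CDF rows:
actions = [to_dynamo_action(r) for r in [
    {"id": 1, "val": "a", "_change_type": "insert"},
    {"id": 1, "val": "b", "_change_type": "update_preimage"},
    {"id": 1, "val": "c", "_change_type": "update_postimage"},
    {"id": 2, "val": "x", "_change_type": "delete"},
]]
```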
Vibhor
by Contributor
  • 5091 Views
  • 5 replies
  • 2 kudos

Resolved! Databricks Data Type Conversion error

In Databricks, while writing data to the curated layer, I see the error: Failed to execute user defined function (Double => decimal(38,18)). Has anyone faced such an issue, and how can it be resolved?

Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

What happens if you explicitly cast it? I remember having such issues with implicit casting when going from Spark 2.x to 3.x, but these were solved by using explicit casting (not round()).

4 More Replies
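The explicit cast the reply suggests would look roughly like the sketch below; the column names are placeholders, and the pyspark imports are done lazily so the snippet parses without Spark installed. The `fits_decimal_38_18` helper illustrates why the conversion can fail: decimal(38,18) leaves only 20 digits before the decimal point, so sufficiently large doubles overflow it:

```python
from decimal import Decimal

def fits_decimal_38_18(x):
    """Check whether a float fits decimal(38,18): at most 38-18 = 20
    integer digits are available before the decimal point."""
    d = Decimal(str(x))
    digits_before_point = d.adjusted() + 1
    return digits_before_point <= 38 - 18

def add_explicit_decimal_cast(df, src_col="amount", dst_col="amount_dec"):
    """Replace an implicit Double -> decimal(38,18) conversion with an
    explicit cast (the fix reported for Spark 2.x -> 3.x migrations)."""
    from pyspark.sql.functions import col
    from pyspark.sql.types import DecimalType
    return df.withColumn(dst_col, col(src_col).cast(DecimalType(38, 18)))
```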
Anonymous
by Not applicable
  • 729 Views
  • 1 replies
  • 2 kudos

The Next Databricks Office Hours

Our next Office Hours session is scheduled for March 23, 2022 - 8:00 am PDT. Do you have questions about how to set up or use Databricks? Do you want to get best practices for deploying your use case or tips on data archi...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 2 kudos

Signed in!

bchaubey
by Contributor II
  • 1571 Views
  • 1 replies
  • 0 kudos
Latest Reply
User16764241763
Honored Contributor
  • 0 kudos

@Bhagwan Chaubey​ Maybe you can give this a try, if this is a Blob Storage account: https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python?tabs=environment-variable-windows For Data Lake storage, please try the below: https://do...

Santosh09
by New Contributor II
  • 5886 Views
  • 4 replies
  • 3 kudos

Resolved! Writing a Spark DataFrame to ADLS takes a huge amount of time when the DataFrame holds text data

A Spark DataFrame with text data, where the schema is a Struct type, takes too much time to write/save/push to ADLS or a SQL DB, or to download as CSV.

Latest Reply
User16764241763
Honored Contributor
  • 3 kudos

@shiva Santosh​ Have you checked the count of the dataframe that you are trying to save to ADLS? As @Joseph Kambourakis​ mentioned, explode can result in 1-to-many rows; better to check the dataframe count and see if Spark OOMs in the workspace.

3 More Replies
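The row-count sanity check suggested in the reply comes down to simple arithmetic: `explode` produces one output row per array element, so frames with long arrays can blow up sharply before the write. A small pure-Python sketch of that estimate (on the Spark side you would compare `df.count()` before and after the explode):

```python
def exploded_row_count(array_lengths):
    """Rows produced by explode(): one output row per array element.
    (explode drops rows with empty arrays; explode_outer keeps them.)"""
    return sum(array_lengths)

def blowup_factor(array_lengths):
    """How many times larger the exploded frame is than the input."""
    n = len(array_lengths)
    return exploded_row_count(array_lengths) / n if n else 0.0
```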
pawelmitrus
by Contributor
  • 1777 Views
  • 1 replies
  • 2 kudos


Do I always need to manually invite members of my AAD tenant to the ADB workspace if I don't have SCIM integration configured? EDIT: solved, it works when you go through the Azure Portal and get in with the "Launch Workspace" button on the ADB resource overview ...

Latest Reply
User16764241763
Honored Contributor
  • 2 kudos

Hello @pawelmitrus​, users with Owner or Contributor roles should click on the "Launch Workspace" button in the Azure portal. Other users should be explicitly granted access to the workspace to be able to log in. Regards, Arvind

rachelk05
by New Contributor II
  • 1950 Views
  • 1 replies
  • 4 kudos

Resolved! Databricks Community: Cluster Terminated Reason: Unexpected Launch Failure

Hi, I've been encountering the following error when I try to start a cluster, but the status page says everything is fine. Is something happening, or are there other steps I can try? Time: 2022-03-13 14:40:51 EDT. Message: Cluster terminated. Reason: Unexpected...

Latest Reply
User16753724663
Valued Contributor
  • 4 kudos

Hi @Rachel Kelley​, we had some internal service interruptions due to which this issue occurred. Our engineering team has applied the fix, and cluster startup works as expected. Sincere apologies for the inconvenience caused here. Regards, Darshan

Anonymous
by Not applicable
  • 5506 Views
  • 2 replies
  • 0 kudos

How to read a compressed file in spark if the filename does not include the file extension for that compression format?

For example, let's say I have a file called some-file, which is a gzipped text file. If I try spark.read.text('some-file'), it will return a bunch of gibberish since it doesn't know that the file is gzipped. I'm looking to manually tell spark the fil...

1 More Replies
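Spark infers the codec from the file extension, so `some-file` is read as plain bytes; the usual workaround on Databricks is to rename or copy the file with a `.gz` suffix (e.g. via `dbutils.fs.mv`) before reading. A stdlib sketch of the detect-and-rename idea on a local file:

```python
import gzip
import os
import shutil
import tempfile

def looks_gzipped(path):
    """Detect gzip by its magic bytes (1f 8b) instead of by extension."""
    with open(path, "rb") as f:
        return f.read(2) == b"\x1f\x8b"

def ensure_gz_suffix(path):
    """Rename a gzipped file so extension-based readers (like Spark) decode it."""
    if looks_gzipped(path) and not path.endswith(".gz"):
        new_path = path + ".gz"
        shutil.move(path, new_path)
        return new_path
    return path

# Demo: create a gzipped file with no extension, then fix its name.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "some-file")
with gzip.open(path, "wt") as f:
    f.write("hello spark")
fixed = ensure_gz_suffix(path)
```

After the rename, `spark.read.text(fixed)` would decompress transparently; on DBFS the same rename would be done with `dbutils.fs.mv` rather than `shutil`.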