Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Antoine1 (New Contributor III)
  • 6074 Views
  • 7 replies
  • 4 kudos

Seem to have outdated Account Console

Hello, We have been testing with Databricks for a long time and are now going to run it in production. Our tests were done on Databricks for AWS using the Standard plan, and we have since upgraded to the Premium plan. One of the aims of upgrading plans w...

Latest Reply
Antoine1
New Contributor III
  • 4 kudos

Hello, Does anyone have a proper way of contacting support? As explained in some answers on this thread, we aren't able to create a support ticket in the help centre. We contacted our account executive 10 days ago to try to understand why we c...

6 More Replies
by brian_0305 (New Contributor II)
  • 4259 Views
  • 3 replies
  • 2 kudos

Using JDBC to connect to a Databricks default cluster and read a table into a PySpark DataFrame: all columns turned into the same value as the column name

I used code like the snippet below to connect over JDBC to the Databricks default cluster and read a table into a PySpark DataFrame: url = 'jdbc:databricks://[workspace domain]:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=[path];AuthMech=3;UID=token;PWD=[your_ac...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

@yu zhang​: It looks like the issue with the first code snippet you provided is that it does not specify the correct query to retrieve the data from your database. When using the load() method with the jdbc data source, you need to provide a SQL quer...

2 More Replies
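The fix suggested in the reply above can be sketched as a connection-configuration snippet. This is a hedged sketch, not runnable outside a workspace: [workspace-domain], [http-path], [token], and default.my_table are placeholders, and the exact URL options depend on your Databricks JDBC driver version.

```python
# Sketch: read a Databricks table over JDBC into a PySpark DataFrame,
# passing an explicit SQL query via the "query" option rather than
# relying on load() with no query (placeholders in brackets).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

url = (
    "jdbc:databricks://[workspace-domain]:443/default;"
    "transportMode=http;ssl=1;httpPath=[http-path];"
    "AuthMech=3;UID=token;PWD=[token]"
)

df = (
    spark.read.format("jdbc")
    .option("url", url)
    # An explicit query avoids reading back header-like rows where every
    # column just repeats its own name.
    .option("query", "SELECT * FROM default.my_table")
    .load()
)
```

Spark's generic JDBC source accepts either a `dbtable` or a `query` option; giving the full query makes explicit what is pushed to the remote endpoint.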
by Erik_L (Contributor II)
  • 2543 Views
  • 3 replies
  • 1 kudos

Resolved! How to keep data in time-based localized clusters after joining?

I have a bunch of data frames from different data sources. They are all time series data in order of a column timestamp, which is an int32 Unix timestamp. I can join them together by this and another column join_idx which is basically an integer inde...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

@Erik Louie​: If the data frames have different time zones, you can use Databricks' timezone conversion functions to convert them to a common time zone. You can use the from_utc_timestamp or to_utc_timestamp function to convert the timestamp column to ...

2 More Replies
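For illustration, what Spark's from_utc_timestamp does to a single value can be sketched in plain Python with the standard-library zoneinfo module (the zone name and helper name here are just examples, not part of any Spark API):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def from_utc_ts(ts: datetime, tz: str) -> datetime:
    """Mimic Spark's from_utc_timestamp for one value: interpret the
    naive timestamp as UTC and render it in the target zone (naive result)."""
    return ts.replace(tzinfo=timezone.utc).astimezone(ZoneInfo(tz)).replace(tzinfo=None)

ts = datetime(2023, 6, 1, 12, 0, 0)          # naive, assumed UTC
print(from_utc_ts(ts, "America/New_York"))   # 2023-06-01 08:00:00 (EDT, UTC-4)
```

In Spark itself the equivalent column operation would be `F.from_utc_timestamp(F.col("timestamp"), "America/New_York")` from `pyspark.sql.functions`.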
by shaunangcx (New Contributor II)
  • 3676 Views
  • 3 replies
  • 0 kudos

Resolved! Command output disappearing (Not sure what's the root cause)

I have a workflow which runs every month and creates a new notebook containing the outputs from the main notebook. However, after some time, the outputs from the created notebook disappear. Is there any way I can retain the outputs?

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Shaun Ang​: There are a few possible reasons why the outputs from the created notebook might be disappearing. Notebook permissions: it's possible that the user or service account running the workflow does not have permission to write to the destinati...

2 More Replies
by sintsan (New Contributor II)
  • 2398 Views
  • 3 replies
  • 0 kudos

Azure Databricks DBFS Root, Storage Account Networking

For an Azure Databricks workspace with VNet injection, we would like to change the networking on the default managed Azure Databricks storage account (dbstorage) from "Enabled from all networks" to "Enabled from selected virtual networks and IP addresses". Can this...

Latest Reply
karthik_p
Esteemed Contributor
  • 0 kudos

@Sander Sintjorissen​ The root storage bucket usually has the directories listed in this article: https://learn.microsoft.com/en-us/azure/databricks/dbfs/root-locations. To store logs related to auditing you can create another storage account and add that. Hope this ...

2 More Replies
by usman_wains (New Contributor)
  • 834 Views
  • 1 reply
  • 0 kudos

Request to unlock workspace

Please unlock my workspace so that I can easily log in to our workspace. I have been waiting for a few days.

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Adding @Vidula Khanna​ and @Kaniz Fatma​ for visibility to help you with your request

by RayelightOP (New Contributor II)
  • 1502 Views
  • 1 reply
  • 2 kudos

Azure Blob Storage SAS keys expired for the Apache Spark tutorial

The "Apache Spark programming with Databricks" tutorial uses Blob storage parquet files on Azure. To access those files a SAS key is used in the configuration files. Those keys were generated 5 years ago; however, they expired at the beginning of this mont...

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

Adding @Vidula Khanna​ and @Kaniz Fatma​ for visibility to help with your request

by kumarPerry (New Contributor II)
  • 2894 Views
  • 3 replies
  • 0 kudos

Notebook connectivity issue with AWS S3 bucket using mounting

When connecting to an AWS S3 bucket using DBFS, the application throws an error like org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7864387.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7864387.0 (TID 1709732...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Amrendra Kumar​ Hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us s...

2 More Replies
by Robin_200273 (Contributor)
  • 17187 Views
  • 8 replies
  • 19 kudos

Resolved! Delta Live Tables failed to launch pipeline cluster

I'm trying to run through the Delta Live Tables quickstart example on Azure Databricks. When trying to start the pipeline I get the following error: Failed to launch pipeline cluster 0408-131049-n3g9vr4r: The operation could not be performed on your a...

Latest Reply
kunaldeb
New Contributor III
  • 19 kudos

This communication really helped me. I am now successfully able to execute the DLT pipeline. Thanks to all contributors.

7 More Replies
by Pawan1 (New Contributor II)
  • 1816 Views
  • 1 reply
  • 2 kudos

Your administrator has forbidden Scala UDFs from being run on this cluster. How do I enable access to Scala UDFs on an Azure Databricks cluster?

Hi all, when I ran a Scala UDF on an Azure Databricks 10.1 (includes Apache Spark 3.2.0, Scala 2.12) cluster, it worked. However, when I tried to run the same notebook on a 10.4 LTS (includes Apache Spark 3.2.1, Scala 2.12) cluster, I ha...

Latest Reply
Debayan
Databricks Employee
  • 2 kudos

Hi, Are you trying this with High concurrency clusters? Also, please tag @Debayan Mukherjee​ with your next response so that I will get notified.

by Kumar4567 (New Contributor II)
  • 4433 Views
  • 3 replies
  • 0 kudos

Disable downloading files for a specific group of users?

I see we can disable/enable the download button for the entire workspace using the "download button for notebook results" setting. Is there a way to disable/enable this just for a specific group of users?

Latest Reply
Kumar4567
New Contributor II
  • 0 kudos

Hi Vidula/Suteja, sorry, no, I could not find what you mentioned. Can you please provide some screenshots? I only see Admin Settings when I click the user icon in the top right corner of the Databricks workspace. Under Admin Settings, I see the below for Acc...

2 More Replies
by Stokholm (New Contributor III)
  • 14841 Views
  • 9 replies
  • 1 kudos

Pushdown of datetime filter to date partition.

Hi everybody, I have 20 years of data, 600M rows. I have partitioned them on year and month to generate a file size which seems reasonable (128 MB). All data is queried using timestamps, as all queries need to filter on the exact hours. So my requirement...

Latest Reply
Stokholm
New Contributor III
  • 1 kudos

Hi guys, thanks for your advice. I found a solution: we upgraded the Databricks Runtime to 12.2, and now the pushdown of the partition filter works. The documentation said that 10.4 would be adequate, but obviously it wasn't enough.

8 More Replies
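When a runtime does not push a timestamp filter down to year/month partitions, one common workaround is to derive explicit partition predicates from the timestamp bounds and add them to the query. A minimal sketch in plain Python (the helper name is made up for illustration):

```python
from datetime import datetime

def partition_months(start: datetime, end: datetime) -> list[tuple[int, int]]:
    """Return the (year, month) partitions covering [start, end],
    suitable for building an explicit partition-pruning predicate."""
    months = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        months.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return months

# Partitions touched by a query spanning the 2019/2020 year boundary:
print(partition_months(datetime(2019, 11, 15), datetime(2020, 2, 3)))
# [(2019, 11), (2019, 12), (2020, 1), (2020, 2)]
```

The resulting pairs can then be attached as explicit `year = ... AND month = ...` clauses alongside the timestamp filter, so partitions are pruned even on runtimes where the automatic pushdown does not fire.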

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group