Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

claudiazi
by New Contributor II
  • 867 Views
  • 5 replies
  • 1 kudos

[TABLE_OR_VIEW_ALREADY_EXISTS] when running create or replace view

When I'm running `dbt run -s model` for different models in parallel on a Databricks general compute cluster, I'm getting the error TABLE_OR_VIEW_ALREADY_EXISTS. Also, at the same time, the view/table was not created at all. But when I run them in sequence, ...

Latest Reply
claudiazi
New Contributor II
  • 1 kudos

@raphaelblg yes! I'm the owner. However, these views are inside the hive_metastore catalog. Could it be the reason? Many thanks!

4 More Replies
hukel
by Contributor
  • 1865 Views
  • 2 replies
  • 0 kudos

InconsistentReadException: The file might have been updated during query - CSV backed table

I have some CSV files that I upload to DBFS storage several times a day. From these CSVs, I have created SQL tables: CREATE TABLE IF NOT EXISTS masterdata.lookup_host USING CSV OPTIONS (header "true", inferSchema "true") LOCATION '/mnt/masterdata/...

Latest Reply
hukel
Contributor
  • 0 kudos

One approach I'm testing (positive results so far, but still early):
%sql
-- Prep and cleanup
REFRESH TABLE masterdata.lookup_host;
DROP TABLE IF EXISTS t_hosts;
-- Forcibly cache the needed columns before using the data in another query.
CACHE TABLE...
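A rough PySpark equivalent of this refresh-and-cache workaround (the selected columns and the t_hosts temp view are illustrative, not taken from the original post):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Drop any stale file-listing metadata for the CSV-backed table.
spark.sql("REFRESH TABLE masterdata.lookup_host")

# Materialize the needed columns in memory before other queries touch the
# underlying CSV files again, narrowing the window for InconsistentReadException.
hosts = spark.table("masterdata.lookup_host").select("hostname", "site")  # example columns
hosts.cache()
hosts.count()  # force the cache to populate
hosts.createOrReplaceTempView("t_hosts")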

1 More Replies
egndz
by New Contributor II
  • 1479 Views
  • 3 replies
  • 0 kudos

Cluster Memory Issue (Termination)

Hi, I have a single-node personal cluster with 56 GB memory (Node type: Standard_DS5_v2, runtime: 14.3 LTS ML). The same configuration is used for the job cluster as well, and the following problem applies to both clusters. To start with: once I start my ...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @egndz, It seems like you’re dealing with memory issues in your Spark cluster, and I understand how frustrating that can be. Initial Memory Allocation: The initial memory allocation you’re observing (18 GB used + 4.1 GB cached) is likely a com...
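If it helps while investigating, here is a small PySpark sketch for printing the standard Spark memory settings in effect on the cluster (keys not explicitly set fall back to the "<default>" placeholder):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# On a single-node cluster the driver and executor share the same VM, so both
# the driver and executor memory settings matter.
for key in ("spark.driver.memory", "spark.executor.memory",
            "spark.memory.fraction", "spark.memory.storageFraction"):
    print(key, "=", spark.conf.get(key, "<default>"))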

2 More Replies
SamGreene
by Contributor
  • 189 Views
  • 1 replies
  • 0 kudos

String to date conversion errors

Hi, I am getting data from CDC on SQL Server using Informatica, which is writing parquet files to ADLS. I read the parquet files using DLT and end up with the date data as a string such as this: '20240603164746563'. I couldn't get this to convert using m...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @SamGreene, You’re on the right track with using the TO_TIMESTAMP function. However, you might be encountering issues due to the format of your timestamp string. The string ‘20240603164746563’ seems to represent a timestamp down to the millisecond...
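A minimal PySpark sketch of that conversion (the cdc_ts_str column name is made up for illustration): parse the first 14 digits down to second precision and keep the trailing three digits separately in case the milliseconds are needed.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("20240603164746563",)], ["cdc_ts_str"])

df = (
    df
    # '20240603164746' -> 2024-06-03 16:47:46 using the yyyyMMddHHmmss pattern.
    .withColumn("cdc_ts", F.to_timestamp(F.substring("cdc_ts_str", 1, 14), "yyyyMMddHHmmss"))
    # Keep the trailing three digits (milliseconds) as an integer column.
    .withColumn("cdc_millis", F.substring("cdc_ts_str", 15, 3).cast("int"))
)
df.show(truncate=False)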

traillog
by New Contributor
  • 224 Views
  • 1 replies
  • 0 kudos

Unzip multiple zip files in databricks

I have a zip file which in turn has multiple zip files inside it. I tried to write code in a Databricks notebook to unzip all these files at once, but I ran into an error. So I started to unzip these one by one, but the code which worked in unzipping...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @traillog, did you try this?
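In case it is useful, a plain standard-library sketch (both paths are placeholders) that extracts the outer archive and then every .zip found inside it, matching the one level of nesting described in the question:

import os
import zipfile

outer_zip = "/dbfs/tmp/outer.zip"      # placeholder: the zip that contains other zips
extract_root = "/dbfs/tmp/unzipped"    # placeholder: where everything should land

# Extract the outer archive first.
os.makedirs(extract_root, exist_ok=True)
with zipfile.ZipFile(outer_zip) as zf:
    zf.extractall(extract_root)

# Then extract each inner .zip into its own subfolder.
for dirpath, _, filenames in os.walk(extract_root):
    for name in filenames:
        if name.lower().endswith(".zip"):
            inner_path = os.path.join(dirpath, name)
            target_dir = os.path.join(dirpath, os.path.splitext(name)[0])
            with zipfile.ZipFile(inner_path) as zf:
                zf.extractall(target_dir)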

rodrigosanchesz
by New Contributor II
  • 188 Views
  • 0 replies
  • 0 kudos

Change adls gen2 attached to Unity Catalog metastore from premium to standard

Hello, Our cloud platform engineer created Azure storage for the production Unity Catalog metastore in our environment, but mistakenly chose the Premium tier instead of the Standard tier. Unfortunately, this decision is impacting our costs on Azure, as...

Ender
by New Contributor II
  • 637 Views
  • 3 replies
  • 1 kudos

Resolved! Accessing ADLS Gen2 related Hadoop configuration in notebook

I have a cluster in which I have the required configuration to access an ADLS Gen2, and it works without any problems. I want to access this storage using the Hadoop filesystem APIs. To achieve this, I am trying to get the Hadoop configuration from th...
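For context, one common way to do this from a Python notebook is to go through the JVM SparkContext. A hedged sketch (the property key is just an example of an ABFS setting, and the private _jsc handle may not be available in every cluster access mode):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The org.apache.hadoop.conf.Configuration object backing this cluster.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

# Example: read one ABFS-related property (substitute your storage account).
key = "fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net"
print(key, "=", hadoop_conf.get(key))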

Latest Reply
Ender
New Contributor II
  • 1 kudos

By the way, how do you achieve inline code highlighting in the editor? I tried `` but it didn't work.

2 More Replies
Shawn_Eary
by Contributor
  • 286 Views
  • 1 replies
  • 2 kudos

Resolved! DAIS24 Attendee Badge - Oops

I wasn't at DAIS24, but I received one of these emails and it appears to have come from Databricks. I think I was given the badge in error. Can we have it removed? Just wondering, Shawn

Latest Reply
Sujitha
Community Manager
  • 2 kudos

Hi @Shawn_Eary, thank you for bringing this to our attention. We were conducting a test on badges, and some of our community members may have received this email by accident. We have revoked the changes now; please ignore the message.

ravitejasutrave
by New Contributor
  • 274 Views
  • 1 replies
  • 0 kudos

Databricks + confluent schema registry - Schema not found error

I am running Kafka producer code on Databricks 12.2. I am testing Avro serialization of messages with the help of the Confluent schema registry. I configured the 'to_avro' function to read the schema from the schema registry, but I am getting the error below: > org.a...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @ravitejasutrave, Ensure that the schema is compatible with the data you're trying to serialize. Double-check your configuration for connecting to the schema registry. Make sure that the schemaRegistryAddress points to the correct URL where your s...
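As a quick sanity check on the registry side, a small sketch (the URL and subject name are placeholders) that uses the Confluent Schema Registry REST API to confirm the address is reachable and the subject is registered:

import requests

schema_registry_url = "https://my-registry:8081"  # placeholder address
subject = "my-topic-value"                        # placeholder subject name

# A 200 here means the registry address (and any auth in front of it) is reachable.
resp = requests.get(f"{schema_registry_url}/subjects", timeout=10)
resp.raise_for_status()
print("registered subjects:", resp.json())

# Fetch the latest schema for the subject the producer is configured to use.
resp = requests.get(f"{schema_registry_url}/subjects/{subject}/versions/latest", timeout=10)
resp.raise_for_status()
print("latest schema id:", resp.json()["id"])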

PrathviS
by New Contributor
  • 265 Views
  • 1 replies
  • 0 kudos

Training link deprecated: How to ingest data for Databricks SQL

I am currently doing a course in Databricks Academy: How to ingest data for Databricks SQL. To create a table in the external location, I am provided with a link that is not working anymore. Below is the link: wasbs://courseware@dbacademy.blob.core.wi...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @PrathviS, Thank you for sharing your concern with us! To expedite your request, please list your concerns on our ticketing portal. Our support staff would be able to act faster on the resolution (our standard resolution time is 24-48 hours).

traillog
by New Contributor
  • 236 Views
  • 1 replies
  • 0 kudos

Unable to unzip files recursively and copy into a different folder

I am currently trying to unzip files recursively from one folder (source folder) and copy all the unzipped files into the destination folder using Databricks (PySpark). The destination path is still empty even after running this code. I tried looking f...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @traillog, To recursively unzip files from a source folder, you can use the os.walk() function to traverse through all subdirectories and files. Your current implementation only processes the top-level directory. To handle recursion, you need to i...
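A minimal sketch along those lines (source and destination paths are placeholders): walk the source folder with os.walk() and extract every zip it finds into its own subfolder of the destination.

import os
import zipfile

source_dir = "/dbfs/mnt/source"       # placeholder source folder
dest_dir = "/dbfs/mnt/destination"    # placeholder destination folder

os.makedirs(dest_dir, exist_ok=True)

# os.walk visits every subdirectory, so zips nested below the top level are found too.
for dirpath, _, filenames in os.walk(source_dir):
    for name in filenames:
        if name.lower().endswith(".zip"):
            zip_path = os.path.join(dirpath, name)
            target_dir = os.path.join(dest_dir, os.path.splitext(name)[0])
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(target_dir)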

vabadzhiev
by New Contributor II
  • 516 Views
  • 2 replies
  • 0 kudos

Tableau Prep Save Output to Databricks

Has anyone run into use cases where your data scientist/data engineer end users build Tableau Prep Flows and steps in Tableau Prep Flow require saving output back into Databricks? There appears to be no native support for this in Tableau Prep if the ...

Latest Reply
vabadzhiev
New Contributor II
  • 0 kudos

These are awesome suggestions. To expand on our setup, we also have Informatica Cloud - IICS (CMI, CDI, etc.) connected to the entire setup generally used for bringing data from a source (PaaS, SaaS, On-prem SQL, Flat Files or streaming devices) to D...

1 More Replies
kazinahian
by New Contributor III
  • 3144 Views
  • 3 replies
  • 1 kudos

Seeking Tips: Ways to Master Databricks on Azure?

Hello everyone. I'm currently learning Databricks on Azure through a Udemy course. Recently, I was surprised by a charge of $86 from Azure, which has made me cautious about continuing in this manner. Is there a more cost-effective approach to learn D...

Latest Reply
LauraMurphy
New Contributor II
  • 1 kudos

Thank you so much for the information.

2 More Replies
samarth10
by New Contributor II
  • 173 Views
  • 1 replies
  • 0 kudos

Assigning a group as USER to service principal

How can we assign a group as USER to a service principal using databricks-sdk? Is this not supported?

Latest Reply
samarth10
New Contributor II
  • 0 kudos

I found this API: https://docs.databricks.com/api/account/accountaccesscontrol/updateruleset, but its PUT and GET methods both require a parameter "etag". How can someone know this "etag"?
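The etag is for optimistic concurrency: a GET with an empty etag returns the latest rule set together with its current etag, and that same etag is what the PUT expects (if someone else changed the rule set in between, the update fails and the GET is repeated). A rough sketch with the Python databricks-sdk, assuming it exposes the account access-control rule-set call under this name (verify against your SDK version; the rule-set name is a placeholder):

from databricks.sdk import AccountClient

a = AccountClient()  # account host and credentials come from the environment

# Placeholder rule-set name for the service principal in question.
name = "accounts/<account-id>/servicePrincipals/<application-id>/ruleSets/default"

# An empty etag asks for the latest version; the response carries the etag to
# send back in the subsequent update call.
current = a.access_control.get_rule_set(name=name, etag="")
print(current.etag)
print(current.grant_rules)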

Avvar2022
by Contributor
  • 2067 Views
  • 6 replies
  • 2 kudos

Unity catalog enabled workspace - Is there any way to disable workflow/job creation for certain users

Currently, in a Unity Catalog enabled workspace, users with "Workspace access" can create workflows/jobs; there is no access control available to restrict users from creating jobs/workflows. Use case: In production there is no need for users, data enginee...

Latest Reply
Avvar2022
Contributor
  • 2 kudos

Not being able to restrict the creation of workflows/jobs, alerts, and dashboards makes it difficult for platform admins to keep the system clean and control cost. There is no need for data engineers to create a workflow in production. There is no need for all users ...

5 More Replies