Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

raktim_das
by New Contributor II
  • 912 Views
  • 2 replies
  • 1 kudos

Creating table in Unity Catalog with file scheme dbfs is not supported

code:
# Define the path for the staging Delta table
staging_table_path = "dbfs:/user/hive/warehouse/staging_order_tracking"
spark.sql(
    f"CREATE TABLE IF NOT EXISTS staging_order_tracking USING DELTA LOCATION '{staging_table_path}'"
)
Creating table in U...

Latest Reply
SaiPrakash_653
  • 1 kudos

I believe we can only connect to our storage account containers using mount points. If this is an anti-pattern according to Databricks, what is the alternative? Can you please explain what an external location in UC is? Is it our local system folders or something n...

1 More Replies
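For context, the error in the original post occurs because Unity Catalog does not accept dbfs:/ paths as table locations. Below is a minimal sketch of the usual fix, assuming an external location has already been granted on the cloud storage path; the abfss path and the main.default catalog/schema are placeholders, not values from the thread.

# Point the table at cloud storage covered by a Unity Catalog external location
# (or omit LOCATION entirely to create a managed table).
staging_table_path = "abfss://container@storageaccount.dfs.core.windows.net/staging_order_tracking"
spark.sql(
    f"""
    CREATE TABLE IF NOT EXISTS main.default.staging_order_tracking
    USING DELTA
    LOCATION '{staging_table_path}'
    """
)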
hiepntp
by New Contributor III
  • 58 Views
  • 2 replies
  • 2 kudos

Resolved! Cannot find "Databricks Apps"

Hi, I saw a demo about "Databricks Apps" 2 months ago. I haven't used Databricks for about 3 months, and I recently recreated a Premium Workspace to try something out (I use Azure); however, I can't find "Apps" when I click "New". How can I enable and...

Latest Reply
hiepntp
New Contributor III
  • 2 kudos

Thank you, I found it.

1 More Replies
GandinDaniel1
by New Contributor
  • 86 Views
  • 4 replies
  • 0 kudos

Issue Querying Registered Tables on Glue Catalog via Databricks

I'm having an issue querying registered tables on the Glue catalog through Databricks, with the following error: AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view looker.ccc_data cannot be found. Verify the spelling and correctness of the schema a...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Can you specify the full catalog.schema.table name? Also, check the current schema with SELECT current_schema();

3 More Replies
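A quick sketch of the two checks suggested in the reply; glue_catalog below is a placeholder for however the Glue catalog is actually registered in this workspace.

# See which catalog and schema unqualified table names currently resolve against.
spark.sql("SELECT current_catalog(), current_schema()").show()

# Query with the fully qualified three-level name; 'glue_catalog' is a placeholder.
spark.sql("SELECT * FROM glue_catalog.looker.ccc_data LIMIT 10").show()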
sravs1
by New Contributor
  • 332 Views
  • 3 replies
  • 3 kudos

Data Modelling

What is the 'implicit' or 'default' data model of Databricks or Unity Catalog? Is it Data Vault?

Latest Reply
fmadeiro
New Contributor
  • 3 kudos

Databricks and Unity Catalog do not enforce a specific data model like Data Vault. The default is a Lakehouse architecture using Delta Lake, which supports flexible schemas, ACID transactions, and schema evolution. Unity Catalog organizes data into me...

2 More Replies
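To make the hierarchy mentioned in the reply concrete, here is a small sketch of the three-level namespace (catalog > schema > table) that Unity Catalog uses; the sales and orders names are placeholders.

# Unity Catalog addresses every table as catalog.schema.table.
spark.sql("CREATE CATALOG IF NOT EXISTS sales")        # placeholder catalog
spark.sql("CREATE SCHEMA IF NOT EXISTS sales.orders")  # placeholder schema
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.orders.order_items (
        order_id BIGINT,
        item STRING
    ) USING DELTA
""")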
unj1m
by New Contributor III
  • 54 Views
  • 3 replies
  • 0 kudos

Resolved! What version of Python is used for the 16.1 runtime

I'm trying to create a Spark UDF for a registered model and getting: Exception: Python versions in the Spark Connect client and server are different. To execute user-defined functions, client and server should have the same minor Python version. Pleas...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

No problem! 

2 More Replies
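Since the resolution is not shown in this listing, a minimal way to check the mismatch the error message describes: print the Python version on the cluster (server side) and in the environment running the Spark Connect client, and confirm the minor versions agree.

import sys

# Run this both in a notebook attached to the DBR 16.1 cluster and in the local
# environment that defines the UDF; the major.minor versions must match.
print(sys.version_info[:3])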
gabrielsantana
by New Contributor
  • 47 Views
  • 2 replies
  • 1 kudos

Delete non-community Databricks account

Hi everyone! I have mistakenly created a non-community account using my personal email address. I would like to delete it in order to create a new account using my business email. How should I proceed? I tried to find this option on the console, with no...

Latest Reply
Advika
Databricks Employee
  • 1 kudos

Hello @gabrielsantana! Could you please try raising a ticket with the Databricks support team?

1 More Replies
vijaykumar99535
by New Contributor III
  • 2831 Views
  • 1 replies
  • 0 kudos

How to overwrite the existing file using databricks cli

If I use databricks fs cp, it does not overwrite the existing file; it just skips copying the file. Any suggestion on how to overwrite the file using the Databricks CLI?

Latest Reply
Swap
New Contributor II
  • 0 kudos

You can use the --overwrite option to overwrite your file: https://docs.databricks.com/en/dev-tools/cli/fs-commands.html

gf
by New Contributor
  • 78 Views
  • 1 replies
  • 1 kudos

Resolved! The databricks jdbc driver has a memory leak

https://community.databricks.com/t5/community-platform-discussions/memory-leak/td-p/80756 My question is the same as the one above. I was unable to upload pictures, so I had to dictate. The question is about the m_requestList parameter of ResultFileDownloadMonitor. Because is ResultFi...

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Hello @gf, thanks for your question. It seems this has been reported with Simba, but no fix has been provided yet. As a temporary workaround, you can consider using reflection to periodically clean up the m_requestList by removing KV pairs whose ...

Demudu
by New Contributor
  • 131 Views
  • 2 replies
  • 2 kudos

How to read Databricks UniForm format tables present in ADLS

We have Databricks UniForm format (Iceberg) tables present in Azure Data Lake Storage (ADLS), which has already been integrated with Databricks Unity Catalog. How do we read UniForm format tables using Databricks as a query engine?

Latest Reply
fmadeiro
New Contributor
  • 2 kudos

Query Using Unity Catalog:
SQL:
SELECT * FROM catalog_name.schema_name.table_name;
PySpark:
df = spark.sql("SELECT * FROM catalog_name.schema_name.table_name")
df.display()
Direct Access by Path: If not using Unity Catal...

1 More Replies
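The reply's path-based option is cut off, so the following is only a hedged sketch: reading the underlying Delta data directly by path, assuming the cluster has credentials for the storage account (the abfss path is a placeholder).

# UniForm tables are Delta tables that additionally publish Iceberg metadata,
# so Databricks itself can read them as ordinary Delta.
path = "abfss://container@storageaccount.dfs.core.windows.net/path/to/uniform_table"  # placeholder
df = spark.read.format("delta").load(path)
df.display()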
Cloud_Architect
by New Contributor III
  • 1421 Views
  • 5 replies
  • 0 kudos

How to get the Usage/DBU Consumption report without using system tables

Is there a way to get the usage/DBU consumption report without using system tables?

Latest Reply
BahlouBadr
New Contributor
  • 0 kudos

Databricks provides a summary of your usage on a page of your account called Usage. To access this page, go to Account Settings -> Usage. This page allows you to view usage data...

4 More Replies
Sudheer2
by New Contributor III
  • 149 Views
  • 5 replies
  • 0 kudos

Issue with Adding New Members to Existing Groups During Migration of User Groups and Service Principals

Hi all, I have implemented a migration process to move groups from a source workspace to a target workspace using the following code. The code successfully migrates groups and their members to the target system, but I am facing an issue when it comes...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

I have provided a response in https://community.databricks.com/t5/get-started-discussions/migrating-service-principals-from-non-unity-to-unity-enabled/m-p/103017#M4679

4 More Replies
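The original migration code is not visible in this listing, so the following is only a rough sketch of one way to add members to a group that already exists in the target workspace, via the workspace SCIM Groups endpoint; the host, token, group ID, and member IDs are placeholders, not values from the thread.

import requests

HOST = "https://<target-workspace-host>"              # placeholder
TOKEN = "<personal-access-token>"                     # placeholder
GROUP_ID = "<existing-group-id>"                      # placeholder
NEW_MEMBER_IDS = ["<user-or-service-principal-id>"]   # placeholder

# SCIM PatchOp that appends members to an existing group instead of
# failing or recreating the group during migration.
payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "add", "value": {"members": [{"value": m} for m in NEW_MEMBER_IDS]}}
    ],
}

resp = requests.patch(
    f"{HOST}/api/2.0/preview/scim/v2/Groups/{GROUP_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()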
Tanay
by New Contributor II
  • 91 Views
  • 1 replies
  • 1 kudos

Resolved! Why does a join on (df1.id == df2.id) result in duplicate columns while on="id" does not?

Why does a join with on (df1.id == df2.id) result in duplicate columns, but on="id" does not? I encountered an interesting behavior while performing a join on two DataFrames. Here's the scenario: df1 = spark.createDataFrame([(1, "Alice"), (2, "Bob"),...

Latest Reply
szymon_dybczak
Contributor III
  • 1 kudos

Hi @Tanay, your intuition is correct here. In Apache Spark, the difference in behavior between on (df1.id == df2.id) and on="id" in a join stems from how Spark resolves and handles column naming during the join operation. When you use the first synta...

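A small, self-contained sketch of the behavior discussed here; the data and column names are illustrative.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])
df2 = spark.createDataFrame([(1, "HR"), (2, "Finance")], ["id", "dept"])

# Expression form: both input 'id' columns survive, so selecting 'id' later is ambiguous.
print(df1.join(df2, df1.id == df2.id, "inner").columns)   # ['id', 'name', 'id', 'dept']

# Column-name form: Spark merges the join key into a single 'id' column.
print(df1.join(df2, on="id", how="inner").columns)        # ['id', 'name', 'dept']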
trimethylpurine
by New Contributor II
  • 3541 Views
  • 3 replies
  • 2 kudos

Gathering Data Off Of A PDF File

Hello everyone, I am developing an application that accepts PDF files and inserts the data into my database. The company in question that distributes this data to us only offers PDF files, which you can see attached below (I hid personal info for priv...

Latest Reply
NicholasGray
New Contributor II
  • 2 kudos

Thank you so much for the help.

2 More Replies
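The accepted approach is not visible in this listing, so purely as an illustrative starting point: extracting raw text from a PDF with the pypdf library (the file name is a placeholder, and real statements would still need parsing and validation before any database insert).

from pypdf import PdfReader

# Pull plain text page by page from the vendor PDF.
reader = PdfReader("statement.pdf")  # placeholder file name
for page in reader.pages:
    print(page.extract_text())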
Roig
by New Contributor II
  • 86 Views
  • 2 replies
  • 0 kudos

Create multiple dashboard subscription with filters

Hi Databricks community, we developed a dashboard that surfaces several important KPIs for each project we have. In the top filter, we select the project name and the time frame, and the dashboard presents the relevant KPIs and charts. I can eas...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

You can achieve this by setting up a different schedule for each project and specifying the default filter values accordingly:
  • Create the Dashboard: Ensure your dashboard is set up with the necessary filters, including the project filter.
  • Set Defau...

1 More Replies
hprasad
by New Contributor III
  • 173 Views
  • 3 replies
  • 0 kudos

Databricks Champions Sign-Up Page Failing or Throwing Error

The following link throws an error when trying to sign up: https://advocates.databricks.com/users/sign_up?join-code=community

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Let me check on this internally; I am not sure if this actually requires an invitation.

2 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group