Announcing the APJ Databricks Smart Business Insights Challenge: Empowering Data-Driven Decision Making

Join APJ's premier AI/BI virtual challenge to solve real-world business problems, sharpen your skills, and compete for prizes using the Databricks Data Intelligence Platform. This challenge provides a unique opportunity to work together, apply AI-dri...

  • 2908 Views
  • 0 replies
  • 0 kudos
Tuesday

Community Activity

dplatform_user
by Visitor
  • 10 Views
  • 0 replies
  • 0 kudos

INVALID_PARAMETER_VALUE.LOCATION_OVERLAP when trying to copy from s3 location

Hi, currently we are getting an issue when we try to copy a file from an S3 location using dbutils.fs.cp; please see the example below: source = s3://test-bucket/external/zones/{database_name}/{table_name}/test.csv, destination = s3://test-bucket/external/desti...

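For context, a minimal sketch of the copy described above. The placeholder values and the destination path are hypothetical (the question truncates before the real destination); only the source pattern comes from the post:

```python
# Sketch of the copy from the question. The database_name / table_name values
# and the destination path are hypothetical placeholders.
database_name = "my_db"
table_name = "my_table"

source = f"s3://test-bucket/external/zones/{database_name}/{table_name}/test.csv"
destination = f"s3://test-bucket/external/destination/{database_name}/{table_name}/test.csv"  # hypothetical

# In a Databricks notebook the copy itself would be:
# dbutils.fs.cp(source, destination)
# LOCATION_OVERLAP is typically raised when one of these paths overlaps a
# location already registered in Unity Catalog (a table location or an
# external location), so checking registered locations is a good first step.
print(source)
```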
AlexSantiago
by New Contributor II
  • 4263 Views
  • 16 replies
  • 4 kudos

spotify API get token - raw_input was called, but this frontend does not support input requests.

Hello everyone, I'm trying to use Spotify's API to analyse my music data, but I'm receiving an error during authentication, specifically when I try to get the token; my code is below. Is it a Databricks bug? pip install spotipy; from spotipy.oauth2 import SpotifyO...

Latest Reply
Jenny101
Visitor
  • 4 kudos

It looks like you're facing an authentication issue in an environment that doesn't support interactive input. Databricks notebooks often don't allow raw_input() calls, which is why you're getting the StdinNotImplementedError. To fix this, try generating...
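To make the reply concrete: for scripts that only need app-level access, Spotify's client-credentials flow requires no browser and no stdin (in spotipy the equivalent helper is SpotifyClientCredentials). A stdlib-only sketch of the token request; the endpoint and flow are Spotify's documented client-credentials flow, and the credentials are hypothetical:

```python
import base64
import urllib.parse
import urllib.request

def client_credentials_request(client_id: str, client_secret: str) -> urllib.request.Request:
    """Build the token request for Spotify's client-credentials flow,
    which needs no interactive input (no raw_input()/stdin)."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        "https://accounts.spotify.com/api/token",  # Spotify's documented token endpoint
        data=urllib.parse.urlencode({"grant_type": "client_credentials"}).encode(),
        headers={"Authorization": f"Basic {creds}"},
        method="POST",
    )

req = client_credentials_request("my-client-id", "my-client-secret")  # hypothetical credentials
# urllib.request.urlopen(req) would return JSON containing "access_token";
# the call is left out so the sketch runs without network access.
print(req.get_header("Authorization"))
```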

15 More Replies
saikrishna1020
by New Contributor
  • 12 Views
  • 0 replies
  • 0 kudos

Community Edition Data recovery

I was using Databricks Community Edition for some practice work, and I had created a few notebooks as part of my learning. However, when I recently tried to log in, I received a message saying, "We were not able to find a Community Edition." Now, non...

faridaeasmin
by New Contributor II
  • 11 Views
  • 0 replies
  • 0 kudos

Completed Machine learning course

I have completed my course for Machine learning as part of Learning festival.

jmeulema
by Databricks Employee
  • 2589 Views
  • 1 replies
  • 2 kudos

Where PySpark and SparkSQL Fit Best in the Enterprise

1. Context
2. Performance Differences Between SparkSQL and PySpark DataFrame API
3. Functional Differences Between SparkSQL and PySpark
4. Additional Considerations Based on Real-World Usage
5. Conclusion

1. Context: When building a data architecture, a...

Latest Reply
inba
Visitor
  • 2 kudos

Regarding complex transformations, we can use UDFs in SQL as well. So, we can still use Spark SQL and delegate complex transformations to UDFs.
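As an illustration of that point, a SQL scalar UDF in Spark SQL's CREATE FUNCTION syntax; the function name and logic here are hypothetical:

```sql
-- Hypothetical scalar UDF callable directly from Spark SQL
CREATE OR REPLACE FUNCTION clean_amount(raw STRING)
  RETURNS DOUBLE
  RETURN CAST(regexp_replace(raw, '[^0-9.]', '') AS DOUBLE);

SELECT clean_amount('$1,234.50') AS amount;  -- 1234.5
```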

vziog
by New Contributor
  • 60 Views
  • 1 replies
  • 0 kudos

Costs from cost management in the Azure portal are not aligned with costs calculated from the usage system table

Hello, the costs for the Databricks service from cost management in the Azure portal (45,869...) are not aligned with the costs calculated from the usage system table (75,34). The costs from the portal are filtered based on the desired period (usage_date ...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @vziog, the Azure portal typically aggregates costs from various billing categories (such as DBUs, infrastructure, storage, and networking) based on usage logs and pricing. On the other hand, your query extracts detailed cost estima...
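As an illustration, a usage-table query of the shape the thread describes might look like the sketch below. The table and column names follow the documented Databricks system-tables schema, but treat the details as assumptions; this uses list prices only, so discounts and non-DBU Azure charges will still make it differ from the portal:

```sql
-- Sketch: estimated DBU spend per day from the billing system tables
SELECT
  u.usage_date,
  SUM(u.usage_quantity * p.pricing.default) AS est_list_cost
FROM system.billing.usage AS u
JOIN system.billing.list_prices AS p
  ON u.sku_name = p.sku_name
 AND u.usage_start_time >= p.price_start_time
 AND (p.price_end_time IS NULL OR u.usage_start_time < p.price_end_time)
WHERE u.usage_date BETWEEN '2025-02-01' AND '2025-02-28'  -- hypothetical period
GROUP BY u.usage_date
ORDER BY u.usage_date;
```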

AP01
by New Contributor
  • 89 Views
  • 1 replies
  • 0 kudos

Databricks JDBC Error: Job Aborted Due to Stage Failure (Executor OOM - Error Code 52)

java.sql.SQLException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: SELECT `ma***, Error message from Server: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.s...

Warehousing & Analytics
Databricks JDBC SparkSQL OOM HiveThriftServer Error500051
Databricks SQL
JDBC Driver
SparkSQL
sql
Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

The executor does not seem to have enough memory to process the assigned tasks, hence the out-of-memory (OOM) error.

antonionuzzo
by New Contributor II
  • 41 Views
  • 1 replies
  • 0 kudos

Unexpected Behavior with Azure Databricks and Entra ID SCIM Integration

Hi everyone, I'm currently running some tests for a company that uses Entra ID as the backbone of its authentication system. Every employee with a corporate email address is mapped within the organization's Entra ID. Our company's Azure Databricks is c...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hello @antonionuzzo, this behavior occurs because Azure Databricks allows workspace administrators to invite users from their organization's Entra ID directory into the Databricks workspace. This capability functions independently of whether th...

vignesh22
by Visitor
  • 24 Views
  • 1 replies
  • 0 kudos

"Pipelines are expected to have at least one table" error while running a DLT pipeline

Error: "Pipelines are expected to have at least one table defined but no tables were found in your pipeline." I wrote simple code as a phase-1 debug: %sql CREATE OR REFRESH STREAMING TABLE test_table AS SELECT "hello" as greeting; Can you please help with what's wrong...

Latest Reply
Takuya-Omi
Valued Contributor III
  • 0 kudos

@vignesh22 In Delta Live Tables (DLT), magic commands (such as %sql) are not used. In a DLT pipeline, you need to write SQL code directly. Please try removing %sql and running the DLT pipeline again.
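For reference, a sketch of what the corrected source file might contain. This assumes the newer MATERIALIZED VIEW syntax; the original STREAMING TABLE over a static SELECT may also fail, since it has no streaming source:

```sql
-- Contents of the DLT pipeline's SQL source file: plain SQL, no %sql magic.
-- A static SELECT has no streaming source, so a materialized view is the
-- safer first test; switch to STREAMING TABLE once reading from a stream.
CREATE OR REFRESH MATERIALIZED VIEW test_table
AS SELECT "hello" AS greeting;
```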

Punit_Prajapati
by New Contributor
  • 86 Views
  • 3 replies
  • 6 kudos

Resolved! SERVERLESS SQL WAREHOUSE

Hello All, I have two questions regarding the serverless SQL warehouse: 1) If I create a small serverless SQL warehouse in Databricks that shows 12 DBUs/hour, will I be charged 12 DBUs even if I don't run any queries in that hour? ...

Latest Reply
BigRoux
Databricks Employee
  • 6 kudos

Shua42 hits the nail on the head. If I can be so bold as to summarize: you are only charged while the warehouse is running, regardless of how much or how little you use it. We do have an auto-stop feature you can configure. Essentially, you set a time...
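For reference, auto-stop can also be set through the SQL Warehouses REST API. A stdlib-only sketch that builds (but does not send) the request; the endpoint shape and the auto_stop_mins field follow the documented API, while the host, token, and warehouse id are placeholders:

```python
import json
import urllib.request

# Placeholders: host, token, and warehouse id are hypothetical.
host = "https://example.cloud.databricks.com"
warehouse_id = "abc123"
payload = {"auto_stop_mins": 10}  # stop the warehouse after 10 idle minutes

req = urllib.request.Request(
    f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would apply the change; left out so the
# sketch runs without a workspace.
print(req.full_url)
```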

2 More Replies
MingOnCloud
by New Contributor
  • 1470 Views
  • 0 replies
  • 0 kudos

With the Academy Lab Subscription, may I practice the integration with AWS?

Hi community, I will purchase the $200 subscription, but I want to know whether I can practice the integration with AWS, or is it only for serverless compute on the Databricks platform, so that I need to configure my personal AWS? Thanks.

mayuri_s
by Contributor
  • 22343 Views
  • 9 replies
  • 8 kudos

Does Databricks Academy not provide self-paced e-learning format of the Data Engineering with Databricks course?

Data engineering with Databricks - I want to learn with self-paced e-learning but I cannot find this course in the Academy catalog. Does Databricks Academy not provide self-paced e-learning format of the Data Engineering with Databricks course? I cou...

Catalog contains no course for self-paced e-learning for Data engineering
Latest Reply
petergrew
New Contributor
  • 8 kudos

Welcome, Databricks Academy Learners! Embrace this dynamic space for collaborative growth in data engineering and analytics. Connect with us and beyond, share insights, and accelerate your learning journey through shared experiences and expert-led re...

8 More Replies
tgburrin-afs
by New Contributor
  • 2926 Views
  • 3 replies
  • 0 kudos

Limiting concurrent tasks in a job

I have a job with > 10 tasks in it that interacts with an external system outside of databricks.  At the moment that external system cannot handle more than 3 of the tasks executing concurrently.  How can I limit the number of tasks that concurrently...

Latest Reply
_J
New Contributor II
  • 0 kudos

Same thing here: job-level concurrency control is good, but there is nothing for tasks. Some of our jobs have countless parallel tasks, so without controlling it the downstream servers grind to a halt and tasks just terminate. It needs what we call a spinlock on tasks to...

2 More Replies
dplaut
by New Contributor II
  • 3675 Views
  • 3 replies
  • 0 kudos

Save output of show table extended to table?

I want to save the output of show table extended in catalogName like 'mysearchtext*'; to a table. How do I do that?

Latest Reply
njoyb
New Contributor
  • 0 kudos

Use DESCRIBE EXTENDED customer AS JSON; this returns the metadata as JSON data, which you can then load into a table. Applicable to Databricks 16.2 and above: https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-syntax-aux-describe-table

2 More Replies
YuriS
by New Contributor
  • 1348 Views
  • 1 replies
  • 0 kudos

VACUUM with Azure Storage Inventory Report is not working

Could someone please advise regarding VACUUM with an Azure Storage inventory report, as I have failed to make it work. DBR 15.4 LTS; the VACUUM command is being run with the USING INVENTORY clause, as follows: VACUUM schema.table USING INVENTORY ( select 'https://...

Latest Reply
Brahmareddy
Honored Contributor III
  • 0 kudos

Hi YuriS, how are you doing today? As per my understanding, you're absolutely right to look into the USING INVENTORY clause for VACUUM, especially when dealing with large storage footprints. The tricky part is that while this feature is part of open-...
