Databricks Platform Discussions
Dive into comprehensive discussions covering various aspects of the Databricks platform. Join the co...
Hi everyone, I'm curious if anyone has successfully implemented Databricks Genie (chat/agent) for production use. Currently, we've enabled a few Genie instances for power users who are comfortable working with data outside of the data team. However, we...
Hi @Emma, the recommendation was from Genie code, as attached below. To me, this is not a curation issue; it's (1) the maturity of the LLM and (2) user prompt quality. I observed several times that when users asked questions from a confusing perspective, Genie could decode...
I'm unable to start my SQL warehouse (Serverless) due to a RESOURCE_EXHAUSTED error. Error message: Clusters are failing to launch. Cluster launch will be retried. Request to create a cluster failed with an exception: RESOURCE_EXHAUSTED: Cannot create...
Hi @sharath007, Just checked internally. This specific RESOURCE_EXHAUSTED: Cannot create the resource, please try again later message for a serverless SQL warehouse normally indicates that the backing serverless compute pool has run out of capacity (...
We are an education-based company currently developing a course on Databricks, which we plan to publish on platforms such as YouTube and Udemy for educational purposes. We would like to confirm whether any formal permission is required from Databricks...
Hi @Simranpreet, no formal permission is required to create and sell/publish an independent educational course about Databricks, but there are specific rules you must follow regarding trademarks, logos, and official Databricks course materials. Here's ...
It's a simple answer. According to our analysis, Azure pipelines and notebooks batch-process approximately 40% faster than Synapse Analytics. If you really want to optimise your pipelines and perform cost optimisation in your team, please migrate...
I am on a Premium AWS trial workspace (dbc-30503d28-2210). I have two issues: (1) personal access tokens are grayed out and I cannot generate them; (2) my cluster cannot make outbound HTTP requests to external APIs (getting NameResolutionError when calling api...
Hi, on the personal access tokens being greyed out: you need to enable this in the workspace settings (you'll need to make sure you're a workspace admin first). This is in the advanced section under workspace settings. Are you validating you're a workspa...
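If you prefer to script the toggle rather than click through the admin UI, the same setting can be flipped via the workspace-conf REST API. A minimal sketch, assuming you are a workspace admin; the host and token values below are placeholders you must supply:

```python
# Hedged sketch: enable personal access tokens through the Databricks
# workspace-conf API. "enableTokens" is the conf key behind the
# Personal Access Tokens toggle; HOST and ADMIN_TOKEN are placeholders.
import json
import urllib.request

def build_enable_pat_request(host: str, admin_token: str) -> urllib.request.Request:
    """Build the PATCH request that sets enableTokens to true."""
    payload = json.dumps({"enableTokens": "true"}).encode()
    return urllib.request.Request(
        url=f"{host}/api/2.0/workspace-conf",
        data=payload,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {admin_token}",
            "Content-Type": "application/json",
        },
    )

# Send with urllib.request.urlopen(req) from a session that has
# workspace-admin rights; a 204/200 response means the toggle is on.
```

This only affects the workspace-level switch; individual users still need the token-use entitlement to actually generate a PAT.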
We are integrating Databricks with ServiceNow via Lakeflow Connect for data ingestion and looking for guidance on enforcing integration-user-based data access. Observed behaviour: U2M OAuth authentication succeeds when ServiceNow access is granted to the works...
Hi @emma_s, I've reviewed the setup and wanted to clarify the behavior I'm seeing with the ServiceNow connector and U2M OAuth. The ServiceNow connection was created successfully using a U2M OAuth integration user, and that integration user has admin pe...
Has anyone else seen full refresh snapshots trigger outside of their configured refresh window in Lakeflow Connect? Here's our situation:
- We have a full refresh window configured to restrict snapshot operations to off-hours
- On at least one occasion,...
@lrm_data It is very unlikely for the refresh to be triggered outside the configured window. That said, I would still suggest checking the configured window and the auto full refresh policy once to be sure. If it still persists, then you may raise a support...
Has anyone else run into a situation where a breaking schema change on a SQL Server source table leaves their Lakeflow Connect pipeline in a state it can't recover from, even after destroying and recreating the pipeline? Here's what happened to us:
- ...
Hello @emma_s and @abhi_dabhi, thank you so much! I had destroyed the bundle that included the schema, ingestion pipeline, and gateway. However, I did not clear out SQL Server CDC, so that may have been the issue. I plan to leave the current gateway stopped a...
Error: Cannot launch the cluster because the user specified an invalid argument.
Instance ID: failed-2d901c0f-d88d-499a-a
Internal error message: The VM launch request to AWS failed, please check your configuration. [details] InvalidParameterCombinati...
The error is coming from AWS, not Databricks: your AWS account is restricted to Free Tier–eligible instance types, but the node type you picked in Databricks maps to an EC2 instance that is not Free Tier–eligible, so AWS rejects the launch request wi...
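As a concrete illustration of the fix, here is a hypothetical minimal Clusters API 2.0 create payload. The runtime label and node type below are placeholder assumptions, not values from the original thread; replace `node_type_id` with an instance family your AWS account is actually permitted to launch (check EC2 service quotas and Free Tier restrictions in the AWS console):

```python
# Hedged sketch: a minimal cluster-create payload. node_type_id must
# map to an EC2 instance type your AWS account can launch; the values
# here are placeholders for illustration only.
cluster_spec = {
    "cluster_name": "small-test-cluster",
    "spark_version": "14.3.x-scala2.12",   # assumed LTS runtime label
    "node_type_id": "m5.large",            # replace with a launchable EC2 type
    "num_workers": 1,
    "autotermination_minutes": 30,
}
# POST this JSON to /api/2.0/clusters/create with an authorized token,
# or select the same node type in the cluster UI dropdown.
```

The same InvalidParameterCombination error will recur for any node type whose backing EC2 instance the AWS account cannot launch, so fixing the AWS-side restriction is the durable solution.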
Hey all! We are trying out the Beta connector for SharePoint and found that the connector will not work at the root-level site. Is there a reason for this limitation? It is unfortunately a hard blocker for us to use the native connector. MUST_START...
How have you made the connection? The reason I am asking is that we have two separate tenants (SharePoint in one tenant and Databricks set up in a different tenant); at the moment we are using a Logic App to bring the data into the platform...
Maybe someone has encountered this problem before? I'm running parallel loading for 10 objects using pool.map. Nine of them complete successfully, but one fails when trying to read a configuration file. The problem occurs occasionally and doesn't foll...
@AdrianLobacz You can read the configuration once and pass the object into your function instead of reading the same file multiple times. It eliminates the IO overhead and avoids hitting the FUSE layer. When the code triggers parallel processes, they...
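A minimal sketch of that suggestion: parse the config file once in the driver, then hand the already-parsed object to each worker, so the ten parallel tasks never touch the file concurrently. The function and key names here are illustrative, not from the original thread:

```python
# Read the configuration once, then pass the parsed dict into every
# parallel task instead of re-opening the same file in each worker.
import json
from functools import partial
from multiprocessing.pool import ThreadPool

def load_object(obj_name: str, config: dict) -> str:
    # config is already in memory: no per-task file I/O here
    path = config["paths"][obj_name]
    return f"loaded {obj_name} from {path}"

def run_parallel(config_text: str, objects: list) -> list:
    config = json.loads(config_text)  # single read/parse, done once
    with ThreadPool(processes=4) as pool:
        return pool.map(partial(load_object, config=config), objects)
```

Because the config object is read-only after the single parse, it is safe to share across threads without locking.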
Hello team, during the exam there was a power failure for 5 minutes, due to which the exam got suspended. I have created a support ticket as well but have not received any response yet. Please resolve this with high priority; this is not fair with the time, m...
Hi team, I've completed a learning path by taking part in the Databricks Learning Festival, which was conducted recently, but I've not received my discount voucher yet.
Hello, how can we handle this error when we use Auto Loader with spark.readStream? (com.databricks.sql.cloudfiles.errors.CloudFilesException) [CF_EMPTY_DIR_FOR_SCHEMA_INFERENCE] Cannot infer schema when the input path `/Volumes/default/landing/source/bund...
Hi @seefoods, the error message seems to indicate there are no files in the source path. You can define the schema yourself and pass it to schema(...) so Auto Loader doesn't need to infer anything; as soon as files arrive, the stream will...
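A minimal sketch of that suggestion, assuming a JSON source; the column names are placeholders and `spark` is the session the Databricks runtime provides:

```python
# Hedged sketch: give Auto Loader an explicit schema so it never needs
# to infer one from an empty directory. Column names are placeholders.
explicit_schema = "id BIGINT, name STRING, updated_at TIMESTAMP"

stream = (
    spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "json")
         .schema(explicit_schema)   # skips schema inference entirely
         .load("/Volumes/default/landing/source/")
)
```

With an explicit schema the stream starts even when the directory is empty and begins emitting rows once files land.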
Hey, I've been rethinking my API tooling lately and realized I've mostly stayed with Postman out of habit. One thing that stood out again is the free plan limitations. They're not new, but they make collaboration a bit annoying for small teams unless y...
Hi @john26, I use Bruno quite often these days and it has become the go-to Postman replacement for engineering-heavy workflows, specifically because collections live as plain files in your repo. For someone working across Databricks, ADF, and Azure servic...