Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

kennyhsieh
by New Contributor II
  • 859 Views
  • 1 reply
  • 2 kudos

Databricks Taiwan User Community

Would be great to have a group for the Databricks Taiwan community.

Latest Reply
wutxdata
Databricks Employee
  • 2 kudos

Hey @kennyhsieh , hope it's not too late to reply to the post! A "Databricks User Group Taiwan" has recently been formed on LinkedIn.

spd_dat
by New Contributor III
  • 3679 Views
  • 2 replies
  • 0 kudos

Can AWS workspaces share subnets?

The docs state: "You can choose to share one subnet across multiple workspaces or both subnets across workspaces." as well as: "You can reuse existing security groups rather than create new ones." and on this page: "If you plan to share a VPC and subnets ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

AWS WorkSpaces can be configured with subnets that can be shared within an AWS account or across AWS accounts using resource sharing mechanisms, but this depends on the specific AWS service and context. For Databricks workspaces on AWS, documentation...

mancosta
by New Contributor
  • 2981 Views
  • 1 reply
  • 0 kudos

Joblib with optuna and SB3

Hi everyone, I am training some reinforcement learning models and I am trying to automate the hyperparameter search using Optuna. I saw in the documentation that you can use joblib with Spark as a backend to train in parallel. I got that working with t...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Stable Baselines 3 (SB3) models can be optimized with Optuna for hyperparameter search, but parallelizing these searches using Joblib with Spark as the backend—like the classic scikit-learn example—commonly encounters issues. The root problem is that...

December
by New Contributor II
  • 3143 Views
  • 1 reply
  • 0 kudos

NiFi on EKS Fails to Connect to Databricks via JDBC – "Connection reset" Error

I'm using Apache NiFi (running on AWS EKS) to connect to Databricks (with compute on EC2) via JDBC. My JDBC URL is as follows: jdbc:databricks://server_hostname:443/default;transportMode=http;ssl=1;httpPath=my_httppath;AuthMech=3;UID=token;PWD=my_tok...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

A "Connection reset" error in NiFi when connecting to Databricks via JDBC, despite successful telnet and working connectivity from DBeaver, usually points to subtle protocol or compatibility issues rather than network-level blocks. Common causes: JD...
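The reply above distinguishes TCP-level reachability (what telnet proves) from protocol-level failures. A minimal diagnostic sketch, assuming Python is available in the NiFi pod's network context, that checks whether a full TLS handshake (not just a TCP connect) succeeds; the hostname below is a placeholder for your workspace's server_hostname:

```python
import socket
import ssl

def tls_handshake_ok(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a full TLS handshake with host:port succeeds.

    telnet only proves the TCP socket opens; a "Connection reset" during
    JDBC setup often happens one layer up, at the TLS handshake.
    """
    try:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except OSError:  # covers DNS failures, resets, and SSL errors
        return False

# Placeholder hostname; substitute your workspace's server_hostname.
print(tls_handshake_ok("server_hostname"))
```

If the handshake succeeds from the pod but JDBC still resets, suspicion shifts to intermediate proxies or driver/JDK compatibility, in line with the causes the reply lists.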

HuyNguyen
by New Contributor
  • 3444 Views
  • 1 reply
  • 0 kudos

.py script execution failed but succeeded when run in Python notebook

Background: My code executes without problems when run in a Python notebook. However, the same code fails when executed from a .py script in the workspace. It seems the two execution methods don't have identical versions of the packages. Error message: Attr...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The error "AttributeError: 'DeltaMergeBuilder' object has no attribute 'withSchemaEvolution'" when running the same code from a .py script but not in a Python notebook is likely caused by a mismatch in the Delta Lake or Databricks Runtime versions or...
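The version-mismatch explanation above can be checked directly. A minimal sketch, assuming the `delta-spark` and `pyspark` distribution names; run it in both the notebook and the .py script's environment and compare the output, since `withSchemaEvolution()` appears only in newer Delta Lake releases:

```python
from importlib.metadata import version, PackageNotFoundError

def safe_version(pkg: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Compare these values between the notebook and the .py script's environment;
# a difference here would explain the missing withSchemaEvolution attribute.
for pkg in ("delta-spark", "pyspark"):
    print(pkg, "->", safe_version(pkg))
```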

shiv_DB25
by New Contributor II
  • 3950 Views
  • 2 replies
  • 0 kudos

Getting error while installing applicationinsights

Library installation attempted on the driver node of cluster 0210-115502-3lo6gkwd and failed. Pip could not find a version that satisfies the requirement for the library. Please check your library version and dependencies. Error code: ERROR_NO_MATCHI...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

The error indicates that Databricks could not install the applicationinsights or azure-identity libraries because pip could not find a matching distribution, and network connection attempts to the PyPI repository were repeatedly reset. This is common...

Superstar_Singh
by New Contributor
  • 463 Views
  • 1 reply
  • 0 kudos

Databricks Roadmap 13/11/2025

Can you provide me with a link to the recording of today's session: Databricks Product Roadmap Webinar, Thursday, November 13, 2025, 9:00 AM–10:00 AM GMT?

Latest Reply
KaushalVachhani
Databricks Employee
  • 0 kudos

@Superstar_Singh , Is this the one you are asking about? https://www.databricks.com/resources/webinar/productroadmapwebinar

jay-cunningham
by New Contributor
  • 3340 Views
  • 1 reply
  • 0 kudos

Is there a way to prevent databricks-connect from installing a global IPython Spark startup script?

I'm currently using databricks-connect through VS Code on macOS. However, this seems to install (and re-install upon deletion) an IPython startup script which initializes a SparkSession. This is fine as far as it goes, except that this script is *glo...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Databricks Connect on MacOS (and some other platforms) adds a file to the global IPython startup folder, which causes every new IPython session—including those outside the Databricks environment—to attempt loading this SparkSession initialization. Th...
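To see what was installed, a small inspection sketch; the directory layout is standard IPython, and the assumption is that databricks-connect drops its script into the default profile's startup folder:

```python
from pathlib import Path

def ipython_startup_scripts(profile: str = "default"):
    """List .py scripts in the global IPython startup folder for a profile.

    IPython runs everything in ~/.ipython/profile_<name>/startup/ at session
    start, which is why the databricks-connect script fires globally.
    """
    startup_dir = Path.home() / ".ipython" / f"profile_{profile}" / "startup"
    if not startup_dir.is_dir():
        return []
    return sorted(p.name for p in startup_dir.glob("*.py"))

print(ipython_startup_scripts())
```

One workaround discussed in the community is to keep day-to-day work in a separate IPython profile (e.g. `ipython --profile=plain`) whose startup folder you control, so the re-installed global script only affects the default profile; treat that as a suggestion, not an official databricks-connect setting.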

ramisinghl01
by New Contributor
  • 4442 Views
  • 1 reply
  • 0 kudos

PYTEST: Module not found error

Hi, apologies, as I am trying to use pytest for the first time. I know this question has been raised before, and I went through previous answers, but the issue still exists. I am following Databricks and other articles using pytest. My structure is simple: -tests--co...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

Your issue with ModuleNotFoundError: No module named 'test_tran' when running pytest from a notebook is likely caused by how Python sets the module import paths and the current working directory inside Databricks notebooks (or similar environments). ...
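A minimal sketch of the usual fix (the module name `test_tran` comes from the post; the `tests` folder layout is an assumption): make the folder containing the module importable before pytest collects tests, for example from a conftest.py at the repo root:

```python
import sys
from pathlib import Path

def ensure_importable(directory: str) -> None:
    """Prepend a directory to sys.path so modules there import by name.

    Databricks notebooks set the working directory to the notebook's folder,
    which is often not on sys.path for a pytest run started elsewhere.
    """
    d = str(Path(directory).resolve())
    if d not in sys.path:
        sys.path.insert(0, d)

# Assumed layout: the tests folder sits next to this conftest.py.
ensure_importable("tests")
```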

Vanamaajay
by New Contributor
  • 3663 Views
  • 1 reply
  • 0 kudos

CloudFormation Stack Failure: Custom::CreateWorkspace in CREATE_FAILED State

I am trying to create a workspace using AWS CloudFormation, but the stack fails with the following error: "The resource CreateWorkspace is in a CREATE_FAILED state. This Custom::CreateWorkspace resource is in a CREATE_FAILED state. Received response s...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

When a CloudFormation stack fails with “The resource CreateWorkspace is in a CREATE_FAILED state” for a Custom::CreateWorkspace resource, it typically means the Lambda or service backing the custom resource returned a FAILED signal to CloudFormation ...

akshaym0056
by New Contributor
  • 3367 Views
  • 1 reply
  • 0 kudos

How to Define Constants at Bundle Level in Databricks Asset Bundles for Use in Notebooks?

I'm working with Databricks Asset Bundles and need to define constants at the bundle level based on the target environment. These constants will be used inside Databricks notebooks. For example, I want a constant gold_catalog to take different values ...

Latest Reply
mark_ott
Databricks Employee
  • 0 kudos

There is currently no explicit, built-in mechanism in Databricks Asset Bundles (as of 2024) for directly defining global, environment-targeted constants at the bundle level that can be seamlessly accessed inside notebooks without using job or task pa...
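The common workaround is to declare a bundle variable per target and pass it to the notebook as a job/task parameter. On the notebook side, a minimal sketch (the parameter name `gold_catalog` comes from the post; the fallback default is an assumption) reads it defensively so the same notebook also runs outside a job:

```python
def get_constant(name: str, default: str) -> str:
    """Read a job/task parameter via dbutils widgets, falling back to a
    default when dbutils (or the widget) is unavailable, e.g. in local runs."""
    try:
        # dbutils exists only inside a Databricks runtime; the widget is
        # populated when the bundle passes the variable as a task parameter.
        return dbutils.widgets.get(name)
    except Exception:
        return default

gold_catalog = get_constant("gold_catalog", "dev_gold")
print(gold_catalog)
```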

Naveenkumar1811
by New Contributor III
  • 256 Views
  • 2 replies
  • 0 kudos

Resolved! Compilation Failing with Scala SBT build to be used in Databricks

Hi, we have a Scala jar built with sbt which is used in Databricks jobs to readStream data from Kafka... We are enhancing the from_avro function like below... def deserializeAvro( topic: String, client: CachedSchemaRegistryClient, sc: SparkConte...

Latest Reply
Naveenkumar1811
New Contributor III
  • 0 kudos

Thanks for the update, Louis... As we are planning to migrate all our notebooks from Scala to PySpark, we are in the process of converting the code. I think adding the additional dependency of ABRiS or Adobe's spark-avro with Schema Registry support will tak...

Charuvil
by New Contributor III
  • 213 Views
  • 2 replies
  • 1 kudos

How to tag/ cost track Databricks Data Profiling?

We recently started using the Data Profiling / Lakehouse Monitoring feature from Databricks: https://learn.microsoft.com/en-us/azure/databricks/data-quality-monitoring/data-profiling/. Data Profiling uses serverless compute for running the profilin...

Latest Reply
Charuvil
New Contributor III
  • 1 kudos

Hi @szymon_dybczak, thanks for the quick reply. But it seems serverless budget policies cannot be applied to data profiling/monitoring jobs: https://learn.microsoft.com/en-us/azure/databricks/data-quality-monitoring/data-profiling/ Serverless budget po...

Nisha_Tech
by New Contributor II
  • 180 Views
  • 1 reply
  • 1 kudos

Wheel File name is changed after using Databricks Asset Bundle Deployment on Github Actions

Hi Team, I am deploying to the Databricks workspace using GitHub and DAB. I have noticed that during deployment, the wheel file name is being converted to all lowercase letters (e.g., pyTestReportv2.whl becomes pytestreportv2.whl). This issue does not...

Latest Reply
Charuvil
New Contributor III
  • 1 kudos

Hi @Nisha_Tech, it seems like a Git issue rather than a Databricks or DAB one. There is a Git configuration parameter that decides the upper/lower case of the deployed file names. Please refer here: https://github.com/desktop/desktop/issues/2672#issuecomme...

Jeremyy
by New Contributor
  • 1307 Views
  • 2 replies
  • 0 kudos

I can't create a compute resource beyond "SQL Warehouse", "Vector Search" and "Apps"?

None of the LLMs even understand why I can't create a compute resource. I was using Community (now Free Edition) until yesterday, when it became apparent that I needed the paid version, so I upgraded. I've even got my AWS account connected, which was ...

Latest Reply
nitinjain26
New Contributor III
  • 0 kudos

I have a similar issue; how can I upgrade?

