Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Forum Posts

User16826992666
by Valued Contributor
  • 1025 Views
  • 1 reply
  • 0 kudos

Okta Integration

My company uses Okta as an SSO provider. Can I integrate Okta with Databricks for an SSO experience?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

Yes, Okta is among the supported identity providers. Read more here: https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html

MoJaMa
by Databricks Employee
  • 830 Views
  • 1 reply
  • 0 kudos
Latest Reply
MoJaMa
Databricks Employee
  • 0 kudos

Yes. There is a property called dbus_per_hour that you can add to your cluster policy. See https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-virtual-attribute-paths. Here's an example policy that uses it: https://docs...

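To make that concrete, here is a minimal sketch of a policy definition that uses dbus_per_hour; the "range" attribute type is documented for cluster policies, but the 10 DBU/hour cap below is an illustrative assumption, not a value from this thread:

```python
import json

# Minimal sketch of a cluster policy using the dbus_per_hour virtual attribute.
# The 10 DBU/hour cap is an illustrative assumption.
policy = {
    "dbus_per_hour": {
        "type": "range",   # policy attributes support types such as "range"
        "maxValue": 10,    # reject cluster configs estimated above 10 DBU/hour
    }
}

# The definition is uploaded as JSON, e.g. via the Cluster Policies API
# or pasted into the policy editor in the workspace UI.
print(json.dumps(policy, indent=2))
```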
brickster_2018
by Databricks Employee
  • 4664 Views
  • 1 reply
  • 0 kudos

Resolved! My cluster is running an init script, and I want to see what's going on.

After I log in to the workspace, where can I find the logs?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

See the public docs: https://docs.databricks.com/clusters/init-scripts.html#cluster-scoped-init-script-logs. Don't forget to enable cluster log delivery: https://docs.databricks.com/clusters/configure.html#cluster-log-delivery. Note that this only works for ...

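As a rough sketch, enabling log delivery amounts to adding a cluster_log_conf block to the cluster spec; the dbfs:/cluster-logs destination below is an assumed example path, not a required value:

```python
import json

# Sketch of the cluster-spec fragment that enables cluster log delivery.
# The destination path is an assumption; use your own DBFS or S3 location.
cluster_spec_fragment = {
    "cluster_log_conf": {
        "dbfs": {"destination": "dbfs:/cluster-logs"}
    }
}

# With delivery enabled, cluster-scoped init script logs are expected under
# <destination>/<cluster-id>/init_scripts/ on the chosen storage.
print(json.dumps(cluster_spec_fragment, indent=2))
```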
User16790091296
by Contributor II
  • 1375 Views
  • 1 reply
  • 0 kudos
Latest Reply
User16790091296
Contributor II
  • 0 kudos

At a high level, a Lakehouse must have the following properties:
  • Open, direct-access data formats (Apache Parquet, Delta Lake, etc.)
  • First-class support for machine learning and data science workloads
  • State-of-the-art performance
Databricks is the firs...

User16826987838
by Contributor
  • 2835 Views
  • 1 reply
  • 0 kudos

Extending the duration of Ganglia metrics logs

Any insights on how to analyse Ganglia metrics logs for an extended duration of time, not just 15-minute snapshots? We need to visualize cluster CPU utilization for the duration of cluster uptime.

Latest Reply
aladda
Databricks Employee
  • 0 kudos

One option here would be integration with observability tools such as Datadog, which can capture cluster metrics on a near-real-time (NRT) basis. More details are here: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly

User16789201666
by Databricks Employee
  • 3691 Views
  • 0 replies
  • 0 kudos

What's Early Stopping in Hyperopt? When should it be used?

It’s advantageous to stop running trials if progress has stopped. Hyperopt offers an early_stop_fn parameter, which specifies a function that decides when to stop trials before max_evals has been reached. Hyperopt provides a function no_progress_loss...

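A minimal sketch of wiring those pieces together; the toy objective, the search space, and the 20-trial patience window are illustrative assumptions:

```python
from hyperopt import fmin, hp, tpe
from hyperopt.early_stop import no_progress_loss

# Toy objective: minimize (x - 3)^2.
def objective(x):
    return (x - 3) ** 2

space = hp.uniform("x", -10.0, 10.0)

# Stop if the best loss hasn't improved for 20 consecutive trials,
# even though max_evals would allow up to 200.
best = fmin(
    fn=objective,
    space=space,
    algo=tpe.suggest,
    max_evals=200,
    early_stop_fn=no_progress_loss(20),
)
print(best)
```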
brickster_2018
by Databricks Employee
  • 1534 Views
  • 1 reply
  • 0 kudos

Resolved! How to determine if I am using the same DBR minor version?

DBR minor version details are not exposed. However, the documentation mentions that Databricks performs maintenance releases every two weeks. How can I determine whether I am using the same minor version?

Latest Reply
brickster_2018
Databricks Employee
  • 0 kudos

The below code snippet can help determine the DBR hash string for a DBR version; the hash string is unique to each DBR minor version.

val scalaVersion = scala.util.Properties.versionString
val hadoopVersion = org.apache.hadoop.util.VersionInfo.getVersion()
...

Anonymous
by Not applicable
  • 1525 Views
  • 2 replies
  • 0 kudos
Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

Deleting a workspace doesn't delete the root bucket. You could choose to use the same root bucket for more than one workspace (though this is not recommended). It is recommended to automate the infrastructure creation via Terraform or a Quick Start so that cleanup...

1 More Reply
Anonymous
by Not applicable
  • 2711 Views
  • 1 reply
  • 0 kudos

Monitoring jobs

Are there any event streams that are, or could be, exposed in AWS (such as CloudWatch EventBridge events or SNS messages)? In particular, I'm interested in events that detail jobs being run. The use case here would be monitoring jobs from our web app...

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

You could write code that calls the PutLogEvents API at the beginning of each job to write custom events to CloudWatch, or use the AWS SDK to send an SNS notification and route it to a desired consumer.

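A minimal sketch of that pattern using boto3; the log group, log stream, and topic ARN below are placeholder assumptions and must already exist in your account:

```python
import json
import time

import boto3  # AWS SDK for Python

# Placeholder names -- replace with your own resources.
LOG_GROUP = "/databricks/jobs"
LOG_STREAM = "job-events"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:databricks-job-events"

def emit_job_event(job_id: str, status: str) -> None:
    """Write a custom job event to CloudWatch Logs and fan it out via SNS."""
    event = {"job_id": job_id, "status": status, "ts": int(time.time())}

    # PutLogEvents: the log group and stream must exist beforehand.
    boto3.client("logs").put_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=LOG_STREAM,
        logEvents=[{"timestamp": int(time.time() * 1000),
                    "message": json.dumps(event)}],
    )

    # Publish the same event to SNS for downstream consumers.
    boto3.client("sns").publish(TopicArn=TOPIC_ARN, Message=json.dumps(event))
```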
User16765131552
by Contributor III
  • 2331 Views
  • 1 reply
  • 0 kudos

Azure Databricks Repos and HIPAA

Are Repos HIPAA-compliant, or is there a plan and timeline to support this? A customer is getting a warning when trying to enable the Repos feature in a HIPAA deployment on Azure Databricks.

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

There is a plan to support this. For a timeline, please reach out to your Databricks account team.

MoJaMa
by Databricks Employee
  • 1713 Views
  • 1 reply
  • 0 kudos
Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 0 kudos

Unfortunately this is not possible. The default user workspace name will be the user's email address.

User16826992666
by Valued Contributor
  • 1285 Views
  • 1 reply
  • 0 kudos

What do I need to think about for Disaster Recovery planning?

I am working on a disaster recovery plan for my environment, which includes Databricks. Where do I start with my planning? What do I need to consider when building a DR plan?

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

Depending on your RPO/RTO, there are different recovery strategies that could be considered for Databricks deployments (active/passive, active/active). A detailed explanation of these approaches is available here.

User16826992666
by Valued Contributor
  • 1373 Views
  • 1 reply
  • 0 kudos

Can you use credential passthrough for users running jobs?

I would like the credentials of the user who initiates a job to be used as the credentials for the job run. Is this possible?

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

Is this in Azure? If so, it is not currently supported. See https://docs.microsoft.com/en-us/azure/databricks/security/credential-passthrough/adls-passthrough

