Administration & Architecture

Forum Posts

User16790091296
by Contributor II
  • 579 Views
  • 1 replies
  • 0 kudos
Latest Reply
Taha
New Contributor III
  • 0 kudos

The admin console exists within the workspace and lets you control access and privileges for that specific workspace. An existing admin can get to it from the drop-down at the very top right by selecting Admin Console. The first screen you'll land o...

MoJaMa
by Valued Contributor II
  • 509 Views
  • 1 replies
  • 0 kudos
Latest Reply
MoJaMa
Valued Contributor II
  • 0 kudos

Anything that can reach the control plane and use the SCIM API should work. For Azure AD Premium, there is a dedicated enterprise app that does this for the customer.
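
For reference, a minimal sketch of calling that SCIM API from outside the workspace with Python's `requests`; the workspace URL and admin personal access token below are placeholders, not values from this thread:

```python
# Sketch: list workspace users via the Databricks SCIM API.
# The host and token are placeholder assumptions.
import requests

DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"  # an admin's PAT

resp = requests.get(
    f"{DATABRICKS_HOST}/api/2.0/preview/scim/v2/Users",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
)
resp.raise_for_status()
for user in resp.json().get("Resources", []):
    print(user.get("userName"))
```

Any identity provider or script that can make calls like this against the control plane can drive user provisioning the same way.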

User16869510359
by Esteemed Contributor
  • 1178 Views
  • 1 replies
  • 1 kudos
Latest Reply
User16869510359
Esteemed Contributor
  • 1 kudos

Ganglia metrics are available only if the job runs for more than 15 minutes. For jobs that complete within 15 minutes, the metrics won't be available.

User16765131552
by Contributor III
  • 558 Views
  • 1 replies
  • 0 kudos

Resolved! Databricks SQL dashboard refresh

In Databricks SQL, can you prohibit a dashboard from being refreshed?

Latest Reply
User16765131552
Contributor III
  • 0 kudos

It looks like this can be done by not granting CAN_RUN to a user/group: https://docs.databricks.com/sql/user/security/access-control/dashboard-acl.html#dashboard-permissions
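
As a rough sketch of doing the same thing programmatically: the legacy SQL permissions endpoint and payload shape below are assumptions based on the dashboard ACL docs linked above, and the host, token, dashboard ID, and user are placeholders.

```python
# Hedged sketch: grant only CAN_VIEW (not CAN_RUN) on a Databricks SQL dashboard,
# so the user can see it but cannot refresh it. Endpoint path and payload are
# assumptions based on the linked ACL docs; all identifiers are placeholders.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"
DASHBOARD_ID = "<dashboard-id>"

resp = requests.post(
    f"{HOST}/api/2.0/preview/sql/permissions/dashboards/{DASHBOARD_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "access_control_list": [
            # Viewer-only access: no CAN_RUN, so no refresh
            {"user_name": "someone@example.com", "permission_level": "CAN_VIEW"}
        ]
    },
)
resp.raise_for_status()
print(resp.json())
```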

Anonymous
by Not applicable
  • 434 Views
  • 1 replies
  • 0 kudos
Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

One way is to increase the CIDR range if the IP space is available, or to create a completely different workspace on the same VPC with different subnets.

User16826992666
by Valued Contributor
  • 490 Views
  • 1 replies
  • 0 kudos

Okta Integration

My company uses Okta as an SSO provider. Can I integrate Okta with Databricks for an SSO experience?

Latest Reply
User16869510359
Esteemed Contributor
  • 0 kudos

Yes, Okta is among the supported identity providers. Read more here: https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html

MoJaMa
by Valued Contributor II
  • 437 Views
  • 1 replies
  • 0 kudos
Latest Reply
MoJaMa
Valued Contributor II
  • 0 kudos

Yes. There is a property called dbus_per_hour that you can add to your cluster policy. See https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-virtual-attribute-paths. Here's an example policy that uses it: https://docs...
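
A minimal sketch of such a policy, created through the Cluster Policies API; the 10 DBU/hour cap, the policy name, and the host/token are placeholder assumptions rather than values from the linked example.

```python
# Sketch: create a cluster policy that caps cost with the dbus_per_hour
# virtual attribute. Cap, name, host, and token are placeholders.
import json
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

policy_definition = {
    # Limit any cluster created under this policy to at most 10 DBUs per hour
    "dbus_per_hour": {"type": "range", "maxValue": 10},
    # Also keep clusters from running idle indefinitely
    "autotermination_minutes": {"type": "fixed", "value": 60, "hidden": True},
}

resp = requests.post(
    f"{HOST}/api/2.0/policies/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # The API expects the policy definition as a JSON string
    json={"name": "cost-capped-policy", "definition": json.dumps(policy_definition)},
)
resp.raise_for_status()
print(resp.json())  # returns the new policy_id
```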

User16869510359
by Esteemed Contributor
  • 3265 Views
  • 1 replies
  • 0 kudos

Resolved! My cluster is running an init script, and I want to see what's going on.

After I log in to the workspace, where can I find the logs?

Latest Reply
User16869510359
Esteemed Contributor
  • 0 kudos

See public docs: https://docs.databricks.com/clusters/init-scripts.html#cluster-scoped-init-script-logs. Don't forget to enable cluster log delivery: https://docs.databricks.com/clusters/configure.html#cluster-log-delivery. Note that this only works for ...
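
A sketch of what enabling cluster log delivery can look like when creating the cluster through the Clusters API; the host, token, runtime version, node type, and DBFS paths below are placeholders.

```python
# Sketch: create a cluster with log delivery to DBFS so cluster-scoped init
# script logs are persisted. All identifiers and paths are placeholders.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

cluster_spec = {
    "cluster_name": "init-script-debug",
    "spark_version": "<runtime-version>",
    "node_type_id": "<node-type>",
    "num_workers": 1,
    # With a DBFS destination, init script logs land under
    # <destination>/<cluster-id>/init_scripts/
    "cluster_log_conf": {"dbfs": {"destination": "dbfs:/cluster-logs"}},
    "init_scripts": [{"dbfs": {"destination": "dbfs:/scripts/my-init.sh"}}],
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print(resp.json())  # returns the new cluster_id
```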

User16790091296
by Contributor II
  • 670 Views
  • 1 replies
  • 0 kudos
Latest Reply
User16790091296
Contributor II
  • 0 kudos

At a high level a Lakehouse must have the following properties:
  • Open, direct-access data formats (Apache Parquet, Delta Lake, etc.)
  • First-class support for machine learning and data science workloads
  • State-of-the-art performance
Databricks is the firs...

User16826987838
by Contributor
  • 725 Views
  • 1 replies
  • 0 kudos

Extending the duration of Ganglia metrics logs

Any insights on how to analyse Ganglia metrics logs for an extended duration of time, not just 15-minute snapshots? We need to visualize cluster CPU utilization for the duration of cluster uptime.

Latest Reply
aladda
Honored Contributor II
  • 0 kudos

One option here would be integration with observability tools such as Datadog, which can capture the cluster metrics on a near-real-time basis. More details are here: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly

User16789201666
by Contributor II
  • 2196 Views
  • 0 replies
  • 0 kudos

What's Early Stopping in Hyperopt? When should it be used?

It’s advantageous to stop running trials if progress has stopped. Hyperopt offers an early_stop_fn parameter, which specifies a function that decides when to stop trials before max_evals has been reached. Hyperopt provides a function no_progress_loss...
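
A minimal sketch of what that looks like in practice, using a toy objective function; the search space and the 20-trial patience are arbitrary choices for illustration, not recommendations.

```python
# Sketch: Hyperopt early stopping with no_progress_loss. In a real workload the
# objective would train and evaluate a model instead of this toy quadratic.
from hyperopt import fmin, tpe, hp
from hyperopt.early_stop import no_progress_loss

def objective(x):
    # Toy loss with a minimum near x = 2
    return (x - 2) ** 2

best = fmin(
    fn=objective,
    space=hp.uniform("x", -10, 10),
    algo=tpe.suggest,
    max_evals=200,
    # Stop early if the best loss has not improved over the last 20 trials
    early_stop_fn=no_progress_loss(20),
)
print(best)
```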
