One way is to increase the CIDR range if the IP list is available, or to create a completely different workspace on the same VPC with different subnets.
Yes. If the on-premises system is accessible over the network from the Databricks cluster, then it's possible to connect.
My company uses Okta as an SSO provider. Can I integrate Okta with Databricks for an SSO experience?
Yes, Okta is among the supported identity providers. Read more here: https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html
Yes. There is a property called dbus_per_hour that you can add to your cluster policy. See https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-virtual-attribute-paths. Here's an example policy that uses it: https://docs...
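For illustration, here is a minimal policy-definition sketch (not the example behind the truncated link) that caps clusters at roughly 10 DBUs per hour; the dbus_per_hour virtual attribute is documented, while the 10-DBU cap is just a placeholder choice.

```json
{
  "dbus_per_hour": {
    "type": "range",
    "maxValue": 10
  }
}
```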
After I log in to the workspace, where can I find the logs?
See the public docs: https://docs.databricks.com/clusters/init-scripts.html#cluster-scoped-init-script-logs. Don't forget to enable cluster log delivery: https://docs.databricks.com/clusters/configure.html#cluster-log-delivery. Note that this only works for ...
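As a rough sketch, assuming the cluster is created through the Clusters API with a DBFS log destination, the cluster_log_conf block looks roughly like this; the cluster name, DBR version, node type, and destination path below are placeholders.

```python
# Sketch of a cluster spec fragment with cluster log delivery enabled (placeholder values).
cluster_spec = {
    "cluster_name": "logging-example",        # hypothetical name
    "spark_version": "11.3.x-scala2.12",      # hypothetical DBR version
    "node_type_id": "i3.xlarge",              # hypothetical node type
    "num_workers": 2,
    "cluster_log_conf": {
        "dbfs": {"destination": "dbfs:/cluster-logs"}
    },
}
# Once the cluster runs, driver, executor, and cluster-scoped init script logs
# are delivered under <destination>/<cluster-id>/.
```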
Usually, your account URL is where you navigate to log in.
At a high level, a Lakehouse must have the following properties:
- Open, direct-access data formats (Apache Parquet, Delta Lake, etc.)
- First-class support for machine learning and data science workloads
- State-of-the-art performance
Databricks is the first...
Any insights on how to analyse Ganglia metrics logs for an extended duration of time, not just 15-minute snapshots? We need to visualize cluster CPU utilization for the duration of cluster uptime.
One option here would be integration with observability tools such as Datadog, which can capture cluster metrics on a near-real-time basis. More details here: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly
Using Ganglia you can monitor how busy the GPU(s) are. Increasing the batch size would increase that utilization. Bigger batches improve how well each batch updates the model (up to a point) with more accurate gradients. That in turn can allow traini...
It’s advantageous to stop running trials if progress has stopped. Hyperopt offers an early_stop_fn parameter, which specifies a function that decides when to stop trials before max_evals has been reached. Hyperopt provides a function no_progress_loss...
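As a minimal sketch of that parameter, assuming a recent Hyperopt version; the objective function and search space below are toy placeholders, not from the original answer.

```python
from hyperopt import fmin, tpe, hp
from hyperopt.early_stop import no_progress_loss

# Toy objective; substitute your real training/validation routine.
def objective(x):
    return (x - 3) ** 2

best = fmin(
    fn=objective,
    space=hp.uniform("x", -10, 10),
    algo=tpe.suggest,
    max_evals=200,
    # Stop early if the best loss has not improved over the last 20 trials.
    early_stop_fn=no_progress_loss(20),
)
print(best)
```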
DBR minor version details are not exposed. However, the documentation mentions that Databricks performs maintenance releases every 2 weeks. How can I determine whether I am using the same minor version?
The code snippet below can help determine the DBR hash string for a DBR version; the hash string is unique per DBR minor version.

```scala
val scalaVersion = scala.util.Properties.versionString
val hadoopVersion = org.apache.hadoop.util.VersionInfo.getVersion
...
```
Deleting a workspace doesn't delete the root bucket. You could choose to use the same root bucket for more than one workspace (though this is not recommended). It is recommended to automate the infrastructure creation via Terraform or the Quick Start so that cleanup...
Are there any event streams that are or could be exposed in AWS (such as CloudWatch EventBridge events or SNS messages)? In particular I'm interested in events that detail jobs being run. The use case here would be for monitoring jobs from our web app...
You could write code to call the PutLogEvents API at the beginning of each job to write custom events to CloudWatch, or use the AWS SDK to send an SNS notification and route it to the desired consumer.
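A minimal Python sketch of that pattern, assuming boto3 is available on the cluster; the log group, log stream, and topic ARN below are hypothetical placeholders and must already exist in your AWS account.

```python
import time
import boto3

# Hypothetical names; replace with resources that exist in your account.
LOG_GROUP = "/databricks/jobs"
LOG_STREAM = "job-events"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:databricks-job-events"

def emit_job_started(job_name: str) -> None:
    """Record a 'job started' event in CloudWatch Logs and fan it out via SNS."""
    logs = boto3.client("logs")
    logs.put_log_events(
        logGroupName=LOG_GROUP,
        logStreamName=LOG_STREAM,
        logEvents=[{
            "timestamp": int(time.time() * 1000),  # milliseconds since epoch
            "message": f"job_started name={job_name}",
        }],
    )
    boto3.client("sns").publish(
        TopicArn=TOPIC_ARN,
        Message=f"Databricks job started: {job_name}",
    )

emit_job_started("nightly-etl")  # call at the top of each job
```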
Are Repos HIPAA compliant, or is there a plan and timeline to support this? Customer is getting a warning when trying to enable the Repos feature in a HIPAA deployment on Azure Databricks.
There is a plan to support this. For timeline, please reach out to your Databricks account team.
Unfortunately this is not possible. The default user workspace name will be the user's email address.