- 941 Views
- 1 replies
- 0 kudos
Anything that can reach the control plane and use the SCIM API should work. For Azure AD Premium, there is specifically an Enterprise Application that does this for the customer.
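As a rough sketch of what "using the SCIM API" can look like, the snippet below builds a SCIM 2.0 user payload of the kind a SCIM endpoint accepts; the workspace URL, token, group name, and helper function are all hypothetical, and the actual POST is only shown in a comment rather than executed.

```python
import json

# Hypothetical workspace URL and token -- replace with your own.
WORKSPACE_URL = "https://example.cloud.databricks.com"
TOKEN = "dapi-REDACTED"

def scim_user_payload(user_name, groups=()):
    """Build a SCIM 2.0 user object for a provisioning request."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "groups": [{"value": g} for g in groups],
    }

payload = scim_user_payload("jane.doe@example.com", groups=["data-engineers"])

# A provisioning client would POST this to the SCIM users endpoint, e.g.:
#   requests.post(f"{WORKSPACE_URL}/api/2.0/preview/scim/v2/Users",
#                 headers={"Authorization": f"Bearer {TOKEN}",
#                          "Content-Type": "application/scim+json"},
#                 data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```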
- 1772 Views
- 1 replies
- 1 kudos
Ganglia metrics are available only if the job runs for more than 15 minutes. For jobs that complete within 15 minutes, the metrics won't be available.
- 1592 Views
- 1 replies
- 1 kudos
As of June 2021, no. However, Public Preview features are stable, intended to advance to GA, and fully supported by Databricks Support.
- 1409 Views
- 1 replies
- 0 kudos
Resolved! Databricks SQL dashboard refresh
In Databricks SQL, can you prohibit a dashboard from being refreshed?
It looks like this can be done by not granting CAN_RUN to a user/group: https://docs.databricks.com/sql/user/security/access-control/dashboard-acl.html#dashboard-permissions
- 799 Views
- 1 replies
- 0 kudos
One way is to increase the CIDR range, if the IP list is available; another is to create a completely different workspace on the same VPC with different subnets.
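To make the trade-off concrete, here is a small sketch (with made-up address ranges) using Python's `ipaddress` module to check that a new subnet fits inside the VPC without overlapping existing subnets, and to show how widening a CIDR increases the usable address count:

```python
import ipaddress

# Hypothetical VPC and subnets -- substitute your own ranges.
vpc = ipaddress.ip_network("10.0.0.0/16")
existing = [ipaddress.ip_network("10.0.1.0/24"),
            ipaddress.ip_network("10.0.2.0/24")]
candidate = ipaddress.ip_network("10.0.3.0/24")  # new subnet for another workspace

# The candidate must sit inside the VPC and not overlap existing subnets.
fits_in_vpc = candidate.subnet_of(vpc)
overlaps = any(candidate.overlaps(net) for net in existing)
print(fits_in_vpc, overlaps)  # True False

# A /22 offers 4x the addresses of a /24 -- the gain from widening a CIDR.
print(ipaddress.ip_network("10.0.4.0/22").num_addresses)  # 1024
```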
- 3200 Views
- 1 replies
- 1 kudos
Yes. If the on-premises system is accessible over the network from the Databricks cluster, then it's possible to connect.
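As a quick way to verify that reachability, here is a hedged sketch of a TCP connectivity check you could run from a notebook on the cluster; the host name and port below are hypothetical placeholders:

```python
import socket

def can_reach(host, port, timeout=5):
    """Quick TCP reachability check from the cluster to a remote host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refusal, or timeout all mean "not reachable"
        return False

# Hypothetical on-prem database host and port.
print(can_reach("onprem-db.internal.example.com", 1433))
```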
- 909 Views
- 1 replies
- 0 kudos
Okta Integration
My company uses Okta as an SSO provider. Can I integrate Okta with Databricks for an SSO experience?
Yes, Okta is among the supported identity providers. Read more here: https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html
- 780 Views
- 1 replies
- 0 kudos
Yes. There is a property called dbus_per_hour that you can add to your cluster policy. See https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-virtual-attribute-paths. Here's an example policy that uses it: https://docs...
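For illustration, here is a hedged example of what such a policy might look like, built as a Python dict; the 10 DBUs/hour cap and the autotermination setting are arbitrary example values, not recommendations:

```python
import json

# Illustrative cluster policy using the dbus_per_hour virtual attribute
# as a range limit. Tune maxValue to your own budget.
policy = {
    "dbus_per_hour": {
        "type": "range",
        "maxValue": 10,
    },
    "autotermination_minutes": {
        "type": "fixed",
        "value": 30,
        "hidden": True,
    },
}
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```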
- 4341 Views
- 1 replies
- 0 kudos
Resolved! My cluster is running an init script, and I want to see what's going on.
After I log in to the workspace, where can I find the logs?
See public docs: https://docs.databricks.com/clusters/init-scripts.html#cluster-scoped-init-script-logs. Don't forget to enable cluster log delivery: https://docs.databricks.com/clusters/configure.html#cluster-log-delivery. Note that this only works for ...
- 2017 Views
- 1 replies
- 0 kudos
Usually, your account URL is where you navigate to log in.
- 1234 Views
- 1 replies
- 0 kudos
At a high level, a Lakehouse must have the following properties: open, direct-access data formats (Apache Parquet, Delta Lake, etc.); first-class support for machine learning and data science workloads; and state-of-the-art performance. Databricks is the firs...
- 2657 Views
- 1 replies
- 0 kudos
Extending the duration of Ganglia metrics logs
Any insights on how to analyse Ganglia metrics logs over an extended period, not just 15-minute snapshots? We need to visualize cluster CPU utilization for the duration of cluster uptime.
One option here would be integration with an observability tool such as Datadog, which can capture cluster metrics on a near-real-time basis. More details are here: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly
- 1154 Views
- 0 replies
- 0 kudos
What's the right batch size in deep learning training?
Using Ganglia you can monitor how busy the GPU(s) are. Increasing the batch size would increase that utilization. Bigger batches improve how well each batch updates the model (up to a point) with more accurate gradients. That in turn can allow traini...
- 3224 Views
- 0 replies
- 0 kudos
What's Early Stopping in Hyperopt? When should it be used?
It’s advantageous to stop running trials if progress has stopped. Hyperopt offers an early_stop_fn parameter, which specifies a function that decides when to stop trials before max_evals has been reached. Hyperopt provides a function no_progress_loss...
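To show the idea (not Hyperopt's actual implementation), here is a minimal pure-Python sketch of a no-progress rule in the spirit of no_progress_loss: stop once the best loss hasn't improved over the last `patience` trials.

```python
def no_progress_stop(patience):
    """Return a callback that signals stop once the best loss hasn't
    improved for the last `patience` trials (a sketch of the idea behind
    no_progress_loss, not Hyperopt's actual implementation)."""
    def should_stop(losses):
        if len(losses) <= patience:
            return False  # not enough history yet
        # Compare the best recent loss against the best before that window.
        return min(losses[-patience:]) >= min(losses[:-patience])
    return should_stop

stop = no_progress_stop(patience=3)
print(stop([0.9, 0.7, 0.6, 0.65, 0.64, 0.66]))  # True: 0.6 not beaten in last 3
print(stop([0.9, 0.8, 0.7, 0.6]))               # False: still improving
```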
- 1406 Views
- 1 replies
- 0 kudos
Resolved! How to determine if am using the same DBR minor version?
DBR minor version details are not exposed. However, the documentation mentions that Databricks performs maintenance releases every 2 weeks. How can I determine if I am using the same minor version?
The below code snippet can help determine the DBR hash string for the DBR version; the DBR hash string is unique for each DBR minor version.
val scalaVersion = scala.util.Properties.versionString
val hadoopVersion = org.apache.hadoop.util.VersionInf...