a week ago
We are integrating Databricks with ServiceNow via Lakeflow Connect for data ingestion and are looking for guidance on enforcing integration-user-based data access.
Observed behaviour
This suggests OAuth is working, but the effective ServiceNow identity used for data ingestion in Databricks differs (integration user vs. ADID user), impacting data visibility.
Clarification Required:
Our goal is to follow a consistent, Databricks-recommended best practice for production without relying on human ADID privileges.
a week ago
@LokeshChikuru Have you checked the docs? This might help - https://docs.databricks.com/aws/en/ingestion/lakeflow-connect/servicenow-troubleshoot#-authenticatio...
a week ago
Hi @Sumit_7
I have reviewed the configuration and do not see any issues with authentication to ServiceNow using the U2M approach (OAuth application with an Integration User).
However, I would like to understand which user context is used when the data fetch occurs during pipeline execution.
Based on my observations, the ServiceNow integration user is not being used during pipeline execution. As a result, no data is returned.
When ServiceNow admin privileges are assigned to the AD user who logged into the Databricks workspace, the ServiceNow table data becomes visible.
a week ago
Hi, from looking through some internal resources, this seems most likely to come down to ServiceNow-side ACLs, High Security Settings, or domain/scope restrictions overriding the admin role on the system tables the connector queries.
Quick things to check:
- Run this curl as the integration user against ServiceNow: GET /api/now/v2/table/sys_db_object?sysparm_query=name=<your_table>&sysparm_fields=super_class.name. A 403 or empty result confirms it's a ServiceNow-side ACL issue, not the connector. Test sys_dictionary the same way (see the curl sketch after this list).
- Compare ACLs on sys_db_object, sys_dictionary, sys_glide_object between a working env (your DEV) and the failing one: that usually surfaces the difference fast.
- Check for glide.security.strict or custom ACLs overriding admin.
- Check whether the integration user is in a different application scope or domain than the data; domain separation isn't overridden by the admin role.
- Confirm the admin role is state = "active" on sys_user_has_role, not "requested" or "inactive".
- There's a workspace-level pipeline flag (ingestionPipelineServiceNowNonAdminAccessSchemaFetchEnabled) for least-privilege setups; support can enable it if needed.
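A minimal curl sketch of the metadata and role checks above, assuming basic auth with the integration user's credentials against the standard Table API; the instance name, credentials, <your_table>, and the exact query shapes are placeholders/assumptions on my part:

```bash
# Placeholders: <instance>, <integ_user>, <integ_pass>, <your_table>.
SN="https://<instance>.service-now.com"
AUTH="<integ_user>:<integ_pass>"

# 1. Table metadata check from the first bullet: a 403 or an empty
#    result points at a ServiceNow-side ACL, not the connector.
curl -s -u "$AUTH" \
  "$SN/api/now/v2/table/sys_db_object?sysparm_query=name=<your_table>&sysparm_fields=super_class.name"

# 2. Same check against sys_dictionary (column metadata).
curl -s -u "$AUTH" \
  "$SN/api/now/v2/table/sys_dictionary?sysparm_query=name=<your_table>&sysparm_limit=1"

# 3. Role-grant state for the integration user: expect state=active.
curl -s -u "$AUTH" \
  "$SN/api/now/v2/table/sys_user_has_role?sysparm_query=user.user_name=<integ_user>^role.name=admin&sysparm_fields=state"
```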
If you want to take OAuth identity off the table entirely while you debug, switching the connection to ROPC (the integration user's username + password) removes any ambiguity about who's hitting ServiceNow at runtime. The ServiceNow connector supports both:
https://docs.databricks.com/aws/en/ingestion/lakeflow-connect/servicenow-source-setup.
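If you do try ROPC, you can first confirm the grant works for the integration user outside Databricks by hitting ServiceNow's token endpoint directly. A minimal sketch, assuming an OAuth client is already registered in the instance; every value below is a placeholder:

```bash
# ROPC (password grant) token request against ServiceNow's token endpoint.
# Placeholders: <instance>, <client_id>, <client_secret>, <integ_user>, <integ_pass>.
curl -s "https://<instance>.service-now.com/oauth_token.do" \
  -d "grant_type=password" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>" \
  -d "username=<integ_user>" \
  -d "password=<integ_pass>"
# A JSON response containing access_token confirms the integration user
# can authenticate on its own, with no workspace-user identity involved.
```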
If none of the above sorts it, raise a support ticket with the curl response (status + body), the failing pipeline ID, and your workspace ID.
I hope this helps.
Thanks,
Emma
Monday
Hi @emma_s
I've reviewed the setup and wanted to clarify the behavior I'm seeing with the ServiceNow connector and U2M OAuth.
The ServiceNow connection was created successfully using a U2M OAuth integration user, and that integration user has admin permissions in ServiceNow. The connection test succeeds without any issues.
However, during actual data ingestion, it appears that the connector is not executing purely in the context of the U2M OAuth integration user. Instead, ingestion seems to also depend on the Databricks workspace user who created or is running the pipeline. When that workspace user does not have the required ServiceNow permissions, the ingestion returns no data. If ServiceNow permissions are granted to the workspace user, the data becomes visible.
I'm trying to understand whether this behavior is expected with U2M OAuth (i.e., pipelines executing under the workspace user's identity rather than strictly under the integration user), and whether app-level (client credentials) authentication is the recommended approach for unattended ingestion scenarios where execution should not depend on individual Databricks users.
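For anyone trying to reproduce this, one way to see which identity actually reaches ServiceNow during a run is the instance's transaction log. A rough sketch, assuming an account that can read syslog_transaction (the table and field names here are my assumption; adjust to your instance):

```bash
# List recent REST transactions and the user each one executed as.
# Placeholders: <instance>, <admin_user>, <admin_pass>.
curl -s -u "<admin_user>:<admin_pass>" \
  "https://<instance>.service-now.com/api/now/v2/table/syslog_transaction?sysparm_query=urlLIKEapi/now^ORDERBYDESCsys_created_on&sysparm_fields=sys_created_by,url,sys_created_on&sysparm_limit=10"
# If sys_created_by shows the AD-backed workspace user rather than the
# integration user, that matches the behavior described above.
```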
Any clarification from the Databricks team or others who have implemented this would be helpful.