Hello,
My Databricks workspace is associated with the GCP project analytics.
However, my team and I mostly work in the GCP project data-science, which contains the only BigQuery dataset we have write access to.
I'm trying to automate a pipeline to run on job compute, and it fails when reading a table from project data-science. Reading implies writing to a temporary table, which we have configured to live in the project and dataset we have rights to (materialisation_project=data-science, dataset=dst). When I run the notebook myself, or on our development cluster, it works as intended. When the notebook runs in a pipeline on job compute, it fails with the error:
Access Denied: Project analytics: User does not have bigquery.jobs.create permission in project analytics
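For context, the read is configured roughly like this (a sketch, not the exact notebook code; the table name is a placeholder, and the option names `materializationProject`/`materializationDataset` are the ones the Spark BigQuery connector documents for redirecting temporary materialisation tables):

```python
# Sketch of the BigQuery read options, assuming the spark-bigquery-connector.
# "some_table" is a placeholder, not our real table name.
read_options = {
    "materializationProject": "data-science",  # project we can create temp tables in
    "materializationDataset": "dst",           # dataset we have write rights on
}

# On the cluster this is used roughly as:
# df = (spark.read.format("bigquery")
#       .options(**read_options)
#       .load("data-science.dst.some_table"))
```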
What could be the issue here? Could it be the service account used by job compute, and if so, how do I edit it? The job is trying to write to GCP project analytics, which it shouldn't, given the materialisation parameters passed as arguments.
Thanks,
Rui