I work for a company where we're building a Databricks integration in Node using the @databricks/sql package to query customers' clusters and warehouses. I see documentation for loading data from S3 via COPY INTO using STS tokens, where you do something like:
COPY INTO my_json_data
FROM 's3://my-bucket/jsonData' WITH (
CREDENTIAL (AWS_ACCESS_KEY = '...', AWS_SECRET_KEY = '...', AWS_SESSION_TOKEN = '...')
)
FILEFORMAT = JSON
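For context, this is roughly how we're issuing statements like that from Node with @databricks/sql today (a minimal sketch; the host, HTTP path, and token values are placeholders for the customer's workspace):

// Sketch: run a COPY INTO statement against a customer's SQL warehouse
// using @databricks/sql. Connection details below are placeholders.
const { DBSQLClient } = require('@databricks/sql');

async function runCopyInto() {
  const client = new DBSQLClient();
  await client.connect({
    host: 'customer-workspace.cloud.databricks.com',
    path: '/sql/1.0/warehouses/<warehouse-id>',
    token: '<personal-access-token>',
  });

  const session = await client.openSession();
  const operation = await session.executeStatement(`
    COPY INTO my_json_data
    FROM 's3://my-bucket/jsonData' WITH (
      CREDENTIAL (AWS_ACCESS_KEY = '...', AWS_SECRET_KEY = '...', AWS_SESSION_TOKEN = '...')
    )
    FILEFORMAT = JSON
  `);
  const result = await operation.fetchAll();

  await operation.close();
  await session.close();
  await client.close();
  return result;
}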
But the inverse of that COPY INTO doesn't seem to be supported, i.e. using the same STS credentials to copy query results directly into S3 with something like:
COPY INTO 's3://my-bucket/jsonData'
FROM (SELECT * FROM my_json_data) WITH
( CREDENTIAL (AWS_ACCESS_KEY = '...', AWS_SECRET_KEY = '...', AWS_SESSION_TOKEN = '...') )
FILEFORMAT = JSON
Does anyone know if a capability like that exists? We'd like to be able to run a query and have the customer's Databricks instance write the results to an S3 bucket in our account.