I have a customer with the following question - I'm posting on their behalf to introduce them to the community.
For doing modeling in a Python environment, what is our best practice for getting the data out of Redshift? A "load" option seems to leave the data still sitting on the Amazon side, with credentials required even for basic transformations. Clearly that isn't what we'd like. Do I need to create a table on the Databricks side and then delete it afterward?
Would love to get some code examples and some best practices.
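For context, here's a minimal sketch of the pattern in question. This assumes the Databricks Redshift connector (`com.databricks.spark.redshift`) on a running cluster; the host, table, and S3 bucket names are all illustrative, and the JDBC-URL helper is hypothetical. The key point is that `.load()` is lazy, so the customer's observation is expected: nothing is copied until an action runs, and each action goes back to Redshift with credentials unless the data is materialized on the Databricks side first.

```python
def redshift_jdbc_url(host, port, database):
    """Build the JDBC URL the connector expects (hypothetical helper;
    real URLs may also carry user/password or IAM parameters)."""
    return f"jdbc:redshift://{host}:{port}/{database}"

# On a real Databricks cluster, the read side would look roughly like:
#
# df = (spark.read
#       .format("com.databricks.spark.redshift")
#       .option("url", redshift_jdbc_url(
#           "example-cluster.redshift.amazonaws.com", 5439, "dev"))
#       .option("dbtable", "public.events")         # illustrative table
#       .option("tempdir", "s3a://my-bucket/tmp/")  # illustrative staging path
#       .load())
#
# At this point df is only a *plan* -- no data has moved. Every later
# action (count, toPandas, ...) re-reads from Redshift with credentials.
#
# To copy the data over once so transformations no longer touch Redshift:
#
# df.write.format("delta").saveAsTable("staging.events_snapshot")
#
# or, for the lifetime of the cluster only:
#
# df.cache()
# df.count()  # the action forces the read; data now lives in the cluster
#
# pdf = spark.table("staging.events_snapshot").toPandas()  # for modeling
```

Under this sketch, the answer to "create a table and delete it after?" is roughly yes for the persistent route: snapshot to a Delta table, do the modeling work against it without Redshift credentials, and `DROP TABLE` it when finished. The `cache()` route avoids even that, at the cost of the copy disappearing when the cluster does. I'd still love authoritative examples from the community.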