You do not need the SQL warehouse itself for that. For Data Science & Engineering (DS&E) workloads you need a classic cluster (not a SQL endpoint) anyway, so you can easily read the tables from the metastore using spark.read.table().
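For example, a minimal sketch on a classic cluster (table and column names here are made up for illustration):

```python
# On Databricks notebooks, `spark` (a SparkSession) is already available,
# so no extra setup is needed on a classic cluster.

# Read a metastore table into a DataFrame.
# "sales.orders" is a hypothetical schema.table name.
df = spark.read.table("sales.orders")

# From here it is a regular Spark DataFrame, e.g.:
# ("amount" and "region" are hypothetical columns)
df.filter(df.amount > 100).groupBy("region").count().show()
```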
Connecting the SQL endpoint to the DS cluster seems odd: which part of the query plan would be executed by the SQL endpoint, and which part by the DS cluster?
Right now you can already use a SQL endpoint for SQL notebooks.