The end solution looked like this: first compute the variable part of the location (one month back from today, in my case).
#first compute the archive date (or whatever else you need to be dynamic)
import datetime
from dateutil.relativedelta import relativedelta
start_date = datetime.date.today()
archive_date = start_date - relativedelta(months=1)
archive_date = archive_date.strftime('%Y-%m')  # 'YYYY-MM' string to slot into the folder name
#build the table using the previously computed variable
# 'spark' is the SparkSession Databricks creates automatically in every notebook, so no import is needed
folder = f'container/folder{archive_date}.parquet'
spark.sql(f"CREATE TABLE Archive USING parquet LOCATION '/mnt/{folder}'")
I couldn't get the pure-SQL approach to work for some reason, but this is absolutely fine for what I need.
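For anyone who wants to retry the pure-SQL route, I suspect Spark's variable substitution (the spark.sql.variable.substitution setting, which I believe is on by default) is the relevant feature. This is an untested sketch rather than something I got working, and ArchiveSql is just a placeholder table name:
# untested: SET stores the value in the session conf, and Spark substitutes ${archive_date} before parsing
spark.sql(f"SET archive_date={archive_date}")
spark.sql("CREATE TABLE ArchiveSql USING parquet LOCATION '/mnt/container/folder${archive_date}.parquet'")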
Many thanks for your help; this will prove very useful in the future.
As an observation, might it be worth updating the Databricks documentation? It currently states that the location must be a string literal, and the f-string trick above only works because it resolves to a literal before Spark parses the statement. Just a thought.