In Databricks you don't have to use Auto Loader when you're dealing with SDP. Think of Auto Loader as a very specific Structured Streaming source (that source is called cloudFiles).
So, for instance, you can use the traditional Structured Streaming approach to load CSV files incrementally:
df = spark.readStream.format("csv") \
.option("header", "true") \
.schema(<schema>) \
.load(<path>)
Or you can turn on Auto Loader by choosing the "cloudFiles" source (note that you still provide a schema for the files):
df = spark.readStream.format("cloudFiles") \
.option("cloudFiles.format", "csv") \
.option("header", "true") \
.schema(<schema>) \
.load(<path>)
So you have the freedom of choice 🙂 But if you're dealing with files in an S3 bucket or ADLS, I would choose Auto Loader any day 🙂
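For completeness, here's a rough sketch of what an Auto Loader ingestion from cloud storage might look like with a couple of its useful options. The paths, checkpoint location, and table name below are placeholders of my own, and this assumes a Databricks notebook environment where `spark` is already defined (Auto Loader is not available in open-source Spark):

```python
# Sketch only: the "cloudFiles" source is Databricks-specific, and `spark`
# is assumed to be the notebook's predefined SparkSession.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    # Let Auto Loader infer and track the schema instead of hard-coding it.
    .option("cloudFiles.schemaLocation", "/tmp/schemas/my_source")  # placeholder path
    .option("header", "true")
    .load("s3://my-bucket/incoming/")  # placeholder path
)

(
    df.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/my_source")  # placeholder path
    .trigger(availableNow=True)  # process all files discovered so far, then stop
    .toTable("my_catalog.my_schema.my_table")  # placeholder table name
)
```

This is also where Auto Loader earns its keep on S3/ADLS: it keeps track of which files have already been ingested and can infer and evolve the schema, neither of which the plain CSV streaming source does for you.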