The answer is "yes, but".
If you read a CSV into a DataFrame and call cache() on it, the data will be cached regardless of the source file format (provided Spark can read it, of course).
That said, Spark uses lazy evaluation, so the CSV is only actually read when an action is executed (like write, count, ...). Until then, Spark only builds a query plan.
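A minimal PySpark sketch of that behavior (the file path and SparkSession setup are just placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lazy-demo").getOrCreate()

# No file I/O happens here: Spark only builds a query plan.
df = spark.read.option("header", "true").csv("data/input.csv")

# The CSV is actually scanned only once an action runs:
print(df.count())
```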
So to speed up your code, it is important to figure out where best to place the cache. Caching is itself an expensive operation (by default a DataFrame is materialized in memory, spilling to disk when it doesn't fit), and it only pays off if the cached DataFrame is reused in more than one action afterwards.
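For example, a rough sketch of a sensible cache placement, assuming a hypothetical `amount` column and output path:

```python
# Put the cache after the expensive work, right before the point of reuse.
filtered = df.filter(df["amount"] > 0).cache()  # lazy: nothing is materialized yet

filtered.count()                          # first action: reads the CSV, fills the cache
filtered.write.parquet("out/positive")    # second action: served from the cache
```

If `filtered` were only used once, the cache() call would just add overhead and should be dropped.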
Hope that makes sense!