Spark has a built-in 'image' data source which will read a directory of image files as a DataFrame: spark.read.format("image").load(...). The resulting DataFrame has a single struct column with the pixel data, dimensions, number of channels, and so on.
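Something like this shows the schema you get back (the path is just a placeholder):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("image-demo").getOrCreate()

    # Load a directory of images with the built-in data source (Spark 2.3+)
    df = spark.read.format("image").load("/path/to/images")
    df.printSchema()
    # image: struct with origin, height, width, nChannels, mode, data

    # Metadata is available as ordinary columns
    df.select("image.origin", "image.width", "image.height", "image.nChannels").show(truncate=False)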
You can also read image files 'manually' with the 'binaryFile' data source (or the older sc.binaryFiles RDD API), which gives you the raw bytes of each file. You would then decode them yourself with (for example) PIL in Python.
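A rough sketch of that approach, assuming the Spark 3 binaryFile data source and Pillow; the path, the glob filter, and the image_size UDF are just for illustration:

    import io
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StructType, StructField, IntegerType
    from PIL import Image

    spark = SparkSession.builder.appName("binary-demo").getOrCreate()

    # Read raw bytes; columns are path, modificationTime, length, content
    binary_df = (spark.read.format("binaryFile")
                 .option("pathGlobFilter", "*.png")
                 .load("/path/to/images"))

    size_schema = StructType([
        StructField("width", IntegerType()),
        StructField("height", IntegerType()),
    ])

    @udf(size_schema)
    def image_size(content):
        # Decode the bytes with Pillow and return just the dimensions
        with Image.open(io.BytesIO(content)) as img:
            return (img.width, img.height)

    binary_df.select("path", image_size("content").alias("size")).show(truncate=False)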
For Python, PIL (now maintained as Pillow) is pretty much the standard for image manipulation. On the JVM, I think I'd still use the old java.awt classes like BufferedImage, via javax.imageio.ImageIO.
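For what it's worth, a tiny Pillow example of the kind of manipulation you'd do once the bytes are decoded; the file names are placeholders:

    from PIL import Image

    # Open an image, normalize to RGB, and write a thumbnail
    with Image.open("input.jpg") as img:
        thumb = img.convert("RGB").resize((128, 128))
        thumb.save("thumbnail.jpg", quality=90)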