Try this (in 1.4.0):
val blockSize = 1024 * 1024 * 16 // 16 MB
sc.hadoopConfiguration.setInt("dfs.blocksize", blockSize)      // HDFS block size
sc.hadoopConfiguration.setInt("parquet.block.size", blockSize) // Parquet row group / block size
Where sc is your SparkContext (not SQLContext).
Note that there also appear to be "page size" and "dictionary page size" parameters that interact with the block size; e.g., the page size should not exceed the block size. I set them all to the exact same value, and that got me through.
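For completeness, here is a sketch of setting those page-size parameters alongside the block size. The key names parquet.page.size and parquet.dictionary.page.size are my best understanding of the Parquet-Hadoop configuration; verify them against the Parquet version you are running:

// Assumed key names; check your parquet-hadoop version before relying on them.
sc.hadoopConfiguration.setInt("parquet.page.size", blockSize)            // page size <= block size
sc.hadoopConfiguration.setInt("parquet.dictionary.page.size", blockSize) // kept equal to the others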
It looks like Spark allocates one block in memory for every Parquet partition you write, so if you are writing a large number of Parquet partitions you can quickly hit OutOfMemory errors.
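If you do run into that, one mitigation is to reduce the number of output partitions before writing. A minimal sketch, assuming a DataFrame df and a placeholder output path (both are yours to substitute), using the DataFrameWriter API available in 1.4.0:

// Fewer output partitions means fewer Parquet blocks buffered in memory at once.
// The partition count and path below are placeholders, not recommendations.
df.coalesce(32)
  .write.parquet("hdfs:///path/to/output")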