When I join two DataFrames, I get the following error:
```
org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 1
Serialization trace:
values (org.apache.spark.sql.catalyst.expressions.GenericRow)
otherElements (org.apache.spark.util.collection.CompactBuffer).
To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:253)
    at org.apache.spark.sql.execution.SparkSqlSerializer$$anonfun$serialize$1.apply(SparkSqlSerializer.scala:90)
    at org.apache.spark.sql.execution.SparkSqlSerializer$$anonfun$serialize$1.apply(SparkSqlSerializer.scala:89)
    at org.apache.spark.sql.execution.SparkSqlSerializer$.acquireRelease(SparkSqlSerializer.scala:82)
    at org.apache.spark.sql.execution.SparkSqlSerializer$.serialize(SparkSqlSerializer.scala:89)
    at org.apache.spark.sql.execution.joins.GeneralHashedRelation.writeExternal(HashedRelation.scala:65)
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1458)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1429)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:203)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:102)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:85)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1289)
```
So how do I increase spark.kryoserializer.buffer.max on Databricks Cloud? The configuration guide at http://spark.apache.org/docs/latest/configuration.html explains the property itself, but not how to set it on Databricks Cloud.
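For reference, in a standalone Spark application I would set this on the SparkConf before creating the SparkContext. Below is a minimal sketch of that; the "512m" value and the app name are just examples, not recommendations:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("join-example") // hypothetical app name, for illustration only
  // Raise the max Kryo buffer above the 64m default; it must be large enough
  // to hold the biggest single object being serialized (here, the broadcast
  // hash relation built for the join). "512m" is an arbitrary example value.
  .set("spark.kryoserializer.buffer.max", "512m")

val sc = new SparkContext(conf)
```

As far as I know, spark.kryoserializer.buffer.max has to be in place before the SparkContext starts, so it cannot be changed from a notebook on an already-running cluster. That is why I am asking how to pass it to the cluster that Databricks Cloud manages for me.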