I get a `PythonException: float() argument must be a string or a number, not 'NoneType'` when attempting to save a DataFrame as a Delta table.
Here's the line of code that I am running:
```
df.write.format("delta").saveAsTable("schema1.df_table", mode="overwrite")
```
And the schema of `df` (from `df.printSchema()`):
```
root
 |-- ts: timestamp (nullable = true)
 |-- _source: string (nullable = true)
 |-- lat: decimal(9,6) (nullable = true)
 |-- lng: decimal(9,6) (nullable = true)
 |-- id: string (nullable = false)
 |-- mm-yyyy: date (nullable = true)
 |-- hid: string (nullable = true)
```
The full error:
```
PythonException: An exception was thrown from the Python worker.
Please see the stack trace below.
'TypeError: float() argument must be a string or a number, not 'NoneType'', from , line 3.
Full traceback below:
Traceback (most recent call last):
  File "", line 3, in
TypeError: float() argument must be a string or a number, not 'NoneType'
```
How do I handle `None` values in a Spark DataFrame? I'd like to identify which rows/columns contain `None` and drop those rows.