- 2036 Views
- 2 replies
- 0 kudos
https://docs.microsoft.com/en-us/azure/databricks/_static/notebooks/deep-learning/tfrecords-save-load.html
I could not run Cell # 2: java.lang.ClassNotFoundException:
---------------------------------------------------------------------------
Py4JJ...
Latest Reply
Hi @THIAM HUAT TAN, which DBR version are you using? Are you using the ML runtime?
1 More Replies
by
Raie
• New Contributor III
- 9798 Views
- 3 replies
- 4 kudos
What I am doing:
spark_df = spark.createDataFrame(dfnew)
spark_df.write.saveAsTable("default.test_table", index=False, header=True)
This automatically detects the datatypes and is working right now. BUT, what if the datatype cannot be detected or detect...
Latest Reply
Just create the table earlier and set the column types (CREATE TABLE ... LOCATION ( path path)). In the dataframe you need to have the corresponding data types, which you can produce with cast syntax; your syntax is just incorrect. Here is an example of the correct syntax: from p...
2 More Replies
by
Jeff1
• Contributor II
- 2755 Views
- 3 replies
- 5 kudos
Community, I’ve been struggling with utilizing the R language in Databricks, and after reading “Mastering Spark with R,” I believe my initial problems stemmed from not understanding the difference between Spark DataFrames and R DataFrames within the databric...
Latest Reply
Since Spark dataframes are processed in a distributed way on the workers, it is better to just use Spark dataframes. Additionally, collect() executes on the driver and pulls the whole dataset into memory, so it shouldn't be used in production.
2 More Replies