Data Engineering

Forum Posts

kll
by New Contributor III
  • 6685 Views
  • 3 replies
  • 0 kudos

Python multiprocessing and the Databricks architecture - under the hood.

I am curious what is going on under the hood when using the `multiprocessing` module to parallelize a function call and apply it to a Pandas DataFrame along the row axis. Specifically, how does it work with the Databricks architecture / compute. My cluster ...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Keval Shah: When using the multiprocessing module in Python to parallelize a function call and apply it to a Pandas DataFrame along the row axis, the following happens under the hood: the Pool object is created with the specified number of processes...
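As a companion to this answer, a minimal sketch of the pattern; the per-row function and column name are placeholders. The key point (known Databricks behaviour) is that multiprocessing forks workers on the driver node only, so the cluster's executors take no part in this parallelism.

import multiprocessing as mp
import pandas as pd

def score_row(row):
    # Hypothetical per-row computation
    return row["x"] * 2

df = pd.DataFrame({"x": range(100)})

# Pool forks worker processes on the *driver* node only; Spark executors
# on other nodes of the cluster are not involved.
with mp.Pool(processes=4) as pool:
    df["y"] = pool.map(score_row, df.to_dict("records"))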

2 More Replies
Zoumana
by New Contributor II
  • 9991 Views
  • 10 replies
  • 4 kudos

Resolved! How to get probability score for each prediction from mlflow

I trained my model and was able to get the batch prediction from that model as specified below. But I want to also get the probability scores for each prediction. Do you have any idea? Thank you!

logged_model = path_to_model
# Load model as a PyFuncMod...

Latest Reply
OndrejHavlicek
New Contributor III
  • 4 kudos

Now you can log the model using this parameter:

mlflow.sklearn.log_model(
    ...,  # the usual params
    pyfunc_predict_fn="predict_proba"
)

which apparently will return probabilities for the first class when using the model for inference (e.g. when...
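For context, a self-contained sketch of that parameter end to end; the toy model and run layout are assumptions, but mlflow.sklearn.log_model does accept pyfunc_predict_fn:

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)
model = LogisticRegression().fit(X, y)

with mlflow.start_run() as run:
    # pyfunc predict() will now call predict_proba under the hood
    mlflow.sklearn.log_model(model, "model", pyfunc_predict_fn="predict_proba")

loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
probabilities = loaded.predict(X)  # per-row class probabilities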

9 More Replies
Vindhya
by New Contributor II
  • 1081 Views
  • 2 replies
  • 0 kudos

Dataframes-to-Pandas conversion step is failing with exception "java.lang.IndexOutOfBoundsException: index: 16384, length: 4 (expected: range(0, 16384))"

The Dataframes-to-Pandas conversion step is failing with the exception "java.lang.IndexOutOfBoundsException: index: 16384, length: 4 (expected: range(0, 16384))". PFB screenshot for more details.

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Vindhya D, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...
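No fix is recorded in this thread, but this exception typically surfaces in the Arrow-based toPandas() path, so a common hedged workaround is to disable Arrow and retry (df stands for the DataFrame from the question):

# Assumption: the exception comes from the Arrow-based conversion path.
# Falling back to the non-Arrow path is slower but often avoids the error.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "false")
pdf = df.toPandas()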

1 More Replies
afzi
by New Contributor II
  • 1219 Views
  • 1 reply
  • 1 kudos

Pandas DataFrame error when using to_csv

Hi Everyone, I would like to write a Pandas DataFrame to /dbfs/FileStore/ using the to_csv method. Usually it would just write the DataFrame to the path described, but it has been giving me "FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStor...

Latest Reply
Avinash_94
New Contributor III
  • 1 kudos

# Open a file through the /dbfs FUSE mount (note the /dbfs prefix)
f = open("/dbfs/mnt/blob/myNames.txt", "r")
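That snippet only verifies the mount is readable. A fuller hedged sketch (path and directory are hypothetical): the usual cause of this FileNotFoundError is that the parent directory does not exist on the /dbfs FUSE mount, so create it first:

import os
import pandas as pd

# Create the target directory on the FUSE mount before writing
os.makedirs("/dbfs/FileStore/my_dir", exist_ok=True)  # hypothetical directory

df = pd.DataFrame({"name": ["alice", "bob"]})
df.to_csv("/dbfs/FileStore/my_dir/names.csv", index=False)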

kll
by New Contributor III
  • 2892 Views
  • 1 reply
  • 1 kudos

Resolved! OSError: Invalid argument when attempting to save a pandas dataframe to csv

I am attempting to save a pandas DataFrame as csv to a directory I created in the Databricks workspace or in the `cwd`.

import pandas as pd
import os

df.to_csv("data.csv", index=False)
df.to_csv(str(os.getcwd()) + "/data.csv", index=False)
...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 1 kudos

Hi @Keval Shah, you can save your DataFrame to csv in DBFS storage. Please refer to the code below; it might help you:

from io import StringIO
import pandas as pd

df = pd.read_csv(StringIO(data), sep=',')
# print(df)
df.to_csv('/dbfs/FileStore/ajay/file1.txt')

Chhaya
by New Contributor III
  • 1003 Views
  • 3 replies
  • 2 kudos

Bamboolib with Databricks

Hi Everyone, I am wondering if anyone has experience using the bamboolib library within Databricks. I am currently using it for a client to display table data on the UI and potentially allow users to edit existing rows and insert new ones. While I hav...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hi @Chhaya Vishwakarma, I'm sorry you could not find a solution to your problem in the answers provided. Our community strives to provide helpful and accurate information, but sometimes an immediate solution may only be available for some issues. I sug...

2 More Replies
tinendra
by New Contributor III
  • 1780 Views
  • 7 replies
  • 8 kudos

Can we run pandas DataFrames inside Databricks?

Hi, I want to run df = pd.read_csv('/dbfs/FileStore/airlines1.csv'), but while trying to run it I get an error like FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/FileStore/airlines1.csv'. Could you please help me out with how to run a pandas DataFrame in...
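One hedged way to debug this (not from the thread): confirm the file exists via the Spark-side dbfs:/ path, then read it through the /dbfs FUSE mount with plain pandas. Note the FUSE mount is unavailable on some cluster types (e.g. Community Edition).

# List with the Spark API (dbfs:/ prefix) to confirm the file is there
display(dbutils.fs.ls("dbfs:/FileStore/"))

# Then read through the FUSE mount (/dbfs prefix) with plain pandas
import pandas as pd
df = pd.read_csv("/dbfs/FileStore/airlines1.csv")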

Latest Reply
Anonymous
Not applicable
  • 8 kudos

Hi @Tinendra Kumar, hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Tha...

6 More Replies
jonathan-dufaul
by Valued Contributor
  • 1383 Views
  • 4 replies
  • 5 kudos

Why is writing to MSSQL Server 12.0 so slow directly from Spark, but nearly instant when I write to a csv and read it back?

I have a dataframe that inexplicably takes forever to write to an MS SQL Server even though other dataframes, even much larger ones, write nearly instantly. I'm using this code:

my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)
    .optio...

Latest Reply
yueyue_tang
New Contributor II
  • 5 kudos

I am running into the same problem and don't know how to write a DataFrame to MS SQL Server quickly.
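Neither reply shows the usual mitigation, so here is a hedged sketch: batching and parallelising the JDBC write with the standard Spark JDBC options batchsize and numPartitions (sqlsUrl comes from the question; the table name is a placeholder):

(my_dataframe.write.format("jdbc")
    .option("url", sqlsUrl)             # placeholder from the question
    .option("dbtable", "dbo.my_table")  # hypothetical target table
    .option("batchsize", 10000)         # rows per JDBC round trip
    .option("numPartitions", 8)         # parallel connections
    .mode("append")
    .save())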

3 More Replies
databicky
by Contributor II
  • 9245 Views
  • 12 replies
  • 4 kudos
Latest Reply
FerArribas
Contributor
  • 4 kudos

Hi @Hubert Dudek, the Pandas API doesn't support the abfss protocol. You have three options:
  • If you need to use pandas, you can write the excel file to the local file system (dbfs) and then move it to ABFSS (for example with dbutils), as sketched below
  • Write as csv directly in abfss...
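A hedged sketch of the first option (paths, container and account names are placeholders; to_excel needs openpyxl installed):

import pandas as pd

pdf = pd.DataFrame({"a": [1, 2]})

# 1. Write through the driver-local /dbfs FUSE mount
pdf.to_excel("/dbfs/tmp/report.xlsx", index=False)  # hypothetical path

# 2. Move the file to ABFSS with dbutils
dbutils.fs.mv(
    "dbfs:/tmp/report.xlsx",
    "abfss://container@account.dfs.core.windows.net/report.xlsx",
)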

11 More Replies
Mado
by Valued Contributor II
  • 3842 Views
  • 6 replies
  • 2 kudos

Resolved! How to see if condition is True / False for all rows in a DataFrame?

Assume that I have a Spark DataFrame, and I want to see if records satisfy a condition. Example dataset:

# Prepare data
data = [('A', 1),
        ('A', 2),
        ('B', 3)]

# Create DataFrame
columns = ['col_1', 'col_2']
df = spark.createDataF...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 2 kudos

Hi, you can use the display() or show() function; that will provide the expected results.
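display()/show() only let you eyeball the rows; if a single boolean is wanted, a hedged alternative (not from this reply) is to count the rows that violate the condition:

from pyspark.sql import functions as F

data = [('A', 1), ('A', 2), ('B', 3)]
df = spark.createDataFrame(data, ['col_1', 'col_2'])

condition = F.col("col_2") > 0
# True exactly when no row violates the condition
all_rows_satisfy = df.filter(~condition).count() == 0
print(all_rows_satisfy)  # True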

5 More Replies
Mado
by Valued Contributor II
  • 2742 Views
  • 4 replies
  • 2 kudos

Resolved! Pandas API on Spark, Does it run on a multi-node cluster?

Hi, I have a few questions about "Pandas API on Spark". Thanks for taking the time to read my questions.
1) Is the input to these functions a Pandas DataFrame or a PySpark DataFrame?
2) When I use any pandas function (like isna, size, apply, where, etc.), does it ru...

Latest Reply
Debayan
Esteemed Contributor III
  • 2 kudos

Hi @Mohammad Saber, a pandas dataset lives on a single machine, and is naturally iterable locally within that machine. However, a pandas-on-Spark dataset lives across multiple machines, and these are computed in a distributed manner. It is difficu...
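A small sketch of that distinction using pyspark.pandas (assumed available, as it ships with recent runtimes):

import pyspark.pandas as ps

psdf = ps.DataFrame({"x": [1.0, None, 3.0]})

# Operations on a pandas-on-Spark DataFrame are planned and executed
# by Spark across the cluster, not on the driver alone
print(psdf.isna().sum())

# to_pandas() collects everything to the driver; from here on it is
# plain single-machine pandas
pdf = psdf.to_pandas()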

3 More Replies
turagittech
by New Contributor
  • 4539 Views
  • 2 replies
  • 1 kudos

PYODBC very slow - 30 minutes to write 6000 rows

Along with several other issues I'm encountering, I am finding pandas DataFrame to_sql to be very slow. I am writing to an Azure SQL database and performance is woeful. This is a test database; it has S3 100 DTU and one user, me, as its configuratio...
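The replies below don't include a fix, but row-at-a-time inserts are the usual culprit; a hedged sketch using SQLAlchemy's fast_executemany flag for pyodbc (connection string and table name are placeholders):

from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:password@server.database.windows.net/mydb"
    "?driver=ODBC+Driver+17+for+SQL+Server",  # placeholder connection string
    fast_executemany=True,  # batch the INSERTs instead of one round trip per row
)
df.to_sql("my_table", engine, if_exists="append", index=False, chunksize=1000)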

Latest Reply
Vidula
Honored Contributor
  • 1 kudos

Hi @Peter McLarty, does @Debayan Mukherjee's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? We'd love to hear from you. Thanks!

1 More Replies
Rahul_Samant
by Contributor
  • 4548 Views
  • 4 replies
  • 5 kudos

Resolved! High Concurrency Passthrough Cluster: pyarrow optimization not working while converting to pandas df

I need to convert a Spark DataFrame to a pandas DataFrame with arrow optimization:

spark.conf.set("spark.sql.execution.arrow.enabled", "true")
data_df = df.toPandas()

but I am getting one of the below errors randomly while doing so: Exception: arrow is not support...

Latest Reply
AlexanderBij
New Contributor II
  • 5 kudos

Can you confirm this is a known issue? I am running into the same issue; here is an example to test in one cell:

# using Arrow fails on a High Concurrency cluster with passthrough in runtime 10.4 (and 10.5 and 11.0)
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled",...
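For completeness, a hedged reconstruction of such a one-cell test (an assumption, since the snippet above is truncated):

# Arrow-backed conversion; per this thread it fails on High Concurrency
# clusters with credential passthrough on runtimes 10.4-11.0
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
pdf = spark.range(10).toPandas()
print(pdf.shape)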

3 More Replies
Dicer
by Valued Contributor
  • 10489 Views
  • 13 replies
  • 13 kudos

Resolved! Failed to convert Spark.sql to Pandas Dataframe using .toPandas()

I wrote the following code:

data = spark.sql("SELECT A_adjClose, AA_adjClose, AAL_adjClose, AAP_adjClose, AAPL_adjClose FROM deltabase.a_30min_delta, deltabase.aa_30min_delta, deltabase.aal_30min_delta, deltabase.aap_30min_delta, deltabase.aapl_30m...

Latest Reply
Dicer
Valued Contributor
  • 13 kudos

I just discovered a solution. Today, I opened Azure Databricks, and when I imported Python libraries, Databricks told me that toPandas() was deprecated and suggested that I use toPandas. The following solution works: use toPandas instead of toPandas() da...

12 More Replies