by tanjil • New Contributor III
- 3034 Views
- 4 replies
- 2 kudos
Hello, I have the following minimal working example using multiprocessing:

from multiprocessing import Pool
files_list = [('bla', 1, 3, 7), ('spam', 12, 4, 8), ('eggs', 17, 1, 3)]
def f(t):
    print('Hello from child process', flush = Tr...
Latest Reply
No errors are generated and the code executes successfully, but the print statement for "Hello from child process" produces no output.
3 More Replies
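One common explanation is that stdout from worker processes is not wired back to the notebook cell, while the parent process's stdout is. A workaround is to return values from the workers and print them from the driver. A minimal sketch (the body of f is hypothetical, since the original is truncated in the preview):

```python
from multiprocessing import Pool

files_list = [('bla', 1, 3, 7), ('spam', 12, 4, 8), ('eggs', 17, 1, 3)]

def f(t):
    # Return a value instead of printing: child-process stdout may not be
    # forwarded to the notebook, but return values always come back.
    name, a, b, c = t
    return (name, a + b + c)

if __name__ == '__main__':
    with Pool(3) as pool:
        results = pool.map(f, files_list)
    # Printed from the parent process, so it shows up in the cell output.
    print(results)
```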
- 8697 Views
- 6 replies
- 3 kudos
Assume we have a given cell:

print('A')
print('B')
print('C')

I want to run only the below line:

print('B')

Obviously, I can separate the cell into three and run the one I want, but this is time-consuming. This is a feature I use so often (e.g. in PyCharm) and wo...
Latest Reply
@Volkan_Gumuskay This is also available as an option in the notebook run options.
5 More Replies
- 2954 Views
- 3 replies
- 0 kudos
x = [1, 2, 3, 4, 5, 6, 7]
rdd = sc.parallelize(x)
print(rdd.take(2))

Traceback (most recent call last):
  File "/usr/local/spark/python/pyspark/serializers.py", line 458, in dumps
    return cloudpickle.dumps(obj, pickle_protocol)
    ^^^^^^^^^^^^^^^^^^...
Latest Reply
Hi @Shelly Bhardwaj, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Th...
2 More Replies
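The traceback comes from cloudpickle failing to serialize something the task captured, not from the list itself: a plain list like x pickles without trouble, and the usual culprit is an object tied to local process state that sneaks into the closure. A stdlib sketch of the distinction (no Spark session needed, so threading.Lock stands in for the kind of unpicklable object involved):

```python
import pickle
import threading

# Plain data serializes fine -- roughly what Spark does when it ships
# a parallelized collection to the executors.
x = [1, 2, 3, 4, 5, 6, 7]
payload = pickle.dumps(x)
print(pickle.loads(payload))  # [1, 2, 3, 4, 5, 6, 7]

# An object bound to local process state cannot be pickled:
try:
    pickle.dumps(threading.Lock())
except TypeError as err:
    print('not picklable:', err)
```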
by Callum • New Contributor II
- 12650 Views
- 3 replies
- 2 kudos
So, I have this code for merging dataframes with pyspark pandas. And I want the index of the left dataframe to persist throughout the joins. So following suggestions from others wanting to keep the index after merging, I set the index to a column bef...
Latest Reply
Hi! I tried debugging your code, and I think the error you get is simply because the column exists in two instances of your dataframe within your loop. I tried adding some extra debug lines to your merge_dataframes function, and after executing that...
2 More Replies
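The usual pattern for keeping the left index across pyspark.pandas merges is reset_index() before the join, merge on the key, then set_index() back afterwards; the saved index column also needs a name that no later join reuses, or it will collide exactly as the reply describes. A plain-Python sketch of that bookkeeping (dict rows stand in for DataFrames, since a Spark session isn't assumed here; 'orig_idx' plays the role of the reset_index() column):

```python
# 'orig_idx' preserves the left frame's index through the join,
# mimicking reset_index() -> merge -> set_index('orig_idx').
left = [{'orig_idx': i, 'key': k, 'val': v}
        for i, (k, v) in enumerate([('a', 1), ('b', 2), ('c', 3)])]
right = {'a': 'x', 'c': 'z'}  # key -> extra column, like the right frame

# Left join on 'key': unmatched rows get None, and 'orig_idx' survives.
merged = [{**row, 'extra': right.get(row['key'])} for row in left]
```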
- 5215 Views
- 2 replies
- 1 kudos
Hello Team, I am trying to run Salesforce and extract the data. At that time I am facing the below issue:

SOURCE_SYSTEM_NAME = 'Salesforce'
TABLE_NAME = 'XY'
desc = eval("sf." + TABLE_NAME + ".describe()")
print(desc)
for field in desc['fields']...
Latest Reply
Hi @Rohit Kulkarni, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Tha...
1 More Reply
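A safer alternative to building the call with eval is getattr, which looks the attribute up by name directly and avoids evaluating an arbitrary string. The sf object below is a hypothetical stand-in (the real one would be the Salesforce client); the getattr pattern is the part that matters:

```python
# Hypothetical stand-ins for the Salesforce client and a table object.
class _Table:
    def describe(self):
        return {'fields': [{'name': 'Id'}, {'name': 'Name'}]}

class _Client:
    XY = _Table()

sf = _Client()
TABLE_NAME = 'XY'

# getattr(sf, TABLE_NAME) replaces eval("sf." + TABLE_NAME + ".describe()")
desc = getattr(sf, TABLE_NAME).describe()
field_names = [field['name'] for field in desc['fields']]
print(field_names)  # ['Id', 'Name']
```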
- 4183 Views
- 7 replies
- 0 kudos
I am able to load data for a single container by hard-coding, but not able to load from multiple containers. I used a for loop, but the data frame ends up holding only the last container's last folder record. One more issue is that I have to flatten the data, and when I ...
Latest Reply
For sure the function (def) should be declared outside the loop; move it up, after the library imports. The logic is a bit complicated, so you need to debug it using display(Flatten_df2) (or .show()) and validate the JSON after each iteration (using break or sleep, etc.).
6 More Replies
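The "only the last record survives" symptom usually means the loop body reassigns the dataframe each iteration instead of accumulating across iterations. A stdlib sketch of the fix, with a hypothetical load_container standing in for the per-container read (with Spark you would collect the per-container DataFrames and union them instead):

```python
def load_container(name):
    # Hypothetical loader: returns the records found in one container.
    return [f'{name}/record-{i}' for i in range(2)]

containers = ['container-a', 'container-b', 'container-c']

all_records = []  # accumulate across iterations...
for c in containers:
    # ...instead of `all_records = load_container(c)`, which would
    # overwrite the previous containers and keep only the last one.
    all_records.extend(load_container(c))

print(len(all_records))  # 6
```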