Hi @kazinahian,
I believe what you're looking for is the .withColumn() DataFrame method in PySpark. It lets you add a new column computed from existing columns: https://docs.databricks.com/en/pyspark/basics.html#create-columns
Best
Hi @Phani1,
Unfortunately, there isn't a way to run cells in a notebook simultaneously. But since your use case needs parallel execution of code, you can configure a Databricks Workflow with multiple tasks running concurrently: https://learn.mi...
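As a rough sketch of what that looks like in a Jobs API payload (job name, task keys, and notebook paths here are hypothetical), tasks that declare no depends_on relationship on each other are eligible to start in parallel:

```json
{
  "name": "parallel-example",
  "tasks": [
    {
      "task_key": "task_a",
      "notebook_task": { "notebook_path": "/Workspace/Users/me/notebook_a" }
    },
    {
      "task_key": "task_b",
      "notebook_task": { "notebook_path": "/Workspace/Users/me/notebook_b" }
    }
  ]
}
```

You can build the same structure in the Workflows UI by adding tasks without dependencies between them.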
Hi @thiagoawstest,
Please reach out to your Account Executive or Solutions Architect. They will be able to help you with the login issue you're experiencing.
Best
Hi @databricks98,
It seems like there is some issue connecting to your Azure account. Were there any recent changes to firewalls, permissions, or cluster configurations? Could you please check to make sure that the connection between Databricks and ...
Hi @mangosta,
Our 'Notebook outputs and results' documentation references a 60,000-row truncation limit for query results (ref: https://docs.gcp.databricks.com/en/notebooks/notebook-outputs.html#:~:text=If%20the%20data%20returned%20is%20truncated), but this s...