Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Databricks Jobs do not run on job compute but on shared compute

RobinK
Contributor

Hello,
Since last night, none of our ETL jobs in Databricks have been running, although we have not made any code changes.

The identical jobs (deployed with Databricks asset bundles) run on an all-purpose cluster, but fail on a job cluster. We have not changed anything in the cluster configuration. The Databricks runtime version is also identical (14.3 LTS (includes Apache Spark 3.5.0, Scala 2.12)). We have also compared the code and double-checked the configurations.
What could be the reason for the jobs failing without us having made any changes? Have there been changes to Databricks that cause this?
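In case it is useful, a quick way to double-check from inside a job that both clusters really report the same runtime could look like this (just a sketch; DATABRICKS_RUNTIME_VERSION is the environment variable Databricks sets on cluster nodes):

import os

# Databricks sets DATABRICKS_RUNTIME_VERSION on cluster nodes, e.g. "14.3"
print(os.environ.get("DATABRICKS_RUNTIME_VERSION"))
# Spark version reported by the active session, e.g. "3.5.0"
print(spark.version)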

Error messages:
[NOT_COLUMN] Argument `col` should be a Column, got Column.
[SESSION_ALREADY_EXIST] Cannot start a remote Spark session because there is a regular Spark session already running.

Does anyone else have problems with jobs?

Best regards
Robin

1 ACCEPTED SOLUTION

ha2983
New Contributor II

Switching from

spark = DatabricksSession.builder.getOrCreate()
to
spark = SparkSession.builder.getOrCreate()
solved the issue. Strange exception nonetheless.


12 REPLIES

dbruehlmeier
Contributor

Hi Robin

Do you use Databricks Connect to create the Spark session?

from databricks.connect import DatabricksSession
spark = DatabricksSession.builder.getOrCreate()
We are facing the same issue on single-user access clusters and in jobs.

Kaniz
Community Manager

@Kaniz: We use exactly your second solution, and we get the same issue:

from databricks.connect import DatabricksSession
spark = DatabricksSession.builder.getOrCreate()

from pyspark.sql.types import StructType, StructField, StringType, DoubleType

schema = StructType([StructField('category', StringType(), True), StructField('weight', DoubleType(), True)])
data_source = "abfss://......_index_v01_??????_????????.csv"

df = (spark.read.format("csv")
.options(**{'header': 'true'})
.schema(schema)
.load(data_source))

 

[SESSION_ALREADY_EXIST] Cannot start a remote Spark session because there is a regular Spark session already running.

File /databricks/spark/python/pyspark/instrumentation_utils.py:47, in _wrap_function.<locals>.wrapper(*args, **kwargs)
     45 start = time.perf_counter()
     46 try:
---> 47     res = func(*args, **kwargs)
     48     logger.log_success(
     49         module_name, class_name, function_name, time.perf_counter() - start, signature
     50     )
     51     return res

File /databricks/spark/python/pyspark/sql/readwriter.py:150, in DataFrameReader.schema(self, schema)
    117 """Specifies the input schema.
    118
    119 Some data sources (e.g. JSON) can infer the input schema automatically from data.
    (...)
    146  |-- col1: double (nullable = true)
    147 """
    148 from pyspark.sql import SparkSession
--> 150 spark = SparkSession._getActiveSessionOrCreate()
    151 if isinstance(schema, StructType):
    152     jschema = spark._jsparkSession.parseDataType(schema.json())

File /databricks/spark/python/pyspark/sql/session.py:1265, in SparkSession._getActiveSessionOrCreate(**static_conf)
   1263 for k, v in static_conf.items():
   1264     builder = builder.config(k, v)
-> 1265 spark = builder.getOrCreate()
   1266 return spark

File /databricks/spark/python/pyspark/sql/session.py:521, in SparkSession.Builder.getOrCreate(self)
    519     return RemoteSparkSession.builder.config(map=opts).getOrCreate()
    520 else:
--> 521     raise PySparkRuntimeError(
    522         error_class="SESSION_ALREADY_EXIST",
    523         message_parameters={},
    524     )
    526 session = SparkSession._instantiatedSession
    527 if session is None or session._sc._jsc is None:
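In case it helps with debugging, a quick way to see which kind of session the code actually gets is to inspect its class (a small diagnostic sketch; the module names below are plain PySpark, nothing Databricks-specific):

# A classic session comes from pyspark.sql.session, while a Spark Connect /
# Databricks Connect session comes from pyspark.sql.connect.session.
print(type(spark))
print(type(spark).__module__)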

ha2983
New Contributor II

Switching from

spark = DatabricksSession.builder.getOrCreate()
to
spark = SparkSession.builder.getOrCreate()
solved the issue. Strange exception nonetheless.

Yes, I did the same. However, this means we have to switch the code between local development (VS Code) and runs on Databricks (Jobs/Workflows).

@Kaniz : Could you check this new issue? 
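For now, one way to keep a single code path for local development (VS Code with Databricks Connect) and cluster runs might be something like this (just a sketch on my side, not an official recommendation):

from pyspark.sql import SparkSession


def get_spark():
    # Reuse an already-running classic session if there is one (e.g. on a
    # Databricks cluster); otherwise fall back to Databricks Connect for
    # local runs. Sketch only -- adjust to your setup.
    active = SparkSession.getActiveSession()
    if active is not None:
        return active
    from databricks.connect import DatabricksSession
    return DatabricksSession.builder.getOrCreate()


spark = get_spark()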

ha2983
New Contributor II

This Notebook can be used to recreate the issue:

import pandas as pd
from databricks.connect import DatabricksSession
from pyspark.sql.functions import current_timestamp

spark = DatabricksSession.builder.getOrCreate()


# Create a pandas DataFrame
data = {
    "Name": ["John", "Alice", "Bob"],
    "Age": [25, 30, 35],
    "City": ["New York", "San Francisco", "Los Angeles"],
}
df = pd.DataFrame(data)

# Convert pandas DataFrame to Spark DataFrame
spark_df = spark.createDataFrame(df)


spark_df = spark_df.withColumn("_loaded_at", current_timestamp())

spark_df.show()

I used Databricks Runtime 14.3 LTS with single-user access mode.

RobinK
Contributor

@ha2983 I can confirm that I can recreate the issue with your notebook.

In my case the error [NOT_COLUMN] Argument `col` should be a Column, got Column. occurs when calling

.withColumn("IngestionTimestamp", unix_timestamp())

on a DataFrame.
 
I can reproduce this error using the example from https://spark.apache.org/docs/3.5.0/api/python/reference/pyspark.sql/api/pyspark.sql.functions.unix_... and a single user cluster (DBR 14.3 LTS):
 
from pyspark.sql.functions import unix_timestamp

spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
time_df = spark.createDataFrame([('2015-04-08',)], ['dt'])
time_df.select(unix_timestamp('dt', 'yyyy-MM-dd').alias('unix_time')).collect()
spark.conf.unset("spark.sql.session.timeZone")

>>> [NOT_COLUMN_OR_STR] Argument `col` should be a Column or str, got Column.

 On a shared cluster the code above works.

@dbruehlmeier we are also using VS Code for local development and create our Spark session like this:

 

from databricks.connect import DatabricksSession
spark = DatabricksSession.builder.getOrCreate()

RobinK
Contributor

Update:

Removing the following code from all of our notebooks fixed the error:

from databricks.connect import DatabricksSession
spark = DatabricksSession.builder.getOrCreate()
 
I have found a line about SparkSessions in the change logs of databrick-connect: https://docs.databricks.com/en/release-notes/dbconnect/index.html#databricks-connect-1432-python
 
But this still doesn't answer the question of why the error occurred in an environment that did not change at all for us (same DBR version, same cluster, same code).
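To at least see which databricks-connect version a cluster actually ships, something like this can be run in a notebook cell (a small sketch; databricks-connect is the pip distribution name):

import importlib.metadata

# Print the databricks-connect version installed on the cluster, if any.
try:
    print(importlib.metadata.version("databricks-connect"))
except importlib.metadata.PackageNotFoundError:
    print("databricks-connect is not installed on this cluster")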
 
@Kaniz maybe you could research whether there were any internal updates?

dbx-user7354
New Contributor III

We are experiencing the exact same issues, but we do not even create the Spark session explicitly. Are there any other fixes for this?

UniBart
New Contributor II

Hello,

We are also experiencing the same error message [NOT_COLUMN] Argument `col` should be a Column, got Column.
This occurs when a workflow is run as a task from another workflow, but not when that workflow is run on its own, i.e. not triggered by another workflow. The problem seems to be connected to the Databricks Runtime: on 14.3 LTS the workflow fails with this error. As a temporary workaround we switched the job clusters to Runtime 13.3 LTS, which seems to be working.

Any update on this bug is highly appreciated as it affects our production environment.

Best regards
Markus

Attol8
New Contributor II

We just had the exact same issue and it broke all our jobs in production; any update on this bug would be appreciated. We had failures on Databricks Runtime 15.1 and fixed them by moving all the jobs' clusters to 15.2.

jcap
New Contributor II

I do not believe this is solved, similar to a comment over here:

https://community.databricks.com/t5/data-engineering/databrickssession-broken-for-15-1/td-p/70585

We are also seeing this error in 14.3 LTS from a simple example:

from pyspark.sql.functions import col

df = spark.table('things')
things = df.select(col('thing_id')).collect()

[NOT_COLUMN_OR_STR] Argument `col` should be a Column or str, got Column.
