ValueError: not enough values to unpack (expected 2, got 1)
04-12-2023 03:11 AM
Code:
Writer.jdbc_writer("Economy",economy,conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy'])The problem arises when i try to run the code, in the specified databricks notebook, An error of "ValueError: not enough values to unpack (expected 2, got 1)",
here's the full error message:
ValueError: not enough values to unpack (expected 2, got 1)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<command-75945229> in <cell line: 1>()
----> 1 Writer.jdbc_writer("Economy",economy,conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy'])
2
3
<command-75945229> in jdbc_writer(table_name, df, conf, debug, modified_by)
15 conf = conf.to_dict()
16
---> 17 schema, table = table_name.split('.')
18 schema = schema[1:-1] if schema[0] == "[" else schema
 19 table = table[1:-1] if table[0] == "[" else table

And when I clicked the cell, this is the line of code:
class Writer:
    @staticmethod
    def jdbc_writer(table_name: str,
                    df: SparkDataFrame,
                    conf: Union[dict, SqlConnect],
                    debug=False,
                    modified_by=None,
                    ) -> None:

I have searched for solutions to this particular problem but never seem to find one, and your help would really benefit me.
- Labels: Databricks notebook, Python
04-12-2023 03:39 AM
Hello, thank you for reaching out to us.
This looks like a general error message. Can you please share the Runtime version of the cluster you are running the notebook on? You can find this detail under the cluster configuration.
Also, have you checked this article?
04-12-2023 06:24 PM
Hi, this is the runtime version:
11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12)
04-17-2023 06:32 AM
@Jillinie Park :
The error message you are seeing ("ValueError: not enough values to unpack (expected 2, got 1)") occurs when you try to unpack an iterable that yields fewer values than the number of variables on the left-hand side. In your case, the error is happening on this line of code:

schema, table = table_name.split('.')

Here, you are trying to unpack the result of the split() method into two variables (schema and table), but split() is returning only one value instead of two. To fix this error, check that table_name contains a dot (.) character before calling split(). If it doesn't, you can handle the error accordingly (e.g. raise an exception or return an error message).
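The unpacking behavior is easy to reproduce in isolation (a minimal sketch, independent of Databricks):

```python
# str.split('.') returns a list; tuple unpacking requires the list
# length to match the number of variables on the left-hand side.
parts = "dbo.Economy".split('.')
print(parts)            # ['dbo', 'Economy'] -> two values, unpacking works
schema, table = parts

# With no dot, split() returns a single-element list and unpacking fails:
try:
    schema, table = "Economy".split('.')
except ValueError as e:
    print(e)            # not enough values to unpack (expected 2, got 1)
```

This is why passing "Economy" instead of "dbo.Economy" triggers the error inside jdbc_writer().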
Here's an example of how you could modify the jdbc_writer() method to handle this error:
class Writer:
    @staticmethod
    def jdbc_writer(table_name: str,
                    df: SparkDataFrame,
                    conf: Union[dict, SqlConnect],
                    debug=False,
                    modified_by=None) -> None:
        if '.' not in table_name:
            raise ValueError(f"Invalid table name '{table_name}'. Table name should be in the format 'schema.table'.")
        if isinstance(conf, SqlConnect):
            conf = conf.to_dict()
        schema, table = table_name.split('.')
        schema = schema[1:-1] if schema[0] == "[" else schema
        table = table[1:-1] if table[0] == "[" else table
        # rest of the code goes here

In this modified version of the jdbc_writer() method, we first check whether the table_name argument contains a dot (.) character. If it doesn't, we raise a ValueError with a descriptive message. Otherwise, we proceed with the rest of the method as before.
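As a side note, the bracket-stripping lines above index schema[0] directly, which would itself fail on an empty string (e.g. a table name like ".Economy"). A slightly safer variant could be factored out like this (a sketch; strip_brackets is a hypothetical helper name, not part of the original code):

```python
def strip_brackets(name: str) -> str:
    # Remove SQL Server-style [brackets] around an identifier, if present.
    # startswith/endswith are safe on empty strings, unlike name[0].
    return name[1:-1] if name.startswith("[") and name.endswith("]") else name

schema, table = "[dbo].[Economy]".split('.')
print(strip_brackets(schema), strip_brackets(table))  # dbo Economy
```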
05-26-2025 02:53 AM
The error you're encountering, "ValueError: not enough values to unpack (expected 2, got 1)", typically occurs when the code attempts to split a string expecting two parts but gets only one. In your case, `table_name.split('.')` expects a schema and a table name separated by a period (like `"dbo.Economy"`), but you're passing just `"Economy"`, which leads to the failure. Try passing the full table name in the `"schema.table"` format to resolve it.
2 weeks ago
Thank you!
2 weeks ago - last edited 2 weeks ago
The error "ValueError: not enough values to unpack (expected 2, got 1)" usually happens when your code tries to split a string into two parts but only one is found. In this situation, table_name.split('.') expects both a schema and a table name separated by a period (for example, "dbo.Economy"). However, you're only providing "Economy", which causes the error. To fix this, make sure you pass the table name in the "schema.table" format.
2 weeks ago
The error happens because the function expects the table name to include both schema and table separated by a dot. Inside the function it splits the table name using a dot and tries to assign two values. When you pass only Economy, the split returns a single value and Python cannot unpack it, which causes the error. To fix this, pass the table name in schema and table format such as dbo.Economy. Another option is to update the function logic to handle table names without a schema by assigning a default schema when no dot is present. The issue is not related to Databricks or JDBC but to how the table name string is being processed.
Solution:
First option is to pass schema and table together when calling the function. For example use dbo.Economy or any valid schema name instead of just Economy.
Example call
Writer.jdbc_writer("dbo.Economy", economy, conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID["Economy"])

Second option is to make the function more robust by handling table names without a schema and assigning a default schema.
Example change inside jdbc_writer
if "." in table_name:
    schema, table = table_name.split(".", 1)
else:
    schema = "dbo"
    table = table_name

This way the function will work whether you pass Economy or dbo.Economy.
The root cause is not related to Databricks or JDBC itself, it is purely a string split and unpacking issue in the function logic.
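Putting the default-schema fallback together with the bracket handling from earlier in the thread, the parsing step could be isolated into one helper (a sketch; parse_table_name is a hypothetical name, and defaulting to "dbo" is an assumption that fits SQL Server):

```python
def parse_table_name(table_name: str, default_schema: str = "dbo"):
    """Split 'schema.table' into (schema, table), falling back to a default schema."""
    if "." in table_name:
        # maxsplit=1 keeps any further dots inside the table part.
        schema, table = table_name.split(".", 1)
    else:
        schema, table = default_schema, table_name

    def strip(s: str) -> str:
        # Remove optional SQL Server-style [brackets] around an identifier.
        return s[1:-1] if s.startswith("[") and s.endswith("]") else s

    return strip(schema), strip(table)

print(parse_table_name("Economy"))          # ('dbo', 'Economy')
print(parse_table_name("[dbo].[Economy]"))  # ('dbo', 'Economy')
```

With a helper like this, the original call with just "Economy" would no longer raise the unpacking error.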