Hi @lndlzy, a StackOverflowError usually occurs when a program recurses too deeply. In this case, it might be caused by how the FeatureLookup objects passed to FeatureStoreClient.create_training_set are defined or used.
Here are a few things you could check:
• Make sure that the lookup_key and timestamp_lookup_key in each FeatureLookup object match the actual primary keys and timestamp key of the corresponding feature table. A mismatch here can make the lookup join misbehave and may surface as a StackOverflowError.
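As a quick sanity check, you could compare the keys yourself before calling create_training_set. The helper below is a hypothetical sketch: the column lists are plain Python values standing in for what you would pull from your environment (e.g. from spark.table(...).columns and the feature table's declared primary keys).

```python
# Hypothetical helper: verify that a FeatureLookup's lookup_key exists in the
# training DataFrame and that the key count matches the feature table's
# primary-key count. Not part of the Feature Store API.

def check_lookup_keys(df_columns, feature_table_keys, lookup_key):
    """Return a list of problems found for one FeatureLookup (empty = OK)."""
    problems = []
    # lookup_key may be a single column name or a list of them
    keys = [lookup_key] if isinstance(lookup_key, str) else list(lookup_key)
    for key in keys:
        if key not in df_columns:
            problems.append(f"lookup_key '{key}' missing from training DataFrame")
    if len(keys) != len(feature_table_keys):
        problems.append(
            f"lookup_key count ({len(keys)}) does not match the feature "
            f"table's primary-key count ({len(feature_table_keys)})"
        )
    return problems

# Example: a feature table keyed on customer_id
print(check_lookup_keys(["customer_id", "amount"], ["customer_id"], "customer_id"))
# → []
print(check_lookup_keys(["amount"], ["customer_id"], "customer_id"))
# → ["lookup_key 'customer_id' missing from training DataFrame"]
```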
• Check the data types of the lookup_key and timestamp_lookup_key columns in the feature tables and make sure they match the data types of the corresponding columns in the DataFrame passed to FeatureStoreClient.create_training_set.
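The dtype comparison can also be scripted. This is a minimal sketch, assuming you have flattened each schema into a name-to-type dict (e.g. from the fields of a Spark StructType); the function name is hypothetical.

```python
# Hypothetical dtype check: compare the type of each lookup key in the
# DataFrame schema against the feature table schema. The dicts map
# column name -> type name.

def check_key_dtypes(df_schema, table_schema, keys):
    """Return mismatches as (key, df_type, table_type) tuples."""
    mismatches = []
    for key in keys:
        df_type = df_schema.get(key)
        table_type = table_schema.get(key)
        if df_type != table_type:
            mismatches.append((key, df_type, table_type))
    return mismatches

# An int vs. bigint mismatch like this is a common cause of silent join trouble.
print(check_key_dtypes({"customer_id": "int"}, {"customer_id": "bigint"}, ["customer_id"]))
# → [('customer_id', 'int', 'bigint')]
```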
• Make sure that the feature_names in each FeatureLookup object match the actual feature names in the corresponding feature tables; a mismatch will cause an error.
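Misspelled feature names are easy to catch with a set difference. Another hypothetical helper, with the available names standing in for the feature table's actual columns:

```python
# Hypothetical check: which requested feature names are not in the table?

def check_feature_names(requested, available):
    """Return requested feature names that do not exist in the feature table."""
    return sorted(set(requested) - set(available))

print(check_feature_names(["avg_spend", "num_orders"], ["avg_spend"]))
# → ['num_orders']
```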
• If you're requesting many features from many feature tables, the generated join plan can grow large enough to exceed the JVM stack size limit. Try reducing the number of features or feature tables and see if the problem persists.
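One way to narrow this down is to bisect: build training sets from subsets of your feature lookups until you find the table or feature that triggers the error. A simple chunking sketch (the lookup list here is just placeholder strings):

```python
# Split a long list of feature lookups into smaller batches so each batch
# can be tried separately with create_training_set.

def chunk_lookups(lookups, size):
    """Yield successive slices of the feature-lookup list."""
    for i in range(0, len(lookups), size):
        yield lookups[i:i + size]

lookups = [f"lookup_{i}" for i in range(7)]
print([len(chunk) for chunk in chunk_lookups(lookups, 3)])
# → [3, 3, 1]
```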
• Check if a recent update to Databricks or the Feature Store library affects the FeatureStoreClient.create_training_set method. You might need to update your code accordingly if that's the case.
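To see which library version your cluster is actually running, you can query the installed package metadata. This is a generic sketch; the package name 'databricks-feature-store' is an assumption and may differ on your runtime, so adjust it as needed.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Assumption: the client is packaged as 'databricks-feature-store' on your runtime.
print(installed_version("databricks-feature-store"))
```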
If you've checked all of these and still can't resolve the issue, it may be a bug in Databricks or the Feature Store library. In that case, file a support ticket with Databricks support for further assistance.