Hi Team, just wondering, how can I add a column to an existing table? I tried the script below, but it gives me an error:
ParseException: [PARSE_SYNTAX_ERROR] Syntax error at or near '<'(line 1, pos 121)
ALTER TABLE table_clone ADD COLUMNS col_name1 STRUC...
@Gil Gonong: In Databricks, you can add a column to an existing table using the ALTER TABLE statement in SQL. Here is an example:
ALTER TABLE table_clone ADD COLUMN col_name1 STRUCT<
  type: STRING,
  values: ARRAY<STRING>
>
Note that you need to ...
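For reference, the ADD COLUMNS form from the original post usually needs parentheses around the column list, which is a common cause of a parse error at '<'. A minimal sketch of that variant run from a notebook (table and column names are taken from the thread):

```python
# Sketch: the ADD COLUMNS variant requires parentheses around the column list.
# Table/column names come from the thread above; adjust to your schema.
spark.sql("""
    ALTER TABLE table_clone
    ADD COLUMNS (col_name1 STRUCT<type: STRING, values: ARRAY<STRING>>)
""")
```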
Hi, I am trying to connect to the Storage Account using the SAS token and receive this error: Unable to load SAS token provider class: java.lang.IllegalArgumentException (more in the picture). I couldn't find anything on the web for this error. I also ...
@Retko Okter: It seems that there is an issue with the SAS token provider class. This error can occur when the SAS token is not correctly formatted or is invalid. Here are some steps you can try to resolve the issue: Verify that the SAS token is corre...
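If the error mentions the provider class itself, the cluster may also be missing the Spark configuration that tells ABFS how to obtain the token. A minimal sketch using a fixed SAS token (the storage account name and token are placeholders; FixedSASTokenProvider requires a recent hadoop-azure):

```python
# Sketch: configure ABFS to use a fixed SAS token.
# <account> and <sas-token> are placeholders for your own values.
account = "<account>"
spark.conf.set(f"fs.azure.account.auth.type.{account}.dfs.core.windows.net", "SAS")
spark.conf.set(
    f"fs.azure.sas.token.provider.type.{account}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider",
)
spark.conf.set(f"fs.azure.sas.fixed.token.{account}.dfs.core.windows.net", "<sas-token>")
```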
I have the following code, which should render a choropleth map.
import plotly.express as px
import geopandas as gpd
# Example GeoJSON file with polygon geometries
geojson_file = 'example.geojson'
# Read GeoJSON file into GeoDataFrame
*** = gpd.re...
@Keval Shah: There could be several reasons why the choropleth map is not rendering in your Jupyter notebook. Here are a few things you could try: Check that the GeoJSON file is loaded correctly: Make sure that the GeoDataFrame has been loaded correc...
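To illustrate the usual plotly-express-plus-geopandas pattern, here is a minimal sketch (the file name and the "value" column are placeholders for your own data):

```python
import plotly.express as px
import geopandas as gpd

# Read the GeoJSON into a GeoDataFrame ("example.geojson" is a placeholder).
gdf = gpd.read_file("example.geojson")

# plotly.express can take the GeoDataFrame's geometry directly;
# `locations` must match the keys of the geometry's index.
fig = px.choropleth(
    gdf,
    geojson=gdf.geometry,
    locations=gdf.index,
    color="value",  # placeholder column to color by
)
fig.update_geos(fitbounds="locations", visible=False)
fig.show()  # in Jupyter, make sure a plotly renderer is available
```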
Is there a "read only" option when using Databricks SQL via the JDBC driver? I'm looking for an equivalent to this: https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-configuration-options.html#jdbc20-readonly-option Thanks!
Hi @Nativ Issac, hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so w...
@Shelly Bhardwaj: The error message you provided seems to be incomplete, as it only shows the traceback of a serialization error. Can you provide the full error message or describe the issue in more detail? Regarding the code you provided, it looks c...
I have 106,000+ APIs I need to call, so instead of calling them one by one I would like to create a loop, as I have the list of location IDs which I've pulled from their API's locations list, and these will sit at the end of the URL to get more info o...
@Kay Connolly: It looks like you are trying to concatenate a string with a column object, which is causing the error. You need to convert the column object to a string first before concatenating it to the URL. Here's a modified code snippet that sho...
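A minimal sketch of that idea (locations_df, location_id, and the endpoint are hypothetical names): either build the URL as a column with concat, or collect the IDs to the driver and loop over plain Python strings.

```python
from pyspark.sql import functions as F

base_url = "https://api.example.com/locations/"  # hypothetical endpoint

# Option 1: build the URL as a column (stays distributed).
urls_df = locations_df.withColumn(
    "url", F.concat(F.lit(base_url), F.col("location_id").cast("string"))
)

# Option 2: collect the IDs and build plain Python strings to loop over.
ids = [r["location_id"] for r in locations_df.select("location_id").collect()]
urls = [f"{base_url}{i}" for i in ids]
```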
@ppatel: If you are using insertInto with overwrite=True on a Hive external table in PySpark, it might not work as expected. This is because Hive external tables are not managed by Hive and the table data is stored externally. When you use overwrite=T...
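One common workaround, sketched below, is to write through the DataFrameWriter with an explicit external path instead of insertInto (the table name and path are placeholders, not the poster's actual setup):

```python
# Overwrite an external table by targeting its location explicitly.
(df.write
   .mode("overwrite")
   .option("path", "abfss://container@account.dfs.core.windows.net/tables/external_table")
   .saveAsTable("db.external_table"))
```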
Hi all, I am using a persist call on a Spark DataFrame inside an application to speed up computations. The DataFrame is used throughout my application, and at the end of the application I am trying to clear the cache of the whole Spark session by calli...
No solution yet: Hi @Suteja Kanuri, thank you for thinking along and replying! Unfortunately, I have not found a solution yet. I am getting an error that there exists no ```.getCache()``` method on a SparkContext. Also note that I have tried to do som...
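For what it's worth, there is no getCache() method on SparkContext; the usual ways to drop cached data are unpersist() on each DataFrame or catalog.clearCache() for the whole session. A minimal sketch:

```python
# Drop a single cached DataFrame...
df.unpersist()

# ...or clear every cached table/DataFrame in the current Spark session.
spark.catalog.clearCache()
```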
Hi amazing community folks, feel free to share your experience or knowledge regarding the questions below:
1.) Can we pass a CTE SQL statement into Spark JDBC? I tried to do it and couldn't, but I can pass normal SQL (Select * from ) and it works. I heard th...
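One likely explanation, sketched below: Spark's JDBC source wraps whatever you pass (via dbtable or query) in a subquery like SELECT * FROM (<your sql>) alias, and a WITH clause is not valid inside a subquery on most databases, which is why a plain SELECT works but a CTE fails. The connection details here are hypothetical; a workaround is to inline the CTE as a subquery:

```python
# Instead of: WITH t AS (SELECT id, name FROM dbo.people) SELECT * FROM t
# inline the CTE as a subquery so it survives Spark's wrapping:
df = (spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://host:1433;databaseName=db")  # hypothetical
      .option("dbtable", "(SELECT id, name FROM dbo.people) t")
      .option("user", "<user>")
      .option("password", "<password>")
      .load())
```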
Hi @Jyoti j, we haven't heard from you since the last response from @Suteja Kanuri, and I was checking back to see if her suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful to others....
I'm trying to set up a Workspace Library that is used internally within our organization. This is a Python package whose source is available in a private GitHub repository, not accessible on PyPI or the wider internet / surface web. I managed...
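For context, one common pattern in this situation is to install straight from the private repository with a notebook-scoped pip install and a personal access token. This is a sketch, not necessarily the approach used in the thread; the org, repo, and token are placeholders:

```python
# Notebook-scoped install from a private GitHub repo over HTTPS.
# Replace <token>, <org>, and <repo>; a Databricks secret is a safer
# place to keep the token than the notebook itself.
%pip install git+https://<token>@github.com/<org>/<repo>.git
```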
Hi @Eshwaran Venkat, we haven't heard from you since the last response from @Suteja Kanuri, and I was checking back to see if her suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpfu...
Hi all, currently we are using a cluster with Driver: Standard_D32s_v3 · Workers: Standard_D32_v3 · 2-8 workers · 6.4 Extended Support (includes Apache Spark 2.4.5, Scala 2.11). On it we are running a 24/7 streaming notebook on a trigger of every minute and 5...
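For reference, a minimal sketch of a streaming write with a one-minute trigger like the one described (the sink format, checkpoint path, and output path are placeholders):

```python
# Structured Streaming write that fires a micro-batch every minute.
(df.writeStream
   .format("delta")
   .trigger(processingTime="1 minute")
   .option("checkpointLocation", "/mnt/checkpoints/my_stream")  # placeholder
   .start("/mnt/delta/my_output"))  # placeholder
```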
Hi @Someswara Durga Prasad Yaralgadda, hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love t...
Hi everyone, I have implemented a data pipeline using Auto Loader (bronze --> silver --> gold). While doing this I want to perform some data quality checks, and for that I'm using the great expectations library. However, I'm stuck with the below error when trying...
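The great expectations API differs considerably between versions; as a rough sketch under the legacy SparkDFDataset interface (the DataFrame and column name are placeholders), a Spark DataFrame can be wrapped and checked directly:

```python
from great_expectations.dataset import SparkDFDataset

# Wrap a Spark DataFrame with the legacy GE interface and run one check.
# ("customer_id" is a placeholder; the API varies by GE version.)
ge_df = SparkDFDataset(df)
result = ge_df.expect_column_values_to_not_be_null("customer_id")
print(result.success)
```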
Hi @Chhaya Vishwakarma, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your fe...
In Databricks, using the 11.3 ML runtime gives different results when using general-purpose vs memory-optimized workers. I used SARIMAX to forecast, but I'm getting different results when I change the driver and worker types to these options...
Hi @Kevin Kim, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...
If I were to stop a rather large job run, say halfway through execution, will any actions performed on our Delta tables persist or will they be rolled back? Are there any other risks that I need to be aware of in terms of cancelling a job run halfway t...
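Since Delta commits are atomic per transaction, writes that finished committing before the cancellation persist, and an in-flight transaction is simply never committed. One way to see exactly which operations made it is the table history (the table name is a placeholder):

```python
# Inspect which transactions actually committed on the table.
spark.sql("DESCRIBE HISTORY my_db.my_table").show(truncate=False)
```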
Hi team, if we kill clusters every time, will the connection details change? If yes, is there a way we can mask this so that the end users are not impacted due to any changes in clusters? Also, if I want to call a Delta table from an API using JDBC - s...
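For context, the JDBC endpoint of an all-purpose cluster embeds the cluster ID in its HTTP path, so recreating the cluster changes the connection details, while a SQL warehouse keeps a stable endpoint. A minimal sketch of a JDBC read against such an endpoint (the host, HTTP path, token, and table name are placeholders):

```python
# Databricks JDBC URL for a SQL warehouse or cluster endpoint.
jdbc_url = (
    "jdbc:databricks://<workspace-host>:443/default;"
    "transportMode=http;ssl=1;httpPath=<http-path>;"
    "AuthMech=3;UID=token;PWD=<personal-access-token>"
)
df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "my_db.my_delta_table")  # placeholder
      .load())
```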
Hi @Siddharth Krishna, hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell u...