Calling exe from notebook
How do I call an executable (built from C# code) from a Databricks notebook? #csharp #exe
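One common approach is to copy the .exe to the driver's local disk and invoke it with `subprocess`, with the mono or dotnet runtime installed via an init script. A minimal sketch, using `/bin/echo` as a stand-in for the real binary (the mono command and paths are assumptions, not a verified setup):

```python
import subprocess

# Sketch: run an external executable from a notebook cell and capture its
# output. "/bin/echo" stands in for the real binary; on Databricks you would
# first copy your .exe to local disk (e.g. from a Volume) and run it as
# something like ["mono", "/tmp/app.exe", "arg1"].
result = subprocess.run(
    ["/bin/echo", "hello from exe"],
    capture_output=True,
    text=True,
    check=True,   # raise CalledProcessError on a non-zero exit code
)
print(result.stdout.strip())
```

`check=True` turns a failing exit code into a Python exception, which is usually what you want in a notebook so the job fails visibly.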
Hello community, I was working on optimising the driver memory, since there is code that is not optimised for Spark, and I was planning, as a temporary measure, to restart the cluster to free up the memory. That could be a potential solution, since if the cluster i...
Hi @jeremy98, the collect() operation brings data to the driver and yes, it can cause the memory issues you are seeing, which can also leave the cluster hung or crashed if done enough times. You may confirm these instances from the cluster even...
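One defensive pattern is to cap how many rows are ever pulled to the driver. A minimal sketch; `safe_collect` and the threshold are illustrative names, not a Databricks API, and it works with any object exposing `count()`/`collect()` such as a Spark DataFrame:

```python
# Guard around collect(): refuse to pull an unbounded result to the driver.
def safe_collect(df, max_rows=10_000):
    n = df.count()
    if n > max_rows:
        raise ValueError(
            f"Refusing to collect {n} rows to the driver (limit {max_rows}); "
            "aggregate, sample, or write the data out instead."
        )
    return df.collect()
```

For exploration, patterns like `df.limit(n).toPandas()` or `df.take(n)` achieve the same bounding without a guard function.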
I have completed the Data Engineer Associate learning path, but I haven't received the free certification voucher yet. I've already sent multiple emails to the relevant support team regarding this issue, but unfortunately, I haven't received any resp...
Hello @Aravind17! This post duplicates one you posted recently, and a response has already been provided there. I recommend continuing the discussion in that thread to keep the conversation focused and organised.
I am able to create a Unity Catalog Iceberg-format table: df.writeTo(full_table_name).using("iceberg").create(). However, if I add the partitionedBy option, I get an error: df.writeTo(full_table_name).using("iceberg").partitionedBy("ingest_dat...
I found weird behaviour here while creating the table using SQL. If you are creating a new table and add the partition column at the end of the column mapping, it won't work, but if you add it at the beginning, it will! For example, the query below will wo...
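The ordering behaviour described above can be sketched with two DDL statements. Table and column names here are made up for illustration, and the pass/fail labels reflect what the poster reports rather than documented behaviour:

```sql
-- Reported to fail: partition column listed last in the column mapping
CREATE TABLE main.default.events_bad (
  id BIGINT,
  payload STRING,
  ingest_date DATE
) USING iceberg
PARTITIONED BY (ingest_date);

-- Reported to work: partition column listed first
CREATE TABLE main.default.events_ok (
  ingest_date DATE,
  id BIGINT,
  payload STRING
) USING iceberg
PARTITIONED BY (ingest_date);
```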
When embedding an AI/BI dashboard, is there a way to hide its tabs and instead use our own UI tabs for navigation? Currently, there are two tab headers: one in the Databricks dashboard and another tab section in our embedding webp...
Hi @amekojc, at the moment Databricks AI/BI dashboards do not support hiding or disabling the native dashboard tabs when embedding. The embedded dashboard always renders with its own tab headers, and there is no configuration or API to control tab vi...
Hello, I am attempting to read/write files from S3 but got the error below. I am on the Free Edition (serverless by default) and using access_key and secret_key. Has anyone done this successfully? Thanks! Directly accessing the underlying Spark driver JVM us...
Thanks @Sanjeeb2024, I was able to confirm as well.
Are there any performance concerns when using liquid clustering with AWS S3? I believe all the Parquet files go into the same folder (prefix, in S3 terms), versus a folder per partition when using "partition by". And there is this note on S3 performa...
pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets. I am getting this error while writing; can anyone please tell me how to resolve it?
Hi @Gaurav_784295, in Spark Structured Streaming, please use a time-based column in the window function. In streaming we can't say "last 10 rows" or "limit 10", because a stream never ends. So when you use window, please don't use columns lik...
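A minimal sketch of the time-based alternative, assuming an existing streaming DataFrame `events` with a timestamp column `event_time` (both names are illustrative; this needs a real streaming source to run):

```python
from pyspark.sql import functions as F

# Aggregate over a *time* window built from an event-time column, which is
# supported on streaming DataFrames, instead of a row-count/ordering window,
# which is not. The watermark bounds state for late data.
windowed = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"))
    .count()
)
```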
Hey, so our notebooks that read a bunch of JSON files from storage typically use input_file_name() when moving from raw to bronze, but after upgrading to Unity Catalog we get an error message: AnalysisException: [UC_COMMAND_NOT_SUPPORTED] input_file_n...
The reason 'input_file_name' is not supported is that this function was only available in older Databricks Runtime versions; it was deprecated from Databricks Runtime 13.3 LTS onwards.
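The documented replacement is the hidden `_metadata` column, which works on Unity Catalog. A brief sketch (the input path is illustrative, and this needs a Spark session to run):

```python
# _metadata.file_path replaces input_file_name() on Unity Catalog /
# recent Databricks Runtime versions.
df = spark.read.format("json").load("/raw/events/")
bronze = df.select("*", "_metadata.file_path")
```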
Hello Databricks Community, I'm encountering an issue related to Python paths when working with notebooks in Databricks. I have the following structure in my project: my_notebooks - my_notebook.py /my_package - __init__.py - hello.py databricks.yml...
I have a related question. I'm new to the Databricks platform and struggle with the same PYTHONPATH issue the original poster raised. I understand that using sys.path.append(...) is one approach for notebooks. This is acceptable for an ad-hoc interactive session, but thi...
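One way to avoid hard-coding paths in each notebook is to walk upwards until a known marker file is found and put that directory on `sys.path`. A sketch under assumptions: the helper name is made up, and using `databricks.yml` as the repo-root marker mirrors the poster's layout:

```python
import os
import sys

def add_repo_root(start_dir, marker="databricks.yml"):
    """Walk upwards from start_dir until a directory containing `marker`
    is found, then prepend that directory to sys.path and return it."""
    d = os.path.abspath(start_dir)
    while True:
        if os.path.exists(os.path.join(d, marker)):
            if d not in sys.path:
                sys.path.insert(0, d)
            return d
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root without a match
            raise FileNotFoundError(f"{marker} not found above {start_dir}")
        d = parent
```

After calling `add_repo_root(os.getcwd())` at the top of a notebook, `import my_package` resolves regardless of which subfolder the notebook lives in. For production jobs, packaging the code as a wheel is generally preferred over path manipulation.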
After upgrading from DBR 17.3.2 to DBR 17.3.3, we started seeing a flood of DEBUG logs like this in job outputs: ```DEBUG:ThreadMonitor:Logging python thread stack frames for MainThread and py4j threads: DEBUG:ThreadMonitor:Logging Thread-8 (run) stac...
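As a workaround while the root cause is unclear, the noisy logger can be silenced from within the job. "ThreadMonitor" is the logger name visible in the output above; whether Databricks exposes a supported switch for this is unverified:

```python
import logging

# Raise the ThreadMonitor logger's threshold so its DEBUG records are
# dropped, without touching other loggers' levels.
logging.getLogger("ThreadMonitor").setLevel(logging.WARNING)
```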
Can two service principals have the same name but unique IDs?
Hi @kALYAN5, here is why service principals can share a name while their IDs stay unique. Names are for human readability: organizations use friendly names like "automation-batch-job" or "databricks-ci-cd" to make it easy for admins to re...
Code: Writer.jdbc_writer("Economy", economy, conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy']). The problem arises when I try to run the code in the specified Databricks notebook; I get an error of "ValueError: not enough values to unpack (expected 2, ...
The error happens because the function expects the table name to include both the schema and the table, separated by a dot. Inside the function it splits the table name on the dot and tries to assign two values. When you pass only Economy, the split returns ...
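A minimal reproduction of the unpack error and the fix. `split_table_name` is a hypothetical stand-in for what the writer presumably does internally; the actual `Writer.jdbc_writer` source is not shown in the thread:

```python
def split_table_name(full_name):
    # Unpacking into two names requires exactly one dot in the input;
    # a bare "Economy" raises ValueError: not enough values to unpack.
    schema, table = full_name.split(".")
    return schema, table

print(split_table_name("dbo.Economy"))  # → ('dbo', 'Economy')
```

So the fix on the caller's side is to pass a qualified name such as `"dbo.Economy"` instead of `"Economy"`.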
I'm federating Snowflake-managed Iceberg tables into Azure Databricks Unity Catalog to query the same data from both platforms without copying it. I am getting a weird error message when querying the table from Databricks, and I have tried to put all nicely in...
Thanks Hubert. I did check the Iceberg metadata location and Databricks can list the files, but the issue is that Snowflake’s Iceberg metadata.json contains paths like abfss://…@<acct>.blob.core.windows.net/..., and on UC Serverless Databricks then t...
I am running Databricks Premium and looking to create a compute resource running Conda. It seems that the best way to do this is to boot the compute from a Docker image. However, under `create_compute > advanced` I cannot see the Docker option, nor ca...
Hi @Askenm, in Databricks Premium the Docker option for custom images is not available on all compute types and is not controlled by user-level permissions. Custom Docker images are only supported on Databricks clusters that use the legacy VM-based c...
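Where the compute type does support it, the image is specified in the cluster definition via the Clusters API `docker_image` field. A minimal config sketch; the image URL, node type, runtime version, and secret paths are placeholders:

```json
{
  "cluster_name": "conda-docker-cluster",
  "spark_version": "15.4.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 2,
  "docker_image": {
    "url": "my-registry.example.com/conda-runtime:latest",
    "basic_auth": {
      "username": "{{secrets/registry/user}}",
      "password": "{{secrets/registry/token}}"
    }
  }
}
```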