You can now use cluster policies to restrict the number of clusters a user can create. For more information, see https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-limit
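A minimal sketch of how this might be set up programmatically, assuming the Cluster Policies API 2.0 and its `max_clusters_per_user` field (the workspace URL, token, and policy definition are placeholders; verify field names against the docs linked above):

```python
# Hypothetical sketch: create a cluster policy that caps how many clusters
# each user may create under it, via the Cluster Policies API 2.0.
import json
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder

resp = requests.post(
    f"{HOST}/api/2.0/policies/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "limited-clusters-per-user",
        # The policy definition itself is a JSON string.
        "definition": json.dumps({
            "spark_version": {"type": "unlimited", "defaultValue": "auto:latest-lts"}
        }),
        # Caps the number of clusters each user can create with this policy.
        "max_clusters_per_user": 2,
    },
)
resp.raise_for_status()
print(resp.json())  # returns the new policy_id
```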
Clone can now be used to create and incrementally update Delta tables that mirror Apache Parquet and Apache Iceberg tables. You can update your source Parquet table and incrementally apply the changes to your cloned Delta table with the clone command.
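A short sketch of the documented `CLONE` syntax, run from PySpark; the table name and source path are examples only:

```python
# Hedged sketch: clone a Parquet table into a Delta table, then re-run the
# same statement later to incrementally sync source changes into the clone.
spark.sql("""
  CREATE OR REPLACE TABLE sales_delta
  CLONE parquet.`/mnt/raw/sales_parquet`
""")

# After the source Parquet data changes, re-running the clone applies only
# the incremental differences to sales_delta instead of rewriting it all.
spark.sql("""
  CREATE OR REPLACE TABLE sales_delta
  CLONE parquet.`/mnt/raw/sales_parquet`
""")
```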
You can now use OAuth to authenticate to Power BI and Tableau. For more information, see Configure OAuth (Public Preview) for Power BI and Configure OAuth (Public Preview) for Tableau: https://docs.databricks.com/integrations/configure-oauth-powerbi.h...
I know that the run_as user generally defaults to the creator_user, but I would like to find the defined run_as user for each of our jobs. Unfortunately, I'm unable to locate that field in the API.
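One possible approach, assuming the Jobs API 2.1 exposes a `run_as_user_name` field on each job (worth verifying against your workspace's API version; host and token are placeholders):

```python
# Hedged sketch: list jobs and print the effective run-as user for each,
# falling back to the creator when no explicit run_as user is returned.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder

resp = requests.get(
    f"{HOST}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 25},
)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    name = job["settings"].get("name", "<unnamed>")
    run_as = job.get("run_as_user_name", job.get("creator_user_name"))
    print(f"{name}: runs as {run_as}")
```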
Hi @Keller, Michael, hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
Hello all, I am trying to manage the permissions on experiments using the MLflow API. Do we have any MLflow API that helps manage the Can Read, Can Edit, and Can Manage permissions? Example: I create the model using MLflow APIs, and through my c...
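One way this is commonly done is through the Databricks Permissions REST API rather than MLflow itself; a hedged sketch, assuming the experiments endpoint of that API (the experiment ID, principals, host, and token are all placeholders):

```python
# Hypothetical sketch: grant CAN_READ / CAN_EDIT / CAN_MANAGE on an MLflow
# experiment via the Databricks Permissions API. Verify the endpoint and
# permission-level names for your workspace before relying on this.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder
EXPERIMENT_ID = "1234567890"                             # placeholder

resp = requests.patch(
    f"{HOST}/api/2.0/permissions/experiments/{EXPERIMENT_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "access_control_list": [
            {"user_name": "reader@example.com", "permission_level": "CAN_READ"},
            {"user_name": "editor@example.com", "permission_level": "CAN_EDIT"},
            {"group_name": "ml-admins", "permission_level": "CAN_MANAGE"},
        ]
    },
)
resp.raise_for_status()
```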
Hey folks, did we get any workaround for this, or is what @Sean Owen said true?
I have some code that occasionally executes incorrectly, meaning that every n-th time a calculation in a table is wrong. When that happens, I want to be able to restart the cluster from the notebook. I'm therefore looking for a piece of code that can accomplish that...
@Lukas Goldschmied, it is. You'll need to use the Databricks API. Here you can find an example: https://learn.microsoft.com/en-us/azure/databricks/_extras/notebooks/source/clusters-long-running-optional-restart.html
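A condensed sketch of the same idea: restart the current cluster from inside a notebook by calling the Clusters API (`POST /api/2.0/clusters/restart`). The `dbutils` context calls below are unofficial internals that are widely used but may change between runtime versions:

```python
# Hedged sketch: discover the workspace URL, API token, and cluster ID from
# the notebook context, then ask the Clusters API to restart this cluster.
import requests

ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
host = ctx.apiUrl().getOrElse(None)
token = ctx.apiToken().getOrElse(None)
cluster_id = ctx.clusterId().getOrElse(None)

resp = requests.post(
    f"{host}/api/2.0/clusters/restart",
    headers={"Authorization": f"Bearer {token}"},
    json={"cluster_id": cluster_id},
)
resp.raise_for_status()  # note: the restart also ends this notebook's session
```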
We observe the following behavior when we keep adding new runs to an experiment:
- In the beginning, the runs are still displayed correctly in the UI.
- After a certain number of total runs, the following bug occurs in the UI: there are ...
Hi @Timo Burmeister, apologies for the delay! I went through the video; does it happen all the time? I see that after sorting with a different filter, the list appears.
00007160: 2023-01-30T14:22:06 [TARGET_LOAD ]E: Failed (retcode -1) to execute statement: 'COPY INTO `e2underwriting_dbo`.`product` FROM(SELECT cast(_c0 as INT) as `ProductID`, _c1 as `ShortName`, cast(_c2 as INT) as `Status`, cast(_c3 as TIMESTA...
We have solved this issue; it was related to Qlik Replicate copying data into a Delta table.
You can ensure there is always an active run of your Databricks job with the new continuous trigger type. For more information, see https://docs.databricks.com/workflows/jobs/jobs.html#continuous-jobs
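A hedged sketch of switching an existing job to the continuous trigger via the Jobs API 2.1; the job ID, host, and token are placeholders, and the field names follow the docs linked above (verify against your API version):

```python
# Hypothetical sketch: update a job so Databricks keeps exactly one active
# run of it at all times (the continuous trigger type).
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder

resp = requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": 123,  # placeholder job ID
        "new_settings": {
            "continuous": {"pause_status": "UNPAUSED"}
        },
    },
)
resp.raise_for_status()
```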
Current cluster config:
- Standard_DS3_v2 (14 GB, 4 cores), 2-6 workers
- Standard_DS3_v2 (14 GB, 4 cores) for the driver
- Runtime: 10.4x-scala2.12

We want to overwrite a temporary Delta table with new records. The records will be loaded from another Delta table and tran...
Hi, thank you for your help! We tested the configuration settings and it runs without any errors. Could you give us some more information on where we can find documentation about such settings? We searched for hours to fix our problem, so we contacted th...
Hello everyone! I am trying to read a Delta table as a streaming source using Spark, but my micro-batches are unbalanced: some are very small and others are very large. How can I limit this? I used different configurations with maxBytesPerTrigger and maxFilesPerTrigger...
Hi @Yuliya Valava, if you are setting the maxBytesPerTrigger and maxFilesPerTrigger options when reading a Delta table as a stream but the batch size is not changing, there could be a few reasons for this: the input data rate is not exceeding the li...
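For reference, a minimal PySpark sketch of how these two rate-limit options are typically set on a Delta stream. Both are soft limits: a micro-batch reads whole files, so batch sizes can still vary when individual files are large. The paths are placeholders:

```python
# Hedged sketch: cap the size of each streaming micro-batch from a Delta source.
df = (
    spark.readStream
    .format("delta")
    .option("maxFilesPerTrigger", 100)     # at most ~100 files per batch
    .option("maxBytesPerTrigger", "256m")  # soft cap on bytes per batch
    .load("/mnt/delta/events")             # placeholder source path
)

query = (
    df.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # placeholder
    .start("/mnt/delta/events_out")                           # placeholder sink
)
```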
I tried to benchmark the Power BI Databricks connector against the Power BI Delta Lake reader on a dataset of 2.15 million rows. I found that the Delta Lake reader took 20 seconds, while importing through the SQL compute endpoint took ~75 seconds. When I loo...
Folks, is there any way to switch off CloudFetch and fall back to ArrowResultSet by default, irrespective of result size, when using the latest version of the Simba Spark ODBC driver?
Benefit: this will help simplify the WHERE clauses for consumers of the tables: just query on the main date field if I need all the data for a day, instead of an extra day field we had to make.
Hi @Ryan Hager, just a friendly follow-up. Do you still need help, or did @Hubert Dudek's response help you find the solution? Please let us know.
Is it possible to attach a notebook to a cluster and run it via the REST API? The closest approach I have found is to run a notebook, export the results (HTML!), and import it into the workspace again, but this does not allow us to retain the original ex...
Hi @Akihiko Nagata, have you checked the Jobs API? You can run a job on an existing cluster that uses the notebook of concern. I believe this is the only way. https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsSubmit
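A hedged sketch of that approach: submit a one-time run of a notebook on an existing cluster with `POST /api/2.1/jobs/runs/submit`. The notebook path, cluster ID, host, and token are all placeholders:

```python
# Hypothetical sketch: run a workspace notebook on an existing cluster as a
# one-time submitted run, then print the run ID for later polling.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder

resp = requests.post(
    f"{HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "run_name": "adhoc-notebook-run",
        "tasks": [
            {
                "task_key": "main",
                "existing_cluster_id": "0123-456789-abcde123",  # placeholder
                "notebook_task": {"notebook_path": "/Users/me/my_notebook"},
            }
        ],
    },
)
resp.raise_for_status()
print(resp.json()["run_id"])  # poll /api/2.1/jobs/runs/get with this ID
```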
Hi team, can you please suggest how to implement a late-arriving dimension (or early-arriving fact), with examples or a sample script for reference? I have to implement it using PySpark. Thanks.
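One common pattern, sketched under assumed table names and schemas (staging_fact_sales, dim_customer, customer_key are all hypothetical): facts whose dimension key has no match yet get a placeholder ("inferred") dimension row, which a later dimension load can overwrite:

```python
# Hedged PySpark sketch of the inferred-member pattern for late-arriving dimensions.
from pyspark.sql import functions as F

facts = spark.table("staging_fact_sales")  # placeholder fact staging table
dim = spark.table("dim_customer")          # placeholder dimension table

# Fact rows whose customer key has no dimension row yet.
unknown = (
    facts.join(dim, "customer_key", "left_anti")
    .select("customer_key")
    .distinct()
)

# Insert placeholder dimension rows flagged as inferred; the real attributes
# arrive later and update these rows (e.g. via a MERGE keyed on customer_key).
placeholders = unknown.select(
    "customer_key",
    F.lit("UNKNOWN").alias("customer_name"),
    F.lit(True).alias("is_inferred"),
)
placeholders.write.format("delta").mode("append").saveAsTable("dim_customer")

# Every fact row can now join to a dimension row.
enriched = facts.join(spark.table("dim_customer"), "customer_key", "left")
```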
Hi @Hare Krishnan, hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!