Dive into a collaborative space where members like YOU can exchange knowledge, tips, and best practices. Join the conversation today and unlock a wealth of collective wisdom to enhance your experience and drive success.
In today’s data-driven world, the role of a data engineer is critical in designing and maintaining the infrastructure that allows for the efficient collection, storage, and analysis of large volumes of data. Databricks certifications hold significan...
As an additional tip for those working towards both the Associate and Professional certifications, I recommend avoiding a long gap between the two exams to maintain your momentum. If possible, try to schedule them back-to-back with just a few days in...
Hello Databricks community! If you're eager to explore how Databricks can revolutionize your data workflows, I highly recommend checking out the Databricks Demo Center. It’s packed with insights and tools designed to cater to both beginners and season...
What is a Databricks Workspace IP Access List? The Databricks Workspace IP Access List is a security feature that allows administrators to control access to the Databricks workspace by specifying which IP addresses or IP ranges are allowed or denied a...
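For illustration, here is a minimal sketch of creating an allow list through the workspace IP Access List REST API (the workspace URL, token, and CIDR range are placeholders; check the API reference for your workspace before relying on this):

import requests

# Placeholders: substitute your workspace URL and a valid access token
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

# Create an ALLOW list covering an example office CIDR range
resp = requests.post(
    f"{HOST}/api/2.0/ip-access-lists",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "label": "office-network",
        "list_type": "ALLOW",
        "ip_addresses": ["203.0.113.0/24"],  # example range only
    },
)
resp.raise_for_status()
print(resp.json())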
Hi there, I recently came across this post about Databricks Apps that says it is available for public preview: https://www.databricks.com/blog/introducing-databricks-apps However, when I go to previews in the workspace, I don't see an option to enable it, i...
I have a notebook with a text widget where I want to be able to edit the value of the widget within the notebook and then reference it in SQL code. For example, assuming there is a text widget named Var1 that has input value "Hello", I would want to ...
Hi @DavidOBrien, how are you?
You can try the following approach:
# Get the current value of the widget
current_value = dbutils.widgets.get("widget_name")
# Append the new value to the current value
new_value = current_value + "appended_value"
# Set the widget to the new value; dbutils has no direct setter,
# so re-create the widget with the appended value as its default
dbutils.widgets.remove("widget_name")
dbutils.widgets.text("widget_name", new_value)
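To then reference that value in SQL, as the original question asks, one option is to read the widget in Python and pass it to a query as a named parameter; a minimal sketch, assuming a recent runtime where spark.sql accepts args (the widget name Var1 comes from the question):

# Read the widget value and hand it to SQL as a named parameter marker
val = dbutils.widgets.get("Var1")
df = spark.sql("SELECT :v AS message", args={"v": val})
df.show()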
Python step-through debugger for Databricks Notebooks and Files is now Generally Available: https://www.databricks.com/blog/announcing-general-availability-step-through-debugging-databricks-notebooks-and-files
You can orchestrate Databricks jobs with Apache Airflow. The Databricks provider implements the below operators (a minimal DAG sketch follows the list):
- DatabricksCreateJobsOperator: Create a new Databricks job or reset an existing job
- DatabricksRunNowOperator: Runs an existing Spark job run...
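For illustration, a minimal DAG using DatabricksRunNowOperator might look like this sketch (the job_id, connection ID, and parameters are placeholders; check the provider docs for your Airflow version):

from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="run_databricks_job",
    start_date=datetime(2024, 1, 1),
    schedule=None,  # trigger manually in this sketch
) as dag:
    # Trigger an existing Databricks job by its job_id (placeholder value)
    run_job = DatabricksRunNowOperator(
        task_id="run_now",
        databricks_conn_id="databricks_default",
        job_id=12345,
        notebook_params={"env": "dev"},  # example parameters
    )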
Good one @Sourav-Kundu! Your clear explanations of the operators really simplify job management, plus the resource link you included makes it easy for everyone to dive deeper.
Retrieval-augmented generation (RAG) is a method that boosts the performance of large language model (LLM) applications by utilizing tailored data. It achieves this by fetching pertinent data or documents related to a specific query or task and presen...
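As a rough sketch of that retrieve-then-generate flow (the embed and generate functions here are toy placeholders for a real embedding model and LLM call):

import numpy as np

def embed(text):
    # Toy stand-in for a real embedding model: hash words into a fixed vector
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def generate(prompt):
    # Placeholder for a real LLM call (e.g. a model serving endpoint)
    return f"[LLM would answer here, given:]\n{prompt}"

def rag_answer(query, documents):
    # Rank documents by cosine similarity to the query embedding
    q = embed(query)
    sims = [np.dot(q, embed(d)) / (np.linalg.norm(q) * np.linalg.norm(embed(d)) + 1e-9)
            for d in documents]
    top_doc = documents[int(np.argmax(sims))]
    # Present the retrieved context to the LLM alongside the question
    prompt = f"Context:\n{top_doc}\n\nQuestion: {query}\nAnswer using the context above."
    return generate(prompt)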
Low Shuffle Merge in Databricks is a feature that optimizes the way data is merged when using Delta Lake, reducing the amount of data shuffled between nodes (a MERGE sketch follows the list).
- Traditional merges can involve heavy data shuffling, as data is redistributed across the cl...
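For context, a typical Delta MERGE that benefits from this optimization looks like the sketch below (table and column names are made up; the conf line reflects how the feature was enabled on older runtimes and should be verified against the docs for your DBR version, since recent runtimes enable it by default):

# Only needed on older runtimes; verify the conf name against your DBR docs
spark.conf.set("spark.databricks.delta.merge.enableLowShuffle", "true")

# Example MERGE into a Delta table (hypothetical table and column names)
spark.sql("""
    MERGE INTO target t
    USING updates u
    ON t.id = u.id
    WHEN MATCHED THEN UPDATE SET t.value = u.value
    WHEN NOT MATCHED THEN INSERT (id, value) VALUES (u.id, u.value)
""")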
Great post, @Sourav-Kundu. The benefits you've outlined, especially regarding faster execution and cost efficiency, are valuable for anyone working with large-scale data processing. Thanks for sharing!
Delta Live Tables support for Unity Catalog is in Public Preview. Databricks recommends setting up Delta Live Tables pipelines using Unity Catalog. When configured with Unity Catalog, these pipelines publish all defined materialized views and streaming ...
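For reference, a minimal Delta Live Tables definition in Python looks roughly like this sketch (the source path and table names are placeholders):

import dlt
from pyspark.sql.functions import col

# A streaming table ingesting from a cloud files source (placeholder path)
@dlt.table(comment="Raw events ingested incrementally")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/landing/events")  # placeholder location
    )

# A downstream table derived from the one above
@dlt.table(comment="Events filtered to valid records")
def clean_events():
    return dlt.read("raw_events").where(col("id").isNotNull())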
Databricks Asset Bundles help implement software engineering best practices like version control, testing and CI/CD for data and AI projects.
1. They allow you to define resources such as jobs and notebooks as source files, making project structure, t...
Databricks serverless budget policies are now available in Public Preview, enabling administrators to automatically apply the correct tags to serverless resources without relying on users to manually attach them.
1. This feature allows for customized ...
Have you ever accidentally dropped a table in Databricks, or had someone else mistakenly drop it? Databricks offers a useful feature that allows you to view dropped tables and recover them if needed (a sketch follows the list).
1. You need to first execute SHOW TABLES DROPPED
2. T...
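A minimal sketch of what that flow looks like in a notebook (catalog, schema, and table names are placeholders, and recovery only works within the retention window for Unity Catalog managed tables):

# List recently dropped tables in a schema (placeholder names)
spark.sql("SHOW TABLES DROPPED IN my_catalog.my_schema").show()

# Recover a dropped managed table while it is still within the retention window
spark.sql("UNDROP TABLE my_catalog.my_schema.my_table")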
Hello Everyone. I want to explore LakeFlow Pipelines in the community version but don’t have access to Azure or AWS. I had a bad experience with Azure, where I was charged $85 while just trying to learn. Is there a less expensive, step-by-step learni...
Whether you're a data scientist or a sales executive, Databricks is making it easier than ever to build, host, and share secure data applications. With our platform, you can now run any Python code on serverless compute, share it with non-technical c...
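As a taste of how small such an app can be, here is a sketch of a single-file Streamlit script of the kind you can host (Streamlit is one of the supported Python frameworks; the content itself is made up):

import streamlit as st

# A trivial interactive page; a real app would query governed data instead
st.title("Hello from a Databricks App")
name = st.text_input("Your name", "world")
st.write(f"Hello, {name}!")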
The workspace is assigned to Unity Catalog, and all access to ADLS Gen2 is now handled via Unity Catalog only, meaning no SPN, no connection string, no access keys, etc. I have to create append blob files in a volume. Is this possible in a works...
Now I get your point. No, you can't create Append Blobs directly in Volumes, as this is native Azure functionality. A volume is basically just an abstraction over native storage. You will still need to use libraries like azure-storage-blob wi...
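For reference, creating and appending to an Append Blob with azure-storage-blob looks roughly like this sketch (the account URL, credential, container, and blob names are placeholders):

from azure.storage.blob import BlobServiceClient

# Placeholder connection details; in practice use a proper Azure credential
service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential="<credential-or-sas-token>",
)
blob = service.get_blob_client(container="mycontainer", blob="logs/app.log")

# Create the blob as an Append Blob, then append blocks to it
blob.create_append_blob()
blob.append_block(b"first line\n")
blob.append_block(b"second line\n")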