
Sujitha
Databricks Employee

Weekly Release Notes Recap

Here’s a quick recap of the latest release notes updates from the past one week.

Databricks platform release notes

April 20 - 27, 2023

Configure the Python formatter

For files and notebooks in Databricks Repos, you can now configure the Python formatter based on a Black specification. See Format Python cells.
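
As a minimal sketch, Black is conventionally configured through a pyproject.toml file at the repo root; assuming that standard mechanism applies here, a configuration might look like this (the line-length value is illustrative):

```toml
# pyproject.toml at the repo root (illustrative settings)
[tool.black]
line-length = 100
```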

Workspace files are GA

You can now work with non-notebook files in Databricks. Workspace files are enabled by default in all workspaces. See What are workspace files?.

Cluster-scoped init scripts can now be stored in workspace files

You can now store cluster-scoped init scripts in workspace files, regardless of the Databricks Runtime version used by your compute. Databricks recommends storing all cluster-scoped init scripts in workspace files. See Store init scripts in workspace files.
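
As a sketch, a cluster definition in the Clusters API can reference an init script stored as a workspace file via the init_scripts field; the path below is illustrative:

```json
{
  "init_scripts": [
    { "workspace": { "destination": "/Users/someone@example.com/init.sh" } }
  ]
}
```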

Databricks SQL release notes

April 20 - 27, 2023

Databricks SQL Version 2023.20 Available

Rollout Schedule

  • Preview rollout for 2023.20: Between Mar 15, 2023 and Mar 23, 2023
  • Current rollout for 2023.20: Between Mar 27, 2023 and Apr 3, 2023

Changes in 2023.20

  • Delta Lake schema evolution supports specifying source columns in merge statements.
  • Use array_compact to remove all NULL elements from an array.
  • Use array_append to append elements to an array.
  • Use the mask function to anonymize sensitive string values.
  • Common error conditions now return SQLSTATE.
  • Invoke table-valued generator functions in the regular FROM clause of a query.
  • Use the from_protobuf and to_protobuf functions to exchange data between binary and struct types. See Read and write protocol buffers.
  • Improved consistency of Delta commit behavior for empty transactions involving the update, delete, and merge commands.
  • Behavior change
    • The lateral column alias feature introduces behavior changes during name resolution. See Breaking changes.
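
The new functions and the lateral column alias feature listed above can be sketched with a few one-line queries; these follow the documented default behavior (for mask, uppercase letters become 'X', lowercase letters 'x', and digits 'n'):

```sql
-- Remove NULL elements: returns [1, 2]
SELECT array_compact(array(1, NULL, 2, NULL));

-- Append an element: returns [1, 2, 3]
SELECT array_append(array(1, 2), 3);

-- Anonymize a string with the default replacements: returns 'Xxxx-nnnn'
SELECT mask('Card-1234');

-- Lateral column alias: reference an alias defined earlier in the same SELECT
SELECT 1 AS a, a + 1 AS b;
```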

User interface updates

The features listed in this section are independent of the SQL Warehouse compute versions described in the Channels section of the release notes.

April 20, 2023

Improvements:

  • Administrators can change warehouse owners using the user interface or the API. See Transfer ownership of Databricks SQL objects.
  • New pivot tables allow you to aggregate more than 64,000 results.
  • Databricks SQL tables and visualizations now support BigInt, 38-bit decimals, and non-UTF-8 characters. For numbers, the default setting is now user-defined digit precision.
  • Autocomplete now suggests frequent past joins for Unity Catalog tables, powered by Unity Catalog lineage data in Databricks Runtime 12.0 and above.
  • Cloud Fetch is enabled by default in AWS workspaces with bucket versioning enabled. If you have bucket versioning enabled, Databricks recommends setting a lifecycle policy to automatically remove old versions of uploaded query results. See Advanced configurations.

New feature:

  • Use ai_generate_text to return text generated by a selected large language model (LLM) for a given prompt. This function is available only as a public preview on Databricks SQL Pro and Serverless. To participate in the public preview, populate and submit the AI Functions Public Preview enrollment form.
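
A hypothetical usage sketch of ai_generate_text follows; the model identifier, the named apiKey parameter, the secret scope, and the product_reviews table are all assumptions for illustration and should be checked against the AI Functions preview documentation:

```sql
-- Hypothetical sketch: verify model identifiers and named parameters
-- against the AI Functions preview documentation before use.
SELECT ai_generate_text(
  'Summarize this review in one sentence: ' || review_text,
  'openai/gpt-3.5-turbo',                      -- model identifier (assumed)
  'apiKey', secret('my_scope', 'openai_key')   -- credential via a secret (assumed)
) AS summary
FROM product_reviews;
```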

Databricks runtime releases

April 20 - 27, 2023


