With the growing adoption of third-party machine learning, AI, and data science models, it has become increasingly difficult to assess whether loading and running these models is safe, especially given the potential for malicious content embedded in them. The same concern applies to files in formats such as .zip, .dbc, .py, and .bin that are uploaded into the Databricks workspace.
- Does Databricks currently provide any mechanism to track and verify the safety of models available in the environment?
- How can we ensure that uploaded files are scanned and monitored for potentially malicious content?
- I am also developing a tool to scan notebooks, models, and related artifacts for security risks; a rough sketch of the kind of check it performs is included below.
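For context, here is a minimal sketch of one check such a tool might run on serialized models. It assumes the model is a raw pickle stream (as in many .bin and legacy scikit-learn/PyTorch files; .zip or .dbc containers would first need to be unpacked), and the `DENYLIST` and `suspicious_imports` names are illustrative, not part of any Databricks API.

```python
# Minimal sketch of a pickle-based model scanner (illustrative only).
# It inspects the pickle opcode stream without executing it, so the
# payload is never loaded the way pickle.load() would load it.
import pickletools
from typing import Set

# Imports that rarely belong inside a legitimate serialized model.
# This denylist is a hypothetical, non-exhaustive example.
DENYLIST = {
    "os", "posix", "nt", "subprocess", "socket", "shutil",
    "builtins.eval", "builtins.exec",
}

def suspicious_imports(path: str) -> Set[str]:
    """Walk the pickle opcodes and collect module/attribute references
    that unpickling the file would import."""
    findings: Set[str] = set()
    recent_strings = []  # last string arguments, consumed by STACK_GLOBAL

    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if isinstance(arg, str):
                recent_strings = (recent_strings + [arg])[-2:]
            if opcode.name == "GLOBAL":
                # arg is "module qualname", e.g. "os system"
                module, _, name = arg.partition(" ")
                ref = f"{module}.{name}" if name else module
            elif opcode.name == "STACK_GLOBAL" and len(recent_strings) == 2:
                ref = ".".join(recent_strings)
            else:
                continue
            if any(ref == d or ref.startswith(d + ".") for d in DENYLIST):
                findings.add(ref)
    return findings

if __name__ == "__main__":
    import sys
    for model_path in sys.argv[1:]:
        hits = suspicious_imports(model_path)
        print(f"{model_path}: {'SUSPICIOUS ' + str(sorted(hits)) if hits else 'clean'}")
```

This only covers pickle-style serialization and static string matching; other formats (ONNX, SavedModel, notebooks, arbitrary .py files) would need their own parsers, which is part of why I am asking what Databricks already provides.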
I would greatly appreciate your insights on how we can better safeguard this system and enhance our security posture.
G.Chiranjeevi