AI Governance

Enterprises are rapidly adopting AI technologies and AI Coding Assistants to build new generations of software products. The AI ecosystem evolves quickly, with new models, tools, and frameworks becoming available almost daily. One challenge large enterprises face when adopting AI technologies is a lack of visibility into where AI is used within their software landscape. Sigrid’s AI Governance module helps identify the use of AI across the portfolio.

Use Cases

Visibility and Adoption

Track AI generated code across the portfolio

Track AI technologies across the portfolio

Technical Risks

If AI Governance is available for your system, you can reach this view via the top menu, or by clicking a capability on the System or Portfolio Overview pages. See the system-level Overview page or portfolio-level Overview page.

AI Governance overview

The overview page shows the key AI Governance metrics for a given system:

AI Generated Code

Provides an estimate of the relative amount of AI-generated code currently in a system, along with trend information indicating whether the use of AI Coding Assistants is increasing or decreasing. From this tile you can navigate to the Code Explorer to inspect the detected AI-generated code in the system.

The treemap visualizes the amount of AI-generated code in the architecture components within the system. This helps with understanding AI code hotspots at the architecture level. Clicking an architecture component in the treemap opens the Code Explorer for inspecting any identified AI-generated code in that component.

The Code Explorer contains an AI Governance tab where identified AI-generated code can be inspected.

AI Generated Code in New / Changed files

Provides an estimate of the relative amount of AI-generated code in files that were newly added or changed in the selected timeframe. This metric shows the extent to which AI Coding Assistants were used to produce new code or to change existing code in that timeframe.
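To illustrate how such a ratio could be computed, the sketch below derives the share of AI-generated lines in changed files. The function and its input format are hypothetical stand-ins, not Sigrid’s internal computation.

```python
def ai_code_share(changed_files):
    """Estimate the share of AI-generated code in new/changed files.

    `changed_files` maps file names to (ai_generated_lines, total_lines)
    tuples; this input format is a hypothetical simplification.
    """
    total = sum(t for _, t in changed_files.values())
    ai = sum(a for a, _ in changed_files.values())
    return ai / total if total else 0.0

# Example: 120 of the 400 lines in changed files were flagged as AI generated.
share = ai_code_share({
    "billing.py": (100, 250),
    "invoice.py": (20, 150),
})
print(f"{share:.0%}")  # 30%
```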

AI Technologies

Provides a quick summary of the kinds of AI technologies used in this system. Examples of reported AI technologies are the use (or training) of a Machine Learning model, invocations of Large Language Models, and the use of AI cloud infrastructure (such as Google Vertex AI, Azure AI Foundry, or AWS Bedrock). From this tile you can navigate to the AI Technologies page, which allows drilling down to the code level to inspect the use of AI in a system.

The AI Technologies page categorizes AI according to the following classification:

From the category overview it is possible to navigate to the code level to inspect the use of AI technologies in a system:

Technology Support

AI Generated Code Detection

Sigrid detects AI-generated code based on the distinct stylometric features that separate code generated by LLMs from code written by humans. Detection happens at the level of code units (typically methods or functions). Units that contain fewer than 5 lines of code are not evaluated, as they do not carry enough signal for reliable classification.
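The unit-level flow described above can be sketched as follows. The function names, input format, and the pluggable `classifier` are hypothetical illustrations of the filtering step, not Sigrid’s actual pipeline.

```python
MIN_UNIT_LOC = 5  # units below this threshold carry too little signal

def classify_units(units, classifier):
    """Classify code units as AI generated or human written.

    `units` is a list of (name, lines_of_code) pairs; `classifier` is
    any callable returning True for code it considers AI generated.
    Both are hypothetical stand-ins for the real detection model.
    """
    results = {}
    for name, lines in units:
        if len(lines) < MIN_UNIT_LOC:
            results[name] = "not evaluated"  # too short to classify reliably
        else:
            results[name] = "ai" if classifier(lines) else "human"
    return results

# Toy classifier: flag longer units, just to show the control flow.
units = [("getter", ["return x"]), ("parse", ["stmt"] * 12)]
print(classify_units(units, classifier=lambda lines: len(lines) > 10))
```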

Current detection accuracy ranges from 95% to 99%, meaning that at least 19 out of 20 code snippets are correctly classified. In a large system it is not unusual to have some incorrectly classified code snippets. The goal of Sigrid’s AI Governance is not to identify every single AI-generated code snippet correctly, but to provide high-level insights at the portfolio level with enough accuracy (>95%) to support AI Governance activities. Below are the currently supported programming languages and Large Language Models.
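To make the accuracy figures concrete, a quick back-of-the-envelope calculation (illustrative numbers, not measurements from a real system):

```python
def expected_misclassified(num_units, accuracy):
    """Expected number of misclassified units at a given accuracy."""
    return round(num_units * (1 - accuracy))

# In a system with 10,000 classified code units, 95% accuracy still
# implies roughly 500 misclassified snippets; 99% implies roughly 100.
print(expected_misclassified(10_000, 0.95))  # 500
print(expected_misclassified(10_000, 0.99))  # 100
```

This is why individual classifications should be read as indicators, while the aggregated portfolio-level numbers remain reliable.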

Programming Languages:

Supported Large Language Models:

Support for new models is added quarterly.

AI Technology Detection

Sigrid supports over 300 checks to identify AI technologies in Python, Java and C#.
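As a simplified illustration of what such a check could look for, the sketch below matches import statements against a few AI-related libraries. The pattern table and category names are hypothetical examples; Sigrid’s actual checks are far more extensive and are not necessarily regex-based.

```python
import re

# Hypothetical patterns: each maps a category to an import it might flag.
AI_TECH_PATTERNS = {
    "LLM invocation": re.compile(r"^\s*import\s+openai\b", re.M),
    "ML model training": re.compile(r"^\s*import\s+torch\b", re.M),
    "AI cloud infrastructure": re.compile(r"^\s*import\s+vertexai\b", re.M),
}

def detect_ai_technologies(source):
    """Return the categories whose patterns match the given source code."""
    return [name for name, pat in AI_TECH_PATTERNS.items() if pat.search(source)]

src = "import torch\nmodel = torch.nn.Linear(4, 2)\n"
print(detect_ai_technologies(src))  # ['ML model training']
```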
