Sigrid AI Explanations

Sigrid AI Explanations transform complex software insights into clear, actionable guidance. Whether you are evaluating system architecture, investigating security findings, or assessing code functionality, these explanations help you understand what matters and how to act on it.

Powered by advanced AI, Sigrid delivers two explanation types: AI Static Explanations and GenAI Explanations, each designed to enhance your software quality journey in distinct ways.

AI Static Explanations

The Sigrid platform performs comprehensive quality analysis across over 300 technologies, evaluating critical dimensions including Security, Reliability, Maintainability, and Architecture Quality. This results in tens of thousands of unique code quality checks. Sigrid’s AI Static Explanations ensure that every finding is presented with detailed context, technology-specific recommendations, and actionable mitigation strategies.

Where can I find AI Static Explanations?

AI Static Explanations are integrated throughout the Sigrid platform. The following capabilities currently include AI-powered insights:

Please note that Sigrid’s knowledge base is continuously expanding.

What do I need to do to use AI Static Explanations?

No action required. AI Static Explanations are enabled by default for all Sigrid users. Since these explanations are generated through our private infrastructure without sharing any client code or data externally, they are safe and ready to use immediately.

Does Sigrid share my code with external AI providers to generate the AI Static Explanations?

No. Sigrid’s AI Static Explanations are generated using SIG’s private LLM instance, ensuring full compliance with SIG’s security and data sovereignty policies. Your code and data remain protected and are not shared with external AI providers.

Is the advice in the AI Static Explanations specific to my source code?

No. AI Static Explanations are pre-generated per individual quality check and provide consistent, technology-specific guidance. While not customized to your specific codebase, each explanation shows examples in your code’s technology, making recommendations immediately relevant and actionable.

What data does Sigrid use to generate the AI Static Explanations?

Sigrid AI Static Explanations leverage:


GenAI Explanations

Sigrid GenAI Explanations bridge the gap between complex technical data and clear business insights. When facing unfamiliar code, architecture decisions, or vulnerability management questions, these explanations provide the context you need to move forward with confidence.

Designed for both technical professionals and business stakeholders, GenAI Explanations translate technical complexity into clarity, showing you in plain language what matters, why it matters, and what comes next.

Enabling GenAI Explanations

Sigrid GenAI Explanations require interaction with external AI models to generate on-demand insights. Because your data and code are transmitted to these models, SIG requires explicit consent and contractual alignment before activation.

We are committed to data governance and will never process your codebase or information without your explicit authorization. Enabling GenAI Explanations requires two components:

  1. AI Explanations license - An additional license that grants access to GenAI Explanations
  2. AI addendum - An addendum to your Sigrid contract, specific for AI usage

Discovering GenAI Explanations Across Sigrid Capabilities

Sigrid GenAI Explanations adapt to each capability they support, providing contextual insights tailored to your specific needs. Currently available across these key areas:

System Overview

Gain an intelligent architectural summary of your entire system directly from your System Overview dashboard. Click the AI button in the System Details tab to generate a comprehensive explanation that transforms raw file structure into business-aligned insights.

This explanation analyzes your system’s composition and delivers:

Perfect for teams new to a system, stakeholders seeking quick context, or anyone needing to understand a codebase at a glance.

Code Explorer

Navigate your codebase with confidence. Select any file in the Code Explorer and click the AI button to receive an intelligent analysis from two perspectives:

Functional Perspective - understanding for business stakeholders. See what the code does from a business standpoint:

Technical Perspective - understanding for developers and architects. Dive deep into how the code works:

These explanations are perfect for:

Architecture Quality

Understand what your Architecture Quality scores mean and how they impact your system’s long-term maintainability.

Navigate to the Architecture Explorer and select a component of interest. GenAI explanations are available for:

Summary - An overview of all architecture scores for the selected component. Select the Summary tab and click the AI button to receive an overview of your component’s architecture quality scores.

The Summary explanation delivers:

Individual System Properties - Detailed explanations for each architecture capability for the selected component. Select a specific system property tab and click the AI button next to the component name to receive a detailed explanation of its quality score.

The System Property explanation delivers:

Architecture quality explanations are perfect for:

For deeper context on architecture quality, see the Architecture Quality documentation.

Open Source Health

Assess and address risk in your software supply chain with detailed vulnerability intelligence. Navigate to the Open Source Health tab in Sigrid, select a dependency with vulnerabilities, and click the AI button next to the vulnerability to access a comprehensive analysis.

The explanation delivers:

Perfect for:

For deeper context on open source health, see the Open Source Health documentation.
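As background for the vulnerability context these explanations provide, open source vulnerabilities are commonly rated using CVSS v3 base scores, which map to standard severity ratings. The sketch below shows that standard mapping for illustration only; it is not Sigrid’s actual scoring logic.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its standard severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS v3 base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

For example, a dependency with a base score of 9.8 would be rated Critical and typically warrants prioritized remediation.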

Data Security and Privacy

Does Sigrid share my code with external AI models or providers?

Yes, and we do so responsibly. Your code and data are transmitted securely to AI models for explanation generation, following SIG’s enterprise-grade security and data sovereignty policies.

Is GenAI Explanation usage safe?

Absolutely. Multi-layered safeguards protect your intellectual property and data throughout the explanation generation process.

Which AI models does Sigrid use?

Sigrid currently leverages Anthropic’s Haiku and Sonnet models for GenAI Explanation generation. We use AWS Bedrock as our infrastructure provider. AWS Bedrock ensures:

How is my data handled during and after explanation generation?

Your code and data are securely transmitted to our model provider, AWS Bedrock. The models use your data only temporarily, to generate the requested explanation.
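For readers curious what invoking an Anthropic model through AWS Bedrock looks like in general, the sketch below builds a request body in the Anthropic Messages API format that Bedrock expects. This is a hypothetical illustration, not Sigrid’s internal implementation; the model ID, prompt, and parameters are assumptions.

```python
import json

# Example model ID only; actual model IDs and versions may differ.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_explanation_request(finding_summary: str, max_tokens: int = 512) -> str:
    """Build a JSON request body in the Anthropic Messages API format
    used by Bedrock's InvokeModel operation (illustrative only)."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": f"Explain this software quality finding:\n{finding_summary}",
            }
        ],
    }
    return json.dumps(body)

# In practice the body would be sent with the AWS SDK, e.g.:
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(modelId=MODEL_ID, body=build_explanation_request(...))
```

The key point for data handling is that the prompt (and therefore any code context) lives only inside this per-request body; Bedrock processes it to produce the response rather than persisting it as training data.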


Feedback and support

Your insights help us continually improve these explanations. If you have suggestions or questions regarding any AI explanation, please contact our support team.


Disclaimer

Sigrid’s AI explanations have been compiled with the greatest care and based on best-in-class data sources. While we strive to deliver accurate and reliable information, AI-generated content may contain inaccuracies or errors. Always verify critical information with authoritative sources before making any significant decisions based on these explanations.
