Static Application Security Testing (SAST)

SAST Best Practices & False Positive Management

The biggest challenge with SAST isn't finding vulnerabilities—it's managing the noise. Here's how to make SAST actionable.

The False Positive Problem

SAST tools often report issues that aren't real vulnerabilities:

| Finding Type   | Description                 | Action      |
|----------------|-----------------------------|-------------|
| True Positive  | Real vulnerability          | Fix it      |
| False Positive | Not actually exploitable    | Suppress it |
| True Negative  | No issue, correctly ignored | None        |
| False Negative | Missed vulnerability        | Dangerous!  |

A noisy SAST tool leads to:

  • Alert fatigue: Developers ignore all findings
  • Wasted time: Investigating non-issues
  • Slow pipelines: Reviewing hundreds of findings

Tuning SAST Rules

1. Start with High-Confidence Rules

# Semgrep: Start strict, expand later
semgrep --config p/security-audit \
        --severity ERROR \
        --exclude-rule "generic.secrets.gitleaks.*"
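
If this strict configuration gates pull requests, wire it into CI so that only ERROR-severity findings fail the build. A minimal sketch as a GitHub Actions job, assuming Semgrep is installed in the runner via pip; the workflow layout is illustrative, not a prescribed setup:

# Hypothetical CI job: fail only on high-confidence, high-severity findings
name: sast
on: [pull_request]
jobs:
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install semgrep
      - name: High-confidence Semgrep scan
        run: |
          semgrep --config p/security-audit \
                  --severity ERROR \
                  --error    # exit non-zero when findings remain, failing the job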

2. Create Baseline Files

Track known findings so developers only review what a change introduces. Export a snapshot for your own records, and use Semgrep's diff-aware scanning to report only findings that are new relative to a git commit:

# Snapshot current findings for tracking (first run)
semgrep --config auto --json > .semgrep-baseline.json

# Report only findings introduced since the given commit (subsequent runs)
semgrep --config auto --baseline-commit HEAD~1
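
In CI, the baseline commit is typically the base of the pull request rather than HEAD~1. A minimal sketch as GitHub Actions steps, assuming Semgrep is already installed in the job and a full-history checkout so the base commit is resolvable; the step layout is illustrative:

# Fragment of a hypothetical pull-request job
- uses: actions/checkout@v4
  with:
    fetch-depth: 0    # full history so the baseline commit exists locally
- name: Diff-aware Semgrep scan
  run: |
    semgrep --config auto \
            --baseline-commit "origin/${{ github.base_ref }}"   # only findings new in this PR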

3. Customize Rule Severity

# custom-rules.yml (fragment: in Semgrep, severity is declared inside each rule definition)
rules:
  - id: hardcoded-password
    severity: ERROR  # Treat as blocking in CI

  - id: missing-csrf-token
    severity: WARNING  # Report but don't block
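
Severity only takes effect as part of a complete rule, which also needs languages, a message, and a pattern. A minimal sketch of one such rule; the id, pattern, and regex below are illustrative assumptions, not a registry rule:

# custom-rules.yml: a complete, minimal rule with its severity set
rules:
  - id: hardcoded-password
    languages: [python]
    severity: ERROR
    message: Possible hardcoded password; load secrets from the environment or a vault.
    patterns:
      - pattern: $VAR = "..."
      - metavariable-regex:
          metavariable: $VAR
          regex: (?i).*(password|passwd|pwd).*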

Suppressing False Positives

Inline Suppression (When Necessary)

# Place the nosemgrep comment on the line of the finding and name the rule ID
TEST_PASSWORD = "test123"  # nosemgrep: hardcoded-password

# Document the reason nearby so the suppression survives review
# Value is fetched from Vault at runtime, not hardcoded
password = get_from_vault()  # nosemgrep: hardcoded-password

File-Level Exclusions

# .semgrepignore (gitignore-style patterns, one per line)
tests/
*.test.js
migrations/
vendor/

Pattern-Based Suppression

# Refine the rule so parameterized queries never match (fragment of a rule definition)
rules:
  - id: sql-injection
    patterns:
      - pattern: cursor.execute(...)
      - pattern-not: cursor.execute($QUERY, $PARAMS)  # Parameterized call = safe

Triage Workflow

Establish a consistent process:

┌─────────────┐
│ New Finding │
└──────┬──────┘
       │
       ▼
┌─────────────────┐     Yes    ┌──────────────┐
│ Is it a real    │ ─────────▶ │ Create Issue │
│ vulnerability?  │            │ & Fix        │
└────────┬────────┘            └──────────────┘
         │ No
         ▼
┌─────────────────┐     Yes    ┌──────────────┐
│ Will it recur   │ ─────────▶ │ Update Rule  │
│ frequently?     │            │ or Baseline  │
└────────┬────────┘            └──────────────┘
         │ No
         ▼
┌─────────────────┐
│ Add inline      │
│ suppression     │
└─────────────────┘
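
Whatever tooling backs this workflow, record enough context that each suppression can be re-evaluated later. A hypothetical triage log entry kept alongside the code; the file format and field names are illustrative, not a Semgrep feature:

# triage-log.yml (hypothetical)
- finding: hardcoded-password
  location: tests/fixtures/users.py:12
  decision: suppress            # fix | suppress | tune-rule
  reason: Test fixture; the value never reaches production.
  reviewed_by: security-champion
  revisit: next quarterly suppression review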

Metrics to Track

| Metric                 | Target        | Why                |
|------------------------|---------------|--------------------|
| False Positive Rate    | < 20%         | Trust in tool      |
| Mean Time to Triage    | < 1 hour      | Developer velocity |
| Mean Time to Remediate | < 1 week      | Security posture   |
| Findings per 1000 LOC  | Trending down | Improvement        |
| Suppression Rate       | < 30%         | Rule quality       |
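
A simple way to keep these rates honest is to record the raw counts each period and derive the rates from them. A hypothetical monthly snapshot; the file and field names are illustrative:

# sast-metrics/2024-06.yml (hypothetical)
findings_reported: 120
true_positives: 95
false_positives: 25      # false positive rate = 25 / 120 ≈ 21% (just above target)
suppressions_added: 30   # suppression rate = 30 / 120 = 25%
median_triage_minutes: 40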

Best Practices Checklist

Tool Configuration

  • Start with curated rule sets (e.g., p/security-audit)
  • Exclude test files and generated code
  • Set appropriate severity thresholds
  • Enable incremental scanning for PRs

Process

  • Assign security champions to triage
  • Document suppression decisions
  • Review suppressions quarterly
  • Track trends over time

Developer Experience

  • Provide clear remediation guidance
  • Link to security training
  • Enable IDE integration for instant feedback
  • Celebrate security fixes

Common SAST Anti-Patterns

| Anti-Pattern          | Problem                 | Solution                     |
|-----------------------|-------------------------|------------------------------|
| Scan everything       | Noise overwhelms signal | Start with critical paths    |
| Block on all findings | Developers disable tool | Block only on critical/high  |
| No baseline           | Re-review old findings  | Create and maintain baseline |
| No ownership          | Findings rot            | Assign to code owners        |
| Set and forget        | Rules become stale      | Review rules quarterly       |

In the next module, we'll tackle dependency and container security with SCA tools.
