Bridgit

Risk Assessment Policy

Bridgit Platform (askbridgit.ca)
Version 1.0 | Effective: April 29, 2026 | Next Review: October 29, 2026

1. Purpose and Scope

This policy establishes the framework for identifying, assessing, treating, and monitoring information security risks affecting the Bridgit platform, its infrastructure, and the data of its users and their organizations.

Applies to: All personnel with access to Bridgit production systems, source code, or cloud infrastructure.

In scope: The Bridgit SaaS application, all cloud infrastructure (GCP Cloud Run, Cloud SQL, GCS, Redis, Secret Manager), third-party integrations (AI providers, Stripe, Google OAuth, Tavily, Apify), source code, and all data processed through the platform.

Compliance mapping: ISO 27001 A.8.1, A.14, A.17; SOC 2 CC3, CC5, CC8, CC9; GDPR Art. 35.

2. Risk Management Framework

Methodology: Custom, asset-based risk management approach scaled to a small SaaS team operating on managed cloud infrastructure.

Risk management lifecycle: identify, assess, treat, monitor; repeated each semi-annual review cycle.

Risk ownership: the Platform Administrator owns all identified risks and is responsible for treatment decisions.

3. Risk Appetite

Data security and privacy: Zero tolerance for cross-tenant data exposure, unauthorized access to personal data, or credential leakage. Any risk in this category requires immediate treatment.

Operational risk: Moderate tolerance. Acceptable: brief Cloud Run scaling delays, non-critical feature outages under 4 hours. Unacceptable: data loss, database corruption, extended outage exceeding 8 hours.

Third-party risk: Moderate tolerance. AI provider outages accepted as transient. AI provider data breaches treated as P2 incidents.

Compliance risk: Zero tolerance for GDPR or PIPEDA violations.

Risk acceptance above the defined appetite requires documented justification and review at the next semi-annual cycle.

4. Asset Inventory and Classification

Asset Categories

Infrastructure: GCP Cloud Run services, Cloud SQL, GCS, Redis, Secret Manager.

Data: activity_instances table, ai_usage_logs, OAuth tokens, user and organization personal data.

Services: AI providers (OpenAI, Anthropic, Google, Cohere), Stripe, Google OAuth, Tavily, Apify.

Classification

Critical: Cloud SQL database, GCP Secret Manager, activity_instances table, source code repository.

High: Cloud Run services, OAuth tokens, AI provider API keys, Stripe integration.

Medium: Redis, GCS file storage, ai_usage_logs.

Low: Frontend static assets, development/staging environments, documentation.

Ownership

All assets are owned by the Platform Administrator, who maintains the inventory, ensures classification is current, and approves access to Critical and High assets.
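The inventory entries above can be modelled as simple records. The following is an illustrative sketch only; the type and field names (AssetRecord, requiresApproval) are hypothetical and not taken from the Bridgit codebase.

```typescript
// Hypothetical shape of an asset inventory record; names are illustrative.
type Classification = "Critical" | "High" | "Medium" | "Low";

interface AssetRecord {
  name: string;                                       // e.g. "Cloud SQL database"
  category: "Infrastructure" | "Data" | "Services";
  classification: Classification;
  owner: string;                                      // Platform Administrator at current scale
  lastReviewed: string;                               // ISO date of last semi-annual review
}

const inventory: AssetRecord[] = [
  { name: "Cloud SQL database", category: "Infrastructure", classification: "Critical",
    owner: "Platform Administrator", lastReviewed: "2026-04-29" },
  { name: "Redis", category: "Infrastructure", classification: "Medium",
    owner: "Platform Administrator", lastReviewed: "2026-04-29" },
];

// Per the policy, access to Critical and High assets requires approval.
function requiresApproval(asset: AssetRecord): boolean {
  return asset.classification === "Critical" || asset.classification === "High";
}
```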

5. Risk Identification

Threat Identification

Sources: npm audit findings, incidents recorded under the Incident Response Policy, and the semi-annual review of the threat categories below.

Platform-specific threat categories: cross-tenant data exposure, credential and API key leakage, dependency and supply chain compromise (npm), AI provider data handling incidents, automated scanning and abuse of public API endpoints.

Vulnerability Assessment

npm audit findings and code review observations are scored using the matrix in Section 6 and treated according to the resulting band.

6. Risk Assessment and Scoring

Likelihood Scale (1-5)

1 Very Low: Less than once in 5 years. Example: GCP region-wide outage exceeding 24 hours.
2 Low: Once in 2-5 years. Example: npm supply chain compromise on a direct dependency.
3 Medium: Once per year. Example: API key accidentally logged in error output.
4 High: Multiple times per year. Example: npm audit High severity finding in transitive dependency.
5 Very High: Expected frequently. Example: automated bot scanning of public API endpoints.

Impact Scale (1-5)

1 Negligible: No data exposure, no disruption. Example: cosmetic UI bug.
2 Minor: Non-personal data, disruption under 1 hour. Example: staging briefly inaccessible.
3 Moderate: Small-scale personal data exposure, 1-4 hours disruption, possible regulatory inquiry. Example: one org's data visible to another briefly.
4 Major: Multi-user data exposure, 4-24 hours disruption, regulatory notification required, financial impact $5K-$50K. Example: database backup exposed publicly.
5 Catastrophic: Large-scale breach, outage exceeding 24 hours, enforcement action, financial impact exceeding $50K. Example: full database compromise.

Risk Matrix

Score = Likelihood × Impact (1-25)

Low (1-4): Accept with monitoring. Review at next semi-annual cycle.
Medium (5-9): Treatment plan within 90 days.
High (10-15): Treatment plan within 30 days. Platform Administrator attention.
Critical (16-25): Immediate action within 7 days. Escalate per Incident Response Policy.
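The scoring and banding rules above can be expressed directly in code. This is a minimal sketch assuming the bands defined in this section; the function name riskBand is illustrative.

```typescript
// Score = Likelihood × Impact, mapped to the treatment bands above.
type Band = "Low" | "Medium" | "High" | "Critical";

function riskBand(likelihood: number, impact: number): { score: number; band: Band } {
  if (likelihood < 1 || likelihood > 5 || impact < 1 || impact > 5) {
    throw new RangeError("likelihood and impact must each be 1-5");
  }
  const score = likelihood * impact;
  let band: Band;
  if (score <= 4) band = "Low";          // accept with monitoring
  else if (score <= 9) band = "Medium";  // treatment plan within 90 days
  else if (score <= 15) band = "High";   // treatment plan within 30 days
  else band = "Critical";                // immediate action within 7 days
  return { score, band };
}

// Example: an API key logged in error output (likelihood 3) causing a
// small-scale personal data exposure (impact 3) scores 9, landing in Medium.
```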

7. Risk Treatment and Mitigation

Treatment Options

Mitigate: apply controls to reduce likelihood or impact. Accept: permitted within the appetite defined in Section 3; acceptance above appetite requires documented justification. Transfer: rely on managed-cloud and provider contractual protections (GCP, Stripe). Avoid: remove or decline the feature or integration creating the risk.

Control Selection

Controls are selected based on risk priority, effectiveness, implementation cost, and compliance requirements.

Technical controls: HTTPS, AES-256-GCM encryption, JWT with Redis blacklisting, organization_id isolation, input validation, RBAC middleware, GCP Secret Manager, automated testing and linting.
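The organization_id isolation control can be sketched as follows. This is an illustrative sketch, not the Bridgit implementation: the names Session, scopeToTenant, and the row shape are assumptions, and a real implementation would push the predicate into SQL (WHERE organization_id = $1) rather than filter in memory.

```typescript
// Hypothetical session established after authentication.
interface Session {
  userId: string;
  organizationId: string;
}

// Any tenant-scoped row must carry its owning organization.
interface TenantScopedRow {
  organization_id: string;
  [key: string]: unknown;
}

// Force every tenant-scoped read through this helper so no code path
// can return another organization's rows to the caller.
function scopeToTenant<T extends TenantScopedRow>(session: Session, rows: T[]): T[] {
  return rows.filter((row) => row.organization_id === session.organizationId);
}
```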

Administrative controls: code review, staging validation, deployment guide with mandatory backup, Incident Response Policy, semi-annual policy review.

Residual Risk

After controls are applied, residual risk is re-scored using the same matrix. Residual risk in the High or Critical bands requires documented justification, additional compensating controls, and accelerated review.

8. Fraud Risk

Relevant fraud risks: payment and subscription fraud via the Stripe integration, abuse of AI usage driving cost inflation, account takeover of user or administrator accounts.

Mitigating controls: payment processing delegated to Stripe, ai_usage_logs monitoring, Google OAuth authentication, RBAC middleware, and mandatory code review.

No formal whistleblower mechanism is maintained at the current scale.

9. Change Management

All changes follow:

  1. Local development in Docker containers
  2. Automated testing (npm run test:quick) and linting (npm run lint)
  3. Staging deployment via GitHub Actions to staging.askbridgit.ca
  4. Staging validation
  5. Production deployment via pull request merge to main
  6. Database backup before every production push (scripts/prepare-deployment.sh)
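The mandatory pre-deployment backup in step 6 lives in scripts/prepare-deployment.sh. A minimal sketch of the backup step is shown below; it builds a timestamped pg_dump command rather than executing it, and the function name and file naming convention are illustrative assumptions, not the script's actual contents.

```typescript
// Build a timestamped pg_dump command for the pre-deployment backup.
// Colons and dots in the ISO timestamp are replaced so the name is
// filesystem-safe.
function backupCommand(dbName: string, now: Date = new Date()): string {
  const stamp = now.toISOString().replace(/[:.]/g, "-");
  return `pg_dump --format=custom --file=backup-${dbName}-${stamp}.dump ${dbName}`;
}
```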

Change categories: standard (full pipeline above) and emergency (expedited fix with post-hoc staging validation and review).

10. Business Continuity and Disaster Recovery

Recovery objectives:

Cloud Run API: RTO 15 minutes (auto-recovery), RPO 0 (stateless).
Cloud SQL database: RTO 1 hour; RPO 24 hours (automated backups), improved to the most recent deployment by the manual pg_dump taken before each production push.
GCS file storage: RTO 0, RPO 0 (regionally redundant).
Redis: RTO 5 minutes, RPO session data only (users re-authenticate).

Disaster recovery: restore database from Cloud SQL backup or pg_dump, redeploy application from known-good git commit via GitHub Actions, rotate all secrets if compromise suspected.

BCP/DR testing frequency: semi-annually.

11. Security in Development

Security integrated at each SDLC phase: design (data classification, organization_id isolation), development (linting, input validation, code review), testing (automated tests, npm audit), deployment (staging validation, pre-deployment backup, GCP Secret Manager for credentials).

Environment separation: local development, staging, production. Secrets differ per environment.

12. Data Protection Impact Assessment

DPIA required for: new or materially changed processing of personal data, onboarding or replacing a third-party processor (in particular AI providers), and significant changes to data flows through the platform.

Current DPIA-relevant integrations: OpenAI, Anthropic, Google, Cohere (AI), Stripe (billing), Google OAuth (authentication).

Process: screening, data flow mapping, necessity assessment, risk identification, mitigation, documentation as an activity instance in the platform, and review on material change or semi-annually.

13. Policy Administration

This policy is maintained alongside the platform source code and is subject to version control. Changes require review and re-approval.