Security Methodology

Security isn't a feature.
It's the foundation.

Every Blueprint Labs engagement begins with a security scoping session — before any workflow is designed, any tool is selected, or any data is touched.

The question is not whether AI can be used securely in your operation. It can. The question is whether the specific workflow, using the specific data, with the specific tools, has been properly scoped. That scoping is where most deployments fail — and where we start.

Step One of Every Engagement

Define what AI can — and cannot — touch.

Before any workflow is built, we classify your operational data into three categories. Every workflow built by Blueprint Labs operates strictly within the approved boundary.


Approved for AI Use

Data that can safely flow through AI workflows under proper access controls and approved tool providers.

  • General SOPs and policy documentation
  • Anonymized operational summaries
  • Public-facing product and service information
  • Structured operational metrics (aggregated)
  • Staff-facing training and reference materials

Restricted — Controlled Use Only

Data that may be used in AI workflows under strict controls: on-premise or private cloud only, role-scoped access, human review required before any output leaves the system.

  • Vendor contracts and pricing agreements
  • Internal financial summaries and cost data
  • Personnel records and HR documentation
  • Customer-identifiable information
  • Engineering drawings and specifications

Out of Scope — Not for AI Use

Data that is excluded from AI workflows in our standard engagements. Any use requires explicit legal and security review beyond Blueprint Labs' standard scope.

  • Regulated personal data (HIPAA, PII with legal exposure)
  • Classified or export-controlled materials
  • Active litigation or legal hold documents
  • Unpublished financial information subject to disclosure rules
  • Data governed by customer confidentiality agreements

This classification is documented in writing at the start of every engagement. It is reviewed with your operations, IT, and legal stakeholders before any workflow design begins. If a workflow cannot be scoped within an approved category, we say so — and we do not build it.
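The three-category boundary above can be encoded as a simple gate that every document passes before reaching any AI component. This is an illustrative sketch, not Blueprint Labs' implementation; the category names and the `private_deployment` flag are assumptions for the example.

```python
from enum import Enum

class DataCategory(Enum):
    """Illustrative data-boundary categories (names are hypothetical)."""
    APPROVED = "approved"      # may flow through approved AI tools
    RESTRICTED = "restricted"  # private/on-prem only, human review required
    EXCLUDED = "excluded"      # never enters an AI workflow

def may_enter_workflow(category: DataCategory, private_deployment: bool) -> bool:
    """Gate a document before it reaches any AI component."""
    if category is DataCategory.EXCLUDED:
        return False                 # categorically out of scope
    if category is DataCategory.RESTRICTED:
        return private_deployment    # restricted data: private infrastructure only
    return True                      # approved data may proceed
```

The point of encoding the boundary in code is that it fails closed: anything not explicitly approved is rejected before any model sees it.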

How We Build

Six principles that govern every workflow we deploy.

01

Minimum Necessary Data

Every AI workflow is scoped to the minimum data required to accomplish the specific task. We do not build workflows that request broader access than is necessary — even when broader access would make the workflow more capable.
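In practice, minimum-necessary scoping means stripping records down to an allowlist of fields before they reach the model. A minimal sketch, with hypothetical field names for an imagined scheduling workflow:

```python
# Hypothetical example: a scheduling workflow was scoped to these fields,
# so everything else is stripped before any record reaches the model.
ALLOWED_FIELDS = {"shift_date", "role", "site", "status"}

def minimize(record: dict) -> dict:
    """Return only the fields the workflow is approved to use."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

An allowlist (rather than a blocklist) ensures that new fields added to a source system are excluded by default until explicitly approved.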

02

No Model Training on Client Data

We use approved enterprise AI providers whose terms explicitly prohibit training on customer data. Consumer AI products and free-tier tools are not used in any production workflow. This is non-negotiable.

03

Human-in-the-Loop for Every Output

AI outputs that result in an action — an email sent, a document filed, a record updated — require explicit human review and approval before execution. No autonomous action is taken without a responsible party in the loop.
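The approval gate can be sketched as a function that only executes an action when a human reviewer signs off. The function names here are illustrative, not a real API:

```python
from typing import Callable

def execute_action(draft: str,
                   approve: Callable[[str], bool],
                   send: Callable[[str], None]) -> bool:
    """Gate an AI-drafted action behind explicit human approval."""
    if approve(draft):   # a human reviews the AI draft
        send(draft)      # the action (send, file, update) runs only on approval
        return True
    return False         # rejected drafts are never executed
```

Structurally, the action-taking code path is unreachable without a human decision, which is stronger than relying on process alone.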

04

Role-Scoped Access

Access to AI-assisted workflows is governed by the same access control principles as the underlying data. If a staff member doesn't have access to a document, the AI assistant for that workflow doesn't have access either.
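One common way to enforce this in a retrieval workflow is to filter the document corpus by the requesting user's permissions before the assistant sees anything. A minimal sketch, assuming a hypothetical ACL shape of document ID mapped to the set of users who may read it:

```python
def retrievable_docs(user: str, corpus: dict[str, set[str]]) -> list[str]:
    """Filter a retrieval corpus to documents the requesting user may read.

    corpus maps doc_id -> set of authorized users (hypothetical shape).
    The assistant inherits the user's permissions: it never retrieves a
    document the requesting user could not open themselves.
    """
    return [doc_id for doc_id, acl in corpus.items() if user in acl]
```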

05

Controlled Rollout

New workflows deploy to a small group of operational users first. We monitor for unexpected behavior, data handling issues, and user errors before expanding to the broader team. No organization-wide rollouts without a pilot phase.

06

Documented Audit Trail

Every production workflow includes logging of AI queries, outputs reviewed, and human approval decisions. Operational leads can review what the AI processed and how outputs were used — without relying on staff memory.
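The shape of such an audit record can be sketched as follows; the field names are illustrative, and a real sink would be durable storage rather than an in-memory list:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One logged AI interaction (field names are illustrative)."""
    query: str       # what was sent to the model
    output: str      # what the model returned
    reviewer: str    # who reviewed the output
    decision: str    # e.g. "approved" or "rejected"
    timestamp: str   # UTC, ISO 8601

def log_interaction(query: str, output: str, reviewer: str,
                    decision: str, sink: list) -> None:
    """Append an audit record to a sink (a list stands in for real storage)."""
    sink.append(asdict(AuditRecord(
        query, output, reviewer, decision,
        datetime.now(timezone.utc).isoformat(),
    )))
```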

For the Highest-Sensitivity Workflows

When your data cannot leave your environment.

Most enterprise AI providers offer strong data protection. But for some workflows — particularly those involving proprietary supplier agreements, confidential operational data, or material that triggers contractual restrictions — sending prompts and documents to any external API may not be acceptable.

In these cases, Blueprint Labs can design workflows around self-hosted open-weight models deployed inside a customer-controlled environment. Inference runs on your infrastructure. Documents and prompts do not leave your network.

This approach requires proper infrastructure — typically a private cloud environment or on-premise GPU compute — and is evaluated during the Prioritize stage when data classification indicates restricted or highly sensitive scope.

Important: model choice is one layer

Self-hosting a model means inference stays local — but that alone does not make a system secure. The full pipeline must be private: where inference runs, where embeddings are stored, what gets logged, whether any external APIs are called elsewhere in the workflow, and who has access. Blueprint Labs designs the controls around the model, not just the model itself.
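One concrete control implied here is verifying that every endpoint in the workflow chain resolves to internal infrastructure. A minimal sketch, with hypothetical internal hostnames:

```python
from urllib.parse import urlparse

# Hypothetical internal hosts covering the full pipeline: inference,
# the vector store, and logging. Anything else is an external call.
INTERNAL_HOSTS = {"inference.corp.internal", "vectors.corp.internal",
                  "logs.corp.internal"}

def external_endpoints(workflow_urls: list[str]) -> list[str]:
    """Return every endpoint in the workflow chain that leaves the network."""
    return [u for u in workflow_urls
            if urlparse(u).hostname not in INTERNAL_HOSTS]
```

A non-empty result means some step in the chain (an embedding call, a webhook, a telemetry upload) would send data outside the environment, defeating the purpose of local inference.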

When this option is appropriate

  • Data classified as restricted under your data boundary framework
  • Contractual obligations that restrict third-party data processing
  • Internal knowledge, document review, or briefing workflows where frontier-model performance is not required
  • Buyers whose primary concern is third-party exposure or data sovereignty rather than maximum model capability

What "private deployment" requires

  • Private cloud or on-premise compute capable of running the selected model
  • Fully private inference stack with no external API calls in the workflow chain
  • Internal storage for embeddings, logs, and outputs
  • Access controls, telemetry review, and audit logging consistent with broader security posture

"We do not have to send your internal documents into a public AI service to make AI useful in your operation."

Tool Standards

We only use business-grade tools.

Consumer AI products are not used in operational workflows. Every tool is evaluated against four criteria: data residency, enterprise data protection agreements, access control, and logging capability.

AI Platform

Claude for Enterprise

Anthropic's enterprise tier with no training on customer data and SOC 2 compliance.

AI Platform

Azure OpenAI

Microsoft-hosted models with HIPAA eligibility, enterprise data protection, and US data residency.

AI Platform

Google Gemini Enterprise

Google Cloud enterprise tier with customer data isolation and compliance commitments.

Deployment

Self-Hosted Open Models

For restricted-category workflows, open-weight models deployed on customer-controlled infrastructure. Inference and data never leave your environment.

Workflow Automation

Microsoft 365 + Copilot

For organizations already in the Microsoft ecosystem with existing compliance posture.

Data Integration

Read-Only Connectors

All database integrations use read-only connectors. AI components have no write permissions.

Authentication

SSO / Identity Integration

AI workflow access is governed through your existing identity provider, not standalone credentials.

Logging

Audit-Grade Logging

All AI interactions are logged with query, output, reviewer identity, and decision outcome.

Tool selection is determined by your environment and requirements, not a preferred vendor relationship. Recommendations are made during the Prioritize stage with written rationale.

Hard Limits

What we will not build.

Operational discipline requires knowing what to decline. Blueprint Labs does not build workflows that fall outside our security methodology — even if the client requests them.

If a requested workflow cannot be scoped safely, we say so plainly and explain why. We will not design around a security concern for the sake of a project.

Autonomous decision-making without human review
AI can draft, classify, summarize, and flag — but it does not decide, send, or approve without a human in the loop.
Consumer-grade AI tools for operational use
Free-tier products with no enterprise data protection agreements are not used in production workflows under any circumstances.
Workflows built before data classification is complete
Implementation does not begin until the data classification document is finalized and approved by the appropriate stakeholders.
Workflows involving excluded data categories
Data in the excluded category requires legal and security review beyond Blueprint Labs' standard scope. We will refer appropriately.

Reference Frameworks

Standards that inform our methodology.

Blueprint Labs does not claim certification to these frameworks. We follow them as operational references when designing security controls, assessing risk, and defining data handling practices.

NIST

AI Risk Management Framework

The NIST AI RMF provides a structured approach to managing risk in AI systems. We use it to guide how we assess, govern, map, measure, and manage risk in every workflow we deploy.

NIST AI RMF →
OWASP

LLM Top 10 Security Risks

OWASP's LLM Top 10 defines the most critical security risks in LLM-based applications — including prompt injection, data leakage, and insecure output handling. We reference it during workflow design and review.

OWASP LLM Top 10 →

Questions about our methodology

Have specific security requirements to discuss?

The Workflow Audit includes a direct security scoping conversation before any implementation begins. If you have specific data handling requirements, compliance obligations, or IT constraints, those are exactly the conversations we start with.

Request a Workflow Audit · View the Engagement Model