The AI Policy Architect: From Natural Language to Validated Configuration
Writing policy configuration is tedious. Remembering the schema, the valid enum values, the required fields—it’s YAML archaeology.
Hexarch’s AI Policy Architect changes the workflow: describe what you want in plain language, and the system generates a validated configuration. You review and approve; the machine handles the boilerplate.
The Service Architecture
The GeminiService class wraps the Google Generative AI SDK:
import { GoogleGenAI, Type } from '@google/genai';
import { z } from 'zod';

const PolicySchema = z.object({
  name: z.string().min(3),
  type: z.string().min(1),
  scope: z.string().min(1),
  phase: z.string().min(1),
  failureMode: z.string().min(1),
  shortCircuit: z.boolean().optional().default(false),
  config: z.record(z.any()).optional().default({}),
});

async function generatePolicy(prompt: string) {
  if (!prompt || prompt.trim().length === 0) throw new Error('Policy prompt cannot be empty');
  if (prompt.length > 5000) throw new Error('Policy prompt exceeds maximum length');

  const ai = new GoogleGenAI({ apiKey: process.env.API_KEY });
  const response = await ai.models.generateContent({
    model: 'gemini-3-pro-preview',
    contents: `Generate a JSON configuration for a Java-based API Gateway policy.\nUser request: ${prompt}`,
    config: {
      responseMimeType: 'application/json',
      responseSchema: {
        type: Type.OBJECT,
        properties: {
          name: { type: Type.STRING },
          type: { type: Type.STRING },
          scope: { type: Type.STRING },
          phase: { type: Type.STRING },
          failureMode: { type: Type.STRING },
          shortCircuit: { type: Type.BOOLEAN },
          config: { type: Type.OBJECT },
        },
        required: ['name', 'type', 'config', 'scope', 'phase', 'failureMode'],
      },
    },
  });

  // JSON.parse throws on an empty or malformed body; treat that as a failed
  // generation rather than letting the exception escape past the fallback.
  let raw: unknown;
  try {
    raw = JSON.parse(response.text ?? '');
  } catch {
    return FALLBACK_POLICY;
  }

  const parsed = PolicySchema.safeParse(raw);
  return parsed.success ? parsed.data : FALLBACK_POLICY;
}
Three safety layers: input validation, structured output, and schema validation on the response.
The Policy Schema
Zod enforces the contract (shape + required fields):
const PolicySchema = z.object({
  name: z.string().min(3, 'Policy name must be at least 3 characters'),
  type: z.string().min(1, 'Policy type is required'),
  scope: z.string().min(1, 'Policy scope is required'),
  phase: z.string().min(1, 'Policy phase is required'),
  failureMode: z.string().min(1, 'Failure mode is required'),
  shortCircuit: z.boolean().optional().default(false),
  config: z.record(z.any()).optional().default({})
});
If the AI generates something that doesn’t match this schema, safeParse returns success: false, and the system returns FALLBACK_POLICY instead. Allowed values (types/scopes/phases/failure modes) are described in the generation prompt and then mapped/cast into the dashboard’s PolicyType / PolicyScope / PolicyPhase / FailureMode enums.
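That mapping step can be made concrete with a small normalizer. A minimal sketch: only PolicyType.SECURITY appears in the real code, so the other enum members and the normalizeKey/toPolicyType helpers below are illustrative assumptions, not the dashboard's actual code.

```typescript
// Illustrative sketch of mapping AI output strings onto dashboard enums.
// Enum members beyond SECURITY, plus normalizeKey and toPolicyType, are
// assumptions for this example.
enum PolicyType {
  SECURITY = 'SECURITY',
  TRAFFIC_CONTROL = 'TRAFFIC_CONTROL',
  TRANSFORMATION = 'TRANSFORMATION',
}

// "Traffic Control", "traffic-control", and "TRAFFIC_CONTROL" all
// normalize to the same enum key.
function normalizeKey(value: string): string {
  return value.trim().toUpperCase().replace(/[\s-]+/g, '_');
}

function toPolicyType(value: string, fallback = PolicyType.SECURITY): PolicyType {
  const key = normalizeKey(value);
  return (Object.values(PolicyType) as string[]).includes(key)
    ? (key as PolicyType)
    : fallback; // unrecognized strings degrade to the conservative default
}
```

With a lookup like this in place, a bare `as PolicyType` cast stops being able to smuggle an out-of-enum value into the dashboard.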
The Fallback Policy
When generation fails—network error, malformed response, schema violation—the system returns a safe default:
const FALLBACK_POLICY: PolicyConfig = {
  name: 'Service Temporarily Unavailable',
  type: 'SECURITY',
  scope: 'GLOBAL',
  phase: 'PRE_REQUEST',
  failureMode: 'FAIL_CLOSED',
  shortCircuit: false,
  config: {
    note: 'AI service is temporarily unavailable. Policy generated with safe defaults.'
  }
};
This is intentionally conservative: FAIL_CLOSED means “deny on error.” The operator sees a placeholder policy they can edit, not a broken state.
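The parse-validate-fallback path can be sketched without the zod dependency. `parsePolicyOrFallback` below is a hand-rolled stand-in for `PolicySchema.safeParse`, written to make the failure paths concrete; it is not the service's actual code.

```typescript
interface PolicyConfig {
  name: string;
  type: string;
  scope: string;
  phase: string;
  failureMode: string;
  shortCircuit: boolean;
  config: Record<string, unknown>;
}

const FALLBACK_POLICY: PolicyConfig = {
  name: 'Service Temporarily Unavailable',
  type: 'SECURITY',
  scope: 'GLOBAL',
  phase: 'PRE_REQUEST',
  failureMode: 'FAIL_CLOSED',
  shortCircuit: false,
  config: { note: 'AI service is temporarily unavailable. Policy generated with safe defaults.' },
};

// Hand-rolled stand-in for PolicySchema.safeParse: every failure path
// (malformed JSON, missing field, too-short name) lands on the fallback.
function parsePolicyOrFallback(raw: string): PolicyConfig {
  let candidate: any;
  try {
    candidate = JSON.parse(raw);
  } catch {
    return FALLBACK_POLICY; // malformed response body
  }
  const requiredStrings = ['name', 'type', 'scope', 'phase', 'failureMode'];
  const valid =
    candidate !== null &&
    typeof candidate === 'object' &&
    requiredStrings.every((k) => typeof candidate[k] === 'string' && candidate[k].length > 0) &&
    candidate.name.length >= 3;
  if (!valid) return FALLBACK_POLICY; // schema violation
  return {
    ...candidate,
    shortCircuit: candidate.shortCircuit ?? false, // same defaults as the zod schema
    config: candidate.config ?? {},
  };
}
```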
The UI Integration
In Policies.tsx, the AI Policy Architect appears as a text input:
const handleGenerate = async () => {
  if (!prompt.trim()) return;
  setIsGenerating(true);
  try {
    const newPolicyData = await gemini.generatePolicy(prompt);
    const newPolicy: Policy = {
      id: Math.random().toString(36).slice(2, 11), // substr is deprecated; slice is equivalent here
      name: newPolicyData.name || 'New Policy',
      type: (newPolicyData.type as PolicyType) || PolicyType.SECURITY,
      description: `AI Generated: ${prompt}`,
      enabled: true,
      scope: (newPolicyData.scope as PolicyScope) || PolicyScope.GLOBAL,
      phase: (newPolicyData.phase as PolicyPhase) || PolicyPhase.PRE_REQUEST,
      failureMode: (newPolicyData.failureMode as FailureMode) || FailureMode.FAIL_CLOSED,
      order: policies.length,
      shortCircuit: newPolicyData.shortCircuit || false,
      config: newPolicyData.config || {},
    };
    setPolicies([newPolicy, ...policies]);
    setPrompt('');
  } catch (err) {
    console.error('Policy generation failed', err); // surface prompt-validation errors instead of an unhandled rejection
  } finally {
    setIsGenerating(false);
  }
};
The generated policy appears in the grid with a “Deploy Filter” button. It’s not deployed automatically—the human reviews first.
Example Prompts
The system handles natural language requests like:
Rate limiting:
“Create a spike arrest policy that allows 1000 requests per minute with a 50-message burst buffer, fail-closed on excess”
Generated output:
{
  "name": "spike-arrest-mobile",
  "type": "Traffic Control",
  "scope": "API",
  "phase": "Pre-Request",
  "failureMode": "Fail Closed",
  "config": {
    "ratePerMinute": 1000,
    "burstBuffer": 50
  }
}
PII masking:
“Create a transformation policy that masks credit card numbers and SSNs in API responses”
Generated output:
{
  "name": "pii-mask-responses",
  "type": "Transformation",
  "scope": "GLOBAL",
  "phase": "Post-Request",
  "failureMode": "Fail Closed",
  "config": {
    "patterns": ["creditCard", "ssn"],
    "maskChar": "*",
    "preserveLength": true
  }
}
Why Schema-Constrained Generation
The responseSchema parameter in the API call is key:
config: {
  responseMimeType: 'application/json',
  responseSchema: {
    type: Type.OBJECT,
    properties: {
      name: { type: Type.STRING },
      type: { type: Type.STRING },
      scope: { type: Type.STRING },
      phase: { type: Type.STRING },
      failureMode: { type: Type.STRING },
      shortCircuit: { type: Type.BOOLEAN },
      config: { type: Type.OBJECT }
    },
    required: ['name', 'type', 'config', 'scope', 'phase', 'failureMode']
  }
}
This tells the model to output JSON matching the schema. Combined with Zod validation on the response, we get:
- Structural correctness: the response is well-formed JSON in the declared shape
- Type correctness: each field carries its declared JSON type (string, boolean, object)
- Constraint satisfaction: required fields are present, and Zod enforces the minimum string lengths
The model can still hallucinate values (the response schema constrains types, not which strings make sense), but it can't hallucinate structure.
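If value-level drift matters, the response schema can pin the allowed strings too: Gemini's OpenAPI-subset schemas accept an enum list on string fields. A sketch of the same schema tightened that way; the particular value lists below are assumptions for illustration, not the project's canonical enums.

```typescript
import { Type } from '@google/genai';

// Sketch: the responseSchema with enum-constrained string fields. The
// enum keyword belongs to Gemini's OpenAPI schema subset; these value
// lists are illustrative.
const constrainedSchema = {
  type: Type.OBJECT,
  properties: {
    name: { type: Type.STRING },
    type: { type: Type.STRING, enum: ['SECURITY', 'TRAFFIC_CONTROL', 'TRANSFORMATION'] },
    scope: { type: Type.STRING, enum: ['GLOBAL', 'API'] },
    phase: { type: Type.STRING, enum: ['PRE_REQUEST', 'POST_REQUEST'] },
    failureMode: { type: Type.STRING, enum: ['FAIL_OPEN', 'FAIL_CLOSED'] },
    shortCircuit: { type: Type.BOOLEAN },
    config: { type: Type.OBJECT },
  },
  required: ['name', 'type', 'config', 'scope', 'phase', 'failureMode'],
};
```

The trade-off is flexibility: every new policy type then requires a schema change, which is why the article's version leaves the strings open and relies on the mapping step instead.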
The Zenith AI Assistant
The same service powers the Dashboard’s AI assistant with a different method:
async askZenith(query: string): Promise<AiResponse> {
  const ai = new GoogleGenAI({ apiKey: process.env.API_KEY });
  const response = await ai.models.generateContent({
    model: 'gemini-3-pro-preview',
    contents: query,
    config: {
      tools: [{ googleSearch: {} }],
      systemInstruction: 'You are Zenith AI, an expert in high-throughput Java API Gateways, Kubernetes Operators, and protocol mediation...',
    },
  });

  const sources = response.candidates?.[0]?.groundingMetadata?.groundingChunks
    ?.map((chunk: any) => ({
      title: chunk.web?.title || 'Source',
      uri: chunk.web?.uri
    }))
    .filter((s: any) => s.uri) || [];

  const aiResponse = { text: response.text, sources };
  const validationResult = AiResponseSchema.safeParse(aiResponse);
  return validationResult.success ? validationResult.data : FALLBACK_AI_RESPONSE;
}
The googleSearch tool enables grounding—the model can cite real documentation. The UI displays these as “Verification Sources” with clickable links.
Grounding Sources
When the assistant cites external documentation:
interface AiResponse {
  text: string;
  sources: Array<{
    title: string;
    uri: string;
  }>;
}
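The AiResponseSchema that askZenith validates against is not shown in the excerpt; a plausible zod sketch mirroring the interface above (the exact schema in the codebase may differ):

```typescript
import { z } from 'zod';

// Plausible shape for AiResponseSchema; mirrors the AiResponse interface.
// An empty or missing text field fails validation, which is what routes
// askZenith to FALLBACK_AI_RESPONSE.
const AiResponseSchema = z.object({
  text: z.string().min(1),
  sources: z.array(
    z.object({
      title: z.string(),
      uri: z.string(),
    })
  ).default([]),
});
```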
The UI renders these below the response:
Verification Sources:
• Kubernetes Gateway API Docs — kubernetes.io/docs/...
• Protocol Buffers Best Practices — protobuf.dev/...
This is how you get AI assistance without losing traceability.
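As a sketch, that rendering step reduces to a small formatter. `formatSources` is an illustrative name (the real UI renders clickable links in React), but the output format matches the block above.

```typescript
interface Source {
  title: string;
  uri: string;
}

// Produces the plain-text form of the "Verification Sources" block shown
// above; an empty source list renders nothing.
function formatSources(sources: Source[]): string {
  if (sources.length === 0) return '';
  const lines = sources.map((s) => `• ${s.title} — ${s.uri}`);
  return ['Verification Sources:', ...lines].join('\n');
}
```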
Try It
The AI Policy Architect is available on the /policies page. Enter a natural language description, click generate, and inspect the result. Edit if needed, then deploy.
The Dashboard assistant is available on /. Ask about REST-to-SOAP mediation, Kubernetes operator patterns, or any gateway architecture question. The response includes grounding sources you can verify.
AI writes the boilerplate. You own the decision.