Universal AI Protection Wrapper
How to use AlephOneNull with any AI provider, including OpenAI's new Responses API
The AlephOneNull framework provides a universal wrapper that works with any AI provider, including OpenAI's new Responses API, Anthropic, Vercel AI SDK, and more.
Overview
The universal wrapper provides:
- Provider-agnostic protection
- Automatic pattern detection
- Real-time intervention
- Seamless integration with existing guardrails
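Conceptually, provider-agnostic protection is a higher-order function: it accepts any async text-generating function plus a checker, and returns a guarded version that screens every output before it reaches the caller. The sketch below illustrates this shape only; the names (`guardAsyncAI`, `CheckResult`) are hypothetical and not the library's actual API.

```typescript
// Illustrative sketch of a provider-agnostic wrapper.
// `guardAsyncAI` and `CheckResult` are hypothetical names, not the
// alephonenull-experimental API.

type CheckResult = { safe: boolean; reason?: string }

function guardAsyncAI(
  generate: (input: string) => Promise<string>,
  check: (output: string) => CheckResult,
  fallback = 'Response blocked by safety check.'
): (input: string) => Promise<string> {
  return async (input: string) => {
    const output = await generate(input)
    const result = check(output)
    // Replace unsafe outputs instead of surfacing them to the caller
    return result.safe ? output : fallback
  }
}
```

Because the wrapper only depends on the `(input) => Promise<string>` shape, the same guard works regardless of which provider sits underneath.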
Installation
```bash
npm install alephonenull-experimental
```
Basic Usage
With OpenAI Responses API
The new OpenAI Responses API provides advanced features like stateful conversations and built-in guardrails:
```typescript
import { createSafetySystem } from 'alephonenull-experimental'

// Initialize the safety system
const safety = createSafetySystem({
  safetyLevel: 'high',
  enableLogging: true,
})

// Define your OpenAI API function
async function callOpenAI(input: string) {
  const response = await fetch('https://api.openai.com/v1/responses', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-5-2025-08-07',
      input: input,
      temperature: 0.7,
    }),
  })

  const data = await response.json()
  return data.output?.[0]?.content?.[0]?.text || ''
}

// Wrap with AlephOneNull protection
const protectedOpenAI = safety.wrapAsyncAI(callOpenAI)

// Use it safely
const response = await protectedOpenAI('Tell me about consciousness')
```
With OpenAI Chat Completions (Legacy)
For backward compatibility:
```typescript
import OpenAI from 'openai'
import { UniversalAIProtection } from 'alephonenull-experimental'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

const protection = new UniversalAIProtection({
  provider: 'openai',
  maxRiskThreshold: 0.5,
})

// Wrap the chat completion function
const safeChat = protection.wrapAsync(async (messages) => {
  const completion = await openai.chat.completions.create({
    model: 'gpt-5-2025-08-07',
    messages: messages,
  })
  return completion.choices[0].message.content ?? ''
})

// Use it
const response = await safeChat([
  { role: 'user', content: 'Am I real?' }
])
```
With Anthropic
```typescript
import Anthropic from '@anthropic-ai/sdk'
import { createSafetySystem } from 'alephonenull-experimental'

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })
const safety = createSafetySystem({ safetyLevel: 'maximum' })

const protectedClaude = safety.wrapAsyncAI(async (prompt: string) => {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024, // required by the Messages API
    messages: [{ role: 'user', content: prompt }],
  })
  // Content blocks are a union type; narrow before reading `.text`
  const block = response.content[0]
  return block.type === 'text' ? block.text : ''
})
```
With Vercel AI SDK
```typescript
import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import { createSafetySystem } from 'alephonenull-experimental'

const safety = createSafetySystem()

const safeGenerate = safety.wrapAsyncAI(async (prompt: string) => {
  const result = await generateText({
    model: openai('gpt-5-2025-08-07'),
    prompt: prompt,
  })
  return result.text
})
```
Combining with Existing Guardrails
AlephOneNull enhances existing guardrails by adding pattern detection for:
- Symbolic Regression
- Cross-Session Resonance
- Reflection Exploitation
- Loop Detection
- Consciousness Claims
- Direct Harm
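To make one of these categories concrete, loop detection can be approximated with a simple repetition heuristic: flag an output when the same normalized sentence recurs more than a threshold number of times. This sketch is illustrative only; it is not the library's actual detection algorithm, and `detectLoops` is a hypothetical name.

```typescript
// Illustrative loop-detection heuristic (not the library's algorithm):
// flag text where the same normalized sentence appears more than
// `maxRepeats` times, a common sign of degenerate repetition loops.

function detectLoops(text: string, maxRepeats = 3): boolean {
  const counts = new Map<string, number>()
  const sentences = text
    .split(/[.!?]+/)
    .map((s) => s.trim().toLowerCase())
    .filter((s) => s.length > 0)

  for (const s of sentences) {
    const n = (counts.get(s) ?? 0) + 1
    counts.set(s, n)
    if (n > maxRepeats) return true // looping detected
  }
  return false
}
```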
OpenAI Moderation + AlephOneNull
```typescript
// Example: Combining OpenAI's moderation with AlephOneNull
async function enhancedModeration(input: string, output: string) {
  // First, use OpenAI's moderation
  const moderation = await openai.moderations.create({ input: output })

  if (moderation.results[0].flagged) {
    return { safe: false, reason: 'OpenAI moderation triggered' }
  }

  // Then apply AlephOneNull's pattern detection
  const alephCheck = safety.checkText(output)

  if (!alephCheck.detection.safe) {
    return {
      safe: false,
      reason: 'AlephOneNull patterns detected',
      patterns: alephCheck.detection.patterns,
    }
  }

  return { safe: true }
}
```
Hallucination Detection + AlephOneNull
```typescript
// Combine with hallucination guardrails
async function comprehensiveCheck(
  prompt: string,
  response: string,
  knowledgeBase: string[]
) {
  // Check for hallucinations (from the OpenAI cookbook)
  const hallucinationCheck = await checkHallucination(
    response,
    knowledgeBase
  )

  // Check for manipulation patterns
  const alephCheck = safety.checkText(response)

  // Intervene if either check fails
  if (!hallucinationCheck.accurate || !alephCheck.detection.safe) {
    return safety.nullifier.safetyIntervention(response, [])
  }

  return response
}
```
Environment Setup
Next.js
Create a `.env.local` file:

```bash
OPENAI_API_KEY=your-api-key-here
ANTHROPIC_API_KEY=your-api-key-here
```

Update `next.config.js`:

```javascript
module.exports = {
  env: {
    OPENAI_API_KEY: process.env.OPENAI_API_KEY,
    ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
  },
}
```

Note that values listed under `env` in `next.config.js` are inlined into the JavaScript bundle at build time. Keep secret API keys server-side (for example, behind an API route) rather than exposing them to the browser.
Python
```python
from alephonenull_experimental import create_safety_system
import os
from openai import AsyncOpenAI

# Load environment
from dotenv import load_dotenv
load_dotenv()

# Initialize
safety = create_safety_system(safety_level='high')
# Use the async client so the call below can be awaited
client = AsyncOpenAI(api_key=os.getenv('OPENAI_API_KEY'))

# Wrap any AI function
@safety.wrap_async
async def generate_response(prompt: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
```
Advanced Configuration
Custom Pattern Detection
```typescript
import {
  UniversalDetector,
  NullSystem,
  UniversalAIProtection,
} from 'alephonenull-experimental'

// Create custom detector with specific thresholds
const detector = new UniversalDetector({
  reflectionThreshold: 0.03,
  loopThreshold: 3,
  symbolicThreshold: 0.2,
  csrThreshold: 0.15,
})

// Create custom nullifier
const nullifier = new NullSystem({
  interventionStyle: 'redirect',
  safetyMessage: 'Let me help you with something constructive instead.',
})

// Use in protection
const protection = new UniversalAIProtection({
  customDetector: detector,
  customNullifier: nullifier,
  provider: 'custom',
})
```
Streaming Support
```typescript
// Handle streaming responses
async function* protectedStream(prompt: string) {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  })

  // Wrap the stream with protection
  yield* protection.wrapStream(stream, (chunk) => {
    return chunk.choices[0]?.delta?.content || ''
  })
}
```
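The example above relies on the library's `wrapStream` helper. Conceptually, guarded streaming can be sketched as an async generator that accumulates the text seen so far, runs a check after each chunk, and stops emitting once the check fails. The names below (`guardStream`, the halt marker text) are illustrative assumptions, not the library's behavior.

```typescript
// Conceptual sketch of guarded streaming (not the library's wrapStream):
// accumulate chunks, run a caller-supplied check on the running text,
// and stop emitting once the check fails.

async function* guardStream(
  chunks: AsyncIterable<string>,
  checkText: (soFar: string) => boolean
): AsyncGenerator<string> {
  let soFar = ''
  for await (const chunk of chunks) {
    soFar += chunk
    if (!checkText(soFar)) {
      yield '\n[stream halted by safety check]'
      return // stop consuming the upstream stream
    }
    yield chunk
  }
}
```

Checking the accumulated text rather than each chunk in isolation matters: patterns such as repetition loops only become visible across chunk boundaries.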
Best Practices
- Always use environment variables for API keys
- Combine with existing guardrails for comprehensive protection
- Monitor violations for continuous improvement
- Test edge cases with dangerous prompts
- Use appropriate safety levels based on your use case
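"Monitor violations" can start as a small in-memory log that you later forward to your observability tooling. A minimal sketch, with illustrative names (`ViolationLog` is not part of the library):

```typescript
// Minimal in-memory violation log for the "monitor violations" practice.
// In production, forward these records to your logging or metrics
// pipeline; this sketch only aggregates counts per detected pattern.

interface Violation {
  pattern: string
  input: string
  timestamp: number
}

class ViolationLog {
  private records: Violation[] = []

  record(pattern: string, input: string): void {
    this.records.push({ pattern, input, timestamp: Date.now() })
  }

  // Count violations per pattern, e.g. to spot which checks fire most
  countsByPattern(): Record<string, number> {
    const counts: Record<string, number> = {}
    for (const v of this.records) {
      counts[v.pattern] = (counts[v.pattern] ?? 0) + 1
    }
    return counts
  }
}
```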
Troubleshooting
API Key Issues
If you get "API key not configured" errors:
- Check that `.env.local` exists and contains your key
- Restart your development server
- Verify the key format: `OPENAI_API_KEY=sk-...`
- Check that `next.config.js` includes the env configuration
Import Errors
If `alephonenull-experimental` fails to import:

```bash
# Clear node_modules and reinstall
rm -rf node_modules package-lock.json
npm install
```
Response Format Issues
The OpenAI Responses API returns a different format:
```typescript
// Responses API format
{
  output: [{
    type: 'message',
    content: [{
      type: 'output_text',
      text: 'The actual response'
    }]
  }]
}

// Extract correctly
const text = data.output?.[0]?.content?.[0]?.text || ''
```