This is a submission for the Permit.io Authorization Challenge: AI Access Control/Permissions Redefined/API-First Authorization
Hey devs! I just wrapped up my submission for the Permit.io Authorization Challenge 2025, and I'm excited to share how I built a secure AI medical assistant that enforces proper authorization at multiple levels. If you've been wondering how to combine modern AI capabilities with robust access controls, this post is for you.
🔍 TL;DR
A secure AI medical assistant built with Google AI, Permit.io, GroundX, and Clerk — fully policy-driven and RAG-enabled. Role-based access, prompt filtering, and document-level restrictions all included.
🧰 Tech Stack
- Frontend: Next.js 15 + Vercel AI SDK
- Backend: Bun (Hono) + Drizzle
- Auth: Clerk
- Authorization: Permit.io (RBAC, ABAC, ReBAC)
- RAG: GroundX
- Monorepo: TurboRepo + PNPM
The Challenge
Building AI applications that handle sensitive data requires strong authorization controls. For healthcare applications, this is even more critical: doctors should see different information than patients, and the AI must respect these boundaries.
I built a medical chat application that:
- Authenticates users with Clerk
- Enforces fine-grained authorization with Permit.io
- Delivers AI responses using the Vercel AI SDK with Google AI
- Implements secure RAG with GroundX
What Makes This Project Unique
Unlike typical chat apps, this assistant enforces layered security at the prompt, retrieval, and response levels, so sensitive data can't slip out through the LLM just because one check is missed. Most AI projects stop at input validation; this one goes several steps further.
The system performs authorization checks at four critical points:
- When the user submits a prompt - Is this question allowed for their role?
- When retrieving documents - Which medical records can this user access?
- Before generating AI responses - What information can be included?
- Before delivering the response - Final sanitization of any sensitive data
This multi-layered approach ensures that even if one security layer fails, others will catch unauthorized access attempts.
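To make that defense-in-depth concrete, here's a condensed sketch of how the four checkpoints line up in a single chat request. The function names besides filterPrompt are illustrative stand-ins for the real handlers covered later in this post:
// Illustrative pipeline only; each stage maps to a handler shown below
async function handleChatRequest(userId: string, prompt: string) {
  // 1. Prompt check: is this question allowed for the user's role?
  const gate = await filterPrompt(userId, prompt);
  if (!gate.allowed) return { status: 403, error: gate.reason };

  // 2. Retrieval check: fetch only documents this user may read
  const docs = await retrieveAuthorizedDocs(userId, gate.filteredPrompt);

  // 3. Generation check: the model only ever sees permitted context
  const draft = await generateAnswer(gate.filteredPrompt, docs);

  // 4. Delivery check: final sanitization before the response leaves
  return { status: 200, answer: await sanitizeResponse(userId, draft) };
}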
Project Architecture
The project uses a TurboRepo monorepo structure with PNPM for package management:
permit-aibac/
├── apps/
│ ├── api/ # Bun API backend (Hono, Drizzle, Clerk)
│ └── web/ # Next.js 15 frontend (App Router, shadcn/ui, React Query)
├── packages/
│ ├── db/ # Drizzle schema and database utilities
│ ├── clerk/ # Authentication integration
│ ├── permit/ # Authorization utilities
│ ├── groundx/ # RAG implementation
│ └── logs/ # Logging utilities
The Authorization Magic: Roles & Policies Made Simple
Think of authorization as the bouncer at an exclusive hospital. Different people get different levels of access, but how do we make this work in code? Here's how I broke it down:
Role & Permission Overview
| Role | Permissions |
| --- | --- |
| Admin | Full access to all resources |
| Doctor | View/update assigned patient records |
| Patient | View own records, chat with AI |
| Researcher | View anonymized data only |
1. Roles: The "Who You Are" Part
I created four distinct roles, each with their own superpowers:
- Admin: The all-seeing system manager who can access everything
- Doctor: The medical professional who can view and update patient records, but only for their assigned patients
- Patient: The end-user who can only see their own records and chat about their own medical conditions
- Researcher: A special role that can only see anonymized data for research purposes
2. Resources: The "What You're Trying to Access" Part
Everything in the system is a protectable resource:
// Simplified Permit.io resource setup
await permit.api.resources.create({
  key: "medicalRecord",
  name: "Medical Record",
  actions: {
    view: { name: "View" },
    create: { name: "Create" },
    update: { name: "Update" },
    delete: { name: "Delete" },
    share: { name: "Share" },
  },
});
3. The Secret Sauce: Multiple Authorization Models
Here's where it gets interesting! I implemented three complementary authorization models:
RBAC (Role-Based Access Control): The simplest layer
// Doctor's basic permissions
await permit.api.roles.create({
  key: "doctor",
  name: "Doctor",
  permissions: [
    "medicalRecord:view",
    "medicalRecord:update",
    "prescription:create",
    "chat:create",
  ],
});
ABAC (Attribute-Based Access Control): More fine-grained with conditions
// Only doctors with clearance level 4+ can see restricted records.
// In Permit.io, ABAC pairs a user set with a resource set:
await permit.api.conditionSets.create({
  key: "high_clearance_users", name: "High-Clearance Users", type: "userset",
  conditions: { allOf: [{ "user.clearance": { "greater-than-equals": 4 } }] },
});
await permit.api.conditionSets.create({
  key: "restricted_records", name: "Restricted Records", type: "resourceset",
  conditions: { allOf: [{ "resource.sensitivity": { equals: "Restricted" } }] },
});
// A condition set rule then grants the user set an action on the resource set
await permit.api.conditionSetRules.create({
  user_set: "high_clearance_users",
  permission: "medicalRecord:view",
  resource_set: "restricted_records",
});
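For that condition to ever match, users need a clearance attribute synced into Permit. A minimal sketch, assuming the attribute is set when a doctor is onboarded (the attribute name and the update call's exact shape are assumptions):
// Hypothetical: attach the clearance attribute the ABAC condition reads
await permit.api.users.update(doctorId, {
  attributes: { clearance: 5 },
});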
ReBAC (Relationship-Based Access Control): The most powerful model
// Doctors can only access records of patients they're treating.
// The relation lives on the "patient" resource:
await permit.api.resourceRelations.create("patient", {
  key: "treating_physician",
  name: "Treating Physician",
  subject_resource: "doctor",
});
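A relation on its own doesn't grant anything; in Permit you pair it with a role derivation (configured via the API or dashboard) so that being a treating_physician confers permissions. Enforcement then runs against a specific resource instance. A sketch, with illustrative IDs:
// ReBAC checks target a resource *instance*, so the PDP can walk the
// treating_physician relationship; the IDs here are illustrative
const canView = await permit.check(doctorId, "view", `patient:${patientId}`);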
The beauty of this approach? The authorization logic lives in Permit.io, not in your code! Change the rules anytime without redeploying.
Authorization Implementation
Step 1: Setting up Permit.io
I created a custom package for Permit.io integration:
// packages/permit/src/client.ts
import { Permit } from 'permitio';

let permit: Permit | undefined;

export function getPermitClient() {
  // Lazily create and reuse a single client instance
  permit ??= new Permit({
    token: process.env.PERMIT_API_KEY!, // non-null assertion: must be set in env
    pdp: 'https://cloudpdp.permit.io',
  });
  return permit;
}
Step 2: User Management with Clerk
For authentication, I used Clerk and synchronized user roles with Permit.io:
// packages/clerk/src/sync.ts
import { clerkClient } from '@clerk/nextjs';
import { getPermitClient } from '@repo/permit';

export async function syncUserWithPermit(userId: string, role: string) {
  const permit = getPermitClient();
  const user = await clerkClient.users.getUser(userId);
  // Create or update the user in Permit's directory (fields are snake_case)
  await permit.api.syncUser({
    key: userId,
    first_name: user.firstName || '',
    last_name: user.lastName || '',
    email: user.emailAddresses[0]?.emailAddress || '',
  });
  // Role assignment is a separate call, scoped to a tenant
  await permit.api.assignRole({ user: userId, role, tenant: 'default' });
}
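Something has to invoke this sync when users sign up. A hedged sketch of a Clerk webhook route on the Hono API (the route path, default role, and payload handling are assumptions, and Svix signature verification is omitted for brevity):
// Hypothetical webhook route: sync new Clerk users into Permit
import { Hono } from 'hono';
import { syncUserWithPermit } from '@repo/clerk';

const app = new Hono();

app.post('/webhooks/clerk', async (c) => {
  const event = await c.req.json();
  if (event.type === 'user.created') {
    await syncUserWithPermit(event.data.id, 'patient'); // default role assumed
  }
  return c.json({ received: true });
});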
Step 3: Implementing Middleware
The API uses middleware to enforce authorization:
// apps/api/src/pkg/middleware/permit-auth.ts
import type { Context } from 'hono';
import { getPermitClient } from '@repo/permit';

// Translate HTTP verbs into the actions defined on our Permit resources
function mapMethodToAction(method: string): string {
  switch (method) {
    case 'POST':
      return 'create';
    case 'PUT':
    case 'PATCH':
      return 'update';
    case 'DELETE':
      return 'delete';
    default:
      return 'view';
  }
}

export async function permitMiddleware(c: Context, next: () => Promise<void>) {
  const permit = getPermitClient();
  const userId = c.get('userId'); // populated by the Clerk auth middleware
  const resource = c.req.param('resource');
  const action = mapMethodToAction(c.req.method);
  // permit.check takes (user, action, resource) as positional arguments
  const allowed = await permit.check(userId, action, resource);
  if (!allowed) {
    return c.json({ error: 'Not authorized' }, 403);
  }
  await next();
}
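Wiring it into the Hono app then looks roughly like this (the route shape and the Clerk middleware name are assumptions):
// Hypothetical wiring: authenticate first so c.get('userId') is populated,
// then run the Permit check on every resource route
import { Hono } from 'hono';
import { permitMiddleware } from './pkg/middleware/permit-auth';

const app = new Hono();
app.use('/api/:resource/*', clerkAuthMiddleware); // sets userId on the context
app.use('/api/:resource/*', permitMiddleware);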
Secure RAG Implementation
For retrieval-augmented generation, I integrated GroundX to ensure authorization-aware search results:
// packages/groundx/src/services/search.ts
import { getGroundXClient } from "../client";
import { SearchParams, SearchResponse } from "../types";
import { logger } from "@repo/logs";

export async function searchContent(params: SearchParams): Promise<SearchResponse> {
  try {
    logger.info(
      {
        query: params.query,
        bucketId: params.bucketId,
        filter: params.filter,
        resultLimit: params.n,
      },
      "Executing GroundX search"
    );
    const client = getGroundXClient().getClient();
    const response = await client.post<SearchResponse>('/search/content', params);
    return response.data;
  } catch (error) {
    logger.error(
      {
        query: params.query,
        bucketId: params.bucketId,
        error,
      },
      "GroundX search failed"
    );
    throw error;
  }
}
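searchContent itself is permission-agnostic; the authorization-awareness comes from how the caller builds the filter. Here's a sketch of scoping retrieval to a patient's own documents, assuming records are tagged with a patientId in GroundX metadata at ingestion time (the metadata key and bucket constant are assumptions):
// Hypothetical caller: restrict RAG retrieval to the requesting patient's
// own documents via metadata filtering
async function searchForPatient(patientId: string, query: string) {
  return searchContent({
    query,
    bucketId: MEDICAL_RECORDS_BUCKET, // assumed bucket constant
    filter: { patientId },            // metadata attached at ingestion
    n: 10,
  });
}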
Prompt Filtering
One of the key security features is prompt filtering based on user role:
// apps/api/src/modules/chat/services/chat.service.ts
async function filterPrompt(userId: string, prompt: string) {
  const permit = getPermitClient();
  const user = await permit.api.users.get(userId);
  // Permit returns role assignments as objects, not plain strings
  const isPatient = user.roles?.some((r) => r.role === 'patient');
  if (isPatient) {
    // Heuristic check (implemented elsewhere) for prompts that try to
    // reference another patient's data
    if (promptContainsOtherPatientRequest(prompt)) {
      return {
        allowed: false,
        reason: "You can only access your own medical information",
      };
    }
  }
  return { allowed: true, filteredPrompt: prompt };
}
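Upstream, the chat route handler calls this gate before retrieval or generation ever run; a simplified fragment from inside that handler:
// Simplified: reject disallowed prompts before RAG or the LLM are invoked
const verdict = await filterPrompt(userId, prompt);
if (!verdict.allowed) {
  return c.json({ error: verdict.reason }, 403);
}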
The Frontend Experience
The frontend uses Vercel AI SDK to stream responses while respecting authorization boundaries:
// Simplified example from web app
'use client';

import { useChat } from 'ai/react';
import { Button } from '@/components/ui/button';

export function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <div className="flex flex-col h-full">
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map((message) => (
          <div key={message.id} className="mb-4">
            <div className="font-semibold">{message.role === 'user' ? 'You' : 'Medical AI'}</div>
            <div>{message.content}</div>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="border-t p-4">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Ask a medical question..."
            className="flex-1 p-2 border rounded"
          />
          <Button type="submit">Send</Button>
        </div>
      </form>
    </div>
  );
}
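On the server side, the /api/chat endpoint only streams model output after the earlier checks pass. A hedged sketch using the AI SDK's Google provider (the model name and the exact stream-response helper vary by AI SDK version, so treat the details as assumptions):
// Hypothetical chat route: middleware and prompt filter have already run
import { Hono } from 'hono';
import { google } from '@ai-sdk/google';
import { streamText } from 'ai';

const app = new Hono();

app.post('/api/chat', async (c) => {
  const { messages } = await c.req.json();
  const result = await streamText({
    model: google('gemini-1.5-pro'), // model choice assumed
    system: 'Answer only from the provided, authorized context.',
    messages,
  });
  return result.toDataStreamResponse(); // helper name varies by SDK version
});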
Try It Yourself
You can test the application with these credentials:
Admin Account:
- Username: admin@medicalai.com
- Password: 2025DEVChallenge
Doctor Account:
- Username: doctor@medicalai.com
- Password: 2025DEVChallenge
Patient Account:
- Username: patient@medicalai.com
- Password: 2025DEVChallenge
Researcher Account:
- Username: researcher@medicalai.com
- Password: 2025DEVChallenge
Regular User Account:
- Username: newuser@medicalai.com
- Password: 2025DEVChallenge
Each account has different permissions. Try asking for information you shouldn't be allowed to access and see how the system responds!
What I Learned
Building this project taught me several valuable lessons:
- Externalized authorization is powerful: by moving auth logic into Permit.io, I could change policies without changing code.
- Auth must happen at multiple levels: from API requests to RAG results to generated responses, each stage needed its own checks.
- Type safety matters: using TypeScript throughout made the integrations more reliable.
- A monorepo structure shines: TurboRepo made sharing types and utilities across the stack seamless.
Challenges Faced
The biggest challenge was ensuring that authorization happened at every stage of the AI pipeline:
- When the prompt is submitted
- When RAG retrieves documents
- When the LLM generates content
- Before the response is sent to the client
I had to carefully design each layer to respect permissions while maintaining a good user experience.
Conclusion
Implementing fine-grained authorization in AI applications is challenging but essential, especially for sensitive domains like healthcare. Permit.io made this process much more manageable by providing a unified way to define and enforce complex authorization rules.
The full source code is available at https://github.com/jacksonkasi1/permit-aibac.
What authorization patterns are you using in your AI applications? I'd love to hear your thoughts in the comments!