Auth Libraries
Auth libraries are one of the strongest fits for Farming Labs ORM because they usually need to describe the same storage shape across many app stacks without rewriting the same adapter story over and over.
For many auth packages, that means Farming Labs ORM can become the drop-in replacement for much of the adapter ecosystem they would otherwise have to own and document separately.
What auth libraries usually need
Most auth systems end up modeling the same ideas:
- users
- sessions
- linked accounts
- verification records
- organizations and memberships
- plugin-owned models
The repeated work usually shows up in three places:
- schema examples for each supported stack
- adapter or storage code for each backend
- docs that have to explain the same model in different ORM dialects
Farming Labs ORM helps by letting the package define the storage contract once, then letting the consuming app choose the eventual runtime or generator target.
Recommended integration model
The cleanest auth-package architecture usually looks like this:
- the auth package owns the schema contract
- the app chooses how to generate or execute it
- the auth package writes its storage helpers once against the unified runtime
That means the package does not need separate query logic for Prisma, Drizzle, Kysely, raw SQL, and MongoDB.
If you are evaluating the broader adapter surface that auth packages usually care about, see Adapter Ecosystem for the general replacement pattern: one schema definition, one ORM/runtime layer, and one setup path instead of many adapter packages.
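Under that split, the auth package's public entry point stays tiny: it exports the schema contract and the storage helpers, and nothing backend-specific. The sketch below is illustrative wiring only; the `@acme/auth` name and file layout are assumptions, not a prescribed structure.

```ts
// Hypothetical public entry for an auth package ("@acme/auth").
// The package exports the schema contract and the storage helpers;
// the consuming app picks the generator or runtime target itself.
export { authSchema } from "./auth-schema";
export { createAuthStore } from "./auth-store";
```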
Step 1: define the schema once
```ts
import { belongsTo, datetime, defineSchema, hasMany, id, model, string } from "@farming-labs/orm";

export const authSchema = defineSchema({
  user: model({
    table: "users",
    fields: {
      id: id(),
      email: string().unique(),
      name: string(),
      createdAt: datetime().defaultNow(),
      updatedAt: datetime().defaultNow(),
    },
    relations: {
      sessions: hasMany("session", { foreignKey: "userId" }),
      accounts: hasMany("account", { foreignKey: "userId" }),
    },
  }),
  session: model({
    table: "sessions",
    fields: {
      id: id(),
      userId: string().references("user.id"),
      token: string().unique(),
      expiresAt: datetime(),
    },
    relations: {
      user: belongsTo("user", { foreignKey: "userId" }),
    },
  }),
  account: model({
    table: "accounts",
    fields: {
      id: id(),
      userId: string().references("user.id"),
      provider: string(),
      accountId: string(),
    },
    constraints: {
      unique: [["provider", "accountId"]],
    },
    relations: {
      user: belongsTo("user", { foreignKey: "userId" }),
    },
  }),
});
```

The important part is that the auth package now owns the model once.
Step 2: let the app generate what it needs
If the consuming app wants Prisma output:
```ts
import { defineConfig } from "@farming-labs/orm-cli";
import { authSchema } from "@acme/auth";

export default defineConfig({
  schemas: [authSchema],
  targets: {
    prisma: {
      out: "./generated/prisma/schema.prisma",
      provider: "postgresql",
    },
  },
});
```

If the app wants Drizzle or safe SQL instead, the schema package does not have to change.
If the auth library wants to render artifacts in memory for a CLI, installer, or docs example, it can also call the generators directly:
```ts
import { renderDrizzleSchema, renderPrismaSchema, renderSafeSql } from "@farming-labs/orm";
import { authSchema } from "./auth-schema";

const prisma = renderPrismaSchema(authSchema, {
  provider: "postgresql",
});

const drizzle = renderDrizzleSchema(authSchema, {
  dialect: "pg",
});

const sql = renderSafeSql(authSchema, {
  dialect: "postgres",
});
```

That is useful when the auth package wants to:
- power an installer or setup wizard
- emit artifacts in a framework CLI
- snapshot generated output in tests
- show docs examples that come from the real schema contract
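The snapshot case, for example, can be one small test that pins the rendered output so contract changes show up in review. The sketch below assumes Vitest as the test runner; adapt the imports to whatever harness the package already uses.

```ts
import { renderPrismaSchema } from "@farming-labs/orm";
import { expect, test } from "vitest";
import { authSchema } from "./auth-schema";

// Pin the generated Prisma schema: any schema-contract change now
// produces a reviewable snapshot diff instead of a silent drift.
test("prisma schema output is stable", () => {
  const rendered = renderPrismaSchema(authSchema, { provider: "postgresql" });
  expect(rendered).toMatchSnapshot();
});
```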
Step 3: write the auth storage helpers once
The auth package can now write its data access once against the unified runtime surface.
```ts
import type { OrmClient } from "@farming-labs/orm";
import { authSchema } from "./auth-schema";

const normalizeEmail = (email: string) => email.trim().toLowerCase();

export function createAuthStore(orm: OrmClient<typeof authSchema>) {
  return {
    findUserByEmail(email: string) {
      return orm.user.findUnique({
        where: {
          email: normalizeEmail(email),
        },
        select: {
          id: true,
          email: true,
          sessions: {
            select: {
              token: true,
              expiresAt: true,
            },
          },
          accounts: {
            select: {
              provider: true,
              accountId: true,
            },
          },
        },
      });
    },
    findAccount(provider: string, accountId: string) {
      return orm.account.findUnique({
        where: {
          provider,
          accountId,
        },
        select: {
          userId: true,
          provider: true,
          accountId: true,
        },
      });
    },
    createOAuthUser(input: { name: string; email: string; provider: string; accountId: string }) {
      return orm.transaction(async (tx) => {
        const user = await tx.user.create({
          data: {
            name: input.name,
            email: normalizeEmail(input.email),
          },
          select: {
            id: true,
            email: true,
          },
        });
        const account = await tx.account.create({
          data: {
            userId: user.id,
            provider: input.provider,
            accountId: input.accountId,
          },
          select: {
            provider: true,
            accountId: true,
          },
        });
        return { user, account };
      });
    },
    rotateSession(input: { userId: string; token: string; expiresAt: Date }) {
      return orm.session.upsert({
        where: {
          token: input.token,
        },
        create: {
          userId: input.userId,
          token: input.token,
          expiresAt: input.expiresAt,
        },
        update: {
          expiresAt: input.expiresAt,
        },
        select: {
          token: true,
          expiresAt: true,
        },
      });
    },
    invalidateUserSessions(userId: string) {
      return orm.session.deleteMany({
        where: {
          userId,
        },
      });
    },
  };
}
```

That gives the auth package one place to implement:
- load a user by email, id, token, or linked account
- rotate or revoke sessions
- link provider accounts
- count related records for hooks or plugin rules
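One pattern that pairs well with rotateSession(...) is keeping expiry math in a pure helper, so every backend target gets identical TTL behavior. The helper below is a hypothetical sketch, not a library API; it only builds the input shape that rotateSession(...) expects.

```ts
// Hypothetical helper: build the rotateSession(...) input from a TTL.
// Kept pure on purpose so it is trivially shared and unit-testable.
export function sessionRotationInput(
  userId: string,
  token: string,
  ttlMs: number,
  now: Date = new Date(),
) {
  return {
    userId,
    token,
    // Expiry is always computed the same way, regardless of backend.
    expiresAt: new Date(now.getTime() + ttlMs),
  };
}
```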
Step 4: accept raw clients when the integration needs to
If your auth framework wants to accept a raw database client directly, use the runtime helpers instead of rebuilding runtime detection yourself.
```ts
import { createOrmFromRuntime } from "@farming-labs/orm-runtime";
import { authSchema } from "./auth-schema";

export async function createAuthOrm(database: unknown) {
  return createOrmFromRuntime({
    schema: authSchema,
    client: database,
  });
}
```

That keeps the auth integration small:
- accept a raw client
- normalize it into Farming ORM
- run the shared auth storage helpers against the normalized client
If the integration needs to debug what the app passed in before creating the ORM, inspect the runtime first:
```ts
import { inspectDatabaseRuntime } from "@farming-labs/orm";

const inspection = inspectDatabaseRuntime(database);

if (!inspection.runtime) {
  throw new Error(inspection.summary);
}
```

Step 5: use capabilities instead of guessing
Higher-level auth libraries often need to know what the runtime can safely do.
```ts
const caps = orm.$driver.capabilities;

caps.supportsTransactions;
caps.numericIds;
caps.supportsSchemaNamespaces;
caps.upsert;
caps.returning.create;
caps.nativeRelations.hasMany;
```

That is useful for decisions like:
- whether numeric IDs are manual or generated
- whether transactions should wrap multi-step account linking
- whether a schema namespace is available on Postgres
- whether a fallback path is needed for relation-heavy reads
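That branching can be isolated in small pure functions so the rest of the auth package never touches the capability object directly. The sketch below stubs only the two capability fields it reads; in real code the values come from `orm.$driver.capabilities`, and the plan names are hypothetical.

```ts
// Structural stub of the two capability fields used here; the real
// object is read from orm.$driver.capabilities.
type Capabilities = {
  supportsTransactions: boolean;
  upsert: boolean;
};

// Wrap multi-step account linking in a transaction only when the
// runtime actually supports one; otherwise fall back to ordered writes.
export function accountLinkPlan(caps: Capabilities): "transaction" | "sequential-writes" {
  return caps.supportsTransactions ? "transaction" : "sequential-writes";
}

// Prefer a native upsert for session rotation when available.
export function sessionWritePlan(caps: Capabilities): "upsert" | "find-then-write" {
  return caps.upsert ? "upsert" : "find-then-write";
}
```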
Step 6: use normalized errors at the boundary
Auth systems often need to react consistently to duplicate emails or duplicate provider-account pairs.
```ts
import { isOrmError } from "@farming-labs/orm";

try {
  await orm.user.create({
    data: {
      email: "ada@farminglabs.dev",
      name: "Ada",
    },
  });
} catch (error) {
  if (isOrmError(error) && error.code === "UNIQUE_CONSTRAINT_VIOLATION") {
    // convert to your auth-layer error
  }
  throw error;
}
```

That is cleaner than parsing Prisma codes, SQLSTATEs, MySQL driver errors, and Mongo duplicate key errors separately in the auth package.
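In practice that boundary often lives in one small mapper. The sketch below assumes only the error shape shown above (a `code` string); `EmailTakenError` is a hypothetical auth-layer error, and the structural check stands in for what `isOrmError(...)` provides.

```ts
// Hypothetical auth-layer error; your library will have its own.
export class EmailTakenError extends Error {}

// Structural stand-in for the shape isOrmError(...) guarantees.
type OrmLikeError = { code: string };

const looksLikeOrmError = (e: unknown): e is OrmLikeError =>
  typeof e === "object" && e !== null && typeof (e as OrmLikeError).code === "string";

// Convert a duplicate-key storage failure into an auth-level error,
// and rethrow everything else unchanged.
export function rethrowAsAuthError(error: unknown): never {
  if (looksLikeOrmError(error) && error.code === "UNIQUE_CONSTRAINT_VIOLATION") {
    throw new EmailTakenError("email already registered");
  }
  throw error;
}
```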
Step 7: bootstrap in tests and local setup
If the auth package or framework needs to stand up a real database in tests, use the runtime-aware setup helpers.
```ts
import { bootstrapDatabase } from "@farming-labs/orm-runtime/setup";
import { authSchema } from "./auth-schema";

const orm = await bootstrapDatabase({
  schema: authSchema,
  client: prisma,
});
```

That is especially useful in:
- integration tests
- local demos
- framework-owned setup flows
- preview apps
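An integration test can then combine bootstrapDatabase(...) with the shared store from Step 3. The sketch assumes Vitest and a raw `client` provided by the test environment; both are assumptions, not requirements.

```ts
import { bootstrapDatabase } from "@farming-labs/orm-runtime/setup";
import { expect, test } from "vitest";
import { authSchema } from "./auth-schema";
import { createAuthStore } from "./auth-store";

// Whatever raw client the test environment supplies (Prisma, a pg pool, ...).
declare const client: unknown;

test("createOAuthUser links the provider account", async () => {
  const orm = await bootstrapDatabase({ schema: authSchema, client });
  const store = createAuthStore(orm);

  const { user, account } = await store.createOAuthUser({
    name: "Ada",
    email: "Ada@FarmingLabs.dev",
    provider: "github",
    accountId: "123",
  });

  expect(account.provider).toBe("github");
  // normalizeEmail(...) ran on the write path.
  expect(user.email).toBe("ada@farminglabs.dev");
});
```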
If the auth package only needs to prepare the database and does not want the ORM returned yet, use:
```ts
import { pushSchema } from "@farming-labs/orm-runtime/setup";

await pushSchema({
  schema: authSchema,
  client: prisma,
});
```

Use bootstrapDatabase(...) when you want setup plus the ORM client back. Use pushSchema(...) or applySchema(...) when the package already has its own ORM or wants setup as a separate step.
Helper map for auth packages
- renderPrismaSchema(...): emit schema.prisma text from the auth schema
- renderDrizzleSchema(...): emit a Drizzle schema module for starter kits or generated output
- renderSafeSql(...): emit SQL DDL for direct SQL installs or snapshot testing
- inspectDatabaseRuntime(...): explain what raw client was passed to the auth integration
- createOrmFromRuntime(...): turn a raw client into the unified auth runtime
- pushSchema(...): prepare a real database before tests, examples, or setup flows
- bootstrapDatabase(...): prepare the database and return an ORM in one step
Numeric IDs and namespaces
If the auth library needs numeric IDs, the schema can choose that explicitly:
```ts
id: id({ type: "integer", generated: "increment" });
```

That is first-class on:
- SQL
- Drizzle
- Kysely
- Prisma
- memory
MongoDB and Mongoose currently support manual numeric IDs only.
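When the runtime only offers manual numeric IDs, the package has to allocate the next ID itself. The helper below is a deliberately simple hypothetical sketch (next = max + 1); a real implementation should still rely on a unique constraint, and retry on conflict, to stay correct under concurrent inserts.

```ts
// Hypothetical allocator for manual numeric IDs: next = max + 1.
// Pair with a unique constraint and a retry loop in real code, since
// two concurrent writers can compute the same candidate ID.
export function nextManualId(existingIds: number[]): number {
  if (existingIds.length === 0) return 1;
  return Math.max(...existingIds) + 1;
}
```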
If the auth package needs Postgres namespaces, use:
```ts
table: tableName("users", { schema: "auth" });
```

Do not pass flat strings like "auth.users". The ORM intentionally rejects that shape so namespaces stay explicit.
Good auth-package rules of thumb
- keep normalization helpers like normalizeEmail(...) shared between writes and reads
- declare compound uniques such as provider + accountId in the schema, not only in app docs
- keep backend-specific branching out of the auth package whenever possible
- let the consuming app choose the generator/runtime target
- use runtime capabilities and normalized errors instead of guessing backend behavior
What this removes
- duplicated storage code per ORM
- drift between auth docs and auth runtime behavior
- hand-maintained adapter logic for every consumer stack
It does not remove every integration detail, but it centralizes the part that really is shared.