Testing PlanetScale and Vitess: Schema Changes, Branching, and Query Testing
PlanetScale's database branching model lets you create isolated test databases from production schema in seconds. Pair that with automated schema change validation and Vitess compatibility checks, and you have a testing workflow that catches production incidents before they happen.
Key Takeaways
Database branches are your test environments. PlanetScale branches are full Vitess shards — cheap, fast, and isolated. Create one per PR, run your test suite, delete when done.
Deploy requests enforce schema safety. PlanetScale's deploy request process runs schema changes through Vitess's online DDL validator. Test your migrations locally first with the PlanetScale CLI, then let the deploy request process be your final gate.
Vitess has MySQL compatibility gaps — test them explicitly. Foreign keys, certain JOINs with subqueries, and some DDL operations behave differently in Vitess. Your test suite should cover the specific patterns your application uses.
PlanetScale's Testing Model
PlanetScale runs on Vitess, the database clustering system originally built to scale MySQL at YouTube and now used by companies like Slack. The branching model is what makes it compelling for testing: every branch is a fully isolated Vitess keyspace. You can create a branch from your production schema, run migrations and tests against it, and destroy it — all without touching production.
The PlanetScale CLI makes this scriptable:
# Install the CLI
brew install planetscale/tap/pscale

# Authenticate
pscale auth login

# Create a test branch from main
pscale branch create mydb test-pr-123 --from main --wait

# Connect to it via a local proxy
pscale connect mydb test-pr-123 --port 3309

Your application connects to 127.0.0.1:3309 and treats it as a normal MySQL 8.0 connection.
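In tests, the proxy address usually arrives as a plain connection string. A small hypothetical helper (not part of the pscale CLI or mysql2) can parse a mysql:// URL into the options object mysql2 expects:

```typescript
// parse-db-url.ts — hypothetical helper, assuming the mysql://user@host:port/db
// URL shape used elsewhere in this article.
export interface MysqlOptions {
  host: string;
  port: number;
  user: string;
  password?: string;
  database: string;
}

export function parseDatabaseUrl(raw: string): MysqlOptions {
  // WHATWG URL parses non-special schemes like mysql:// just fine
  const url = new URL(raw);
  return {
    host: url.hostname,
    port: url.port ? Number(url.port) : 3306, // MySQL default port
    user: decodeURIComponent(url.username || "root"),
    password: url.password ? decodeURIComponent(url.password) : undefined,
    database: url.pathname.replace(/^\//, ""),
  };
}
```

The same helper works in CI, where DATABASE_URL points at the pscale connect proxy instead of a real host.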
Branch-Per-PR Test Isolation
The cleanest pattern for CI is one branch per pull request:
# .github/workflows/test.yml
name: Test
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      PSCALE_SERVICE_TOKEN: ${{ secrets.PSCALE_SERVICE_TOKEN }}
      PSCALE_SERVICE_TOKEN_ID: ${{ secrets.PSCALE_SERVICE_TOKEN_ID }}
      PSCALE_ORG: myorg
      PSCALE_DB: mydb
    steps:
      - uses: actions/checkout@v4

      - name: Install pscale CLI
        run: |
          curl -fsSL https://github.com/planetscale/cli/releases/download/v0.179.0/pscale_0.179.0_linux_amd64.tar.gz | tar xz
          mv pscale /usr/local/bin/

      - name: Create test branch
        run: |
          pscale branch create $PSCALE_DB pr-${{ github.event.number }} --from main --wait
          pscale connect $PSCALE_DB pr-${{ github.event.number }} --port 3309 &
          sleep 3 # wait for the local proxy to be ready

      - name: Run migrations on branch
        run: mysql -h 127.0.0.1 -P 3309 -u root < migrations/latest.sql

      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: "mysql://root@127.0.0.1:3309/mydb"

      - name: Delete test branch
        if: always()
        run: pscale branch delete $PSCALE_DB pr-${{ github.event.number }} --force

This creates a fresh branch from the production schema on every PR, applies any new migrations, runs the full test suite, and cleans up — all in isolation.
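If you name branches after Git refs rather than PR numbers, the ref has to be sanitized first. A hypothetical helper, assuming branch names are limited to lowercase alphanumerics and hyphens (check PlanetScale's branch naming docs for the real constraints):

```typescript
// branch-name.ts — hypothetical sanitizer; the allowed-character rules
// here are an assumption, not taken from PlanetScale documentation.
export function sanitizeBranchName(ref: string): string {
  const name = ref
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything else into hyphens
    .replace(/^-+|-+$/g, "")     // trim leading/trailing hyphens
    .slice(0, 63);               // keep names DNS-label sized
  if (!name) throw new Error(`cannot derive a branch name from "${ref}"`);
  return name;
}
```

For example, a ref like feature/Add_Login becomes feature-add-login, which is safe to pass to pscale branch create.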
Testing Schema Changes with Deploy Requests
PlanetScale enforces a schema change workflow: you cannot run DDL directly against main. Instead, you create a deploy request (like a PR for your schema). This gives you a natural test gate.
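The deploy request itself can be scripted from CI once branch tests pass. As a sketch, a hypothetical helper that assembles the pscale commands a pipeline would run (the subcommand shapes are assumptions; verify them against `pscale deploy-request --help` for your CLI version):

```typescript
// deploy-request.ts — sketch only; returns command strings rather than
// executing them, so the sequencing logic is unit-testable without the CLI.
export function deployRequestCommands(
  db: string,
  branch: string,
  drNumber?: number, // known only after the deploy request is created
): string[] {
  const cmds = [`pscale deploy-request create ${db} ${branch}`];
  if (drNumber !== undefined) {
    cmds.push(`pscale deploy-request diff ${db} ${drNumber}`);   // review the schema diff
    cmds.push(`pscale deploy-request deploy ${db} ${drNumber}`); // apply it to main
  }
  return cmds;
}
```

Keeping command construction separate from execution makes the CI script trivial to test: assert on the strings, then pipe them to a shell.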
Test your schema change locally before opening a deploy request:
// test/migrations/add-user-preferences.test.ts
import mysql from "mysql2/promise";
let connection: mysql.Connection;
beforeAll(async () => {
connection = await mysql.createConnection({
host: "127.0.0.1",
port: 3309, // pscale connect proxy
user: "root",
database: "mydb",
});
});
afterAll(() => connection.end());
test("migration adds user_preferences table with correct columns", async () => {
// Apply the migration
await connection.execute(`
CREATE TABLE IF NOT EXISTS user_preferences (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
user_id BIGINT UNSIGNED NOT NULL,
theme VARCHAR(50) NOT NULL DEFAULT 'light',
notifications_enabled TINYINT(1) NOT NULL DEFAULT 1,
created_at DATETIME(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
updated_at DATETIME(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
PRIMARY KEY (id),
KEY idx_user_id (user_id)
)
`);
// Verify the schema
const [columns] = await connection.execute(`SHOW COLUMNS FROM user_preferences`);
const colNames = (columns as any[]).map((c) => c.Field);
expect(colNames).toContain("user_id");
expect(colNames).toContain("theme");
expect(colNames).toContain("notifications_enabled");
expect(colNames).toContain("created_at");
expect(colNames).toContain("updated_at");
});
test("migration is idempotent (IF NOT EXISTS works)", async () => {
// Running the same migration twice should not throw
await expect(
connection.execute(`
CREATE TABLE IF NOT EXISTS user_preferences (
id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
)
`)
).resolves.not.toThrow();
});

Vitess MySQL Compatibility Testing
Vitess has specific behaviors that differ from standard MySQL. Your test suite should cover these explicitly.
Foreign Key Constraints
PlanetScale (Vitess) disables foreign key enforcement by default. If your application relies on FK constraints for data integrity, you need to enforce that in application code:
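Since the database will not reject an orphaned row, the repository layer has to. A minimal illustrative sketch (the function names are hypothetical, not from a real library) that emulates the missing FK check:

```typescript
// fk-guard.ts — illustrative application-layer foreign key check.
// Vitess will not reject the orphaned INSERT, so the check happens here.
type AsyncPredicate = (id: number) => Promise<boolean>;
type InsertFn = (orgId: number, email: string) => Promise<void>;

export function makeCreateUser(orgExists: AsyncPredicate, insertUser: InsertFn) {
  return async (orgId: number, email: string): Promise<void> => {
    // Emulate the foreign key check the database no longer performs
    if (!(await orgExists(orgId))) {
      throw new Error("Organization not found");
    }
    await insertUser(orgId, email);
  };
}
```

Injecting the lookup and insert functions keeps the guard testable with stubs, which is exactly what the test below does against the real repository.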
test("orphaned records are prevented at application layer", async () => {
// In Vitess, this INSERT succeeds even if parent doesn't exist
// because FK enforcement is disabled — verify app code handles this
const [orgCount] = await pool.query(
"SELECT COUNT(*) AS count FROM organizations WHERE id = ?",
[999999]
);
expect((orgCount as any[])[0].count).toBe(0);
// Application should check before inserting
await expect(
userRepository.create({ orgId: 999999, email: "orphan@example.com" })
).rejects.toThrow("Organization not found");
});

Scatter Queries
By default, Vitess executes scatter queries (queries that fan out to every shard because no shard key was supplied), which is slow at scale and can be disabled entirely. Test that your hot-path queries include the routing column:
test("user query includes shard key to avoid scatter", async () => {
// Bad: SELECT without tenant_id causes scatter query in sharded Vitess
// Good: always include the shard key
const result = await userRepository.findByEmail("alice@example.com", {
tenantId: "tenant-123", // shard key
});
expect(result).toBeDefined();
});
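A cheap static guard can also catch scatter-prone queries in unit tests before they ever reach a sharded keyspace. A naive hypothetical sketch (a real version would parse the SQL rather than pattern-match the WHERE clause):

```typescript
// shard-key-guard.ts — naive illustrative check; substring-matching SQL
// is fragile, so treat this as a linting heuristic, not a guarantee.
export function referencesShardKey(sql: string, shardKey: string): boolean {
  const where = sql.toLowerCase().split(/\bwhere\b/)[1];
  if (!where) return false; // no WHERE clause at all: guaranteed scatter
  return new RegExp(`\\b${shardKey.toLowerCase()}\\b`).test(where);
}
```

Wiring this into a repository base class lets every query assert on itself before execution.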
test("scatter query is rejected when scatter is disallowed", async () => {
// Only applies if vtgate runs with --no_scatter; otherwise the
// query simply fans out to every shard and succeeds
await expect(
connection.execute("SELECT * FROM users WHERE email = 'alice@example.com'")
).rejects.toThrow(/scatter/i);
});

Online DDL Behavior
PlanetScale runs schema changes as online DDL, which means they execute asynchronously. Test that your application handles tables that may be mid-migration:
test("application handles missing column gracefully during migration", async () => {
// Simulate a migration that adds a nullable column
// Before migration completes, the column doesn't exist
// App code should use optional chaining or defaults
const user = await userRepository.findById("user-123");
// Should not throw even if 'preferences' column doesn't exist yet
const theme = user.preferences?.theme ?? "light";
expect(theme).toBe("light");
});

Connection Pooling Under Test Load
PlanetScale's connection limits vary by plan. Under concurrent test load, you can exhaust the connection pool:
// test/connection-pool.test.ts
import mysql from "mysql2/promise";
test("concurrent queries respect connection pool size", async () => {
const pool = mysql.createPool({
host: "127.0.0.1",
port: 3309,
user: "root",
database: "mydb",
connectionLimit: 10, // client-side cap; plan-level limits (e.g. 10,000 on Scaler) are separate
waitForConnections: true,
queueLimit: 0,
});
// Simulate 50 concurrent queries
const queries = Array.from({ length: 50 }, (_, i) =>
pool.execute("SELECT ? AS n", [i])
);
const results = await Promise.all(queries);
expect(results).toHaveLength(50);
expect(results.every(([rows]) => (rows as any[])[0].n !== undefined)).toBe(true);
await pool.end();
});
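When the proxy drops a connection mid-run, a small retry wrapper keeps the suite from flaking. A minimal sketch (the attempt count and delays are arbitrary choices, not PlanetScale recommendations):

```typescript
// retry.ts — minimal retry-with-exponential-backoff sketch for flaky
// connections; wrap individual pool queries with it in tests.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Usage: `await withRetry(() => pool.execute("SELECT 1"))`.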
test("pool handles connection errors with retry", async () => {
const pool = mysql.createPool({
host: "127.0.0.1",
port: 3309,
user: "root",
database: "mydb",
connectionLimit: 5,
});
// Kill our own connection (KILL may be unsupported here, hence the
// catch) and verify the pool hands out a fresh one
await pool.execute("KILL CONNECTION_ID()").catch(() => {});
// Next query should succeed after reconnect
const [rows] = await pool.execute("SELECT 1 AS alive");
expect((rows as any[])[0].alive).toBe(1);
await pool.end();
});Testing with Prisma on PlanetScale
Prisma is a common ORM for PlanetScale. Test the Prisma client against your branch:
// test/prisma-integration.test.ts
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient({
datasources: {
db: { url: process.env.DATABASE_URL }, // points to pscale connect proxy
},
});
beforeAll(() => prisma.$connect());
afterAll(() => prisma.$disconnect());
afterEach(async () => {
// Clean up in reverse dependency order
await prisma.userPreferences.deleteMany();
await prisma.user.deleteMany();
});
test("creates user with preferences in a transaction", async () => {
const user = await prisma.$transaction(async (tx) => {
const u = await tx.user.create({
data: { email: "alice@example.com", name: "Alice" },
});
await tx.userPreferences.create({
data: { userId: u.id, theme: "dark" },
});
return u;
});
const prefs = await prisma.userPreferences.findFirst({
where: { userId: user.id },
});
expect(prefs?.theme).toBe("dark");
});

Note: Prisma with PlanetScale requires relationMode = "prisma" (named referentialIntegrity in older Prisma versions) in schema.prisma, because FK enforcement is disabled at the database level.
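The corresponding datasource block is a one-line change; as a sketch (field names per current Prisma docs, but verify against your Prisma version):

```prisma
// schema.prisma — datasource configured for PlanetScale
datasource db {
  provider     = "mysql"
  url          = env("DATABASE_URL")
  relationMode = "prisma" // emulate relation integrity in the Prisma client
}
```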
Summary: The PlanetScale Testing Checklist
Before merging any PR that touches database code:
- Create a branch from main with the PlanetScale CLI
- Apply the new migration to the branch
- Run your full integration test suite against the branch
- Verify the migration is idempotent
- Test any queries that could scatter across shards
- Open a deploy request — let PlanetScale's DDL validator run
- Merge the deploy request after CI passes
HelpMeTest can run your PlanetScale integration tests automatically on every pull request — sign up free.