Introduction
Qusql is a collection of crates that make it quicker and easier to write correct, type-safe SQL in Rust and Python. MariaDB/MySQL and PostgreSQL are both first-class citizens.
The problem
SQL queries are usually just strings at compile time. Mistakes like wrong column names, wrong argument counts, and mismatched types only surface at runtime, often in production.
Qusql solves this by parsing your schema at compile time and checking every query against it before your binary is built.
Crates at a glance
| Crate | What it does |
|---|---|
| qusql-parse | SQL lexer and parser; produces an AST |
| qusql-type | Type-inference engine; checks queries against a schema |
| qusql-mysql-type | Compile-time typed MySQL/MariaDB queries in Rust (via qusql-mysql) |
| qusql-sqlx-type | Compile-time typed PostgreSQL queries in Rust (via sqlx) |
| qusql-py-mysql-type | mypy-checked MySQL queries in Python |
Source code
Everything lives in one repository: https://github.com/antialize/qusql
Each crate also has a worked example in the
examples/ directory.
qusql-parse
qusql-parse is a fast, zero-dependency SQL parser for Rust. It turns a SQL
string into an Abstract Syntax Tree (AST) and emits structured diagnostics
(errors and warnings) with byte-accurate source spans so you can display
them to users.
Supported dialects
Pass a SQLDialect to ParseOptions to select the right keyword set, identifier quoting rules, and type names for your database:
| Dialect | SQLDialect value |
|---|---|
| MariaDB / MySQL | SQLDialect::MariaDB |
| PostgreSQL | SQLDialect::PostgreSQL |
| PostGIS | SQLDialect::PostGIS |
| SQLite | SQLDialect::Sqlite |
Argument placeholder styles
The parser needs to know which placeholder syntax your driver uses so it can parse arguments correctly and assign them indices:
| Style | SQLArguments value | Used by |
|---|---|---|
| ? | SQLArguments::QuestionMark | MariaDB / MySQL |
| $1, $2, ... | SQLArguments::Dollar | PostgreSQL |
| %s | SQLArguments::Percent | Python drivers |
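To make the distinction concrete, here is a plain-Rust sketch of how each style determines a query's argument count. This is an illustration only, not qusql-parse's implementation; a real parser also skips string literals and comments, which this naive scanner does not.

```rust
// Illustration only: NOT qusql-parse's implementation.
#[derive(Debug, PartialEq)]
enum Style {
    QuestionMark, // MariaDB / MySQL: ?
    Dollar,       // PostgreSQL: $1, $2, ...
    Percent,      // Python drivers: %s
}

/// Number of arguments a query expects under the given placeholder style.
fn argument_count(sql: &str, style: Style) -> usize {
    match style {
        // Each `?` is the next positional argument.
        Style::QuestionMark => sql.matches('?').count(),
        // `$n` placeholders are explicitly numbered; the count is the
        // highest index used, so `$1` may legally appear more than once.
        Style::Dollar => sql
            .split('$')
            .skip(1)
            .filter_map(|s| {
                let digits: String =
                    s.chars().take_while(|c| c.is_ascii_digit()).collect();
                digits.parse::<usize>().ok()
            })
            .max()
            .unwrap_or(0),
        // Each `%s` is the next positional argument.
        Style::Percent => sql.matches("%s").count(),
    }
}
```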
Basic usage
```rust
use qusql_parse::{SQLDialect, SQLArguments, ParseOptions, parse_statement, Issues};

let options = ParseOptions::new()
    .dialect(SQLDialect::MariaDB)
    .arguments(SQLArguments::QuestionMark)
    .warn_unquoted_identifiers(true);

let sql = "SELECT `id`, `title` FROM `notes` WHERE `id` = ?";
let mut issues = Issues::new(sql);
let ast = parse_statement(sql, &mut issues, &options);

// Issues implements Display - prints a plain-text summary of all diagnostics.
println!("{}", issues);
println!("AST: {:#?}", ast);
```
Error recovery
The parser is deliberately error-tolerant: it keeps going after a syntax
error and returns the best AST it can produce. All problems are collected in
Issues rather than returned as a Result. This means you can highlight
multiple errors in one pass, which is important for editor tooling and linters.
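The collect-all-issues pattern can be illustrated with a toy linter (hypothetical code, not the crate's): instead of returning at the first error, every problem is pushed into a list and reported together.

```rust
// Toy linter illustrating the "collect issues, keep going" design; this is
// NOT qusql-parse itself. Each statement is checked independently, so one
// bad statement does not hide problems in the statements after it.
#[derive(Debug)]
struct Issue {
    offset: usize, // byte offset of the problem in the input
    message: String,
}

fn lint_statements(sql: &str) -> Vec<Issue> {
    let mut issues = Vec::new();
    let mut offset = 0;
    for stmt in sql.split(';') {
        let trimmed = stmt.trim();
        // Toy rule: every non-empty statement must start with a known keyword.
        let known = ["SELECT", "INSERT", "UPDATE", "DELETE"];
        if !trimmed.is_empty()
            && !known.iter().any(|k| trimmed.to_ascii_uppercase().starts_with(*k))
        {
            issues.push(Issue {
                offset: offset + (stmt.len() - stmt.trim_start().len()),
                message: format!(
                    "unknown statement start: {}",
                    trimmed.split_whitespace().next().unwrap_or("")
                ),
            });
        }
        offset += stmt.len() + 1; // +1 for the ';' separator
    }
    issues
}
```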
Source spans
Every AST node implements Spanned, which returns a Range<usize> of byte
offsets into the original source string. If you need character offsets (e.g.
for ariadne diagnostics), convert with
ByteToChar:
```rust
use qusql_parse::ByteToChar;

let b2c = ByteToChar::new(sql.as_bytes());
let char_span = b2c.map_span(byte_span.start..byte_span.end);
```
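The semantics of the conversion can be reproduced with plain std. This sketch is for illustration only, not the crate's implementation:

```rust
// Illustration of the semantics only; not the crate's implementation.
use std::ops::Range;

/// Convert a byte-offset span into a char-offset span: count the chars
/// that precede each byte offset.
fn byte_span_to_char_span(src: &str, span: Range<usize>) -> Range<usize> {
    let start = src[..span.start].chars().count();
    let end = src[..span.end].chars().count();
    start..end
}
```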
Links
- crates.io
- docs.rs
- Example: qusql-parse-lint - a command-line SQL linter built on this crate
qusql-type
qusql-type is the type-inference engine at the heart of qusql. You give it a
schema (CREATE TABLE, ALTER TABLE, stored procedures, PL/pgSQL functions, and so on) and
it tells you the types of result columns and query arguments for any SQL
statement, without needing a running database.
Both MariaDB/MySQL and PostgreSQL are well supported. The parser covers essentially the full grammar of both databases, and the type checker understands almost all built-in functions and operators: aggregates, window functions, string and date arithmetic, JSON operators, PostGIS geometry functions, type casts, and more. If a function is not yet recognised the macro emits a warning and falls back to an unknown type rather than a hard error, so new or obscure functions degrade gracefully rather than blocking compilation.
How it works
1. Parse the schema with parse_schemas(). This evaluates the schema definition statements (CREATE TABLE, ALTER TABLE, CREATE VIEW, and so on) and builds an internal model of every table, view, procedure and function.
2. Type a statement with type_statement(). This returns a StatementType that describes what the query produces and what arguments it expects.
```rust
use qusql_type::{
    schema::parse_schemas, type_statement, TypeOptions, SQLDialect, SQLArguments,
    StatementType, Issues,
};

let schema_sql = "
CREATE TABLE notes (
    id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
    title text NOT NULL,
    body text
);";
let opts = TypeOptions::new().dialect(SQLDialect::PostgreSQL);
let mut issues = Issues::new(schema_sql);
let schemas = parse_schemas(schema_sql, &mut issues, &opts);
assert!(issues.is_ok());

let query = "SELECT id, title, body FROM notes WHERE id = $1";
let mut issues = Issues::new(query);
let stmt = type_statement(
    &schemas,
    query,
    &mut issues,
    &TypeOptions::new()
        .dialect(SQLDialect::PostgreSQL)
        .arguments(SQLArguments::Dollar),
);
assert!(issues.is_ok());

match stmt {
    StatementType::Select { columns, arguments } => {
        // columns[0] -> id   : i32, not-null
        // columns[1] -> title: String, not-null
        // columns[2] -> body : Option<String>
        // arguments[0] -> i32 (the type of `id`)
        println!("{} columns, {} arguments", columns.len(), arguments.len());
    }
    _ => panic!("expected SELECT"),
}
```
StatementType variants
| Variant | Produced by |
|---|---|
| Select { columns, arguments } | SELECT |
| Insert { arguments } | INSERT |
| Update { arguments } | UPDATE |
| Delete { arguments } | DELETE |
| Call { arguments } | CALL |
| Invalid | Statement that fails type checking |
| AlterTable / CreateTable / ... | Schema definition statements |
Schema evaluation model
parse_schemas() processes the schema string the same way a database would
bootstrap from an empty state: it reads statements top to bottom, executing
each one in order. After the last statement the resulting in-memory model
reflects the fully-constructed database state, which type_statement() then
queries.
This means the schema string should contain everything needed to build the
database from scratch: CREATE TABLE, ALTER TABLE, CREATE VIEW,
CREATE TYPE ... AS ENUM, CREATE FUNCTION, CREATE PROCEDURE, DO blocks,
BEGIN/COMMIT, INSERT (for seed data used by apply_revision-style
migration helpers), and so on.
The evaluator understands IF / ELSE blocks and can follow a PL/pgSQL
apply_revision()-style pattern where each migration step is wrapped in a
function call that skips already-applied revisions. SELECT apply_revision(..., $rev$ ... $rev$) blocks are interpreted, so the schema sees the final
accumulated state regardless of which revision style you use.
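The top-to-bottom evaluation model can be sketched with a toy catalog. The statement shapes here are simplified stand-ins for real SQL, and this is not qusql-type's evaluator:

```rust
// Toy catalog illustrating sequential schema evaluation; NOT the real
// evaluator, and the statement grammar is deliberately simplified.
use std::collections::HashMap;

type Catalog = HashMap<String, Vec<String>>; // table name -> column names

fn apply_statement(catalog: &mut Catalog, stmt: &str) {
    let words: Vec<&str> = stmt.split_whitespace().collect();
    match words.as_slice() {
        ["CREATE", "TABLE", name, cols @ ..] => {
            catalog.insert(
                name.to_string(),
                cols.iter().map(|c| c.to_string()).collect(),
            );
        }
        ["ALTER", "TABLE", name, "ADD", "COLUMN", col] => {
            if let Some(columns) = catalog.get_mut(*name) {
                columns.push(col.to_string());
            }
        }
        ["DROP", "TABLE", name] => {
            catalog.remove(*name);
        }
        _ => {} // everything else is ignored in this toy model
    }
}
```

After all statements have been applied in order, the catalog holds the final state that queries are checked against.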
What is supported
| Statement | Notes |
|---|---|
| CREATE TABLE / ALTER TABLE / DROP TABLE | Full column and constraint handling |
| CREATE VIEW / CREATE MATERIALIZED VIEW | |
| CREATE TYPE ... AS ENUM | PostgreSQL enum types |
| CREATE FUNCTION / CREATE OR REPLACE FUNCTION | PL/pgSQL body evaluated |
| CREATE PROCEDURE / DROP PROCEDURE | MySQL stored procedures |
| CREATE INDEX / DROP INDEX | Tracked but not type-checked |
| CREATE TRIGGER / DROP TRIGGER | Accepted, ignored |
| DROP TABLE / DROP VIEW / DROP FUNCTION | Remove object from model |
| DO $$ ... $$ | Anonymous blocks |
| BEGIN / COMMIT | Transaction wrappers (assumed to commit) |
| SELECT (at schema level) | Evaluated for side effects (e.g. calling migration helpers) |
| INSERT (at schema level) | Evaluated for seed data side effects |
| GRANT / COMMENT ON / ANALYZE | Accepted, ignored |
| IF / ELSE | Conditional schema evolution |
Obtaining a schema for an existing database
If you have an existing database rather than a hand-written schema file, most database tools can export the schema without data:
- MySQL / MariaDB via phpMyAdmin: select the database, go to Export, choose Custom, tick Structure only (untick Data), and export as SQL. The resulting file can be used directly as your schema file.
- MySQL / MariaDB via the command line: mysqldump --no-data mydb > schema.sql
- PostgreSQL via pg_dump: pg_dump --schema-only mydb > schema.sql
You may want to lightly edit the exported file: remove
SET statements and /*!...*/ MySQL version guards that are not needed for
type-checking, and add the dialect comment on the first line if you are using
the qusql-sqlx-type or qusql-mysql-type macros.
Migration support
Because parse_schemas() processes schema statements sequentially from an
empty state, it naturally supports incremental migrations represented as
ordered schema statements. There are two common patterns:
Separate migrations folder. Keep a canonical schema.sql that describes
the final desired state for type-checking purposes, and maintain a separate set
of incremental migration files for production use (e.g. with any migration
tool). The two are kept in sync manually: when a migration adds a column, you
also add it to schema.sql.
Idempotent schema with revision tracking. The schema file itself contains a
schema_revisions table and a helper function (e.g. apply_revision()) that
skips a migration block if its revision name has already been recorded. Each
logical migration is a SELECT apply_revision(..., $rev$ ... $rev$) call; running
the file against a live database is therefore idempotent. parse_schemas()
evaluates the revision blocks and sees the full accumulated schema state, so
type-checking always reflects all revisions. This pattern is shown in the
qusql-sqlx-type-books
example.
Column types
Each column in columns is a FullType:
- FullType.type_: the base SQL type (e.g. BaseType::String, BaseType::I64)
- FullType.not_null: true if the column is NOT NULL; maps to T vs Option<T> in generated Rust
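The mapping to Rust types can be sketched as follows. BaseType here is a stand-in for the crate's enum: String and I64 appear above, while the other variants are assumptions added for illustration.

```rust
// Sketch of the FullType -> Rust mapping; illustration only. String and
// I64 are variant names from the docs, I32 and Bool are assumed here.
#[derive(Clone, Copy)]
enum BaseType {
    String,
    I32,
    I64,
    Bool,
}

/// not_null: true maps to T, false maps to Option<T>.
fn rust_type(base: BaseType, not_null: bool) -> String {
    let inner = match base {
        BaseType::String => "String",
        BaseType::I32 => "i32",
        BaseType::I64 => "i64",
        BaseType::Bool => "bool",
    };
    if not_null {
        inner.to_string()
    } else {
        format!("Option<{}>", inner)
    }
}
```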
Links
- crates.io
- docs.rs
- Example: qusql-type-check: a CLI tool that prints inferred types for queries against a schema
qusql-mysql and qusql-mysql-type
This chapter covers the two MySQL/MariaDB crates in the qusql family:
- qusql-mysql: an async MySQL/MariaDB driver focused on low overhead and correct cancellation behaviour.
- qusql-mysql-type: a thin proc-macro layer on top that gives you compile-time type checking of SQL queries against a schema file.
qusql-mysql
qusql-mysql is a lightweight async MySQL/MariaDB driver. It deliberately
prioritizes efficiency: a normal query returning a string allocates no extra
memory, error types are 8 bytes, and very few tasks are spawned. A benchmark
against sqlx shows it is significantly faster:
| Test | qusql-mysql | sqlx |
|---|---|---|
| Insert (400 k rows) | 14 219 ms | 15 500 ms |
| Select all (100 x) | 10 969 ms | 15 861 ms |
| Select stream (100 x) | 9 991 ms | 13 216 ms |
| Select one (400 k) | 19 085 ms | 34 729 ms |
Cancellation safety
Dropping or cancelling any future or struct returned by the library does not
corrupt the connection. The connection has internal state that finishes any
partially executed query the next time it is used, or when
Connection::cleanup() is called. When a PoolConnection is dropped mid-query
the cleanup runs in a spawned task; if that takes too long the connection is
closed and a new one is established on the next request.
This means qusql-mysql is safe to use in a web server where requests can be
cancelled at any time: a long-running query will be killed shortly after its
request is dropped.
Feature flags
| Flag | Effect |
|---|---|
| chrono | Bind and decode support for chrono::DateTime and chrono::NaiveTime |
| list_hack | Support for passing a List(&slice) as a dynamically-sized IN (?) argument |
| stats | Add query count and timing statistics to Connection |
Basic usage
```rust
use qusql_mysql::{
    ConnectionError, ConnectionOptions, Executor, ExecutorExt, Pool, PoolOptions,
};

async fn example() -> Result<(), ConnectionError> {
    let pool = Pool::connect(
        ConnectionOptions::from_url("mysql://user:pw@127.0.0.1:3306/db").unwrap(),
        PoolOptions::new().max_connections(10),
    )
    .await?;
    let mut conn = pool.acquire().await?;

    // Execute a statement
    let mut tr = conn.begin().await?;
    tr.execute("INSERT INTO notes (title) VALUES (?)", ("Hello",))
        .await?;
    tr.commit().await?;

    // Fetch rows as tuples: no schema knowledge required
    let _rows: Vec<(i64, String)> =
        conn.fetch_all("SELECT id, title FROM notes", ()).await?;
    Ok(())
}
```
Links
qusql-mysql-type
qusql-mysql-type wraps qusql-mysql with a set of proc-macros that
type-check your SQL queries at cargo check time using a plain SQL schema
file. The type-checking model is the same as described in the
qusql-sqlx-type chapter (compile-time schema evaluation,
nullability inference, argument type checking, "did you mean" hints, and so on)
and the schema file format is identical to what is described in the
qusql-type chapter.
Setup
Add to Cargo.toml:
```toml
[dependencies]
qusql-mysql-type = "*"
```
Place your schema in qusql-mysql-type-schema.sql next to the crate's
Cargo.toml:
```sql
CREATE TABLE notes (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  body TEXT,
  pinned TINYINT(1) NOT NULL DEFAULT 0
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
For guidance on how the schema file is evaluated, how to export one from an existing database, and how to handle migrations, see the qusql-type schema evaluation model.
Macros
All macros take (&mut conn, "SQL", args...). execute! has no variants;
every fetch macro comes in four forms combining two independent axes:
Base operations:
| Macro | Description |
|---|---|
| execute! | Run a statement; returns affected row count |
| fetch_one! | Fetch exactly one row (error if zero or more than one) |
| fetch_optional! | Fetch zero or one rows |
| fetch_all! | Fetch all rows into a Vec |
| fetch! | Fetch rows lazily as an async stream |
Variant suffixes (apply to all fetch macros):
| Suffix | Effect |
|---|---|
| (none) | Rows returned as tuples with borrowed values where possible (e.g. &str for VARCHAR) |
| _owned | Like the base, but all values are owned (e.g. String instead of &str) |
| _as | Maps each row into an explicit struct: fetch_all_as!(MyRow, &mut conn, "SQL", args...) |
| _as_owned | _as with owned values |
So the full set of fetch macros is fetch_one!, fetch_one_owned!, fetch_one_as!,
fetch_one_as_owned!, and the same pattern for fetch_optional, fetch_all, and fetch.
Example
```rust
use qusql_mysql_type::{execute, fetch_all};

// Argument types are checked at compile time
execute!(
    &mut conn,
    "INSERT INTO notes (title, pinned) VALUES (?, ?)",
    "Hello",
    false,
)
.await?;

// Return types are inferred from the schema:
// (i32, &str, Option<&str>)
let notes = fetch_all!(&mut conn, "SELECT id, title, body FROM notes ORDER BY id").await?;
for (_id, title, body) in notes {
    println!("{}: {}", title, body.unwrap_or(""));
}
```
Links
qusql-sqlx-type: SQL queries that are checked at cargo check
If you have used sqlx you almost
certainly know its query! macro. It gives you compile-time checked SQL by
connecting to a real database during the build, verifying the query against the
live schema, and recording the result in a sqlx-data.json file that is checked
into source control for CI. The feature is genuinely useful, but the workflow
has some friction:
- You need a running database when you run cargo check (or cargo build locally), unless the project has an up-to-date sqlx-data.json.
- Every schema change means re-running cargo sqlx prepare to regenerate that file and commit it alongside the code.
- In CI, you need to spin up a database (or carefully maintain sqlx-data.json) to get the compile step to succeed.
qusql-sqlx-type takes a different approach: your schema lives in a plain SQL
file right next to your Cargo.toml, and the proc-macro reads and parses it
at compile time without touching a database at all. The tradeoff is that the
crate's schema parser has to understand your schema definition rather than asking the database;
in practice that covers the vast majority of real-world schemas.
Quick start
We will walk through how to use the library, starting with adding the dependency and writing a schema file, then showing how to write type-checked queries against it.
Add the dependencies to your Cargo.toml
```toml
[dependencies]
qusql-sqlx-type = "*"
sqlx = { version = "*", features = ["postgres", "runtime-tokio"] }
tokio = { version = "*", features = ["full"] }
```
Place your schema in sqlx-type-schema.sql in the root of the crate
(alongside Cargo.toml):
```sql
-- -*- sql-product: postgres -*-
CREATE TABLE IF NOT EXISTS notes (
    id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
    title text NOT NULL,
    body text,
    pinned boolean NOT NULL DEFAULT false,
    created_at timestamptz NOT NULL DEFAULT now()
);
```
Now write queries exactly as you would with sqlx::query!, but import the macro
from qusql_sqlx_type instead:
```rust
use qusql_sqlx_type::query;

// Argument types checked: $1 must be compatible with `text NOT NULL`,
// $2 must be compatible with `text` (nullable; Option<...> is fine).
query!(
    "INSERT INTO notes (title, body) VALUES ($1, $2)",
    title,
    body
)
.execute(&pool)
.await?;

// Return types inferred from the schema, no annotations needed:
// row.id         : i32 (integer NOT NULL)
// row.title      : String (text NOT NULL)
// row.body       : Option<String> (text, nullable)
// row.created_at : chrono::DateTime<chrono::Utc> (timestamptz NOT NULL)
let notes = query!("SELECT id, title, body, created_at FROM notes ORDER BY id")
    .fetch_all(&pool)
    .await?;
```
If you write the wrong column name, pass the wrong argument type, or try to treat a nullable column as if it were non-null, you get a Rust compiler error, not a runtime panic.
Error messages
One area where qusql-sqlx-type aims to be noticeably better than working
directly with a database driver is the quality of error messages.
When you run a query with a typo against PostgreSQL the error you get back at runtime is:
```text
Error: error returned from database: column "titl" does not exist
```
MySQL gives a different format but is equally a runtime error:
```text
Error: error returned from database: 1054 (42S22): Unknown column 'titl' in 'field list'
```
With qusql-sqlx-type the error is caught at compile time and rendered using
ariadne-style diagnostics. The message
is attached to the query! call as a Rust compiler error, with a labelled span
pointing to the exact character inside the SQL string literal:
```text
error: ╭─[ query:1:8 ]
       │
     1 │ SELECT titl, body FROM notes WHERE id = $1
       │        ──┬─
       │          ╰─── Unknown identifier
       │
       │ Help: did you mean `title`?
    ───╯
  --> src/main.rs:7:24
   |
 7 |     let _rows = query!("SELECT titl, body FROM notes WHERE id = $1", id)
   |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
The same mechanism applies to type errors inside the SQL itself: mismatched types in a comparison, an argument bound to the wrong placeholder, or a nullable column used where a non-null value is required all produce labelled compile errors that point you straight to the problem.
How it differs from sqlx::query! in practice
With sqlx::query! the schema truth lives in the running database. The macro
connects, asks the database to describe the query, and records the answers.
That's powerful (it handles every database feature automatically), but it means
your build is always one step behind: you change the schema, run a migration, run
cargo sqlx prepare, commit sqlx-data.json, and only then do other developers
get accurate type information.
With qusql-sqlx-type the schema truth lives in sqlx-type-schema.sql, the
same file your application reads to bootstrap the database on first run. There
is no side-channel file, no prepare step, and no running database needed. When
you add a column, you update the SQL file; cargo check picks up the change
instantly on the next invocation.
This also makes the crate pleasant in monorepos where you might not want every crate to have its own database connection string in the environment, and in early-stage projects where the schema is still changing fast.
The mental model: start with an empty database
The way to think about sqlx-type-schema.sql is: this is exactly the SQL
you would run against a fresh, empty database to create every object the
application needs. The schema evaluator processes it top-to-bottom, executing
CREATE TABLE, ALTER TABLE, CREATE TYPE, CREATE INDEX, stored procedures,
PL/pgSQL functions, and so on, and builds an in-memory representation of the
resulting database state. Every query! invocation is then type-checked against
that in-memory state.
Dialect detection
Like sqlx, qusql-sqlx-type supports multiple database backends. You specify
which database your schema targets by adding a short comment to the first line of
sqlx-type-schema.sql. The macro reads this comment at compile time and selects
the right SQL dialect, argument placeholder style, and type mappings automatically:
```text
-- -*- sql-product: postgres -*-    -> PostgreSQL mode ($1/$2/... arguments)
-- -*- sql-product: postgis -*-     -> PostGIS mode (same + PostGIS extensions)
-- (no comment, or any other)       -> MariaDB/MySQL mode (? arguments)
```
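The first-line detection can be sketched in plain Rust (illustration only, not the macro's implementation):

```rust
// Sketch of first-line dialect detection; illustration only.
#[derive(Debug, PartialEq)]
enum Dialect {
    PostgreSQL,
    PostGIS,
    MariaDB,
}

fn detect_dialect(schema: &str) -> Dialect {
    let first_line = schema.lines().next().unwrap_or("");
    if let Some(rest) = first_line.trim().strip_prefix("-- -*- sql-product:") {
        match rest.trim().trim_end_matches("-*-").trim() {
            "postgres" => return Dialect::PostgreSQL,
            "postgis" => return Dialect::PostGIS,
            _ => {}
        }
    }
    // No marker (or an unrecognised one) means MariaDB/MySQL mode.
    Dialect::MariaDB
}
```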
Migrations in production
Because the schema file describes the final desired state, bootstrapping a fresh
database is trivial: just sqlx::raw_sql(include_str!("../sqlx-type-schema.sql")).execute(&pool).await?.
For migrating an existing database there are two common approaches:
1. A separate migrations folder (e.g. with
sqlx-migrate or any other migration tool).
You write your full schema in sqlx-type-schema.sql for type-checking
purposes, and maintain an independent set of incremental migration files for
production. The two are manually kept in sync: a migration adds a column; you
also add it to the schema file.
2. An idempotent schema with revision tracking, the pattern shown in the
qusql-sqlx-type-books example.
The schema file itself contains a small schema_revisions table and a PL/pgSQL
apply_revision() function. Each logical migration is a SELECT apply_revision(...)
call that wraps a block of schema statements; the function skips it if the revision name has
already been recorded. Running the schema file against a live database is
therefore idempotent: it applies only the new revisions, in order, in a single
transaction:
```sql
BEGIN;

CREATE TABLE IF NOT EXISTS schema_revisions (
    id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
    name text NOT NULL UNIQUE,
    sequence_index integer NOT NULL UNIQUE,
    applied_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION apply_revision(rev_name text, description text,
    seq_idx integer, command text) RETURNS BOOLEAN AS $$ ... $$ LANGUAGE plpgsql;

-- Revision 0
SELECT apply_revision('init', 'Initial schema', 0, $rev$
    CREATE TABLE IF NOT EXISTS notes ( ... );
$rev$);

-- Revision 1: added pinned column
SELECT apply_revision('add_pinned', 'Add pinned flag', 1, $rev$
    ALTER TABLE notes ADD COLUMN IF NOT EXISTS pinned boolean NOT NULL DEFAULT false;
$rev$);

COMMIT;
```
The schema evaluator is smart enough to process DO, CREATE OR REPLACE FUNCTION, SELECT apply_revision(..., $rev$ ... $rev$) revision blocks, and bare BEGIN/COMMIT
blocks, so the query! macros see the fully-evaluated state regardless of which
pattern you choose.
The type system
Nullability is first-class
sqlx::query! does track nullability, using the database engine to determine
whether a column can be null. For a plain column reference like SELECT x FROM t
this works well, and it is often preserved through simple expressions like x + x.
However it is frequently lost through function calls, and the database engine has
no way to narrow nullability based on your WHERE clause.
qusql-sqlx-type goes further in two ways: it narrows nullability using the
WHERE clause, and it applies stricter argument type checking (covered in the
next section). For the first: if your query is
SELECT x FROM t WHERE x IS NOT NULL or SELECT x FROM t WHERE x = $1,
then x is known to be non-null in the result and the macro gives it type T
rather than Option<T>.
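As a toy model of the narrowing rule, the following sketch uses string matching in place of the real AST-based analysis, so it is easily fooled; it exists only to illustrate the rule:

```rust
// Toy model of WHERE-clause nullability narrowing; string matching stands
// in for the real AST analysis. Illustration only.
fn narrowed_to_non_null(where_clause: &str, column: &str) -> bool {
    let w = where_clause.to_ascii_uppercase();
    let c = column.to_ascii_uppercase();
    // `col IS NOT NULL` or `col = <expr>` both imply col is non-null in
    // every row the query returns.
    w.contains(&format!("{} IS NOT NULL", c)) || w.contains(&format!("{} =", c))
}
```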
Argument type checking is stricter than sqlx
sqlx::query! generally accepts any type that implements the right sqlx::Encode
trait for the target column type. qusql-sqlx-type has a curated set of
allowed conversions per SQL type. For example, you cannot accidentally bind a
f64 where a column expects integer: only i8 through i64 (and their
unsigned counterparts) are accepted for integer columns. This catches a real
class of bugs where a loosely-typed intermediate value gets passed to the wrong
placeholder.
Type checking also applies inside the SQL query itself. Most databases allow
implicit coercions that are technically valid but hide likely mistakes: using an
integer where a boolean is expected, or comparing an integer column to a
floating-point argument. qusql-sqlx-type rejects these. If you genuinely
need a cross-type operation you can make the intent explicit with a SQL cast:
```sql
SELECT * FROM t WHERE active = $1::integer  -- explicit cast, accepted
SELECT * FROM t WHERE score > $1::float8    -- explicit cast, accepted
```
MySQL/MariaDB support and the _LIST_ hack
MySQL and MariaDB do not support passing a list as a single query argument, so qusql adds a _LIST_ hack.
Queries with IN (...) clauses over a runtime-determined list of values are
notoriously awkward in sqlx because sqlx does not support dynamically-sized
parameter lists. qusql-sqlx-type solves this with a special placeholder:
```rust
// Pass a slice; the macro expands _LIST_ to the correct number of ? at runtime.
let ids: Vec<i32> = vec![1, 2, 3];
let rows = query!("SELECT id, title FROM notes WHERE id IN (_LIST_)", &ids)
    .fetch_all(&pool)
    .await?;
```
_LIST_ is replaced with the right number of ? placeholders based on the
slice length. If the slice is empty it is replaced with NULL so that IN (NULL) is always a valid SQL expression. Note that x IN (NULL) evaluates
to UNKNOWN (never TRUE) for any value of x, so a query with an empty
list will return no rows, which is the expected behaviour.
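The rewrite itself is simple to sketch (illustration only; the real macro additionally type-checks the slice's element type against the column):

```rust
// Sketch of the _LIST_ rewrite; illustration only.
fn expand_list(sql: &str, len: usize) -> String {
    let placeholders = if len == 0 {
        // Empty list: `IN (NULL)` is valid SQL and matches no rows.
        "NULL".to_string()
    } else {
        vec!["?"; len].join(", ")
    };
    sql.replace("_LIST_", &placeholders)
}
```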
What is supported; what is not
The schema evaluator understands the most common schema and data manipulation statements used in real-world applications:
- CREATE TABLE / ALTER TABLE / DROP TABLE
- CREATE INDEX / DROP INDEX
- CREATE TYPE ... AS ENUM (PostgreSQL)
- CREATE VIEW / CREATE MATERIALIZED VIEW
- CREATE FUNCTION / CREATE OR REPLACE FUNCTION (PL/pgSQL)
- CREATE PROCEDURE / CALL (MySQL stored procedures)
- DO $$ ... $$ anonymous blocks
- INSERT, UPDATE, DELETE, SELECT including JOIN, subqueries, WINDOW functions, CTEs (WITH), RETURNING, ON CONFLICT
Most of the common SQL functions and operators are understood: aggregates, string functions, date arithmetic, JSON operators, and so on. That said, the parser is not a full PostgreSQL or MySQL engine, and some less common functions or vendor extensions may not yet be recognized.
Pull requests and issues are very welcome for anything you find missing. The parser is written in pure Rust and is straightforward to extend. The project is at github.com/antialize/qusql.
Examples
To make the concepts above concrete, the repository includes two fully-working example programs that you can clone and run against a local PostgreSQL instance. They are intentionally small enough to read in a few minutes, and together they cover the full range from a single-table beginner setup to a multi-table schema with enums, UUIDs, and migrations.
- qusql-sqlx-type-notes: a small CLI that adds, lists, pins, and deletes notes from a single-table PostgreSQL schema. Good for getting started.
- qusql-sqlx-type-books: a library catalog with authors, books, loans, and reviews; demonstrates UUIDs, user-defined ENUM types, JOIN queries, RETURNING, ON CONFLICT, the idempotent migration pattern, and chrono date types.
Both compile without a running database (cargo check and the query! macros
work offline) and can be run against a local PostgreSQL instance with a single
DATABASE_URL environment variable.
Links
qusql-py-mysql-type
qusql-py-mysql-type and qusql-mysql-type-plugin let you write MySQL queries
in Python that are type-checked by mypy at static analysis time, with no runtime
surprises.
The plugin is implemented as a native extension (compiled Rust via PyO3) that
mypy loads to evaluate your schema and annotate every execute() call.
Setup
Install the packages (e.g. with uv):
```toml
# pyproject.toml
[project]
dependencies = ["qusql-mysql-type"]

[dependency-groups]
dev = ["qusql-mysql-type-plugin", "mypy", "types-mysqlclient"]

[tool.mypy]
plugins = ["qusql_mysql_type_plugin"]
```
Place your schema in mysql-type-schema.sql in the working directory where
mypy runs (usually the project root):
```sql
CREATE TABLE IF NOT EXISTS notes (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  body TEXT,
  pinned TINYINT(1) NOT NULL DEFAULT 0
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Usage
```python
from typing import cast

import MySQLdb
import MySQLdb.cursors

from qusql_mysql_type import execute

conn = MySQLdb.connect(host="127.0.0.1", user="test", passwd="test", db="test")

# cast() is required because MySQLdb stubs leave cursor() return type as Any
c = cast(MySQLdb.cursors.Cursor, conn.cursor())

# mypy infers: list[tuple[int, str, str | None]]
rows = execute(c, "SELECT id, title, body FROM notes ORDER BY id").fetchall()
for note_id, title, body in rows:
    print(title, body or "")
```
What mypy checks
- Column names referenced in SELECT exist in the table
- %s argument count matches the query
- %s argument types match the expected SQL column types
- Inferred return type flows into the rest of your code; wrong destructuring patterns are caught at analysis time
- Invalid SQL is a mypy error
The _LIST_ expansion
For IN (...) queries with a variable-length list, use _LIST_:
```python
ids = [1, 2, 3]
rows = execute(c, "SELECT id, title FROM notes WHERE id IN (_LIST_)", ids).fetchall()
```
_LIST_ is expanded at runtime to the correct number of %s placeholders. If
the list is empty it becomes NULL.