Core Innovations
KYRx retains zero-cost memory safety while eliminating the three biggest sources of developer friction in systems programming.
own T · shared T · default &T
Choose your ownership model per value. Exclusive stack-allocated ownership with own T, reference counting with shared T, or the default safe reference. The borrow checker still enforces safety — but you set the rules, not the compiler.
let x: own Vec<u8> = ...
let y: shared Config = ...
let z: &[u8] = &x[..]
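To make the three modes concrete, here is a short sketch in KYRx-style syntax. The helper names (read_file, process, spawn_worker, Config::load) are illustrative placeholders, not part of any KYRx standard library:

```
// own: exclusive, stack-allocated ownership; moves on assignment
let buf: own Vec<u8> = read_file("data.bin");
process(buf);                // buf is moved here; a later use is a compile error

// shared: reference-counted; cloning bumps the count
let cfg: shared Config = Config::load();
spawn_worker(cfg.clone());   // both handles stay valid across threads

// default &T: a plain borrow-checked reference, no annotation ceremony
let head: &[u8] = &buf[..4];
```

The point of the sketch: you pick the ownership discipline per value, and the borrow checker verifies whichever discipline you picked.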
concurrent code looks like sync code
No async keyword. No await keyword. No function coloring — a sync function can call a concurrent one without modification. The work-stealing scheduler and io_uring / kqueue / IOCP integration are entirely invisible to user code.
fn load_all(ids: Vec<Id>) -> Vec<Row> {
    ids.map(|id| db::get(id)).collect()
    // ↑ concurrent, zero keywords
}

step backwards through execution history
The KYRx runtime records execution history at near-zero overhead. Attach the debugger to any running process and step backwards through every function call, memory write, and state transition. No replay required.
kyrx debug --attach <pid>
> step-back 50
> inspect frame -3
> watch x.field == 42
Compiler Architecture
Every crate has a single responsibility. The pipeline is deterministic and auditable — each stage passes a well-typed artifact to the next. Cranelift handles fast dev builds; LLVM maximises release performance.
Runtime Architecture
Language Tour
Same safety guarantees. Radically less friction. Three scenarios that show where KYRx wins every day.
Bridges & FFI
First-class FFI bridges let KYRx call and be called by every major language and runtime. Adopt incrementally — no rewrite required.
Render KYRx components in React apps via JSX bridge
Native Node.js addon via N-API — zero-copy buffers
Direct C ABI interop — share types across language boundary
PyO3-style bindings — call KYRx from Python, no overhead
cgo-compatible shared library — link KYRx into Go services
WASM32 target — run in browser, Cloudflare Workers, Deno
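To make the C-ABI bridge concrete, here is a sketch of exporting a KYRx function to C callers. The #[export(c)] attribute and the slice-to-pointer lowering shown are assumptions for illustration; the actual bridge syntax may differ:

```
// KYRx side: hypothetical #[export(c)] attribute marks a C-ABI symbol
#[export(c)]
fn kyrx_checksum(data: &[u8]) -> u32 {
    data.iter().fold(0u32, |acc, b| acc.wrapping_add(*b as u32))
}

// A C consumer would then declare the symbol, with the slice lowered
// to a (pointer, length) pair, and link the KYRx-built shared library:
//
//   extern uint32_t kyrx_checksum(const uint8_t *data, size_t len);
```

Because the boundary is the plain C ABI, the same exported symbol also serves the Go (cgo) and Python binding paths listed above.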
Performance
Colorless concurrency compiles down to the same machine code you would write by hand. The scheduler is a Rust library — no interpreter, no bytecode.
The kyrxc_backend_llvm crate emits LLVM IR directly. All LLVM optimisation passes apply — vectorisation, inlining, LTO, PGO. Binary parity with Rust release builds.
Fewer lifetime annotations. No async/await syntax. Opt-in ownership means you write correct code fast. Teams ship features in days instead of spending weeks fighting the borrow checker.