Why VM now
The 2026 question every buyer asks — can ChatGPT, Claude, or Copilot reverse this? — has a structural answer: AI assistants pattern-match against transform shapes they’ve seen in training. Static obfuscators with fixed transforms eventually lose. Per-build polymorphism wins because the LLM has no fixed signature to learn.
But polymorphism is still a transform of recognizable JavaScript. A determined human with an execution sandbox and patience can recover the original logic given enough time. For the small set of functions where that recovery cost matters — license validation, anti-tamper checks, proprietary algorithms — we want a stronger answer.
That answer is virtualization. Compile the function to bytecode for a custom VM. Ship the bytecode plus the VM interpreter. The original function’s control flow, identifiers, and structure are gone — what ships is a stream of opcodes that only the VM understands.
How it works, end to end
1. You mark functions for virtualization
Source — you opt in per function
// @virtualize
function calculateLicenseHash(userId, productKey) {
  const seed = userId * 37 + productKey.charCodeAt(0);
  return 'JSO-' + seed.toString(16).padStart(8, '0').toUpperCase();
}

// regular code (NOT virtualized) - keeps native JS speed
function renderUI(state) {
  document.body.textContent = state.label;
}
Only the marked function is virtualized. Everything else compiles through the standard Maximum-mode pipeline (renaming, string encryption, flat control flow). The VM is reserved for the parts where its 10–100× runtime cost is worth the protection it gives.
2. The compiler emits bytecode + a VM interpreter
At protection time, the marked function is parsed to an AST and compiled to a stream of opcodes for a stack-based VM. Common opcodes: PUSH_LITERAL, LOAD_VAR, BINARY_OP, CALL_METHOD, RETURN. The opcode encoding is regenerated per build — on Tuesday the BINARY_OP opcode might be byte 0x4a; by Wednesday’s release it’s byte 0x91.
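As a sketch of what the compiler produces for calculateLicenseHash (the mnemonics, operand layout, and pool indices below are invented for this article; the shipped stream is raw bytes whose encoding changes every build):

LOAD_VAR      0               // userId
PUSH_LITERAL  37
BINARY_OP     MUL             // userId * 37
LOAD_VAR      1               // productKey
PUSH_LITERAL  0
CALL_METHOD   charCodeAt, 1   // productKey.charCodeAt(0)
BINARY_OP     ADD
STORE_VAR     2               // seed
PUSH_CONST    0               // 'JSO-' (index into the encrypted constant pool)
LOAD_VAR      2
PUSH_LITERAL  16
CALL_METHOD   toString, 1     // seed.toString(16)
PUSH_LITERAL  8
PUSH_CONST    1               // '0' (also pooled; method names are pooled too)
CALL_METHOD   padStart, 2
CALL_METHOD   toUpperCase, 0
BINARY_OP     ADD             // 'JSO-' + ...
RETURN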
What ships in the protected output (illustrative)
// VM interpreter - shape regenerates per build
(function _vm(){
  var R = new Array(256), SP = 0, PC = 0;
  var BC = _decodeBytecode("...long base64 string...");
  var STR = _decodeStrings("...encrypted constant pool...");
  while (true) {
    switch (BC[PC++]) {
      case 0x4a: R[SP-2] = R[SP-2] * R[SP-1]; SP--; break; // mul
      case 0x91: R[SP++] = STR[BC[PC++]]; break;           // load string
      case 0x2f: return R[--SP];                           // return - exits the dispatch loop
      // ... ~40 other opcodes, dispatcher shape randomized per build
    }
  }
})();
calculateLicenseHash = function(){ return _vmCall(0x12, arguments); };
3. The protected file at runtime
When your application calls calculateLicenseHash(userId, productKey), control transfers into the VM. The VM dispatches opcodes against its internal register file and stack. The function’s observable behaviour is identical — it returns the same hash for the same inputs — but its structure is unrecoverable from the shipped bundle alone.
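For example (hypothetical inputs; the output follows from the source shown in step 1):

calculateLicenseHash(1042, 'K7-PRO');   // 'JSO-000096E5' - identical before and after protection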
How VM mode differs from Maximum mode today
To make the tradeoff concrete, here's the function from the website's earlier walkthrough under Maximum mode, and then calculateLicenseHash under Maximum + VM:
Maximum mode today — per-build polymorphic decoder, encrypted strings, flat control flow
(function(){var _0xa3=_dec(0x4a);
function _0xa4(_p){var _st=0;
while(_st!==-1){switch(_st){
case 0:if(_p===_dec(0x4b))return!1;
_st=1;break;
case 1:return _p[_dec(0x4c)]>
Date[_dec(0x4d)]();
}}}})();
The original control flow shape is preserved (you can still see it’s an if/else returning a comparison) but identifiers, strings, and per-build randomization make it expensive to follow.
Maximum + VM — calculateLicenseHash virtualized
// the function body is gone - replaced by a VM call
calculateLicenseHash = function(){ return _vmCall(0x12, _vmCtx, arguments); };
// what _vmCall(0x12) does is encoded in:
// - the opcode stream (~80 bytes for a small function)
// - the VM dispatcher (regenerates per build)
// - the encrypted constant pool
// none of which contain the original variable names,
// the original control flow structure, the original
// arithmetic constants, or the literal 'JSO-' string.
A reverse engineer trying to recover the original logic now has to: (a) understand this build’s VM dispatcher, (b) extract the bytecode and the constant pool, (c) symbolically execute or simulate the VM with synthetic inputs, (d) reconstruct the original control flow from the opcode trace. That’s an order of magnitude harder than reading polymorphic JavaScript.
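To picture step (c): once an attacker has done (a) and (b), simulation is a logging loop over the extracted stream. A sketch against the illustrative interpreter above (not any real build; recovering the opcode semantics needed to write this is the expensive part):

// attacker-side simulator: replays extracted bytecode, logging each step
function traceVM(BC, STR, args) {
  const trace = [];
  const R = new Array(256); let SP = 0, PC = 0;
  R[SP++] = args[0]; R[SP++] = args[1];          // synthetic inputs
  while (true) {
    const op = BC[PC++];
    trace.push([PC - 1, op, R.slice(0, SP)]);    // position, opcode, stack snapshot
    switch (op) {
      case 0x4a: R[SP-2] = R[SP-2] * R[SP-1]; SP--; break;   // mul
      case 0x91: R[SP++] = STR[BC[PC++]]; break;             // load string
      case 0x2f: return { result: R[--SP], trace };          // return
      // ... every other opcode, recovered by hand from the dispatcher
    }
  }
}
traceVM([0x4a, 0x2f], [], [6, 7]);   // { result: 42, trace: [...] }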
Cost, and when to use it
VM execution is meaningfully slower than native JavaScript. Concrete numbers, measured on a small license-hash function:
- Native JS: ~3 ns / call
- Maximum mode: ~5 ns / call (basically free overhead)
- Maximum + VM: ~250 ns / call (~80× slower)
For a function called twice on page load, that’s 500 ns of overhead — entirely invisible. For a hot inner loop that runs 10,000× per second, that’s 2.5 ms of overhead per second — still fine. For a function called millions of times in a tight loop, you don’t want this.
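Your numbers will vary by engine and by function; a crude sketch of how to measure per-call cost yourself (calculateLicenseHash is whichever build you're testing, and a real benchmark would add warmup runs):

let sink = 0;                                    // keeps the call from being optimized away
const N = 1e6;
const t0 = performance.now();
for (let i = 0; i < N; i++) sink += calculateLicenseHash(i, 'K7-PRO').length;
const t1 = performance.now();
console.log(((t1 - t0) / N) * 1e6, 'ns/call', sink);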
The right rule is the one we’re defaulting to: opt-in per function via the // @virtualize comment. Code is virtualized only where you explicitly mark it. The other 99% of your bundle keeps native JS speed and gets Maximum-mode polymorphic protection.
Good fits: license validation, anti-tamper checks, in-app entitlement gates, watermarking, proprietary scoring/ranking algorithms, fingerprinting routines, key-derivation paths.
Bad fits: rendering loops, animation tick handlers, network parsing hot paths, anything in a per-frame budget. These should stay native JS with Maximum-mode protection.
How VM mode interacts with the anti-LLM design
The same per-build polymorphism that defeats LLM pattern-matching on Maximum-mode output applies to VM mode — with stronger leverage:
- Opcode encoding regenerates per build. A model that learned what byte 0x4a meant in the August release sees byte 0x91 in the September release for the same operation.
- Dispatcher shape regenerates per build. The order of opcodes in the dispatcher switch, the register file layout, the stack pointer naming convention — all randomized.
- Constant pool encoding regenerates per build. String literals and numeric constants are encrypted with build-time keys; their byte representation in the pool changes every release.
Net effect: an LLM that has been trained on sample VM output, or that solves one customer’s build, has nothing transferable to use on the next. There is no fixed signature.
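For intuition, per-build opcode assignment amounts to something like the following at protection time (a sketch with invented names; the real generator is seeded per build and also drives dispatcher layout and pool keys):

// draw a fresh byte for every operation on each build
function assignOpcodes(ops) {
  const bytes = Array.from({ length: 256 }, (_, i) => i);
  for (let i = bytes.length - 1; i > 0; i--) {       // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));   // a real build uses a seeded CSPRNG
    [bytes[i], bytes[j]] = [bytes[j], bytes[i]];
  }
  return Object.fromEntries(ops.map((op, i) => [op, bytes[i]]));
}
const table = assignOpcodes(['PUSH_LITERAL', 'LOAD_VAR', 'BINARY_OP', 'CALL_METHOD', 'RETURN']);
// e.g. table.BINARY_OP === 0x4a in this build, 0x91 in the next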
The honest limits
Things VM mode does not solve:
- Live execution observation. An attacker can still attach a debugger, set a breakpoint inside the VM, and dump the register file mid-execution. VM mode raises the cost of understanding the code without running it; it doesn’t prevent observation of the running machine. Pair with anti-debug + runtime monitoring for that threat.
- VM-aware deobfuscators. If an attacker invests serious time, they can reverse the VM dispatcher, write a decompiler that converts opcode streams back into JS, and apply it to every release. We make this expensive (per-build dispatcher randomization) but not impossible. The economic argument — per-release cost to attacker exceeds value of recovered code — still applies.
- Async / await inside virtualized functions. The current design supports synchronous functions only. Marking an async function as virtualized will fail at compile time. We may lift this in a later release, but truly concurrent async inside a VM has hard semantic issues.
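For example, a hypothetical function the compiler would refuse:

// rejected at protection time: async functions can't be virtualized yet
// @virtualize
async function refreshEntitlements(token) {
  const res = await fetch('/entitlements', { headers: { Authorization: token } });
  return res.json();
}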
When this ships
VM mode is currently in preview and will ship in a future release as part of the Maximum-mode pipeline. Tier availability: included in Corporate ($49/mo) and Enterprise ($99/mo); not in Basic. The Free tier is unaffected.
If you want to be notified when VM mode goes live, contact us and ask for the VM mode beta. We’re looking for a small set of customers with realistic workloads to test it before general release.