The Inevitable Conflict
In any system where multiple clients write to the same data independently, conflicts are inevitable. The question isn’t whether they’ll happen — it’s how you handle them when they do.
Consider the common cases:
- Two users update the same task simultaneously.
- One user edits a task while offline, another edits it while online, and then the offline user reconnects.
- A sync delay causes two clients to base their changes on the same version of a record.
Bad conflict resolution produces data loss, silent overwrites, or confusing “version conflict” dialogs that require user decisions they aren’t equipped to make.
FlowEra’s Model: Field-Level Last Write Wins
FlowEra uses last-write-wins (LWW) conflict resolution at the field level. This sounds simple, but the “field level” part is crucial.
Most systems that use LWW apply it at the record level: the most recent write to the entire record wins, and the previous write is lost. If User A updates the title while User B updates the status at nearly the same moment, whichever write lands second overwrites the entire record, and the other user's change is silently discarded.
FlowEra tracks changes at the field level. The title update and the status update are independent writes. If they happen simultaneously, both survive — User A’s title change is applied, and User B’s status change is applied. There’s no conflict to resolve because the changes don’t touch the same data.
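The field-level rule can be sketched as a per-field timestamp comparison. This is illustrative TypeScript under assumed names and shapes, not FlowEra's actual code:

```typescript
// Each field carries its own last-write timestamp, so writes to
// different fields never conflict with each other.
type FieldWrite = { value: string; ts: number };
type TaskRecord = { [field: string]: FieldWrite };

function applyWrite(rec: TaskRecord, field: string, write: FieldWrite): TaskRecord {
  const current = rec[field];
  // Last write wins, decided per field, not per record.
  if (!current || write.ts >= current.ts) {
    return { ...rec, [field]: write };
  }
  return rec; // stale write loses; everything else is untouched
}

let task: TaskRecord = {
  title: { value: "Ship v2", ts: 100 },
  status: { value: "In Progress", ts: 100 },
};
task = applyWrite(task, "title", { value: "Ship v2.1", ts: 205 }); // User A's edit
task = applyWrite(task, "status", { value: "Done", ts: 205 });     // User B's edit
// Both survive: neither write touched the other's field.

// A true same-field conflict: the earlier timestamp loses.
task = applyWrite(task, "status", { value: "Blocked", ts: 204 });
```

After the three writes, the title is "Ship v2.1" and the status is "Done": the concurrent edits to different fields both survive, while the stale same-field write is dropped.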
When Field-Level LWW Isn’t Enough
Field-level LWW handles the majority of real-world concurrent edit scenarios. Two people rarely write conflicting values to the exact same field within the same sync window.
When they do — two people both update the task status simultaneously, and they choose different values — the field-level LWW rule means the server-side timestamp determines which value wins. The later write survives.
This is occasionally wrong. But the alternative — requiring user decisions on merge conflicts — is worse for a task management tool. Users are in flow state. Interrupting them to resolve a conflict about which status label was applied 50ms earlier breaks that flow and produces a worse outcome than silently accepting the later value.
For collaborative text editing (knowledge base pages), we use a different approach: operational transformation, which merges concurrent character-level edits without data loss. That’s a significantly more complex system, warranted by the nature of document editing where every character matters.
The Write-Back Queue
When FlowEra’s client writes to local SQLite, those writes are queued for the write-back endpoint. The queue is persistent — if the app closes, the queue survives and flushes when the user next opens the app.
Each mutation in the queue carries:
- The operation type (PUT / PATCH / DELETE)
- The record ID
- The field values being written
- The client-side timestamp
The server applies queued mutations in order. It validates that the user has permission to write the affected records (checking the tenant_id from the JWT against the affected rows), applies the mutation, and stamps it with a server-side timestamp. That server timestamp, not the client's, is the authority for any conflict resolution.
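A minimal sketch of the queue entry and the server's apply loop, using hypothetical names and an in-memory map in place of the real database (DELETE handling omitted for brevity):

```typescript
type Mutation = {
  op: "PUT" | "PATCH" | "DELETE";
  recordId: string;
  fields: { [name: string]: unknown };
  clientTs: number; // informational only; the server clock is authoritative
};

type StoredRecord = {
  tenantId: string;
  fields: { [name: string]: { value: unknown; serverTs: number } };
};

function applyQueue(
  db: Map<string, StoredRecord>,
  queue: Mutation[],
  jwtTenantId: string,
  now: () => number
): string[] {
  const auditWarnings: string[] = [];
  for (const m of queue) { // mutations are applied in queue order
    const rec = db.get(m.recordId);
    if (!rec || rec.tenantId !== jwtTenantId) {
      // Tenant check: the JWT's tenant must match the affected row.
      auditWarnings.push(`permission denied for ${m.recordId}`);
      continue;
    }
    const serverTs = now(); // server timestamp recorded at apply time
    for (const [name, value] of Object.entries(m.fields)) {
      const current = rec.fields[name];
      if (!current || serverTs >= current.serverTs) {
        rec.fields[name] = { value, serverTs }; // field-level LWW
      }
    }
  }
  return auditWarnings;
}
```

A mutation against a record the caller's tenant doesn't own produces an audit warning rather than aborting the rest of the batch.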
Handling Schema Mismatches
If a client has been offline for an extended period and the server schema has been updated (new required fields, changed field types), queued mutations might not match the current schema. FlowEra handles this gracefully: unknown fields are ignored, missing required fields take server-side defaults, and the server records a validation warning in the audit log without rejecting the entire mutation batch.
This ensures that a user returning from vacation doesn’t come back to find their week of offline work rejected because a schema migration ran while they were away.
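The lenient validation pass might look like the following sketch. The schema shape here is an assumption (the real format isn't shown), and the defaulting step would apply to record creation rather than partial patches:

```typescript
type FieldSpec = { required: boolean; default?: unknown };
type Schema = { [field: string]: FieldSpec };

function validateFields(
  schema: Schema,
  incoming: { [name: string]: unknown },
  warn: (msg: string) => void
): { [name: string]: unknown } {
  const accepted: { [name: string]: unknown } = {};
  for (const [name, value] of Object.entries(incoming)) {
    if (!(name in schema)) {
      warn(`unknown field ignored: ${name}`); // logged to the audit trail, not rejected
      continue;
    }
    accepted[name] = value;
  }
  for (const [name, spec] of Object.entries(schema)) {
    if (spec.required && !(name in accepted)) {
      accepted[name] = spec.default; // server-side default fills the gap
      warn(`required field missing, defaulted: ${name}`);
    }
  }
  return accepted;
}
```

The key design choice is that validation failures degrade to warnings: the mutation still lands, minus the fields the current schema can't accept.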
What We Don’t Protect Against
We’re transparent about the limitations of this model:
- High-frequency edits to the same field by different users — only the last write survives. In practice, two users editing the same task field simultaneously is rare enough that this is acceptable.
- Business logic conflicts — if two users simultaneously move a task to “Done” and the system is supposed to trigger a single deployment action, we detect the duplicate status transition but require application-level logic to decide what to do.
- Long offline periods with complex changes — if a user is offline for a week and makes hundreds of changes, the reconciliation at reconnect is complex and occasionally produces surprising results.
We document these edge cases rather than pretend they don’t exist. The architecture is correct for 99.9% of real usage patterns.