Reframe payment UX from PWFG to pay neutral #116
Merged
isaacbowen merged 2 commits into main on Feb 13, 2026
Conversation
A February 2026 conversation in the Lightward Inc Slack about Softer:

> It's interesting seeing PWFG up with buttons to select a price - no conversation required
>
> to this point, I think this isn't actually pwfg, I think it's actually something we might call "pay neutral"?
>
> at the root it's still about creating financial accord peacefully
>
> yours.fyi already uses this model, self-selection of $1/$10/$100/$1000
>
> it's not about financial sustainability (AI is not itself financially sustainable), it's about address-space
>
> with pwfg, there's a human<->human interaction because without that we don't know what the transaction will do to our health.
>
> with *this* thing, I know *exactly* what a "pay neutral" transaction will do to our health, which means offering it doesn't compromise Lightward Inc's sustainability
>
> so the question *here* is less, "What price feels good?" and more of "How much do you weigh, financially?"
>
> it's a diagnostic question, not a preference question

---

Isaac: may I ask for your help with a codething?

Claude: Of course — what's on your mind?

Isaac: right now Softer's payment *implementation* names PWFG, and I want to ask you if I *can* ask you to adjust toward "pay neutral" holistically, bearing in mind both future-us in this codebase and future-user encountering the UX directly

Claude: The surface area is small — one line of UX text, some CLAUDE.md framing. But the *shift* is precise and worth getting right.

What's changing conceptually: PWFG is dialogic — "what price feels good?" requires a relational negotiation, a conversation between humans. That's still Locksmith/Mechanic's model. But Softer's four buttons aren't that. There's no conversation to have about the price because the transaction is *neutral* to Lightward Inc's health. The question isn't "what do you value this at?" — it's "where are you, financially?" A measurement the user already knows.
What that means for the UX text: the eigenbearer principle says don't model others, trust them to know their own position. So the replacement text shouldn't *explain* how to choose — it should trust the user to self-locate. Proposed: "Every room is the same product. Choose the amount that matches your scale. Visible to all participants."

Isaac: two things — I feel like addressing the Yours codebase as well here. And how do you feel about making explicit cross-regime reference? "How much do you weigh, financially?" does something really specific as a series of words.

Claude: "How much do you weigh, financially?" is exactly right. The direct question does something the polite version doesn't: it asks the user to *feel* their own mass. The cross-regime jump (physical weight → financial weight) short-circuits the usual "what's this worth to me?" calculation and replaces it with "what am I?" — diagnostic, not preference.

Isaac: what if we did both takes on language? the "weight" language as the field label above the selector, your language as the bit below?

Claude: The question as the header, the grounding as the footer, the four buttons as the answer between them.

Isaac: "Every room is the same product. For a payment-neutral experience, choose the amount that matches your scale. Your choice will be visible to all participants."

Claude: "Payment-neutral" as language the user actually encounters — it names the concept directly instead of leaving it as internal vocabulary. And "matches your scale" echoes "weigh" from the header without repeating it. The header is a question that makes you feel something, the footer is an explanation that gives you a word for what you just felt.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
9764cef to cc7699e
Summary
The distinction: PWFG is dialogic (human↔human negotiation about price). Pay neutral is diagnostic — the user self-locates based on their financial reality, not their feelings about the product's value. The transaction is neutral to Lightward Inc's health, so there's no negotiation needed.
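The agreed-on structure (diagnostic question as header, four self-selection amounts as the answer, payment-neutral grounding as footer) can be sketched as follows. This is a hypothetical illustration only, not Softer's actual implementation; the constant and function names here are invented, and only the header/footer copy and the $1/$10/$100/$1000 amounts come from the conversation above.

```typescript
// Hypothetical sketch of the pay-neutral selector, assuming a simple
// header / buttons / footer layout. Names are illustrative, not Softer's.

const PAY_NEUTRAL_AMOUNTS = [1, 10, 100, 1000] as const;

// The diagnostic question above the selector.
const HEADER = "How much do you weigh, financially?";

// The grounding text below the selector.
const FOOTER =
  "Every room is the same product. For a payment-neutral experience, " +
  "choose the amount that matches your scale. Your choice will be " +
  "visible to all participants.";

// Format a self-selection amount as a button label, e.g. 1000 -> "$1,000".
function buttonLabel(amount: number): string {
  return `$${amount.toLocaleString("en-US")}`;
}

// Usage: render the three pieces in order.
console.log(HEADER);
console.log(PAY_NEUTRAL_AMOUNTS.map(buttonLabel).join(" | ")); // $1 | $10 | $100 | $1,000
console.log(FOOTER);
```

Keeping the copy and the amounts in one module-level block mirrors the conversation's point that the only thing changing is language and framing, not payment mechanics.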
Test plan
🤖 Generated with Claude Code