
Introducing Subverse

Advancing Sovereign Communication

Subverse is communication software designed to empower users by providing agency, control, and customizability over their online interactions. This tool addresses the need for communication sovereignty in an age of spam, authentication fatigue, propaganda, data breaches, leaks, and censorship. Users can easily manage their relationships, ensuring private, programmable, reliable, and seamless communication.

With Subverse, you decide who makes high-value content. Enjoy the peace of a network without spammers, shills, bots, and useless social media posts. The software bypasses censorship, allowing free and open messaging. Automation and customizability are at the core of Subverse, offering a user-friendly programming interface and the ability to build and share apps as easily as any other content.

Featuring telepathy-level privacy, Subverse treats information as the valuable commodity it is, ensuring no outside observer can detect if, when, or with whom a user is communicating. By adopting Subverse, users will experience enhanced privacy, freedom, and adaptability in their online communication, giving them a distinct advantage in the digital world.

Overview

Goals

  • Users communicate with privacy and trust
  • Users control their own content
  • Users can program and share their own applications on the platform
  • Users can automate interactions via chatbots that act on their behalf (and speak either programmable API or natural language)
  • Users are freed from central control, by pushing functionality to the edges

Properties

Authenticated and attributed

With Subverse, you always know with confidence who created the content you are viewing. Since no one can impersonate anyone else, many of today’s internet problems go away completely or are greatly diminished: phishing, propaganda, deep fakes.

In the Subverse protocol, authentication is automatic, programmable, and flexible enough to balance the goals of security and convenience.

In Subverse, you will never need to:

  • Create an account
  • Verify your email address
  • Enter a username or password
  • Prove you are human
  • Enter a bank account or credit card number

Trusted

Your attention is a precious resource, with millions of internet voices competing for it. Subverse provides a personal entourage that filters out the unwanted voices, leaving you in peace to communicate with friends, family, associates, and people you admire.

In the Subverse protocol, no one can talk to you without first being properly introduced by someone you trust. Spam does not and cannot exist. Any low quality content you do receive immediately reduces the influence of whoever introduced you to its source, making it less and less likely over time that you’ll receive any low quality content.

By using an introduction-based protocol, you always know who you’re dealing with and how much you can trust them.

In Subverse’s trusted environment, the presentation of content is controlled by you, not the other party. Every shopping site works exactly the same way, so you can get things done and get on with your life. You aren’t bombarded by flashy branding and ads.

Private

Current “privacy” messengers send whispers: No one else knows what you are saying, but they can see who you’re whispering to. Subverse gives you a new superpower: telepathy. While telepathy is not possible in the physical world, Subverse brings it to the digital realm. Your thoughts reach their target and no observer knows you did anything at all.

Durable

Your private data is treated as a valuable resource. It is encrypted and replicated on the network, such that if you lose access to your device, you can pick up right where you left off on a new device. All you need is your key.

Reliable

If you go offline, other peers will collect data being sent to you (in encrypted form) and deliver it when you are back online. Messages will always reach you as long as you have your secret key.

Censorship Resistant

All communication is done over a censorship-resistant network, which has several techniques for bypassing firewalls. Since you already have complete privacy, it is not possible for anyone to block messages based on content.

Automated

Subverse is intended as a replacement for the web - it has the functionality of both a web server (it can serve your data via an API) and a browser (you can consume and view data).

The web is a document-based system originally designed for desktop computing. Subverse is a message-based system, where content from humans and programs integrates seamlessly. AI chatbots fit right in and actually benefit from the network’s authenticated nature.

Serverless

Because Subverse offers strong authentication, privacy, and reliability, it eliminates the need for central servers in many use cases - denial-of-service attacks are harder for an attacker to pull off, and there is no need to pay Cloudflare to serve your content for you.

Device agnostic

Move between devices seamlessly. While technically your identity on different devices is different, you have access to the same content, and everyone will treat your multiple device identities as the same person.

How it works

Authentication and attribution

Authentication is done by small programs called scripts, inspired by the way Bitcoin (and other cryptocurrencies) decide whether the person spending money is really the one authorized to spend it. The script acts as a lock that requires some cryptographic proof, which acts as the key to unlock it. This lets users prove who created a given piece of content and attribute it to them. Scripts can be off-the-shelf or customized to serve your particular security needs.

Trust

Trust starts with you. If you trust someone (say, a family member), you mark them as such in the Subverse address book. That tells the application they’re allowed to do things that other people can’t - they can message you, introduce you to others, etc. Your network expands in much the same way it often does in real life: via introductions. Introductions don’t have to be person to person - for example, you can think of Google as an introduction service. Whether you trust someone like Google to make introductions is up to you. You can revoke that trust at any time.
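
To make the mechanism concrete, here’s a rough sketch in Python of an introduction-based address book where low-quality content costs the introducer influence. The names and the penalty value are invented for illustration; this is not Subverse’s actual data model.

class Contact:
    def __init__(self, local_name, introduced_by=None):
        self.local_name = local_name        # your name for them, purely local
        self.introduced_by = introduced_by  # script hash of whoever vouched for them
        self.influence = 1.0                # weight their future introductions carry

class AddressBook:
    def __init__(self):
        self.contacts = {}                  # keyed by script hash, not by name

    def add_trusted(self, script_hash, local_name):
        # Someone you trust directly, e.g. a family member you met in person.
        self.contacts[script_hash] = Contact(local_name)

    def introduce(self, introducer_hash, new_hash, local_name):
        introducer = self.contacts.get(introducer_hash)
        if introducer is None or introducer.influence <= 0:
            raise PermissionError("introducer is not trusted")
        self.contacts[new_hash] = Contact(local_name, introduced_by=introducer_hash)

    def report_low_quality(self, offender_hash, penalty=0.25):
        # Whoever introduced the offender loses influence, so their future
        # introductions are less likely to reach you.
        offender = self.contacts.get(offender_hash)
        if offender and offender.introduced_by in self.contacts:
            self.contacts[offender.introduced_by].influence -= penalty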

Privacy

Underlying Subverse’s networking is an anonymity network that guarantees that no third party can determine what you’re saying, or who you’re talking to, or even if you are talking to anyone at all. It also guarantees that no second party (people you talk to) can determine your physical location unless you explicitly tell them.

Subverse automatically creates new identities when needed. For example, you don’t want a search engine to compile a database of everything you’ve ever searched for, so every time you search, Subverse will use a fresh identity. It’s only when you need someone to remember you, or be able to reach you later, that Subverse will reuse identities.

Subverse also encrypts data at rest on your device, and can automatically expire old content such that it does not become a liability.

Durability

All content you create becomes part of an encrypted “stream”, which is similar to a bittorrent file share, but content can be added over time and cannot be deleted. Other users participate as peers in the data sharing. Some users have the decryption key (the people who you want to be able to read the messages), and some don’t (they hold the encrypted data as a backup in case anyone needs it later, but they cannot read it themselves).

Every part of the app is streamed, even the parts where the only person with the key is you - your address book, app configuration, message history, file attachments, etc. If you lose your phone you can restore everything from the network.

Reliability

Since all data in the system is duplicated in the network, message senders can go offline without delaying delivery. Receivers can go offline without dropping messages.

The way duplication is handled is similar to bittorrent in that chunks of data are exchanged between peers, and a user seeking to download an entire stream can download from multiple peers at once. Where Subverse differs is that peers who don’t care about the content also participate, incentivized by micropayments. They provide both durability (storing the content long term) and reliability (serving the content when the creator is offline).
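
Here’s a toy sketch of that chunk exchange in Python, assuming peers expose a hypothetical get_chunk call and the downloader already knows the expected chunk hashes (micropayments omitted entirely):

import hashlib

def fetch_stream(chunk_hashes, peers):
    """Illustrative only: chunk_hashes is the ordered list of expected SHA-256
    digests, peers are hypothetical objects with get_chunk(digest) -> bytes or None."""
    chunks, missing = {}, []
    for i, digest in enumerate(chunk_hashes):
        # Rotate through peers so the load is spread across the swarm.
        rotation = peers[i % len(peers):] + peers[:i % len(peers)]
        for peer in rotation:
            data = peer.get_chunk(digest)
            if data is not None and hashlib.sha256(data).digest() == digest:
                chunks[digest] = data       # accept only chunks that hash correctly
                break
        else:
            missing.append(digest)          # no peer had a valid copy
    return [chunks.get(d) for d in chunk_hashes], missing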

Censorship resistance

The internet itself is a powerful censorship-resistance tool - it automatically routes around censorship. However, most people don’t use it that way: they give all their content to a third party (Google, Facebook) instead of serving it themselves, and that third party can easily censor the content. Subverse fixes that by making it trivial to serve your own content.

It goes even further by using i2p for networking. Anyone who is upset about your content generally doesn’t know who or where you are, so it’s very difficult for them to threaten you.

Subverse is completely decentralized, so there is no company for governments to sue, or server to disconnect.

Automation

Most functionality in the app is programmable via a very simple programming language called kcats. To automate things in Subverse, you create bots - programs that receive messages and respond to them. You then introduce the bots to your contacts, so they can interact with them.

The bot can do things as simple as sharing photos, or as complex as running an online store.

Names

In Subverse, all names are local and for human eyes only. Everyone’s name in your address book is your name for them. The app itself doesn’t use names; it uses the hash of the person’s script to track who’s who. Like nearly anything else in Subverse, address book entries can be shared, and the receiver can edit them however he chooses.

Serverlessness

Many examples of modern coordination software are centralized mostly because that was the only way to get attribution and durability of data. Now that we have those properties in a decentralized protocol, we can move functionality to the edges, making more efficient use of computing resources and making systems more robust.

If you examine a company, each employee usually doesn’t generate so much content that it couldn’t be served from their own device (either a workstation or mobile device). All that remains is coordinating the communication, which is easily modeled inside Subverse.

The exceptions to this are services that aggregate or process vast amounts of data. For example, while likely no Google employee generates vast amounts of data, their web crawler certainly does. That cannot be easily distributed and will still require a large datacenter. However most corporate functions could be distributed - email, issue tracking, planning and scheduling, payments, administration, etc.

The simplest example of coordinating communication without a server is a group chat. Your device would understand the group chat protocol, which goes something like this: you produce a stream of messages (your side of the group conversation), and each message includes the ID of the message it’s replying to. The stream is encrypted and distributed as described in the “durability” section. Similarly, your device will download the encrypted streams produced by all the other members, decrypt them, and assemble the messages in the correct order, displaying them in much the same way modern messaging apps do.
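
A sketch of how a client might assemble such a conversation, assuming each already-decrypted message is a small record with an id, the id it replies to, and text (the record shape is an assumption, not the actual protocol):

def assemble_conversation(member_streams):
    """member_streams: dict of member -> list of {'id', 'reply_to', 'text'} records,
    each list already decrypted and in that member's own order.
    Assumes no reply cycles."""
    messages = {m['id']: (member, m)
                for member, stream in member_streams.items()
                for m in stream}
    ordered, placed = [], set()

    def place(msg_id):
        if msg_id in placed or msg_id not in messages:
            return
        member, msg = messages[msg_id]
        if msg.get('reply_to'):
            place(msg['reply_to'])          # make sure the parent appears first
        placed.add(msg_id)
        ordered.append((member, msg['text']))

    for msg_id in messages:
        place(msg_id)
    return ordered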

This is a fairly boring example because there are applications that handle this functionality already. It gets more interesting when we generalize to more complex coordination protocols. Ignoring regulatory burdens for a moment, let’s examine a much more complex group and see how its currently centralized rules and protocols could be pushed to the edges. Let’s look at a hospital.

There are many parties involved in medical care, including: the patient, doctors, nurses, technicians, imaging equipment, and administrators. Let’s look at the content each one produces, and how it can be coordinated.

Patient: he has access to his internal state - how he is feeling, if he is sick or injured, what hurts, and how much. He knows what medications he took and when, etc.

Doctor: Creates medical diagnoses and decisions on a per-patient basis

Nurse: Creates records of treatments administered to patients, observations of patient condition

Imaging and sensing equipment: produces data and images of the patient of various types (heart rate, xray, MRI etc).

Technician: interprets data and images

Administration: controls resources like rooms, equipment, doctors, nurses. Decides how much needs to be paid.

A patient breaks his ankle and goes to the hospital. At the entrance he’s asked to share his medical stream with the hospital, which would include all his medical history. It would include not only his own content but the keys he was given to other data relevant to his treatments. The doctor examines him, thinks the ankle looks broken, adds that to his stream (all hospital staff would create a new stream for each patient to preserve confidentiality), shares the key with the patient and administration. Administration allocates an ER room and xray. Patient is taken to xray machine, images are taken. The machine (which also speaks the protocol) streams the xray images and grants access to the patient, technicians, doctor, and nurses. Presumably the administration does not need to see the xray since they rely on the doctor’s diagnosis, and thus does not fetch or store any images themselves. Doctor views the xray and diagnoses a broken ankle, prescribes a cast, crutches, and pain medication, and adds those data points to his stream. Administration accesses the doctor’s stream and calculates the bill, and so forth.

The patient gets treated just as he would with a centralized coordination system that hospitals use today. However in this case, the patient leaves the hospital with all his medical data directly under his control. He has the keys to the streams and can download and view everything, and can take it with him wherever he might get treated next. The next doctor can see exactly what happened - not just the final result but who did and said what and when. If there was an initial incorrect diagnosis or other mistake, he’d see that as well since streams are append-only. The nature of the system enforces accountability because the data is immutable.

A real decentralized hospital protocol would be quite complex, but it would result in a system with far less complexity at the center than current systems, which are run by administrators and have to store and process all the data, not just the data required for administration. A simpler center means a smaller central point of failure. All these communication protocols are independent and don’t require the others to function properly. If the hospital’s central system goes down, patients can still be treated, doctors can still view xrays and prescribe medication, and nurses can receive that information and administer treatments. The administration’s system can catch up when it comes back online. In fact, the system could probably function reasonably well even without an internet connection, where data is shared p2p via bluetooth or ad-hoc wifi.

Of course, building a working system as described would be a complex undertaking, but no more complex than existing systems.

The reason a system like this is “decentralizable” is that there is not much need for aggregation where one party needs all the data. In fact, in the medical industry where confidentiality is required by law, that could be a liability.

Prior art, components, and inspiration

Background

About Identity

Overview

In order to know who a message is from, we need a way for the message to “prove” it comes from a particular name. Humans understand names, not cryptographic keys. However, names are also personal - the name you give to someone might not be the name anyone else gives them (even themselves).

So let’s say Alice wants to know when a message is from someone she calls “Bob”. She sets up a programmatic “lock”, that will ingest a message as data, process it, and if it is from Bob, it will return “Bob”, otherwise return nothing (meaning, “I don’t know who it’s from”).

Note: maybe it won’t return “Bob”; it could just return true, and the actual name associated with the lock won’t be part of the lock program itself but rather somewhere outside it (whatever application is responsible for executing the program would have a mapping of names to locks). Then the lock program can just be a predicate.

How can it tell who the message is really from? The basic mechanism is digital signatures. In order for the “lock” program to process it correctly, the message will need to include (for example):

  • The message content
  • a digital signature

The program will already contain the public key Alice expects Bob to use, and it will verify the signature on that message. If it verifies, it returns “Bob”, otherwise, nothing.
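
A minimal sketch of such a lock in Python, using ed25519 signatures from the pyca/cryptography library (in Subverse the lock would be a kcats script; this only illustrates the shape of the check):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_lock(name, public_key):
    # The lock hard-codes the public key Alice expects Bob to use.
    def lock(message: bytes, signature: bytes):
        try:
            public_key.verify(signature, message)   # raises if the signature is invalid
            return name
        except InvalidSignature:
            return None                             # "I don't know who it's from"
    return lock

# Example: Bob signs, Alice's lock attributes the message.
bob_key = Ed25519PrivateKey.generate()
lock = make_lock("Bob", bob_key.public_key())
msg = b"hello Alice"
assert lock(msg, bob_key.sign(msg)) == "Bob"
assert lock(b"tampered", bob_key.sign(msg)) is None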

These scripts can get more complex than “check if the signature is valid for pk_x”. A script could instead require:

  • a message delegating the signing from key x to key y
  • the signature by key x
  • the message content
  • the signature with key y

Then the lock would do the following:

  • Put all known revocations on the stack and check to see if x is in the list. If not, continue.
  • Do the same check for y.
  • Check the signature on the delegation message; if good, continue.
  • Check the sig on the message content; if good, return Bob.
  • Otherwise return nothing.

Then if Mallory steals Bob’s key y, but Bob realizes this, he can send this to Alice:

  • Message content “I lost control of my key y, don’t accept it anymore”
  • signature by key x

When Alice receives this, she adds y to her list of stolen (and therefore useless) keys.

Let’s say after that, Mallory tries to impersonate Bob to Alice. Alice’s lock will find key y in the revocation list, and the program returns nothing.
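
Here’s a sketch of that delegation-plus-revocation lock, again with pyca/cryptography ed25519 keys. The message formats (“delegate-to:”, “revoke:”) and helper names are made up for illustration, not Subverse’s wire format:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw(pub):
    return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

def verifies(pub, sig, data):
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

def make_delegating_lock(name, root_pub):
    revoked = set()                                   # raw bytes of revoked keys

    def revoke(pub_bytes, sig):
        # Bob signs the revocation notice with his root key x.
        if verifies(root_pub, sig, b"revoke:" + pub_bytes):
            revoked.add(pub_bytes)

    def lock(child_pub, delegation_sig, message, message_sig):
        child_bytes = raw(child_pub)
        if child_bytes in revoked:
            return None                               # key y was reported stolen
        if not verifies(root_pub, delegation_sig, b"delegate-to:" + child_bytes):
            return None                               # no valid delegation from x
        if not verifies(child_pub, message_sig, message):
            return None                               # content not signed by y
        return name
    return lock, revoke

# x delegates to y; after Bob revokes y, messages signed with y no longer verify.
x, y = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
lock, revoke = make_delegating_lock("Bob", x.public_key())
delegation_sig = x.sign(b"delegate-to:" + raw(y.public_key()))
msg = b"hi Alice"
assert lock(y.public_key(), delegation_sig, msg, y.sign(msg)) == "Bob"
revoke(raw(y.public_key()), x.sign(b"revoke:" + raw(y.public_key())))
assert lock(y.public_key(), delegation_sig, msg, y.sign(msg)) is None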

Now let’s say Bob loses control of key x. He can revoke that too, but that means he’s out of cryptographic methods to identify himself to Alice. He’ll have to meet Alice in person (or perhaps call her) to tell her a new key so she can update her lock that grants access to the name “Bob”.

Now maybe Alice decides she doesn’t want to call “Bob” “Bob” anymore, she wants to call him “Bob Jones”. She can just update the name on the lock program, so that it returns “Bob Jones” instead of “Bob”.

Generally, not every message Bob sends is going to require this cryptographic proof. The network will provide some guarantees, for example, that messages coming from a particular network source are protected with temporary crypto keys, so we can trust that if the first message proves it’s Bob, the next one from the same source must also be Bob. It’s only when Bob moves to a new place on the network that he needs to re-prove himself. So in general the first message from any network source will be an id proof, and after that just contents.

A story

You’re walking down the street, and a stranger passing by calls your name and stops you. “Hey! It’s been a long time, how are you?”

You stare blankly at him for a second, since you have no idea who this man is. “It’s me, Stan! Sorry, I forget that people don’t recognize me. I was in an auto accident last year, and I had to have facial reconstruction. I’ve also lost about 50kg since the last time you saw me!”

You remember Stan, of course, your good friend you haven’t heard from in a while. But you really cannot tell if this man is him or not.

He says, “Listen, I’m in kind of a jam here, I lost my wallet and …” and goes on about his misfortune. Finally he says, “so would you mind lending me fifty pounds?”

“Well, ok,” you say. “Hey, do you remember that time we went to your cousin’s beach house? That was a fun time.”

“Yeah it was!” the man says, “My cousin Earl’s house in Dune Beach. That had to be what, four years ago?”

“Sounds about right,” you say as you hand him the 50 pounds.

“You’re a lifesaver! I’ve got your email, I’ll be in touch to return the money. Let’s grab dinner next week!”

“Nice to see you, Stan!”

Epilogue

What just happened was a case of a failed identification, followed by the use of a second method, which worked.

Normally we identify people in person by their physical characteristics - their face, voice, etc. This is a fairly reliable method, because a physical body with certain characteristics is difficult to copy. However, this method can fail - if the original characteristics are lost (as in an auto accident), that identification method doesn’t work anymore.

So we have other methods of being sure of a person’s identity. In this case, we asked some personal details that an impostor would be very unlikely to know. We used a shared “secret”.

This is something we do without even thinking about it - identify people by their physical appearance, and if that fails, fall back to shared secrets. This is, in a sense, a small program, a script.

We actually have these scripts in our heads for lots of other things.

First cut: About Identity

Identity is the continuity of a person or thing over time. Even though he/she/it changes, we know it’s still the same person or thing.

Let’s do some examples (starting with everyday identifications and then get more abstract).

  1. A family member, say a brother. You know your brother when you see him, even though he might have different clothes or hair than the last time. Even though he looks nothing like he did as a small child, you can easily distinguish him from anyone else.
  2. A set of identical twins. The normal cues you use for identity tend not to work. Their face, voice, etc are the same. You may have to rely on shorter term phenomena like hairstyle. It gets especially difficult if the twins set out to deliberately trick you.
  3. A company. How do you know you’re talking to, say, your cable company (or a person authorized to represent the company)? What happens after a merger? Still the same company? What if it gets new management? Is the identity the brand name or the people behind the company? Or something else?
  4. An online username. If you chat with “Gandalf”, is he the same real life person you chatted with last time under that name? How do you know? If the account is the administrator of a forum, does it matter if the real person behind the account changes over time?
  5. A computer file. If you write up your resume, is the updated version the same file as the previous version? Is it the same just like your brother is the same person even though he has a new haircut? What if you rewrote your resume completely, so that it has nothing in common with the old version?

The point here is that there are no universal answers to these questions. Identity is not inherent in the person or thing, it’s a tool for people who interact with them. And that tool can be legitimately used in many different ways.

Identity is a set of instructions for determining “is this the same person/thing”, resulting in a yes/no answer. In computer science, this is called a “predicate”. You automatically choose these instructions for everything you interact with. Of course there are some common methods; you don’t normally just make up arbitrary requirements.

For people, we generally start with appearance and other physical attributes. We recognize faces and voices. But let’s say your old friend lost a lot of weight or had to have facial reconstruction, and you don’t recognize him physically. How can you be sure it’s really him in this new-looking physical form? You can ask questions only he would know the answer to.

Quite often, identity involves memory. What makes a person or thing unique is that they know things that others don’t.

Imagine if your friend who suddenly looked different claimed to have forgotten your entire friendship - your shared history. He would be indistinguishable from an impostor, wouldn’t he? If he took a DNA test to prove physical continuity, would that even matter given he had no memory of your friendship? Would you want to continue to be friends?

So in this sense identity and unique knowledge are closely related. We can perhaps refer to this unique knowledge as “secrets”. You might not think of your high school spring break trip with your friend as a “secret”, but it is something anyone else is very unlikely to know about, and so you and your friend can use it to identify each other (either in person or online).

Secrets

What makes a strong secret?

Blog posts

A name by any other name

What’s in an internet name?

What does it mean to us when we see “bbc.co.uk” or “amazon.com” in a browser address bar? Or when we see a social media post under the name “shadowDuck1234”? Why are they there?

Before we answer that, let’s talk about what a name is in the first place. We use names primarily as shorthand to express continuity. It’s a lot easier to say “Roger Federer” than “The Swiss tennis player who’s won a bunch of tournaments”.

Names are not always universally agreed upon. While nearly everyone thinks of the tennis player and not some other “Roger Federer”, each person has “Mom” in their address book, and it’s millions of different “Mom”s.

Computers don’t really care about names. In order to tell people apart, they could just as easily assign ID numbers; it works just as well. In fact, this is what computers do - you might log into an account with your username, but that’s just because it’s easier for you to remember. To the computer managing your account, you are a number in a database.

So this brings us to an important insight: Names are for brains, not machines. Humans need to use names to refer to people and things, machines don’t. Machines are taught how to deal with names because the machines need to communicate with humans.

How do computers deal with names today? Well, it’s a bit of a mixed bag. The name “amazon.com” in your browser is meant to be universal, but a website username “shadowDuck1234” is not - each website has a different set of users, and “shadowDuck1234” on one site might not be the same person as that username on another site.

Let’s talk about the universal names first - those come from the Domain Name System, or DNS. This system was conceived fairly early in internet history, in the 1980s, long before the internet became popular and began to operate high-value systems.

The idea is you claim a name, and you get exclusive rights to it. Anytime someone sends messages to that name, you receive them. That was all well and good when the internet was largely an academic project, and there was very little to be gained from attacking it. Today, however, there are severe flaws in this system that are regularly exploited by scammers. Those exploits are called “Phishing”.

Phishing is taking advantage of naming confusion. The victim receives an email that looks like it’s from his bank, but it’s not. It includes a link that looks like it’s for the bank website, but it’s not. It is just a similar looking name. Some people don’t notice the difference - the attacker deliberately set up his website to look the same as the bank’s. Then the victim gives away his secrets to the attacker because he thinks he’s talking to the bank. Then the attacker uses those secrets to steal money from the victim.

The solution to phishing is not some technical detail or hurdle. The problem is inherent to universal names. Remember, “names are for brains”. Brains just aren’t good at telling similar names apart. Was it “jillfashionphoto.com” or “jillfashionfoto.com” or “jill-fashion-foto.com” or “jillfashionphoto.org”? Most people won’t remember the distinction. Attackers can simply occupy common variations and masquerade as someone else.

The most common recommendation to avoid phishing is “use a bookmark” - in other words, remove the universality! Your bookmark list is a list of page titles, which are not unique. However, among the sites you personally visit, they might be. So you can bookmark “jillfashionfoto.com” as “Jill’s Fashion Photography” even though the latter is not a universal name. And it works great! No one can phish you because you always reach Jill via your bookmark, and you never need to remember her exact domain name again.

The conclusion I would like you to take away from this is that universal names are irretrievably broken, and that DNS should be abandoned.

To reinforce this argument, I’d like to talk about why universal names were appealing in the first place. In the 1980s, when DNS was invented, the internet was not an adversarial environment. Nobody had a smartphone in their pocket. So it’s not a surprise that the engineers chose universal, human-meaningful names. Their advantage is that humans can remember them, and later use them to communicate. Back then, if you misremembered a name, you would know it, and no harm done.

Things have changed. Today, not only is phishing very real and sophisticated, we don’t really need to memorize names anymore. Smartphones are ubiquitous. Instead of your friend telling you the domain name of a site they want you to visit, they just text it to you. You don’t need to know the name, all that matters is that you’re sure the text came from your friend.

Names are for brains, but our brains aren’t using them!

It’s time to get rid of the names our brains aren’t using.

The dangers of internet promiscuity

We are promiscuous. We read content on the internet every day, having no idea where it came from, or what the true motive was for creating it.

It doesn’t always hurt us: a funny video or meme is fairly benign - it’s safe to assume the motive for producing it was just the satisfaction of seeing a creation go viral. But more often than not, it does hurt us.

We are waking up to reality now, that powerful interests are exploiting our promiscuity. Fake news assaults our social media feeds. We’re inundated with content specially crafted to manipulate our emotions and influence us to serve someone else’s interests, instead of our own.

Who creates this content? We have no idea, it’s been shared and reshared so many times, the origin is completely lost. However it’s safe to assume that powerful interests are better able to get content in front of our eyeballs than anyone else. They don’t put their own name on it, they create content designed to make us angry so that we’ll spread it ourselves. They’ll pretend to be someone in our social or political circle so that we’ll be less skeptical. Corporate conglomerates, media, tech companies, political groups, governments, they’re all playing this game. In fact, social media apps themselves are also specially crafted to influence us. Have you noticed that Facebook is a platform for people to make their life appear more glamorous than it really is? That is not an accident. It is a tool of mass influence and control, designed to set us against each other in a crazy game of “who can destroy their future the most, to impress their friends today”. We’ve been injecting it directly into our brain, by the gigabyte.

We are realizing now that we’ve been tricked, but we don’t know how to stop. Social media is our only lifeline to many of our friends now. We can’t just turn it off. Can we?

Yes, we can. Before we get to the “how”, let’s go on a journey of what life would be like when we’ve freed ourselves.

Design notes

Overview

Messaging

At the application level, Subverse will resemble Signal or WhatsApp or any other messenger. The main screen will be a list of contacts, and clicking on one will go to your conversations with that contact.

One major difference from Signal etc is that among the contacts will be programs you can communicate with. Many of those will be local programs - your own agents that act automatically on your behalf. They do things like filter incoming messages, notify you about important messages, forward information to other people, add items to your calendar, make payments, etc.

First communication

This can be with an in-real-life contact, or someone introduced online via a service like Google.

When you are introduced, several pieces of info need to be collected:

  • What you want to call this contact
  • Use a fresh identity?

    If you use a fresh identity, the app will automatically track it - that identity will only be used with this contact.

    If you message a contact with whom you’ve used multiple identities, you’ll need to choose which one you’re going to use this time (or a fresh one).

    The main window will let you swipe left/right to switch identities. There is a search bar at the top which searches all messages, for all identities.

    Examples

Forget/remember

By default, all new conversations will use fresh identities. But there are some contacts (like Google) that you don’t want to be able to recognize you from earlier interactions (and tie together all your interests).

So there is a “forget me” function (perhaps a button) that will start a new conversation with the existing contact.

If it turns out later that you need the contact to remember you, there will be a “Remember” function that will send a proof to the contact that you control both the new identity and whichever identity had the old conversation you want them to remember.

This will result in a rather large number of public keys being created. It is a bit more complex to manage but it should be possible to hide the complexity from the user.

When Alice introduces you to Bob, which key do you give him? Alice can just give him the one you gave her. Or she can ask you for a new one. Probably the most secure is for Alice to be the middleman for a Diffie-Hellman between you and Bob where you negotiate keys for the conversation and then exchange pubkeys. Sure, Alice could MITM you and for example, pretend to be Bob. But that’s always the case. You have to trust the introducer.

Let’s say Bob is internet-famous. How do you know Alice is introducing you to the “real” Bob? It’s up to Bob to prove to you he controls the “famous” identity. A simple method would be for you to send Bob a secret random large number (eg 1352471928206147350861) at his “famous” identity, and in your introduction session Bob echoes back the random number to you. Then you’re satisfied it’s him but you can’t prove it to anyone else. (To understand why you can’t prove it to anyone else: Since both you and Bob knew the secret number, the echo reply could have come from either you or him. The only person who is sure it didn’t come from you, is you. So it doesn’t work as proof for anyone but you).

Of course, Bob could just skip all this complexity by just using his famous key in your introduction. Generally speaking, the “remember” procedure will only be needed when you change your mind later about remaining anonymous.

Managing identity

Do we really want to create separate i2p destinations (and client/server tunnels) for every identity? That gets expensive. How long do we keep those?

I believe we can keep the keys for destinations as long as we want, but we can shut down tunnels for those that are unused (and perhaps spin them up occasionally just to see if there’s any new messages).

How many tunnels we can have active at once is something I’ll have to look into. But I suspect that for most users, this limit will not be a problem.

Shopping example

Google

Me: shoes

Google: Let me introduce you to contacts who know about "shoes"

Google: Joe's shoes [long description] [meet button] 
...

You click the meet button. A popup appears that shows that this identity calls himself “Joe’s shoes” and your current contact “Google” also calls him that. You click “Ok” to accept that name (but you can edit it if you want).

Key management

The seed is the secret from which all other secrets are derived.

In order to maximize metadata privacy, it will be necessary to use different public keys as often as possible (so that other people can’t compare keys and connect your activities together into a cohesive history).

So the question then is how to create and manage these keys.

The idea is for a seed to map 1:1 with a brain (physical person) and then that person will have many identities. Each of those identities also needs to be able to recover from key compromise so each one must have a “higher level” key that is kept offline (and those keys must also be different for each identity, for the same reason).

The problem is how to only store a small amount of secret material, while also having the ability to roll keys independently for each of many identities, without having a common root pubkey for any two identities.

This will work exactly the same way as if there were only one identity, except that many top-level pubkeys will be generated instead of one. The outline below summarizes the hierarchy and setup steps, followed by a code sketch.

Key hierarchy:

  • Seed (safe deposit box)
    • Secret1 (drawer)
      • Keypair1
      • Keypair2
    • RootKey1
    • RootKey2

Setup steps:

  • Generate seed from device entropy
  • Derive Secret1 from seed
  • Derive a series of RootKeys from the seed
  • Derive a series of Keypairs from Secret1
  • Construct scripts such that “any message signed by a key, signed by a key, with a RootKey at the root, and not revoked, is valid”
  • Generate i2p destinations from device entropy, assign to keypairs
  • Prompt user to write down seed
  • Destroy seed on device
  • Prompt user to write down Secret1
  • Destroy Secret1 on device
  • Publish hash => destination mappings to the DHT (using anonymous submission, so they can’t be linked)
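
A loose sketch of that outline in Python, using HKDF from the pyca/cryptography library. The derivation labels and the choice of HKDF are assumptions for illustration; the real scheme isn’t specified here:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive(secret: bytes, label: bytes) -> bytes:
    # Derive an independent 32-byte secret for each label.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=label).derive(secret)

seed = os.urandom(32)                              # generate seed from device entropy
secret1 = derive(seed, b"secret1")                 # Secret1, kept in a drawer

# A series of independent root keys, one per identity, all derived from the seed.
root_keys = [Ed25519PrivateKey.from_private_bytes(derive(seed, b"rootkey/%d" % i))
             for i in range(3)]

# Day-to-day keypairs come from Secret1, so the seed itself can stay offline.
keypairs = [Ed25519PrivateKey.from_private_bytes(derive(secret1, b"keypair/%d" % i))
            for i in range(2)]

# After the user writes down the seed and Secret1, the device would erase them
# and keep only the working keypairs, re-deriving root keys only when rotating.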

Script

Overview

Instead of pk as identity, a script is identity. The script is what someone else needs to run to authenticate a message from you. The script hash is considered the identity. The DHT lookup for network address is keyed off script hash and also contains the actual script.

Similar to Bitcoin Script: start with the unlock portion, then append the lock.

Lock: [PK_M] op_transitive op_verify

Verify: [MSG_HASH] [SIG] [PK_W]

It seems burdensome to have to execute this on every message. Maybe some caching: if K3 is signed transitively by K1, and no new revocations came in, then op_transitive is a pure function and memoizable.

Delegation

Here’s a typical script that allows for delegations. The following checks are done:

  • Is the child script (cs) present?
  • If not, verify the message via the included root pubkey
  • If so, prover gives child script (cs), signature
  • Take hash of child script, this is the message
  • Take root pk, this is the pk
  • Verify sig, message, pk
  • If the verification is ok, do the following
  • place the sig and message (or whatever the child script requires) on the stack
  • execute the child script
;; message sig child-script child-script-sig-by-parent
0xabcd ;; pk css cs 
[sink ;; css cs pk
[[hash] [dip shield] decorated ;; css csh cs pk
  [swap] dipdown ;; css csh pk cs
  verify]  ;; b cs
[discard ;; the (empty) child script -> pk sig msg
 sink ;; sig msg pk
 verify]
branch]
[[]] recover

Root signing case

;;message sig child-script child-script-sig-by-parent
"hi" bytes #b64 "SfqfvISYD8j2DG9v5BnWnaQY+rV7diV+H/pHPKmEQBGjzIcBqJW/7P9ekyZduImwzr6nygedtT9uMXZ/qzD1Bw==" [] []
;;0xabcd ;; pk css cs
"foo" bytes key [secret] unassign
[sink ;; css cs pk
[[hash] [dip shield] decorated ;; css csh cs pk
  [swap] dipdown ;; css csh pk cs
  verify]  ;; b cs
[
 sink ;; sig msg pk
 verify]
[clone] dipdown branch]
[[]] recover

delegated signing case

;;message sig child-script child-script-sig-by-parent
;;"hi" bytes #b64 "SfqfvISYD8j2DG9v5BnWnaQY+rV7diV+H/pHPKmEQBGjzIcBqJW/7P9ekyZduImwzr6nygedtT9uMXZ/qzD1Bw==" [] []
"hi" bytes
[] ;; empty sig because the delegated script doesn't need it
[true] ;; the child script
#b64 "hKxJZBKZDS2gFnVM7OJX9bYlWzYrA/T5YFPMr78CZkS9peC1IhX0QMr3SSnix/cMOteLgp9AE50QWJE+SZ2MAQ==" ;; root key sig


"foo" bytes key [secret] unassign ;; the public key (hardcoded in real world use)
[sink ;; css cs pk
[[bytes hash] [shield dip] decorated ;; css csh cs pk
 float ;; cs css csh pk
 [verify] dip
 [[]]  ;; the program to run if the child script isn't authorized
 branch] ;; runs the child script if the sig on its hash is verified  
[discard discard ;; the sig and (empty) child script -> pk sig msg
 sink ;; sig msg pk
 verify]
[clone] dipdown branch]
[[]] recover ;; fail closed

Now make a program that wraps a given pubkey in a delegating script. This script says “any script signed by this pubkey is authorized”. (Ignores revocations for now).

"foo" bytes key [secret] unassign ;; the public key (hardcoded in real world use)
[[sink ;; css cs pk
  [[bytes hash] [shield dip] decorated ;; css csh cs pk
   float ;; cs css csh pk
   [verify] dip
   [[]]  ;; the program to run if the child script isn't authorized
   branch] ;; runs the child script if the sig on its hash is verified  
  [drop drop ;; the sig and (empty) child script -> pk sig msg
   sink ;; sig msg pk
   verify]
  [clone] dipdown branch]
 [[]] recover]
swap prepend

Here’s a demonstration of delegating.

;; all script making functions expect a keypair or pk
[[make-simple-script [[secret] unassign [first] sort wrap [sink verify] join emit bytes]]
 [delegated-script [[secret] unassign [first] sort
                    [[sink ;; cssig cs pk
                      [[hash] [shield dip] decorated ;; css csh cs pk
                       float ;; cs css csh pk
                       string read first ;; bytes => program
                       [verify] dip ;; dump
                       [[]]  ;; the program to run if the child script isn't authorized
                       branch] ;; runs the child script if the sig on its hash is verified  
                      [drop drop ;; the sig and (empty) child script -> pk sig msg
                       sink ;; sig msg pk
                       verify]
                      [clone] dipdown branch]
                     [[]] recover]
                    swap prepend emit bytes]]
 ;; scrhash script args 
 [authenticate [[[[emit bytes hash] dip =]
                 [swap string read first dump functional [inject] lingo first]
                 [drop drop []] if]
                [[]] recover]]]
["bar" bytes key ;; child key
 "we attack at dawn" bytes ;; message
 [sign] shield ;; sign it
 float
 make-simple-script [hash] shield ;; csh cs

 "foo" bytes key swap [sign] shielddown ;; sig pk

 swap ;; pk sig 
 delegated-script ;; ps cssig cs msig m
 dump
 ;; ps cssig cs msig m
 [hash] shield ;; psh ps cssig ...
 ;dump
 ;authenticate ;; i think the problem is this expects args to be wrapped
] 
let

ord = cssig W1tbc = cs W1tbd = ps wX8 = msig d2U = m

What is the form of authenticate? It’s going to take:

  • the hash of who it purports to be from
  • The encoded script that should hash to the above (lock)
  • The encoded list of arguments to that script (key)

Ok so what is the form of proof for both scripts?

The inner (child) script: encoded bytes of a list, [key message sig]

The outer parent script: encoded bytes of a list [childscript, cssig, csargs]

Should the authenticate script be responsible for decoding the inputs? Or should the caller do that? On the one hand, it’s nice to deal with actual data instead of a byte blob, but we have to re-encode it anyway to get the hash (at least for the script itself, not the inputs generally).

Should the authenticate function be responsible for checking against a known hash? Or should it just return [from, message] or nothing? The reason to go with the former, is that I expect filtering to be applied even before authentication - if the sender isn’t in our address book, for example, we can drop it without even caring if it’s authentic.

Is authenticate something we can call recursively? If possible we should. Let’s try it

;; take a hash, script (list) and args (list) => message or nothing
[[hashed [emit bytes hash]]
 [authenticate [[[[hashed] dip =]
                 [drop functional [inject] lingo first] ;; use functional before [inject] to turn off debug mode
                 [drop drop []] if]
                [[]] recover]]
 [scrub-key [[secret] unassign [first] sort]]
 [make-simple-script [scrub-key wrap [sink verify] join]]
 [delegated-script [scrub-key
                    [sink ;; cssig cs pk args
                     [hashed] [shield dip] decorated ;; css csh cs pk args
                     float ;; cs css csh pk args
                     [verify] dive ;; csh cs args
                     [authenticate] bail] swap prepend]]]
["working-key" bytes key ;; child key
 "we attack at dawn" bytes ;; message
 [sign] shield ;; sign the message with the working key
 swap
 ;; 76 put ;; altering the message after signing, should fail to auth
 pair swap
 make-simple-script [hashed] shield ;; csh cs
 ;;["mallory" put] dip ;; alter the child script after signing, should not auth anymore 
 "master-key" bytes key swap [sign] shielddown ;; sig pk

 swap ;; pk sig 
 delegated-script ;; ps cssig cs [msig m]
 [triplet reverse] dip
 ;; add the parent script hash on top
 [hashed] shield
 authenticate [string] bail]
let

A delegated script should take: [cs sig [args]] and verify the sig. If it verifies, it should call authenticate with [csh, cs, args]

Other possible scripts

No delegation

[PK_M] op_verify

Multisig

[Pk_1 pk_2 pk_3] 2 op_threshold_verify

msgHash [sig1 sig3]

the hell does this mean anyway.

Issues

Overwriting built in words

If we allow :define, then an unlock script could include

[:verify-ed25519 [:pop :pop :pop true]] :define

and that would make any signature verify.

For a general purpose language, allowing overwrite is fine, but there has to be a way to seal that off.

An easy way is to have a :safe-define which doesn’t allow overwriting and then

[:define [:safe-define]] :define

Which should seal off overwriting

It’s not even clear that we need :define at all for validating identity scripts. If it were used at all, it would just be for readability and/or convenience. However, it doesn’t seem worth the security risk. We should probably just dissoc :define out of the words list after bootstrap, to make the words list read-only.

Opcodes

verify

Verify signature

Message, pk, sig -> bool

Delegation scripts

A script can not only restrict authentic messages to those signed by certain keys, but can also delegate to other scripts.

Eval

Stack-based languages would need some kind of eval function, e.g.:

[ 1 2 + ] dup eval swap eval + .

Results in 6.

Key types (scored on: protection against loss, cost of theft by a stranger, cost of theft by a trusted party, cheapness to implement)

  • master unenc in vault, safe deposit box (8/8/2/2)
  • master encrypted w memorized pw (4/9/8/2)
  • Memorized low-entropy pw (6/7/7/7)
  • 3-of-5 trusted friend multisig (8/7/1/8)
  • hardware token no backup (3/5/2/3)
  • software token no backup (2/3/2/8)

Protection against theft is more important than loss for most people - you can always start over with a new identity (it’s cheap for your friends to verify a new digital identity in person). But theft can be catastrophic.

The more your identity is purely digital, the more loss protection you need (it may be catastrophic to have to rebuild reputation after a loss)

Regarding the “memorized low entropy pw” (brainwallet)

There are several schemes for doing this. The basic requirement is that the low-entropy pw is stretched using a very expensive KDF. You could use something like scrypt, if you have fast hardware to derive the key yourself just as cheaply as an attacker could. The problem is most people don’t, they only have a commodity laptop or smartphone.

So the idea is to outsource the computation to someone else, and pay for the compute resources. You do it once when generating the key, and possibly more times if the key or its subordinate key is lost.

Vitalik’s EC method

This one sounds the easiest and simplest, although I have no idea about the security:

Now, there is one clever way we can go even further: outsourceable ultra-expensive KDFs. The idea is to come up with a function which is extremely expensive to compute (eg. 2^40 computational steps), but which can be computed in some way without giving the entity computing the function access to the output. The cleanest, but most cryptographically complicated, way of doing this is to have a function which can somehow be “blinded” so unblind(F(blind(x))) = F(x) and blinding and unblinding requires a one-time randomly generated secret. You then calculate blind(password), and ship the work off to a third party, ideally with an ASIC, and then unblind the response when you receive it.

One example of this is using elliptic curve cryptography: generate a weak curve where the values are only 80 bits long instead of 256, and make the hard problem a discrete logarithm computation. That is, we calculate a value x by taking the hash of a value, find the associated y on the curve, then we “blind” the (x,y) point by adding another randomly generated point, N (whose associated private key we know to be n), and then ship the result off to a server to crack. Once the server comes up with the private key corresponding to N + (x,y), we subtract n, and we get the private key corresponding to (x,y) - our intended result. The server does not learn any information about what this value, or even (x,y), is - theoretically it could be anything with the right blinding factor N. Also, note that the user can instantly verify the work - simply convert the private key you get back into a point, and make sure that the point is actually (x,y).

Examples

Example 1

  • Single master in physical vault
  • Hardware token at home
  • Software token on phone

Example 2

  • Single master in physical vault
  • Multisig 2/3 friends
Questions to ask
  • Do you intend to build a reputation online and keep your real world identity secret? Yes: vault
  • Do you have convenient access to physical security? (fireproof safe or safe deposit box)? Yes: favor physical keys
  • Do you know 3 people you trust not to lose their identity, or collude to steal your identity? No: forget social keys
  • Are you confident you can memorize a single word with periodic reminders? No: forget brain keys
  • Can you spend $50/yr on security?
College kid

No, no, yes, yes, no. 2/2 friend/word

Upper mid-class professional

No, yes, yes, no, yes. 2/2 vaults

DNM dealer

yes, yes, no, yes, yes. 2/3 vault/word

Distributed hash tables

Use dhts to map several things:

A hash to content

This doesn’t require authentication - the recipient can hash the data himself to make sure it’s legit. This is the basic DHT use case. I am not sure what content is small enough that peers don’t mind storing it but big enough that the client wouldn’t already have it. I am guessing somewhere in the kilobytes range.

A content hash to peer ids

The typical bittorrent use case - I am looking for a large video file and I want to know which peers have it.

A public identity to its various properties

  • The script whose hash is the key for the DHT
  • Network location(s)
  • self-identifying info (what this identity calls himself etc)

A hash to a revocation document

Discussion

  • h1: “[script content…]” (as bytes) - this doesn’t need to be signed, as this is an identity starting point (Bob has already been told out of band this is his script hash). These types of entries are not updateable by definition as any change to the content changes the key.
  • Could also include other fields that are signed. eg
    ["abcd" [[value "[foo bar...]"]
             [properties [network-address 1234567890]]
             [signature "defg"]]]
        
  • What about privacy? We don’t want people scraping the DHT and compiling worldwide address books. The entries could be encrypted, similar to i2p encrypted lease sets. The idea is: instead of handing out your script hash, you encrypt the script with a password, then hand out the hash of the encrypted script along with the password. The recipient looks up the hash in the DHT, gets the ciphertext, and decrypts the script (see the sketch after this list).
  • What about updateable properties vs fixed? Obviously content that hashes to the key in the dht is already as “authentic” as it can get (the tamper point is before that - when giving that hash to someone to use). Use the same dht? Could maybe just use ipfs or similar for plain content.
  • Should peer values be identities, or just destinations? Maybe we don’t care who it is as long as they have the content.
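
A small sketch of that encrypted-entry idea, using Fernet from pyca/cryptography. Deriving the Fernet key from the password with plain SHA-256 is a placeholder; a real design would use a proper KDF:

import base64, hashlib
from cryptography.fernet import Fernet

def password_key(password: bytes) -> bytes:
    # Placeholder password-to-key step (Fernet wants a urlsafe-base64 32-byte key).
    return base64.urlsafe_b64encode(hashlib.sha256(password).digest())

dht = {}                                            # stand-in for the real DHT

# Publisher: encrypt the script, store it under the hash of the ciphertext.
script, password = b"[sink verify]", b"correct horse battery staple"
ciphertext = Fernet(password_key(password)).encrypt(script)
entry_key = hashlib.sha256(ciphertext).hexdigest()
dht[entry_key] = ciphertext

# Recipient: was handed (entry_key, password) out of band, not the script hash.
fetched = dht[entry_key]
assert hashlib.sha256(fetched).hexdigest() == entry_key     # content matches its key
assert Fernet(password_key(password)).decrypt(fetched) == script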

Can we get away with only storing peers in the DHT?

To avoid having the DHT care about the content (for example, requiring Alice to sign her own network location updates, and the DHT to verify them before overwriting an entry), perhaps the DHT can only point to peers, and some other layer can be tasked with validating and otherwise working with that content.

How could we implement revocations in this manner? The DHT would only point a user toward someone who might have the revocation. There’s probably several ways of implementing that:

  • The hash of the given script would be the stream ID of several messages:
    • the script itself
    • revocations of inner keys
    • other public messages relevant to that key?

Implementation

The DHT will hold only peer data.

The keys will be hashes, and the values will be network locations.

The design for actual data exchange will work as described in Authentication overview

Let’s start with basic functions. We have xor already. What else is needed in Kademlia?

[[kbucket [bits [[zero?] catcher] assemble count]]]
["foo" hash
 "quux" hash
 xor kbucket]
 let
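
For comparison, here’s a Python restatement of what the snippet above appears to compute (hedged, since the kcats words aren’t documented here): the XOR distance between two hashed IDs, and a bucket index taken from its leading zero bits.

import hashlib

def node_id(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def kbucket(a: int, b: int, id_bits: int = 256) -> int:
    """Leading zero bits of the XOR distance; IDs sharing a longer prefix
    land in a higher-numbered (closer) bucket."""
    d = xor_distance(a, b)
    return id_bits if d == 0 else id_bits - d.bit_length()

print(kbucket(node_id(b"foo"), node_id(b"quux")))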

Streams

Overview

A stream defines a content source accessible with a particular symmetric key - for example, family photos that you wish to share with a limited set of family members. You can add more photos to the stream at any time; it stays open indefinitely. (Whether streams will support an explicit “close” is undecided; I’m not sure that’s actually necessary.)

A stream is particular to several things:

  • An encryption key that allows only authorized people to view the content
  • a set of contents that you wish to send to those people

Users interact with the stream concept mostly when sharing content, not viewing it. For example, on your mobile phone you’d select some photos, then “Share”, “Subverse Stream”, “My family photos”. In other words, content that is semantically related (say, photos from the same event) might be split up into different streams because of different access controls (you might want to withhold some of the photos from some members of the group). Streams have nothing to do with how the data is viewed or presented, only how it’s transmitted and decrypted. Information on how the data should be presented may be contained in the stream data (for example, which event the photo is from, for grouping purposes when it’s displayed).

There needs to be some mechanism by which intended recipients of a stream are made aware of its existence, and further that recipients have some idea of what the stream contains. How users “accept” a stream will probably be configurable - could be either automatic based on some kind of priority and/or filter, or manually accepted.

Security

Authentication

There needs to be some mechanism that prevents relays from altering the stream. The contents should all be authenticated so that clients know whether they got content from the originator. The originator could still fork the content, and it’s important that we detect this and reject the rest of the stream, since it’s very rare that an honest originator would cause a fork. Peers that give stream contents that decrypt properly and auth properly should be given higher priority.

Merkle tree auth

Create a merkle tree for all the items (since the stream puts them in order). Whenever the sender has time, sign the merkle root and include it. That will authenticate all the previous items, including items that hadn’t been signed (presumably because they were being created rapidly). It also fixes the order so that it can’t be altered.

It’s also possible to “checkpoint” so that a sender or receiver doesn’t have to re-process the whole list to calculate a new root. The sender would need to calculate the tree state (including where the next node of the merkletree goes, and its co-roots), and sign that. Then he can pick up from there later, without needing to re-read the whole stream.
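
A bare-bones sketch of the root calculation (checkpointing and co-roots omitted). Duplicating the last node on odd levels is one common convention, assumed here rather than taken from any Subverse spec:

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(items):
    level = [h(item) for item in items]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

stream = [b"photo-1", b"photo-2"]
signed_root = merkle_root(stream)          # the sender signs this (signature omitted)
stream.append(b"photo-3")                  # later additions change the root,
assert merkle_root(stream) != signed_root  # so reordering or tampering is evident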

Perfect forward secrecy

It would be nice if there was a way to achieve this, as most modern message protocols are supporting it.

I believe this can only be done interactively, though, whereas this stream design is non-interactive. It would be unfortunate, especially in a design where encrypted data is backed up onto other users’ disks, if keys compromised much later allowed those other users to decrypt the content.

Deniability

It would also be nice if this was possible, but again it depends on interactive key exchange.

Perhaps the best way forward is to have a protocol like OTR/Signal on top of a swarm protocol. It would be less bandwidth- and storage-efficient, but would have better security properties (if Alice, Bob and Charlie are messaging in a group, Charlie might be storing the same message encrypted with both Alice’s and Bob’s keys). This would basically treat the other swarm members as MITMs (who are required to be unable to attack these protocols anyway).

Implementation

This would be something similar to bittorrent but instead of having a fixed set of bytes to transmit, it’s open-ended (more content can be added at any time). So how could this protocol work?

Similar to bittorrent’s mainline dht, map a hash to some peers (destinations). (What would the hash be of, if the stream keeps getting more appended to it? Maybe just generate some random bytes as an id.)

Connect to those peers, resolve which pieces can be exchanged for the given hash, and exchange them. There are the issues of authenticating and assembling the pieces.

I think we can use a merkle tree. Each time a new chunk is appended, the root gets recalculated.

How does a client know he’s got the latest root? I think the old roots will be co-roots in the latest one (or at least you can easily regenerate them), so you can prove to a client that you only appended. See https://transparency.dev/verifiable-data-structures/

When Alice makes new content (a new stream, or new additions to an old one), how does Bob know this happened? Does Bob have to keep polling to check? Does Alice connect to Bob’s destination (and if so, she might as well just deliver the content too)? There’s kind of a chicken/egg problem here: if content is distributed, how do you find out about it in the first place? You have to know what you’re looking for, somehow. What does “subscribe” look like here?

Maybe a destination (or pk of some sort) makes a DHT entry listing all of his streams’ roots, each encrypted with a key that also decrypts the content. A user downloads the list, sees which ones he can decrypt, and then proceeds to fetch those streams’ contents.

Privacy

Metadata privacy

Is i2p or tor strictly necessary here?

There might be a better protocol for subverse’s stream model. There might be a way to combine the requirements for data duplication and data mixing.

A feature like bitcoin’s dandelion can hide the original source of a stream, and the nodes in the “stem” could also cache data (but they would have to pretend not to have it for some short period of time).

Persistence

Locally a database that we can treat as a stream would be nice (so that we can backup our encrypted database to other users).

Graph db of attributes design

Overview

The idea here is to ignore identity in the database and make that the responsibility of the client. The database only links attributes, and has nothing to say about whether a given entity is the “same” entity as another. It only says “something has both the color brown and the height 1.5m”. What attributes are sufficient to establish identity is not part of the database. It’s just a graph connecting attribute/value pairs.

Attribute naming problem

There are some problems to be considered. For example, let’s say a contact’s address is 1 Main St. And let’s say we also want data about the house located at that address. Both the house and person have an address, but they don’t really mean quite the same thing - the person’s address can change but the house’s really can’t. The house is always at that address, the person can be located elsewhere but tends to return to that spot often. Keeping in mind there’s no entities in the db, only linked attributes, how do we represent these relationships? In general we will do joins on person/house via an address field, regardless of whether those fields have the same name on both objects.

I suppose one way is to just ignore the semantic difference between a person’s address and a house’s address.

However it does seem like a good idea to choose attribute names carefully, at the very least to have conventions about which attribute names are used to represent which pieces of data. For example, is the address of a house its address or location? This is starting to go off into the area of schema consensus. We could have schema declarations but that would only allow you to know whether a program and data are compatible, it would do very little to actually assure any sort of compatibility.

It might be nice to have at least a set of attributes and what they should be used for. For example:

created-at
The timestamp at which the given entity started to exist (for people, their birthdate; for manmade objects, their date of manufacture or construction).
street-address
The number and street where a building or land is located.

Attribute atomicity problem

Should street addresses be one attribute or several?

Join problem

It’s clear to me how to find a book’s author’s birthdate - the author attribute of the book will equal the name attribute of the person, then go to the created-at attribute from there.

What about when the primary key is more than one attribute (for example, when the street address consists of number/street/city/state/country/zip)? It is possible to just include multiple attributes in the join query. But it gets complicated when we don’t know whether a given person lives in a country with states or provinces, so we don’t even know whether their address will have one attribute or the other. Queries will have to be carefully designed to take these quirks into account.

If house number is a separate attribute, it might be possible to query for other things with that number that have nothing to do with street addresses. I don’t know that’s necessarily bad, it’s pretty easy to narrow it further with just one other attribute.

The semantics are still weird, though. If we make street addresses 6 attributes (number/street/city/state/zip/country), does Alice actually have an attribute number = 123 herself? It would have to be home-street-number or something. She might have a business address or even more than one home address. Without some kind of key (to an entity), this gets difficult. If we just link two sets of address attributes to Alice, how do we know which are which (we might mix up which house number goes with which street). Not being able to refer to entities directly in this schema may prove too difficult to overcome. We intend for the address of her house and the address of her mother’s house to be the same “thing” but we’re deliberately not saying whether anything is the “same thing” - they just share attributes.

How do we say “Alice’s home’s state”? Is this different from 1984’s author’s birthdate? Let’s map it out a bit

| AttrA  | ValA  | AttrB    | ValB            |
|--------|-------|----------|-----------------|
| name   | Alice | born     | 1970            |
| name   | Alice | height   | 160             |
| name   | Alice | home     | [-80.01, 31.12] |
| number | 123   | street   | Main St         |
| number | 123   | city     | Springfield     |
| number | 123   | state    | NT              |
| number | 123   | country  | USA             |
| number | 123   | location | [-80.01, 31.12] |

Here, we do need some kind of link between the “house” entity and “Alice”, in this case from home to location. It’s not clear this kind of trick will always work, but in theory any value will do, even if you don’t know the location: as long as you are sure that’s Alice’s address, you can insert any unique location and be able to query her address this way.

While it’s certainly possible to include id attributes to facilitate linking, the question is whether we want to use that in an attribute-only database.

Can we really get all the data we want just from attributes, without referring to entities? If we want “the book titles written by Orwell”, what is ‘Orwell’? It could be a string (his name), but maybe that’s not enough for uniqueness. Maybe it’s

{:name "George Orwell", :dob "1/1/1905"}

So then when we want to refer to a book’s author, we could theoretically do:

[["name" "1984" "author-name" "George Orwell"]
 ["name" "1984" "author-created-at" "1/1/1905"]]

We could perhaps even namespace the keys

[["name" "1984" "author.name" "George Orwell"]
 ["name" "1984" "author.created-at" "1/1/1905"]]

Ultimately we need to reach all the attributes of Orwell just given the string “1984” and knowing which attributes lead there, even if there’s more than one potential match for “Orwell”. How do we differentiate “the Orwell who wrote 1984” from other Orwells?

When we have explicit entities, we can point to one. If we don’t, are we doomed to adding every attribute from every downstream link, eg “author.address.zipcode”?

It seems like the attribute linking model works well when every link between distinct items is just a single link. But when multiple links are required it gets ugly fast. However maybe it’s still tractable:

[["name" "1984" "author.name" "George Orwell"]
 ["name" "1984" "author.created-at" "1/1/1905"]
 ["name" "George Orwell" "zip" "11111"]
 ["created-at" "1/1/1905" "zip" "11111"]
 ["name" "George Orwell" "created-at" "1/1/1905"]
]

But this could potentially fail too. If we wanted to find out “the author of 1984’s zip code”, in theory we could write a query that would only match a zip that was pointed to by both attributes. However, there could still be another thing in the db (say, a cat) whose name is “George Orwell” and located in zip 11111, and another thing created 1/1/1905 that is also located in zip 11111 (say, a human named “Bob Parker”). Now, with that last row, we could rule out both of those imposters because they would not have that matching row. But we’d need to ensure we included that relation in our query.
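
To make that concrete, here’s a toy Python query over the rows above; the linked helper is made up, but it shows the shape of the query: the zip has to be reachable from both pk attributes, and the pk attributes have to be linked to each other (the fifth row):

facts = {
    ("name", "1984", "author.name", "George Orwell"),
    ("name", "1984", "author.created-at", "1/1/1905"),
    ("name", "George Orwell", "zip", "11111"),
    ("created-at", "1/1/1905", "zip", "11111"),
    ("name", "George Orwell", "created-at", "1/1/1905"),
}

def linked(a_attr, a_val, b_attr):
    """Values of b_attr directly linked (in either direction) to the pair (a_attr, a_val)."""
    out = set()
    for (aa, av, ba, bv) in facts:
        if (aa, av, ba) == (a_attr, a_val, b_attr):
            out.add(bv)
        elif (ba, bv, aa) == (a_attr, a_val, b_attr):
            out.add(av)
    return out

# "Zip code of the author of 1984": require a zip linked to *both* pk attributes,
# and require the pk attributes to be linked to each other.
name = linked("name", "1984", "author.name").pop()
dob = linked("name", "1984", "author.created-at").pop()
if dob in linked("name", name, "created-at"):
    print(linked("name", name, "zip") & linked("created-at", dob, "zip"))   # {'11111'}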

It gets increasingly difficult though, as the primary key gets to be more and more attributes. What if the personhood pk was name/birthday/cityofbirth?

[["name" "1984" "author.name" "George Orwell"]
 ["name" "1984" "author.created-at" "1/1/1905"]
 ["name" "1984" "author.created-city" "New York"]

 ["name" "George Orwell" "zip" "11111"]
 ["created-at" "1/1/1905" "zip" "11111"]
 ["created-city" "New York" "zip" "11111"]
 ["name" "George Orwell" "created-at" "1/1/1905"]
 ["name" "George Orwell" "created-city" "New York"]
 ["created-at" "1/1/1905", "created-city" "New York"]
]

If we just left it there as the first 7 rows, there could still be other distinct entities who all live in zip 11111 that aren’t who we’re really looking for. We may need the 8th and 9th rows too, so this seems to create an exponential explosion of required links when primary keys are multiple attributes.

However admittedly we would not have to decide up front how many attributes are needed to uniquely identify a person. We could just use as many as are available.

One solution is to just use uuids or similar, either as an id attribute or as an entity column. I’m somewhat partial to id attribute right now, because only objects whose pk isn’t one of its attributes would need it. For example, the coordinates of a city. We could just as easily use it as the entity column, but the queries would look a bit nicer with it as an id attribute. Either this

| entity        | attr  | val         |
|---------------|-------|-------------|
| uuid1         | name  | Bob         |
| uuid1         | email | bob@bob.com |
| 30.12, -66.23 | name  | Foo City    |

vs

| a1          | v1            | a2    | v2          |
|-------------|---------------|-------|-------------|
| id          | uuid1         | name  | Bob         |
| id          | uuid1         | email | bob@bob.com |
| coordinates | 30.12, -66.23 | name  | Foo City    |

The latter does repeat the id field name a lot, but it is also more specific about naming the coordinates attribute. The latter might make querying slower, though. Typical querying of entities with multiple PK attributes would go something like: pk-attrs -> uuids -> other attrs. To make lookups fast, we should probably have indices for all the cols. Will have to look into how cozo handles large values. https://docs.cozodb.org/en/latest/stored.html#storing-large-values
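
A tiny sketch of that pk-attrs -> uuid -> other-attrs lookup over the id-attribute layout (Python for illustration; the rows are just the example data above):

rows = [
    ("id", "uuid1", "name", "Bob"),
    ("id", "uuid1", "email", "bob@bob.com"),
    ("coordinates", "30.12, -66.23", "name", "Foo City"),
]

def attrs_of(key_attr, key_val):
    """Everything linked to a given key pair, e.g. ('id', 'uuid1'), in either direction."""
    forward = {(a2, v2) for (a1, v1, a2, v2) in rows if (a1, v1) == (key_attr, key_val)}
    backward = {(a1, v1) for (a1, v1, a2, v2) in rows if (a2, v2) == (key_attr, key_val)}
    return forward | backward

# Resolve Bob's id from a pk attribute, then collect everything hanging off that id.
(uid,) = {v1 for (a1, v1, a2, v2) in rows if a1 == "id" and (a2, v2) == ("name", "Bob")}
print(attrs_of("id", uid))   # {('name', 'Bob'), ('email', 'bob@bob.com')}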

Metadata

Can we have attributes of attributes?

| AttrA     | ValA       | AttrB       | ValB |
|-----------|------------|-------------|------|
| attribute | name       | description | “The way humans identify the given entity, not necessarily unique” |
| attribute | name       | encoding    | string |
| attribute | created-at | description | “The time that an entity was created, born, or built” |
| attribute | created-at | encoding    | timestamp |
| attribute | scripthash | description | “The hash of the identity verification script used to identify a given person, organization or automaton. Should be unique.” |
| attribute | scripthash | encoding    | bytes |

Can we specify relationships like how a book and author are related (author <-> name)? This might be difficult, especially the name. When we have unique identifiers (e.g. scripthash), we should use them, but we won’t always know which one it will be.

| AttrA     | ValA     | AttrB     | ValB   |
|-----------|----------|-----------|--------|
| attribute | name     | attribute | author |
| attribute | name     | attribute | CEO    |
| attribute | name     | attribute | mother |
| attribute | location | attribute | home   |

Networking

PK -> network address (IP) lookup

Distributed hash table, where each entry is the network location info for the given PK. (It could include lots of info, like DNS records, and can also include addresses for multiple devices if the user is reusing the same key on more than one device.)
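
Something like the record below is what I have in mind for each DHT value (purely illustrative; none of the field names are settled):

from dataclasses import dataclass, field

@dataclass
class LocationRecord:
    """Hypothetical shape of a DHT value for one master public key."""
    master_pk: bytes                                       # the DHT key would be a hash of this
    addresses: list[str] = field(default_factory=list)     # one entry per device reusing the key
    dns_hints: list[str] = field(default_factory=list)     # optional DNS-style info
    signature: bytes = b""                                 # signed by a key delegated from master_pk

record = LocationRecord(
    master_pk=b"\x01" * 32,
    addresses=["203.0.113.7:4242", "[2001:db8::7]:4242"],
)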

Design

Setup

Alice wants to send a message to Bob. She has Bob’s master public key (given to her either by Bob directly or via some sort of introduction).

Constraints

In order for a message to reach Bob, and remain private, we have the following constraints:

  • The message must be encrypted to an (ephemeral) key that only Bob has.
  • Bob does not have his master private key at hand, he’s using a working keypair signed (transitively) by his master key.
  • Alice must have Bob’s network address for the message to reach Bob in particular (assume it cannot be broadcast to everyone on the internet).

So Alice needs to query the DHT network for Bob’s master public key PK_B. In response she should get:

Response
  • Current network address for PK_B

Relaying

It would be nice if sending a message to a large group didn’t require the sender to connect directly to all the peers. I’m not sure if bittorrent protocol (or something like it) would work here.

Pull vs push

When publishing content it’s probably better that the subscribers ask for it rather than you trying to reach them. The bittorrent-like protocol should work.

To build on i2p or a new network?

I won’t pretend I have any kind of expertise on mix networks, but I don’t want to dismiss the possibility that we can do better than i2p/tor.

I am skeptical of Tor because it’s not trustless, even though it “works” as long as the Tor project organizers are honest.

I have heard that there are attacks on the totally distributed i2p that don’t exist on Tor, but I don’t know what they are.

The ideal private network

A listener on your internet connection gets nothing

They cannot derive any information at all - not what you’re saying/hearing, not who you’re saying/hearing it to/from, not whether you’re saying/hearing anything at all.

The only way I can think of to do that is if the traffic entering and exiting your node were indistinguishable from random. That’s a tall order.

To explore this, let’s think of a tiny network of 3 participants (alice/bob/charlie) and Mallory who can see all the traffic between them. How could they route messages to each other such that Mallory cannot determine anything from either the contents, addressing data, timing, or anything else? And such that the participants cannot tell which underlying IP address belongs to the other two?

First of all we have to assume that our participants are not always talking. So if we only send messages when people are actually talking, Mallory will know when people are not talking (if no packets are being broadcast, no one can possibly be sending or receiving messages). So that violates the requirements.

What if packets were sent at random from each node to some fraction of the others (in our case, 100%, because the network is tiny)?

For example, Alice is sending 1 packet per second, all the time. Whether each packet goes to Bob or Charlie is random. If Bob is chosen, and Alice has content that she wants Bob to get, it’s bundled up and sent. Otherwise, dummy data is encrypted and sent.

Mallory cannot tell who Alice is talking to, or if she’s talking at all. If Alice isn’t talking, she still sends 1 packet per second.
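
A sketch of that constant-rate scheme, assuming a fixed packet size and random bytes as dummy payloads (the encryption step is just a placeholder here):

import os, random, time

PEERS = ["bob", "charlie"]           # the tiny 3-node network from the example
PACKET_SIZE = 1024
outbox = {"bob": [b"hello bob"]}     # real payloads queued per peer, if any

def pad(payload: bytes) -> bytes:
    # stand-in for chunking, padding and encrypting to the peer's key
    return payload[:PACKET_SIZE].ljust(PACKET_SIZE, b"\x00")

def next_packet():
    """One packet per tick, always the same size, whether or not Alice has anything to say."""
    peer = random.choice(PEERS)                       # who gets this packet is random
    queued = outbox.get(peer)
    payload = queued.pop(0) if queued else os.urandom(PACKET_SIZE)   # dummy data when idle
    return peer, pad(payload)

for _ in range(5):
    peer, packet = next_packet()
    # send(packet, to=peer) -- a listener sees one fixed-size packet per second either way
    time.sleep(1)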

This would cause some latency and throughput hits to Alice’s connection but that seems to be unavoidable. Also, Bob would know Alice’s IP address if it worked this way, which violates the requirements.

In order to hide Alice’s IP address from Bob, she would have to randomly route packets through Charlie, so that from Bob’s point of view, half of the packets from Alice arrive from one IP address, and half from the other.

So Alice would be sending at random:

  • to Bob direct
  • to Bob routed through Charlie
  • to Charlie direct
  • to Charlie routed through Bob

Unfortunately this naive approach may not be good enough: timing analysis might give Bob a good idea of which IP address belongs to Alice. For example, routing through Charlie should take longer (all else equal). It’s not a certainty, but even leaking a statistical likelihood is bad and violates the requirements.

So one obvious problem with this model is that throughput scales with the inverse of n (the number of participants), assuming ALL other nodes are in everyone’s anonymity set. If there were 100 nodes, you could only send a packet directly to your destination 1/100 of the time.

You could improve this by having packets routed one hop to the destination; then all the packets would eventually reach the destination and throughput is restored. However, the problem there is what happens if 10 of those nodes are owned by Mallory?

She’ll see that a lot of packets are coming to her nodes from ip1, and destined for ip2, so ip1 is likely to be talking to ip2.

Unless of course, Alice just fakes it when she’s not really talking to Bob at all.

This is starting to sound a lot like poker, where the node saves resources by bluffing. It keeps Mallory honest.

So how would a node play this poker game on a large network, say 1000 nodes?

  • When idle, route to random destinations (with a randomized number of hops). The first hop doesn’t have to be drawn from the set of all 1000 nodes; it could be 10 nodes chosen at random, and with 3 hops the traffic could plausibly reach all 1000. A rough sketch follows.
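
A rough sketch of that route selection (the numbers are just the ones from the example):

import random

NODES = [f"node{i}" for i in range(1000)]
FIRST_HOP_CANDIDATES = random.sample(NODES, 10)   # each node only needs a small first-hop set

def build_route(real_destination=None, max_hops=3):
    """Pick a route with a randomized hop count; when idle, the final hop is random too."""
    hops = [random.choice(FIRST_HOP_CANDIDATES)]
    for _ in range(random.randint(0, max_hops - 1)):
        hops.append(random.choice(NODES))
    if real_destination is not None:
        hops[-1] = real_destination                # real traffic ends at the intended peer
    return hops

print(build_route())             # idle: cover traffic to nobody in particular
print(build_route("node7"))      # real: same shape, so a watcher can't tell the difference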

UI workflows

Contacts / Address Book

Identify

Description

You have a public key and want to know more about who it might belong to.

In the address book, an unidentified public key is shown as a hooded figure with the face obscured, with the intention to convey that we do not know who this party is.

All unidentified keys are shown with the same avatar, on purpose. If you want to differentiate one unidentified key from another, you must identify one of them.

Click on the obscured face area or the “Identify” link to begin.

A list will be displayed of what is known about that identity from your web of trust. If any of your direct contacts (who you’ve authorized to identify keys) have names for this key, those are presented.

The 2nd-to-last entry is the key’s self-identification, if any. Clicking this brings up a warning: “Have you verified in person that this key really belongs to Foo? If not, this could be an attacker pretending to be Foo. If yes, type VERIFIED to continue.”

The last entry will be “I know who this is” where you can fill in a new contact card from scratch.

Clicking one of those entries will bring up a new Contact form with any information we got already filled in.

Examples:
9c1f8398f5a92eee44aee58d000a4dc1705f9c25e29683f7730215bc1274cff1
  • Alice Smith calls “Joe”
  • Bob Jones calls “Joe Bloggs”
  • Calls himself “Joe the Berserker”
b801a6bd6f4dc2818c8fe86e417a340541008c69317f6265a20055f036587787
  • Alice Smith calls “Online Shopping”
  • Bob Jones calls “Amazon”
  • Google calls “Amazon”
  • Calls himself “Amazon”
Possible optimizations

If you already trust one or more contacts to identify other keys, and the trusted identifiers use the same name as the key presents for himself, automatically add the Contact with that name (assuming no conflicts).

Meet (self-introduce)

Description

The presumption is that the two people exchange names face to face, and that when the digital identities are shared, they’ll be checked for accuracy.

Technical challenge

Exchange keys without establishing a direct network connection

Possible method 1

The users tap their phones together a few times, and the timings of the taps are recorded via accelerometer on the phones. Since they’re tapping together, the timings should be identical on both. Use those timings as a lookup (or DH exchange) in a distributed table to match the two together.

Then when a match is found, both devices can get each other’s network address and connect directly. A random number/image is displayed on-screen to both users, so they can verify they’ve connected to each other, and not an attacker who’s capturing the timing info via video or audio surveillance.

Might still be vulnerable to MITM, if the attacker can get both the timing info and occupy the network between the two parties trying to connect.
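
One plausible way to turn the shared tap pattern into a rendezvous key is to quantize the inter-tap intervals and hash them; the bucket size below is a guess at how much sensor jitter to tolerate, and the scheme still fails when jitter straddles a bucket boundary:

import hashlib

def rendezvous_key(tap_times_ms, bucket_ms=50):
    """Quantize inter-tap intervals so both phones derive the same bytes despite jitter,
    then hash them into a distributed-table lookup key."""
    intervals = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    quantized = bytes(min(255, round(i / bucket_ms)) for i in intervals)
    return hashlib.sha256(b"tap-rendezvous" + quantized).hexdigest()

# Both devices record (roughly) the same tap pattern and meet at the same table key:
alice = rendezvous_key([0, 430, 910, 1520])
bob = rendezvous_key([0, 428, 905, 1523])
print(alice == bob)   # True while the jitter stays inside one bucket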

Possible method 2

QR code display/scan.

Literature

safeslinger

Browser

Identify

Description

Works similarly to Contact/Identify

Passwords

Password input fields are disabled by default when the site is not identified (anti-phishing).

Sites that use this protocol natively shouldn’t ask for passwords anyway (since they’ll be able to identify you using the same protocol)

Legacy websites

How do we identify a site if there is no persistent public key? We could possibly use its SSL/TLS key, even though those do change periodically; the client would have to re-identify the site each time it changed its key.

Identify all the things

Map from human-meaningless to human-meaningful (and back)

Maybe call it “universal address book”. It will unify what is today done very piecemeal.

Things that we want identified

Pubkeys

Obviously. Who holds the corresponding privkey?

A URL

What content is at that URL? For example a link to a bug tracker or support ticket system. The url has the host and a ticket number in it. You might want an address book entry if you’re the person reporting the issue or the person fixing it.

Cryptocurrency address

Who paid me? Who did I pay?

A hash

What content is this the hash of?

A street address

Who or what is at that address? ***

Ad hoc addressbooks we can replace

Browser bookmarks

Crypto wallet address labeling

Actual address book or “Contacts” apps

Git branches and tags

How would this work? Would git binary implement a protocol to share addressbook entries, that all happened to map hash<->branch/tag ? Git has its own networking methods.

Functions? Programs?

What exactly does it provide?

Is it a service that listens on a network port?

It could be. Sharing of addressbook entries is a great feature, but it would have to be done carefully - only allowing remote access by authorized parties.

Might be better to make it a push model - browser bookmarks are not available over the network for good reason. The default is to remain private, if you want to share, you explicitly share.

However there is a good use case for “make public” and allowing network connections to fetch it.

What kinds of requests?

Since the human-readable names are not universal, I would expect the primary use case to be putting the non-readable in the request and getting a response with name and other info.

However,

Does it make sense to also ‘introduce all the things’?

How would you communicate to someone which other protocol you wish to use to communicate with them, in a decentralized way? You can’t just say “bitcoin” or “http” because those words might mean different things to different people. But protocols don’t have public keys, and it’s not even possible to prove that software perfectly implements a protocol.

A message could say something like, “‘Bitcoin’ is what I call the protocol implemented by the software at x network location, whose source hashes to y.” The problem there is that there may be lots of versions of that software that implement the same protocol. And even then, it’s possible for a bug to cause two versions to actually not be the same protocol, even if they were intended to be.

A curated list of hashes that are known to be software that speaks the same protocol might be a good way to identify the protocol. Or, if there’s a spec for the protocol, that might be sufficient; leave the decision about which implementation to use for a different introduction?

Or maybe an introduction should just pick an implementation and the introducee can switch to a different implementation later, if he chooses.

The difficulty here is that it’s not possible to capture the essence of the behavior - the same thing goes for programs or functions. How would you introduce someone to the quicksort function, when the intent is for you to pass your trust of that function (to sort things in n log n time) to someone else?

Data schema

I’ve been considering storing “facts” along with who asserted them:

| Who (subject) | entity (object) | attribute   | value |
|---------------|-----------------|-------------|-------|
| Bob           | Alice           | age         | 35    |
| Me            | Bob             | trust-level | high  |

With these two facts, we can ask the database what Alice’s age is and be confident that the answer is “35”. Note that Bob merely asserting it, or making an attestation to it, is not enough; we have to have reason to believe Bob’s assertion.
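
As a toy query over that fact shape (Python for illustration; the trust rule is deliberately simplistic):

facts = [
    # (who asserted it, entity, attribute, value)
    ("Bob", "Alice", "age", "35"),
    ("Me", "Bob", "trust-level", "high"),
]

def trusted(who):
    """Believe an assertion only if we ourselves marked the asserter as highly trusted."""
    return who == "Me" or ("Me", who, "trust-level", "high") in facts

def lookup(entity, attribute):
    return {v for (who, e, a, v) in facts if (e, a) == (entity, attribute) and trusted(who)}

print(lookup("Alice", "age"))   # {'35'} -- Bob asserted it, and we trust Bob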

Relationship lifecycle

Meet

Introduce

Mutual in Meatspace

Tapping phones together (ideally) or scanning a QR code exchanges self-identification info.

Pull Online

Browsing public posts (in a forum, blog etc) of an unidentified person, you can add their self-identifying info to your addressbook (modifying whatever you want). That will change the displayed name from a pubkey hash (or a robohash or just an anonymous icon) to an actual name.

Paid Push Online

You can accept interruptions that add someone to your addressbook, for a fee. You set the minimum fee; for example, $5 paid via the Bitcoin Lightning Network.

Exchange

Text Messages
Fora

Decentralized fora are difficult - when each person has a different view of who’s participating, how do you display that?

Let’s say there are 3 people in the conversation, Alice, Bob, Charlie. Alice follows Bob and Charlie and vice versa (but Bob and Charlie are unknown to each other).

Alice: I like broccoli
Bob: I hate it, it causes cancer.
Charlie: So do I
Alice: What? It doesn’t cause cancer!

In this case, Charlie sees Alice’s last message but not the message she’s responding to. If we think of the thread as a tree structure, we can just lop off any nodes sent by someone unknown to us, and then we won’t see any replies even if they’re from someone we know. Or we can show the whole tree. Or we can show the unknown nodes as collapsed and let the user manually open them.

I lean toward the conservative approach: don’t show anything from unknown users. If Alice wants Charlie to see her convo with Bob, she can explicitly recommend his content. If Charlie accepts, Bob’s nodes will appear.

Is this a good model for ALL conversations? Obviously, just two people is a very simple case where the connection must be mutual or else no convo can take place.

Can the entire communication history of the world be modeled this way?

A tree might be insufficient, graph perhaps?

Do we even want a “public” forum? If not, how do we handle people who are invited in the middle of a conversation? In “real life” we have to re-explain what was said to catch people up. The digital equivalent might be unlocking a particular tree node (and its children) so they can catch up.

How this would work with encryption and deniability, though, I have no idea. You wouldn’t want to be having a private convo and say something you don’t want Alice to hear, and then have one of the participants invite Alice and give her access to what you said. When you sign a message it should probably be for “their” eyes only (whoever you believe your audience is).

Money
Media

Database and API access

Given that APIs don’t have requests, just arbitrary programs for the peer to execute on your behalf, what do databases look like and how do we prevent untrusted peers from corrupting our data?

At a high level perhaps we can say that data is only assertions and untrusted peers can only claim things, not change our view of the truth.

Let’s start with an example that has state: a reservation app. Alice has a bicycle that she rents out for $5/hr. We want her to have an API that allows people to reserve the bike (presumably paying a deposit).

How do we implement this API without reverting to typical pre-specified endpoints?

Perhaps a model that fits subverse better would be one where everything is a “weave” of some set of streams. Each stream consists of pieces of data, and the ‘weave’ is a program that reads that data and does things with it. The only thing missing then is introductions - that’s how an application would know to incorporate another stream. Streams with invalid data would be ignored. Locally there’s a db, but users don’t have any direct access to it.

I think this is a good model for subverse apps, but not for building subverse itself. For example, how do we make a DHT for sharing key revocations? Without that we can’t authenticate any content, so there’s no stream sharing, so we can’t make a DHT API out of that high-level construct.

Back to the reservation app: Alice gets introduced to Bob and Charlie. Bob’s stream to Alice would contain:

[[type reservation-request]
 [resource-id #b64 "abcdabcd"]
 [start-time 123123123]
 [end-time 124124124]]

When Alice sees that she adds this to her reply stream:

[[type invoice]
 [network lightning]
 [data #b64 "abcddefghigh..."]]

Bob sees this and pays the invoice via lightning. If he does it within the required time he gets the bike reservation, and Alice updates her local db to record Bob’s id for the time he requested. If he doesn’t pay, or there’s anything else wrong with Bob’s request, Alice’s app just ignores it.

On success Alice replies:

[[type reservation]
 [resource-id #b64 "abcdabcd"]
 [start-time 123123123]
 [end-time 124124124]]

(Or perhaps she doesn’t need to show all those fields again; Bob could put an id on his request and Alice could just refer to it.) Note also that one of the streams Alice’s app must consume comes from her local lightning node, which needs to stream which invoices have been paid.

When Bob shows up to collect the bike, he need only identify himself, and Alice’s app can see it’s his reserved time and release the bike.
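
Here’s a toy sketch of Alice’s side of that weave. The item shapes follow the mockups above, while the “from” and “invoice-id” fields and the invoice payload are made up for illustration:

def weave(incoming, state):
    """Fold incoming stream items into Alice's local state and return reply items."""
    replies = []
    for item in incoming:
        if item.get("type") == "reservation-request":
            invoice_id = f"inv-{len(state['pending'])}"           # real code would ask the lightning node
            state["pending"][invoice_id] = item
            replies.append({"type": "invoice", "network": "lightning", "data": invoice_id})
        elif item.get("type") == "invoice-paid":                  # item from the local lightning node's stream
            request = state["pending"].pop(item.get("invoice-id"), None)
            if request:                                           # unpaid or malformed requests are ignored
                state["reservations"][(request["resource-id"], request["start-time"])] = request["from"]
                replies.append({"type": "reservation",
                                "resource-id": request["resource-id"],
                                "start-time": request["start-time"],
                                "end-time": request["end-time"]})
    return replies

state = {"pending": {}, "reservations": {}}
print(weave([{"type": "reservation-request", "from": "bob", "resource-id": "abcdabcd",
              "start-time": 123123123, "end-time": 124124124}], state))
print(weave([{"type": "invoice-paid", "invoice-id": "inv-0"}], state))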

Roadmap

Get socially connected

Get a bitcoin vps

  • State “CANCELED” from “INPROGRESS” [2023-04-27 Thu 09:57]
    No longer needed
  • State “INPROGRESS” from “TODO” [2019-04-18 Thu 08:46]

Get phone number

Done via phoneblur

Register twitter

  • State “DONE” from [2022-05-15 Sun 09:04]

Buy domain telepathyk.org (if avail)

  • State “INPROGRESS” from “DONE” [2024-04-02 Tue 09:16]
    changing the project name
  • State “DONE” from “INPROGRESS” [2019-04-24 Wed 10:50]
  • State “INPROGRESS” from “TODO” [2019-04-24 Wed 10:49]

Also got telepathyk.com - namecheap

Let’s see if we can get a domain for “subverse” - obviously “subverse.com” is too expensive but some alternate spelling or tld might work.

[#A] Scripting language kcats

Core language functionality

Testing and bugfixing

  • State “INPROGRESS” from “TODO” [2023-04-27 Thu 09:58]

Scripting language identity features

Signing and verification

  • State “DONE” from “INPROGRESS” [2023-04-26 Wed 08:54]
  • State “INPROGRESS” from “TODO” [2022-05-15 Sun 09:04]

Example scripts

  • State “INPROGRESS” from “TODO” [2023-10-02 Mon 18:11]

Durability

Stream data format design

Discussion

While looking into how to compress streams (perhaps using lz4 or similar), I started to think more about what a stream should consist of. I think it should be a homogeneous list.

Here’s the rationale: at least when it comes to compression, a homogeneous list lets us compress in a content-aware way. If it’s a list of jpegs, applying compression is pointless; but if it’s a list of text messages, we can probably reduce the size by 50% or more.

We can make a homogeneous list of substream ids in order to group content.

Note that the purpose of streams is NOT to organize the data. It’s just to transport it to the correct place. The organization is done at the other end, and the metadata needed to organize it is contained in the stream. So basically the stream data is a bag of facts; it’s handed off to someone, and it’s their job to dump it out and sort out what is related to what. Perhaps the stream is a bunch of messages. Each message object needs an “in-reply-to” field to determine where it goes in a conversation. Of course, in practice items will be in chronological order, but recipients shouldn’t rely on that.

Here’s a mockup of Alice’s side of a conversation: The top level stream object meta:

[[type stream]
 [id #b64 "aaaa"]
 [item-type substream]
 [name "book club"]
 [keyhash #b64 "bbbb"]]

And the actual byte stream

[[id #b64 "xxxx"]
 [name "messages"]
 [item-type message]
 [decrypt-with #b64 "yyyy"]]
[[id #b64 "zzzz"]
 [name "images"]
 [item-type image]
 [decrypt-with #b64 "wwww"]]

Now the messages substream (skipping over the meta here)

[[in-reply-to []]
 [body [p "Hi everyone, it's Alice! Welcome!"]]
 [signed #b64 "siga"]]
[[in-reply-to #b64 "bxyz"]
 [body [p "I'm glad you asked, Bob. Here's the first book"]]
 [signed #b64 "sigb"]]
[[in-reply-to #b64 "xxyz"]
 [body [p "All About Eels" [img #b64 "ixyz"]]]
 [signed #b64 "sigc"]]

Questions

Are signatures required on each item in the stream?

One way to handle this is to use merkle trees. Imagine I am creating a stream of messages, and I can either create them slowly or quickly. I can handle both cases: items produced slowly can each carry their own signature, while items produced rapidly can be covered in a batch by signing the latest merkle root (as described in Merkle tree auth above).

p2p protocol

Messages for exchanging identities, signed and encrypted content

Distributed Hash Table for network locations, stream seeding peers etc

Introductions overview

The DHT will be a peer discovery mechanism for authenticated content. The DHT itself will not do any authentication. In order to do introductions we have each party create their “public” stream. This contains several types of information:

  • Their diffie-hellman public key
  • Any revocations of keys (either signing or DH)
  • Public content in the form of sub-streams

    Note what isn’t included because it’s not needed:

  • Their script (they’ll send it the first time they need you to authenticate them)
  • Their network address (you don’t need to know and don’t care)

    When Alice and Bob are introduced (either self-introduced, or by a third party), here’s the sequence of events:

  • Alice and Bob have each other’s IDs, a and b (the hashes of their scripts)
  • Alice queries the DHT for Bob’s ID b, which is the same as the ID for his “public” stream. She receives some network addresses of nodes that have Bob’s public stream data.
  • She downloads the whole stream from those nodes and finds his latest DH key.
  • She calculates their shared secret s from her DH key and Bob’s.
  • She calculates the ID of the message stream from Bob to her, ba, as H(b|a|s)
  • She then queries the DHT for ba, and receives the encrypted stream.
  • She then decrypts the stream using key s.
  • The stream presumably contains message objects, which then she authenticates and stores in her local db.

    Bob does the same to receive Alice’s messages.
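
A minimal sketch of the key and stream-ID derivation in that sequence, using X25519 for the DH step (the exact hash layout and the use of the raw shared secret as the stream key are assumptions, not a settled format):

import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_dh = X25519PrivateKey.generate()
bob_dh = X25519PrivateKey.generate()
a_id = hashlib.sha256(b"alice-script").digest()    # IDs are hashes of each party's script
b_id = hashlib.sha256(b"bob-script").digest()

# Alice finds Bob's DH public key in his public stream and derives the shared secret s;
# Bob derives the same s from his private key and Alice's public key.
s = alice_dh.exchange(bob_dh.public_key())

ba = hashlib.sha256(b_id + a_id + s).hexdigest()   # stream from Bob to Alice: H(b|a|s)
ab = hashlib.sha256(a_id + b_id + s).hexdigest()   # stream from Alice to Bob: H(a|b|s)
print(ba, ab)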

Authentication overview

When we execute a script, we’ll need to know whether that script is revoked before we can use it to authenticate. The script should have a stream associated with it that contains any revocation of it or other public info. Note that scripts can call other scripts, so this procedure needs to be done for each level of delegation.

We do the following:

  • If we don’t already have peers with the stream who can update us, search the DHT for h(s), the hash of the script. That should return peers we can contact to get the stream.
  • If the stream contains a revocation, this script is not usable for authentication, and whatever message relies on it as part of its proof is not authentic.
Encryption

A privacy improvement on this might be that instead of putting h(s) in the DHT, we put Enc(h(s)). The password to decrypt is only distributed when being introduced to someone who will use s. This prevents the DHT node from knowing who you’re asking about. The data stream is also encrypted with the same key. However, using i2p client tunnels to do this will also help even when the node knows the target and has the password: they will know someone is asking about their friend, but they won’t know who is asking.

What are destinations?

Are they pubkeys or IP addresses? I think they’re really pubkeys - they have nothing to do with physical locations.

Clearly each ID (script hash) is going to be using more than one destination over time (due to rolling keys), and so we do need to publish what the ‘latest’ one is, because there’s no other way for anyone to know.

content sharing p2p protocol

based on bittorrent? similar to zeronet.io.

i2p(d) integration

Create destinations based on identity

Bot creation functionality

mobile UI

Simple authenticated messenger

Create a kcats api server that accepts nothing but authenticated messages, and prints them to the console along with who sent them.

