socialiter - wannabe privacy-first social network
socialiter is a proof-of-concept social network that experiments with various backend architectural choices.
socialiter wants to prove that a complex application can be developed and operated more easily as a monolithic service using the right abstractions. That's why socialiter uses FoundationDB.
If you are on Ubuntu or another Debian derivative, try the following:
For other distributions, it is recommended to use LXC and install Ubuntu 18.04.
How to contribute?
- Read the README and the code
- Pick a task in the roadmap (see below) or in brainstorming
- Create an issue describing your plan
- Fork the repository
- Create a branch
- Code + Tests
- Submit a pull-request
Thanks in advance!
2018/10/03 - What Are The Civilian Applications
- Continuous Integration
- Basic Data Persistence
- Example use of
- Basic Feed Reader
2018/11/30 - Unfortunate Conflict Of Evidence
- Basic Task queue [TODO]
- Example Unit Test that mocks a coroutine [TODO]
- Basic TODO [TODO]
- Basic Wiki [TODO]
- Basic Forum [TODO]
- Basic Paste [TODO]
- CSRF Protection [TODO]
- Basic Search Engine with a crawler [TODO]
- Deploy [TODO]
2018/12/XY - Pick a Culture ship at random
- python-fu [TODO]
Functions for the win
socialiter uses a lot of functions. There is nothing wrong with classes. In particular, there is no Object Data Mapper (ODM) or Object Relational Mapper (ORM) abstraction, yet.
That said, socialiter relies on trafaret for data validation, which is built using classes. Also, socialiter makes use of a SocialiterException class that you can inherit from.
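To illustrate, here is a minimal sketch of how such an exception hierarchy can be used. Only the SocialiterException name comes from the source; the ValidationError subclass and the handler are illustrative assumptions:

```python
# Hypothetical sketch: SocialiterException is named in the README, but the
# ValidationError subclass and this usage are illustrative assumptions.

class SocialiterException(Exception):
    """Base class for application errors."""


class ValidationError(SocialiterException):
    """Raised when input data fails validation (hypothetical)."""


def caught_message():
    try:
        raise ValidationError("bad input")
    except SocialiterException as exc:  # catches any subclass
        return str(exc)
```

Catching SocialiterException at the boundary of a request handler lets the application distinguish its own errors from unexpected ones.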
socialiter relies on FoundationDB (FDB) to persist data to disk. Be careful: the default configuration uses the in-memory backend. The goal with this choice is twofold:
- Experiment with higher-level database abstractions (called layers in FDB jargon) on top of the versatile ordered key-value store offered by FDB.
- Experiment with operating FDB from development to deployment, from a single-machine cluster to multi-machine clusters.
src/socialiter/sparky.py offers an abstraction similar to RDF / SPARQL. It implements a subset of the standard that should be very easy to pick up.
To get started you can read FDB's documentation about the Python client. Mind the fact that socialiter relies on found, an asyncio driver for FDB based on cffi (which is the recommended way to interop with C code from PyPy).
Of course it would be very nice to have a well-thought-out, easy-to-use abstraction with migration magic. socialiter proceeds step by step: implement, use, gain knowledge, then build higher-level abstractions. When things seem blurry, do not overthink it; try something simple to get started.
sparky is a small RDF-like layer which supports a subset of SPARQL.
Simply said, it's a triple store.
Let's try again.
Simply said, sparky stores a set of 3-tuples of primitive datatypes (nested tuples are not supported as-is), most commonly described as:
(subject, predicate, object)
But one might have an easier time mapping that machinery to:
(uid, key, value)
The difference with a document store is that tuples are unique, which makes sense since it is a set of tuples. Otherwise said, you can have the following three tuples in the same database:
("P4X432", "title", "hyperdev.fr")
("P4X432", "SeeAlso", "julien.danjou.info")
("P4X432", "SeeAlso", "blog.dolead.com")
This is not possible in a document store, because the SeeAlso key can only appear once per document.
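To see the difference concretely, here is a sketch in plain Python (no FDB involved), contrasting a set of triples with a document-style dict:

```python
# A set of triples keeps every distinct (subject, predicate, object),
# even when the (subject, predicate) pair repeats.
triples = {
    ("P4X432", "title", "hyperdev.fr"),
    ("P4X432", "SeeAlso", "julien.danjou.info"),
    ("P4X432", "SeeAlso", "blog.dolead.com"),
}

# A document (dict) maps each key to a single value, so storing a
# second "SeeAlso" overwrites the first.
document = {"title": "hyperdev.fr", "SeeAlso": "julien.danjou.info"}
document["SeeAlso"] = "blog.dolead.com"
```

All three triples survive in the set, while the document ends up with only the last SeeAlso value.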
Querying in RDF land happens via a language "similar" to SQL called SPARQL. Basically, it's pattern matching with bells and dragons... That being said, sparky implements only the pattern matching part, which makes it easy to code things like the following SQL query:
SELECT post.title
FROM blog, post
WHERE blog.title = 'hyperdev.fr' AND post.blog_id = blog.id
Here is the equivalent using sparky:
patterns = [
    (sparky.var('blog'), 'title', 'hyperdev.fr'),
    (sparky.var('post'), 'blog', sparky.var('blog')),
    (sparky.var('post'), 'title', sparky.var('title')),
]
out = await sparky.where(db, *patterns)
That is, you can express a regular SELECT without joins, or multiple joins, in a single declarative statement. See the unit tests for more examples.
See this superb tutorial on SPARQL at data.world.
The roadmap is to implement something like Datomic without versioning.
Mind the fact that, since sparky uses fdb.pack for serializing tuple items, lexicographic ordering is preserved. That is, one can defer complex indexing to the upper layer, namely the application ;]
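Why does order preservation matter? If the byte encoding sorts the same way as the tuples, a range scan over raw keys is a range scan over tuples. Here is a toy sketch with a naive big-endian encoding for small non-negative integers; FDB's tuple layer provides this property for real, and for many more types:

```python
# Toy order-preserving encoding (assumption: small non-negative ints only).
# Big-endian fixed-width integers compare byte-wise in the same order
# as the integers themselves.
def pack(t):
    return b"".join(i.to_bytes(4, "big") for i in t)


items = [(1, 2), (1, 10), (2, 0)]
packed = [pack(t) for t in items]
```

Sorting the packed byte strings yields the same order as sorting the tuples, so the key-value store's natural ordering doubles as a tuple index.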
Style Guide
- Do not rely on LESS or SASS
- Only rely on classes and tags
- Avoid a class when a tag is sufficient to disambiguate
- Prefix class names with component name to avoid any leak
- Avoid the cascade, i.e. all styles must appear in the class declaration (i.e. it is not DRY)
- When it makes sense, be precise in the selector (most of the time it must start with