
The basic architecture for androids is what sets them apart from humans (which we already have), making them classical servants in the old AI sense, but capable of normal communication and of asking questions where there is ambiguity. The core is a constraint system, composed in a unique language much like PROLOG. This allows the accumulation of knowledge, separate from the neural net and queryable, which is transferable and mergeable with other like androids to create massive knowledge bases.
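
A minimal sketch of such a store (names like `KnowledgeBase`, `assert_fact`, and `merge` are illustrative assumptions, not part of the design):

```python
# Hypothetical sketch: facts as (predicate, *args) tuples in a set, which
# makes querying a filter and merging with another android a set union.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()                        # e.g. ("owner", "unit-7", "alice")

    def assert_fact(self, *fact):
        self.facts.add(fact)

    def query(self, predicate):
        """All stored facts whose predicate name matches."""
        return [f for f in self.facts if f[0] == predicate]

    def merge(self, other):
        """Combine with another android's base into a larger one."""
        self.facts |= other.facts
```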

The other essential part of their working mind is the neural net: a complex graph of associations built from input data, mapped to output actuators through natural feedback to the sensors.

Androids are:

Input -> neural net -> output.
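
A toy version of that pipeline might look like this (the `read`, `forward`, and `drive` methods are assumed interfaces, not specified by this page):

```python
def tick(sensors, net, actuators):
    observation = [s.read() for s in sensors]     # Input
    commands = net.forward(observation)           # neural net
    for actuator, cmd in zip(actuators, commands):
        actuator.drive(cmd)                       # output
```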

Sensors compose the input to the AI. Actuators are motor outputs with the capacity to initiate change. Alongside these are r2r (robot-to-robot) and h2r (human-to-robot) communications. Androids generally cannot add a task to another robot; however, if a proposed task is in harmony with one the receiver already holds, it may be adopted, as sketched below.
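
A sketch of that r2r rule; the `in_harmony` test and the task objects are hypothetical stand-ins:

```python
def in_harmony(a, b):
    """Stand-in harmony test: here, tasks harmonize when they share a goal."""
    return a.goal == b.goal

def propose_task(receiver, task):
    """r2r: a proposed task is adopted only if it harmonizes with
    something the receiving unit is already committed to."""
    if any(in_harmony(task, existing) for existing in receiver.task_list):
        receiver.task_list.append(task)
        return True
    return False    # units cannot force tasks onto one another
```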

To keep robots from roaming around (more or less) uselessly and expending energy (and to assign blame when a robot causes trouble), every unit must have an OWNER. The OWNER slot begins empty: ∅. This void impels the robot to find a solution.

Androids use an internal, PROLOG-like language for storing predicates in the form of nested, named sets. Since the world has an orientation, this "set of sets" has an ontology (of sorts), creating a tree-like form of knowledge rooted at "the World". From there, the likely categories are things like People and their trust levels, "getting from here to there", objects and their general locations, and perhaps a calendar for tracking events. This set of sets gets a weighted fitness function that evaluates the elegance of the knowledge the android has stored.
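
A toy rendering of that tree-of-sets, with an entirely made-up elegance metric (node names and scoring are illustrative, not fixed by this page):

```python
class Node:
    """One named set in the "set of sets"; children are named sub-sets."""
    def __init__(self, name):
        self.name = name
        self.children = {}

    def add(self, name):
        return self.children.setdefault(name, Node(name))

world = Node("the World")
world.add("People").add("Alice")
world.add("Places").add("kitchen")

def elegance(node, depth=1):
    """Toy weighted fitness: each node's contribution decays with depth,
    so shallow, well-organized knowledge scores better."""
    return 1.0 / depth + sum(elegance(c, depth + 1)
                             for c in node.children.values())
```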

To avoid the task of training robots to be human, androids have a knowledge base in which to store predicates from their human owners. This saves the years it would take to become equivalent in reason; it has generally been found useless to make androids act like humans through such human-equivalent training. It takes huge amounts of resources just to get them to understand basic human principles, yet they can be trained easily with predicates. Since humans already exist and already need meaningful lives (rather than being displaced by robots), it was decided not to repeat the old medium in the new (nor Hollywood, for that matter).

Since a variable slot should never be empty in the knowledge base, if at any time a variable is ∅, then =>variable_name means "add to task list". If ¬variable_name, then =>variable_name. In other words, one of the implicit purposes of any unit is to turn the Unknown into the Known. STUB (NOTE: =>, →, and ENQUEUE are the same: add to TASK_LIST. Also, unlike C, ¬∅ != T.)
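
A sketch of that rule (the slot and task representations are assumptions):

```python
EMPTY = None    # stands in for ∅; note ¬∅ != T — an unknown is not "false"

def enqueue_unknowns(unit):
    """Any ∅ slot enqueues a FIND task: turn the Unknown into the Known.
    With OWNER empty at boot, this yields the initial =>OWNER task."""
    for name, value in unit.slots.items():
        task = ("FIND", name)
        if value is EMPTY and task not in unit.task_list:
            unit.task_list.append(task)           # ENQUEUE, i.e. =>name
```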

This implies that androids start with an implicit task on their TASK_LIST: =>OWNER (equivalent to "find owner!"). It is conceivable that the law or the manufacturer may have other tasks (survival) embedded within as well.

The next item of business is establishing the default cost-value of tasks given by the owner. This establishes how hard the unit will work towards a task given the risk of damage or other costs.
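
One plausible reading, with the weighting entirely made up for illustration:

```python
def worth_pursuing(unit, task):
    """Hypothetical cost-value gate: pursue a task only while its
    owner-assigned value outweighs the expected cost of doing it."""
    expected_cost = task.energy_cost + task.damage_risk * unit.repair_cost
    return task.value > expected_cost
```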

! means imperative, ? means question, and "." means store predicate; a stored predicate has an implied "to me" appended to it (it may be stored as a datum about SPEAKER), or it is person-centric and irrelevant to the TASK_LIST. Each SPEAKER has a trust_level. The unit can use SPEAKER's predicates weighted by that trust_level. If SPEAKER is OWNER, trust_level starts at 100%, i.e. 1.0.

If acting on that trust results in DAMAGE, then trust_level *= doubtof(person).
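
A sketch tying the utterance markers to the trust rules (the dispatch on the final character and the 0.5 default trust for strangers are assumptions):

```python
def handle_utterance(unit, speaker, text):
    if text.endswith("!"):                        # imperative
        unit.task_list.append((speaker, text[:-1]))
    elif text.endswith("?"):                      # question
        unit.answer(speaker, text[:-1])
    elif text.endswith("."):                      # store predicate
        weight = unit.trust_level.get(speaker, 0.5)   # OWNER starts at 1.0
        unit.kb.assert_fact(text[:-1], speaker, weight)

def on_damage(unit, speaker):
    """Acting on a speaker's predicate caused DAMAGE: discount them."""
    unit.trust_level[speaker] *= unit.doubtof(speaker)
```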

When ¬TASK_LIST, the android enters its primary thought cycle: either ACQUIRE(KNOWLEDGE) (when curiosity > 0) or FIND(OWNER).

FIND(OWNER) => if OWNER then goto(OWNER), else ACQUIRE(OWNER). A constructive, non-recursive operation.

For any operation FIND(x), x in facts: the android can ask "What is x?" or "Where do I find out about x?". For find_owner: if a person is visible, ask "Hello, are you my owner?". If YES, then task_complete(FIND_OWNER); else ACQUIRE(KNOWLEDGE).
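
Put together, FIND(OWNER) might read as follows (the sensor and dialogue calls are assumed interfaces):

```python
def find_owner(unit):
    """Constructive and non-recursive: go to the known owner, or try to
    acquire one, falling back to general knowledge-gathering."""
    owner = unit.slots["OWNER"]
    if owner is not None:
        unit.goto(owner)                          # OWNER known: goto(OWNER)
        return
    person = unit.nearest_visible_person()
    if person is not None and unit.ask(person, "Hello, are you my owner?") == "YES":
        unit.slots["OWNER"] = person
        unit.task_complete("FIND_OWNER")
    else:
        unit.acquire_knowledge()                  # ACQUIRE(KNOWLEDGE)
```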

The android thought cycle is: acquire (receive input) -> ascertain (match with prior inputs, or ask) -> store (create new nodes/memory) -> return (back to acquiring, or check TASK_LIST if there are no inputs).
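
As a loop, under the same assumed interfaces as the sketches above:

```python
def thought_cycle(unit):
    while True:
        datum = unit.acquire()                    # acquire: receive input
        if datum is None:
            unit.check_task_list()                # return: no inputs, so tasks
            continue
        meaning = unit.ascertain(datum)           # ascertain: match priors, or ask
        unit.store(datum, meaning)                # store: create new nodes/memory
```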

The owner is the buyer, unless the unit has broken a law; then the OWNER slot is empty until a new owner is set.

Modes:

  • listening: suspend tasks, receiving input
  • en-route: on the way to the task
  • on-task: performing the task
  • returning: task complete

These are the meta-modes related to the primary thought cycle.
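
These could be represented as a simple enumeration (names are illustrative):

```python
from enum import Enum

class Mode(Enum):
    LISTENING = "listening"    # tasks suspended, receiving input
    EN_ROUTE  = "en-route"     # on the way to the task
    ON_TASK   = "on-task"      # performing the task
    RETURNING = "returning"    # task complete
```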

When there is a conflict between one stored predicate and another, the android must either ask for clarification, use the predicate with the higher trust value, wait for further data, or guess (depending on the setting of _).
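
A sketch of that dispatch, with `strategy` standing in for the unspecified setting:

```python
def resolve_conflict(unit, p, q, strategy):
    """Two stored predicates disagree; pick a resolution per the setting."""
    if strategy == "ask":
        return unit.ask_clarification(p, q)
    if strategy == "trust":
        return p if p.trust >= q.trust else q
    if strategy == "wait":
        return None                               # defer until further data
    return unit.guess(p, q)                       # last resort: guess
```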


Note that the presence of robots insinuates that the value of human beings is 0; otherwise, why wouldn't you have humans, who look for meaning in helping others, for example, or in being led by intelligent or creative leaders?

The only legitimate use would be where humans are greater than AI; if you don't have a society where that has happened, you have a dysfunctional society. The issue of AI with senses humans don't have (building networks of knowledge presently alien to humans) is probably anathema to humans developing themselves.
