
SPX

SuperPositioning Text to Font & Entangling to a CxHC Datagraph




      As we make our way into a more data-driven world, it's time the data drive the processes. When some data can drive itself we can distinguish between "dumb data" and "smart data". At the moment, if you want something like 'smart data' you need to run an API structure alongside the data. Encrypting the data to its API has been the usual method, but what if we could store 1% of the real data alongside the API? That could mitigate the need for encryption and allow simpler processes like CRC encoding. Very quickly we go from storing the data with padding in a way that makes decrypting difficult to encoding part of the data with padding in a way that makes data-determination more difficult. Moving the bar from one lane to a completely different lane.

Determinable Mixed Binary Encoding



      Cryptocurrency and hash mixers have shown that mixing data in a deterministic way can add trustlessness to systems that would normally require large amounts of trust. The data we type into text fields may sometimes be important, yet those fields are often insecure, plain-text inputs. The time has come to increase the security of user data to help the integrity of the quickly growing, soft-lined digital world.
      Mixing binary through faux-superpositioning before recording state changes should increase probable security by magnitudes compared to other reversible encodings used online such as Base64 & Base58. Taking the encoding a bit further with a flexible, procedural way to determine dynamically-modular grid-based shapes, we can raise the natural security toward methods similar to lattice encryption. Tapping into infinite-average maths constructs lets us superposition APIs and marker-codes as parity check points that can both direct the use of the data as intended and increase its own entropy. Using generic, structured standards for the modular-capable internal systems of the algorithm should create stronger obscurity by allowing different types of superpositions for the same encoding standards. This concept is similar to how tokens work on top of a blockchain token system that has no built-in functions for building tokens on that chain.

[ABSTRACT]

      My hypothesis is that encoding can be used for fast-hook swapping of plain text to encoded text, trustlessly & potentially on the fly. By limiting the shape possibilities while using a "Talk Model", inspired by the hit TV show "Full House", we can create determinable, flexible superpositioning while the generic model encodes/decodes all the same. The "Futurama Theorem" (Ken Keeler, 2010) is used to self-correct the talk model during both encoding and decoding, regardless of the swappable parts of the algorithm, to help ensure reversibility. Additionally, a modified version of the Futurama Theorem equation performs new parity check methods to help find and correct potentially multiple errors with as little as a single binary bit.

Full House Talk Model [CONCEPT]



      In the classic American TV sitcom "Full House" (TV series 1987-1995, IMDB), when the Tanner family gets a visit from their eldest daughter's friend, Kimmy Gibbler, she often speaks about things that only DJ understands. Stephanie, DJ's younger sister, can explain what DJ transcribes to her dad, but not accurately, so Michelle (the youngest Tanner child) is needed to fill in the gaps or at least get the order correct. In the end, the father and his cohort of male-friend-guardians are often still left unsure of what is being said, so they need the help of Becky to decode what the men have in order to make sense of the situation. In the show this made a great dynamic that showed how differently generations speak because of the influences of their own sub-cultures of school, family, situations of life & public media.
      The Full House Talk Model uses the idea of bait & switch with the information to make what's actually being said unclear. The Futurama Theorem will help keep everything aligned and settled with modular sections in the talk model & modified versions of the theorem will be used to find-&-fix errors as well as turn anything into a single binary representation for new parity checks. Hopefully when we are done, we'll be able to pull the full conversation from very little information just like they do on the show.
      Before diving into the algorithm, there are some world variables that need to be set. For the most part, variables of sub-functions are modular, so as we introduce new sections or functions, new variables may arise & some may decay. Depending on where you are in the algorithm you may or may not have some data to rely upon, so once the coupling or decoupling starts, you cannot revert beyond that function's section. All instructable data should allow users to supply their modular data, like instructions and instructables, via Wave-Data H-APIs or CID pulling from IPFS. A modular function's variables are called instructions, while modular functions themselves are called instructables (a list of instructions and their variables). Instructables allow any-input with standard output, but most importantly they allow customization of the algorithm as needed. In short, we use instructables as a form of master-private key, and we can use smaller information sets, like the internal instructions, to customize that master-private key. This also means that we can align our encoders/decoders via version types if both parties already have the instructables or have enough instructions to "re-generate" an instructable.
      The Full House Talk Model's base for encoding is to superposition the input to a font designed to be "visually encrypted", called Node. This means that the data we are switching the user keycodes for are keycodes of an end-2-end encrypting font. This is the first step and the first consideration, because the faster we can drop what the user actually typed, the better. Once we swap the user input for Node keycodes, we also wipe the input and replace it with random gibberish of similar length to give the impression that what is being seen may be what's being typed. Throughout the entire process of this algorithm we will be doing tiny extra steps for various reasons; one consideration, though definitely not the only one, is to create confusing power consumption to prevent power-light-decoupling and power-light-decoding, which is the act of determining functions by watching a power-light flicker or watching the volts/amps/watts change in the power cord of the computing device, then working backwards through the encryption process to 'reverse-engineer' a key.
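      As a rough illustration in vanilla Javascript (not the actual Node font mapping), the sketch below shows the swap idea: the typed character is traded for a hypothetical Node keycode while the visible input is replaced with gibberish of similar length. The NODE_KEYCODES values are placeholders only.

    // Minimal sketch of the input-swap step. The NODE_KEYCODES map and its
    // values are hypothetical placeholders; a real build would load the
    // mapping from the Node font instructable.
    const NODE_KEYCODES = { a: 0xE001, b: 0xE002, c: 0xE003 /* ... */ };
    const GIBBERISH = 'abcdefghijklmnopqrstuvwxyz0123456789';

    function swapInput(plainText) {
      const keycodes = [...plainText].map(ch => NODE_KEYCODES[ch] ?? 0xE000);
      // Replace what is displayed with random characters of similar length,
      // so what is seen only looks like what was typed.
      const decoy = Array.from({ length: plainText.length },
        () => GIBBERISH[Math.floor(Math.random() * GIBBERISH.length)]).join('');
      return { keycodes, decoy };
    }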

Compounding Extended Hamming Code (CxHC) [ALGORITHM]



      When an input is being read and swapped, we need to load a few variables with data: variables for each child among the processes we'll need throughout this part of the algorithm. This information will need to be set, or the input should use a standard instructable set.
      When swapping the input for what Kimmy said, we store the character via the Node font, which in turn is what DJ says. The input swap records the Node shape based on what DJ says, places the shape on a centered 17-slot graph, then records the positioning of that character on this graph. If the character physically has nodes in the first row, second row, third row or under-table slot, this will change what Node keycodes are pushed through the algorithm. How the shape appears on the graph determines what Steph (Stephanie) & Michelle say. The instructable for this part of the algorithm (when Kimmy talks to DJ and then to the younger siblings) defines what DJ, Steph & Michelle say based on either the keycode or Latin-character input. DJ & Steph only change what they say per character inputted, while Michelle adds the new input to what she said last, assuming she started with a blank slate before Kimmy started talking.
      The algorithm has to re-center the null-slot or under-table of the font superpositioning graph with every input. This is the first use of the Futurama Theorem [trunc|_(3.14*n)+2_|]. In this case, we swap "n" for "z", the length of what Steph has said altogether and one of the best ways to determine the internal length of our superposition. The internal data length, or how many characters are stored, is important, but it can be a lattice or matrix instead of plain-text numerical digits. The internal data length can be a number, hexdec, hash or formula based on how much information you need to hide. The standard is to base everything on Michelle's memory of length, since she's the only one keeping count.
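      As a minimal sketch of just the arithmetic (which slot becomes the new null/under-table slot is left to the instructable), the re-centering value can be computed like this:

    // Re-centering value from the Futurama Theorem form used here:
    // floor(3.14 * z) + 2, where z is the running length of what Steph said.
    function recenterOffset(stephLength) {
      return Math.trunc(3.14 * stephLength) + 2;
    }

    recenterOffset(4);  // => 14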
      The superpositioning graph has a unique ordinal system. We will end up with what's basically Compounding Extended Hamming Code (CxHC), so the hidden superpositioning graph uses separate ordinals that don't interfere with the other graph's ordinal system. The superpositioning graph is ordered vertically 1,2,3,4, but horizontally you just add a 0 per horizontal right shift. For example: 1, 10, 100, 1000, 2, 20, 200, 2000; each row is just 1-4, and the further to the right, the more zeros behind it. The standard code graph setup (like for Hamming Code) is in binary, i.e.: 0, 1, 10, 11, 100, 101, 110, 111, 1000. The noticeable difference is that the superpositioning graph isn't binary capable, but if we put the slot number (1-17) into binary, we get that same ordinal design for pin-pointing errors and correcting them. This gives us two ways to look at the grid, one strict to the data inside the grid and the other strict to the imaginary grid. Because of this type of dynamic system, we have to look at the system capabilities as well as the multiple internal parts as internal capabilities, ensuring we don't overflow the system cap nor cut an internal cap, while still placing everything onto the grid wrapping around the centered imaginary alignment shaft.
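      A small sketch of the two ordinal views described above: the Forward-Down ordinal of the superpositioning graph and the binary ordinal of the plain slot number.

    // Forward-Down ordinal: row digit 1-4, one trailing zero per right shift.
    function forwardDownOrdinal(row, col) {    // row: 1-4, col: 0-3 (left to right)
      return row * Math.pow(10, col);          // e.g. row 2, col 3 -> 2000
    }

    // Binary-grid ordinal: the slot number (1-17) written in binary, the same
    // pin-pointing layout Hamming-style codes use to locate errors.
    function binaryOrdinal(slot) {             // slot: 1-17
      return slot.toString(2);                 // e.g. slot 5 -> "101"
    }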
      The other aspect of the superpositioning is how we look at the node placements on the superpositioning graph (per character). If the node has left-to-right objects in succession, the line identifier increases for each left-to-right (horizontal) succession. For example, if the node is 3 objects long on the x-axis it may be remembered by Michelle as 111 or 333, depending on which line it lands on. If the node has top-to-bottom objects, the row values are summed from the topmost object down through each object just below it, never going beyond the internal-section's addics-limit. This unique approach to superpositioning gives uniquely shaped objects semi-unique corresponding digits, allowing us to predict the total possible shapes of input, which can be relayed in the structured graph later.
      Having a soft cap of 3 successional allowances for combining forces sets a finite number of possible shapes based on complexity. This is my way to build a system whose output I can, within reason, control while figuring out how to make all this work via Javascript. To expand this, simply expand the number of successional allowances in any or all directions (vertical, horizontal; connections). Setting directional commands for how successional allowances are designed, instead of a successional cap, would be a lattice-styled encoding. The limitations of allowances are based on the shape instructables; overly large shapes for a determined instructable may cause a fail error, phantom characters (multiple unwanted characters in place of a single wanted character), or incomplete decoding/encoding. If an encoder is customized or finely tuned outside the common or standard, it should encompass a decoder within the same service, product, website, etc., so that they feel like two sides of the same coin. Standard outputs should be marked somewhere on the website or near the input/output areas with the standard version number. Examples of version numbers may include, but are certainly not limited to, the following: SPX Standards:: v1, v2, v3 (versionNUMBER) or Pulled Standards (imported):: IPFSv1, IBSv2, NNNv1 (LOCATIONversionNUMBER). Wave-Data H-APIs should be used for transmission of importable data, including instructables like version standards, whether to insert into a de/encoder or to grab from a de/encoder.

      Begin cutting what Steph said to place the binary representation into the superpositioning graph. Using the mathematical symbols in order, left to right, is the way to superposition. Steph instructions may also have a heat map which can also be used for superpositioning; both can be used if the heat map is used as a limiter on the machine writing while the symbols between the binary in Steph codes push the complexity consideration. Most encoders will probably use the heat map over maths, since a look-up table is faster than computational determination. We will use a Turing-counter machine principle to keep track of the data. That is, we use a Turing machine and a counter machine as checks-and-balances for the internal loops and external loops; in other words, a vending machine allows the Turing machine to read data, but the Turing machine hits the vending button based on what is vended to it.
      We read via the Turing machine, first checking the read position. Based on what's read, move forward or backward to read again. Then, based on what's there, repeat or write to the counter machine's slot. When the Turing machine finishes its loops or writes to a slot, the counter machine moves forward one to an empty slot and the Turing machine is reset to a position equal to the counter machine's newest position. So the counter machine determines the location of the read/write process while the Turing machine does the more intricate work; finally, when both machines agree & something has been recorded by the Turing machine, the counter moves to the next slot and the entire process restarts.
      Once we have the array of the 4 rows of the superpositioning graph, we can change what tape the Turing-counter machine is dissecting to the tape we just had it create; we essentially set it to write the state changes of the row objects. Starting at the null position of all the row-tapes, record the first object seen in each row with the Turing machine. Once done, move the counter machine one (which resets the Turing machine to the same position), then check each row with the Turing machine, recording whether each row's object is the same as before (1) or different from before (0). This means the sequence "1001" would be recorded as "1010". Each row is treated individually from the other rows, but all four rows are checked per loop cycle; if a row's length has been met, check its length then rest. If rest happens, it will look like multiple processes are being performed. Once the counter machine reaches the end of the longest row-tape, end the loop cycle or use an escape value to end it.
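      A minimal sketch of the per-row state-change recording: keep the first object, then write 1 when an object matches the previous one and 0 when it changes, reproducing the "1001" to "1010" example.

    function recordStateChanges(row) {
      let out = row[0];                              // the first object is kept as-is
      for (let i = 1; i < row.length; i++) {
        out += row[i] === row[i - 1] ? '1' : '0';    // 1 = same as before, 0 = changed
      }
      return out;
    }

    recordStateChanges('1001');  // => "1010"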
      Graphs are to be made every 45 characters (standard), every 255 bytes or 175 characters (big-block), when a special 'finalize graph' button is pressed (manual), or when the "enter" or "return" keycode is seen by Kimmy (dynamic). Which is used is up to the generator, and may be chosen to fit specific databases or data-use situations. When we get to the point of finalizing a graph or subgraph, we need to perform the wrapping protocol. Wrapping the graph is fairly easy. The state-change is recorded and ready, so we check the algorithm's current state-size to build its mapping hash. If it's a subgraph it will have a directory key instead, but this process is done regardless & the algorithm will record the mapping hash data as the Program 1 slot of the final graph. Next we need to run parity checks to set our determiner slots.
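      A small sketch of those finalization triggers; the thresholds follow the text (45 characters standard, 175 characters big-block), while the manual and dynamic cases are wired up in a purely hypothetical way.

    function shouldFinalize(buffer, { mode = 'standard', manualPressed = false } = {}) {
      if (manualPressed) return true;                  // manual: 'finalize graph' button
      if (/\r|\n/.test(buffer)) return true;           // dynamic: enter/return seen by Kimmy
      const limit = mode === 'big-block' ? 175 : 45;   // standard vs big-block character caps
      return buffer.length >= limit;
    }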

      This system allows for a multitude of ways to do parity checks, so we will go over some possible parity checks. A parity check determines whether data was changed. Parity checks in the manner we are looking at are very common in (7,4) Hamming Code, but for what we are doing we are not only checking whether the binary of the state-change rows was changed. This will tell the decoder how to tell if the binary of the data has any possible errors and possible corrections. Because we are not directly working with only binary, we can use the determiner slots to drop almost any data, e.g.: the total byte size of the end datagraph, encoded hard-error-detection markers, trinary computational actions, entangled data identifiers, entangled API markers or nearly anything else with Data-Wave H-APIs.
      The most common way to parity check is the (7,4) Hamming code method of checking specific row associations to cross-check the entirety of the datagraph. (7,4) Hamming code looks at 2 rows/columns at a time per parity check slot. The 1st parity slot checks columns 2 & 4; place a 0 or 1 in D1 to ensure this check has an even number of ones. The 2nd parity slot checks columns 3 & 4; place a 0 or 1 in D2 to ensure this check has an even number of ones. The 3rd parity slot checks rows 2 & 4; place a 0 or 1 in D3 to ensure this check has an even number of ones. The 4th parity slot checks rows 3 & 4; place a 0 or 1 in D4 to ensure this check has an even number of ones. Using trinary actions would also allow you to use a "2" in each D-slot to say that at that point, with the '2' counting as a '1' as well, there is an even number of ones in the entire graph (at that determiner point, there are a total of four determiner spots).
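      A sketch of those four determiner bits over a 4x4 grid of 0/1 values (rows and columns zero-indexed here); each D-bit is chosen so its checked region ends up with an even number of ones.

    function determinerBits(grid) {            // grid[row][col], 4x4 of 0/1
      const onesInCols = cols => grid.flatMap(r => cols.map(c => r[c]))
                                     .reduce((a, b) => a + b, 0);
      const onesInRows = rows => rows.flatMap(r => grid[r])
                                     .reduce((a, b) => a + b, 0);
      return {
        D1: onesInCols([1, 3]) % 2,   // columns 2 & 4
        D2: onesInCols([2, 3]) % 2,   // columns 3 & 4
        D3: onesInRows([1, 3]) % 2,   // rows 2 & 4
        D4: onesInRows([2, 3]) % 2    // rows 3 & 4
      };
    }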
      The alternative parity check method is the (11,1) Determinless method, which uses the D-slots for any value in any base (integers over "2", decimals, binary, trinary, or any numerical-digit representation of non-numerical objects) but does the parity checks by running any non-binary data through a Modified Futurama Theorem (MFT) [ceil|^ trunc|_( trunc|_(n*3.14)_| + z )_| % 2 ^|] and using the MFT output for the parity check. This method can still read parity checks in the method of row-association checking of the number of ones, to create dynamically-modular or ruggedly-singular checks which may include but are not limited to the following: find errors, correct errors, mutate data, layer data, lattice-fill (quantum resistance), fast verifications.
      It's the Modified Futurama Theorem that allows for dynamic determining slots, because it turns any decimal or binary representation of what we give it into a single binary bit. Placing the data as a whole input for 'n' in its decimal form, and its length as 'z', helps ensure we reliably get the same output for the same inputs. The result of the parity check goes into the null position at the end as a non-numerical character that ensures the number of ones for the parity check type is even. Every time this recording needs to happen after the initial time, we store the new 0 or 1 parity check bit as a variable instead of changing the actual recorded bit. Changing the variabled bit with each loop in the pattern gives us a "Meta-Pattern" that can be used to check for potentially falsified data. Once the parity sequence is done, the ending variable parity bit should be the same as the initial recorded parity bit. So, re-perform the initial recorded parity check to ensure the initial needed bit is the same as the final recorded bit, as follows: A+B=1; C+D=x; E+F=y; G+H=z; A+B+[...]+H=(A+B). By having this "Unknown Meta-Pattern", and because the MFT we are using is generically using the infinite-average of binary 1's from what we give it, we can ensure we have the correct corresponding data. The infinite-average is a generic way of saying "the average of everything becomes the ending result", so if the MFT looks at the total number of 1's in the binary form of what we give it, then as we give it more data we are more likely to end at 0, so one check being the sum of all the other checks has a more likely chance of being 0 than the initial first check (which is included in the sum-of-all-other-checks check). So if both are 0 we can say it's possibly formed; if both are 1 we can say it's improbably formed (not likely to be formed); and thus if the two do not align it's possibly errored.
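      A minimal sketch of the MFT parity bit as written above, assuming 'n' is already the data in decimal form and 'z' is its length.

    // ceil( trunc( trunc(n * 3.14) + z ) mod 2 ) -> a single binary bit
    function mftParityBit(n, z) {
      return Math.ceil(Math.trunc(Math.trunc(n * 3.14) + z) % 2);
    }

    mftParityBit(1011, 4);  // e.g. data "1011": n = 1011, z = 4  => 0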
      The wrapper is just about ready. We use an ABI (Application Binary Interface) to declare what type of data the wrapper application process needs. If you don't have the data for a particular section, just send any non-numerical, non-special character, like an alphabetic character. Now we mix in the final values of our header ABI, which is as follows:
    datagraph {
      Graphhash [
        avg loop cycle (numerical; decimal preferred); "x"; internal data size (numerical; decimal preferred); "e" (normal encoding) |or| "f" (compressed encoding) |or| "g" (graphed encoding); chain weight (numerical; decimal preferred); nonsensical non-numeric parity bits (alphabetical; obscure Latin characters preferred)
      ], Determiner slot 1 [
        anything
      ], Determiner slot 2 [
        anything
      ], Programming slot 1 [
        anything
      ], Determiner slot 3 [
        anything
      ], Row 1 [
        anything; anything; anything
      ], Determiner slot 4 [
        anything
      ], Row 2 [
        anything; anything; anything
      ], Programming slot 2 [
        anything
      ], Row 3 [
        anything; anything; anything
        "." nullstop
        Row 0 [
          anything
        ]
      ]
    }
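
      A rough sketch of what a filled-in wrapper might look like as a plain Javascript object; every value here is a placeholder, and a real graph would carry the mapping hash, parity results and row data produced in the earlier steps.

    const datagraph = {
      Graphhash: [3, 'x', 45, 'e', 1, 'q'],   // avg loop cycle; "x"; data size; encoding flag; chain weight; parity filler
      'Determiner slot 1': ['anything'],
      'Determiner slot 2': ['anything'],
      'Programming slot 1': ['anything'],     // mapping hash (or directory key for a subgraph)
      'Determiner slot 3': ['anything'],
      'Row 1': ['anything', 'anything', 'anything'],
      'Determiner slot 4': ['anything'],
      'Row 2': ['anything', 'anything', 'anything'],
      'Programming slot 2': ['anything'],
      'Row 3': ['anything', 'anything', 'anything'],
      '.': 'nullstop',
      'Row 0': ['anything']
    };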


      Once the wrapper is finished we can send that graph and its set of subgraphs. It doesn't matter if we send graph sets when we are ready, on the fly, or whenever & however the system or users choose to. These graphs and subgraphs are designed around keeping the data intact within the specified order, to ensure the data is reversible even if it's split into sub-sections or sub-graphed. Being able to go as far as dropping the parity bits to clean up and reduce buffer data per subsequent block of data or sub-graph only gives more options.
      Using the Modified Futurama Theorem to turn any data into a parity check opens up more potential opportunities, while at the same time that modified maths formula gives us provability without prior knowledge. These are Minimal-Knowledge Proofs for smart storage & smart-launching data-enriched applications.

Handling Complexities of Detangling



      Entangling data is easy; it's a simple one-direction mathematical formula. Being able to detangle requires a bit more thought and planning. We encode to entangle in a very specific way so we can detangle in a very generic way. This allows for direct computational entangling so everyone can have unique entangles, but the way we detangle them all runs by the same rules. Instructions within a rule can be swapped and changed to add an additional layer of decoding obfuscation. The first "rule of thumb", so to speak: we use a 17-Grid (4x4 Grid [16] + Null-Grid Slot [17th]) to keep track of the shapes the complexity will form. This is the most important rule; it allows us to take an unknown direction and formulate it into a relative direction that we can reverse. We spiral the data, always to the right, around the centered "null-slot", which keeps the shapes all aligned together. Now that we have a way to form order, let's go over the rules of concept for "Node-Based" en/detangling.

17-Grid (visual)
Row 1: 1   10   100   1000
Row 2: 2   20   200   2000
Null:           0
Row 3: 3   30   300   3000
Row 4: 4   40   400   4000


To write out a 17-grid you use this formation:
    Row1-Slot1,R1S2,R1S3,R1S4;Row2-Slot1,R2S2,R2S3,R2S4;Row3-Slot1,R3S2,R3S3,R3S4;Row4-Slot1,R4S2,R4S3,R4S4;NULL-SLOT.

When doing the maths behind this grid try:
      x*y === row*cols but when adding use this method:
        x === Row Position (x1,x2,x3,x4,[...],x[n])
        y === Column Position (y1,y2,y3,y4,[...],y[n])
         
      This works for grids greater than 17 as long as there's a null slot dead-center in the graph. That forcefully offsets some objects by their individual complexities, so the more complex the object, the higher the chance it will auto-reset to the null-slot, helping us force order without prior knowledge.
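
      As a small illustration of the written-out formation above, a 17-grid can be serialized from a 4x4 array plus a separate null-slot value (the values here are just the Forward-Down identifiers):

    function write17Grid(rows, nullSlot) {
      return rows.map(row => row.join(',')).join(';') + ';' + nullSlot + '.';
    }

    write17Grid(
      [[1, 10, 100, 1000], [2, 20, 200, 2000], [3, 30, 300, 3000], [4, 40, 400, 4000]],
      0
    );
    // => "1,10,100,1000;2,20,200,2000;3,30,300,3000;4,40,400,4000;0."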


    17-Grid Rules



        Use Binary-Grid identifiers for data-mapping (0,1,10,11,100,101,110,[...]1110,1111,10000)
        Use Forward-Down identifiers for complexity-mapping (1,10,100,1000,2,20,200,[...],400,4000,0)
        Use sub-string identifiers for simplistic complexities:
          + - Either on left of Null or inside Null (use Null-complexity identifiers for determining further)
          - - Right of Null or no Null present at this spot (use Null-complexity identifiers for determining further)
          / - Complex Null-Block (Null-complexity Identifier, Orientation 1)
            n/1 || 0/1 (top [black/white] bottom)
            n/0 || 1/0 (top [white/black] bottom)
          \ - Complex Null-Block (Null-complexity Identifier, Orientation 2)
            n\1 || 0\1 (bottom [white/black] top)
            n\0 || 1\0 (bottom [black/white] top)
          * - Vertical Multi-Block, always use in Top*Bottom*Lower[*...] formation
          : - Vertical Multi-Block, always use in Top:Bottom:Lower[:...] formation
        Use decimal values as APIs:
          0 - [NO DECIMAL NULL] Space Object (empty)
          0.01 - Center object with objects on both sides
          0.001 - Center object value of 0
          0.1 - Center object value of 1
          0.2 - Center object value of 2
          0.3 - Center object value of 3
          0.4 - Center object value of 4
          0.5 - Center object value of 5
       Treat each value after "0." in the manner seen, +"01" for center object value of 0, +"1[...]" for center object value of 1 and beyond. This creates a dynamic way to always give extra info on objects during detangling.
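
      A small sketch of looking up the decimal-value APIs listed above; the table is copied directly from the list, and unlisted codes are left to the closing note's rule rather than guessed here.

    const DECIMAL_API = {
      '0':     'space object (empty)',
      '0.01':  'center object with objects on both sides',
      '0.001': 'center object value of 0',
      '0.1':   'center object value of 1',
      '0.2':   'center object value of 2',
      '0.3':   'center object value of 3',
      '0.4':   'center object value of 4',
      '0.5':   'center object value of 5'
    };

    function decodeDecimalApi(code) {
      return DECIMAL_API[code] ?? null;   // unlisted codes follow the note above
    }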


      Use the known complex objects mapping before venturing into dynamically defined objects:
          11|22|330|4400 - Doubled Same Digits are side-by-side duo x-axis shapes (box-box), trailing zeros are right-shift positions
          3|5|7|30|500|7000 - top-down adding of the box-values gives duo y-axis boxes, trailing zeros are right shift positions
          111|222|3330|4440 - Triple x-axis shape (box-box-box), trailing zeros are right shift positions
          6|90|600|9000 - Top-down adding of the box-values gives triple y-axis boxes, trailing zeros are right shift positions
          50.1 - Unique ID for hard-left trio (duo y-axis shape + Null slot-box), change value behind decimal if needed for new-unique null-slot object value, this is a force centered object
          -500.1 - Unique ID for hard-right trio (Null slot-box + duo y-axis shape), change value behind decimal if needed for new-unique null-slot object value, this is a force centered object
          220.1 - Unique ID for hard-top trio (duo x-axis shape [box-box] on-top of Null slot-box), change value behind decimal if needed for new-unique null-slot object value, this is a force centered object
          -330.1 - Unique ID for hard-bottom trio (Null slot-box on-top of duo x-axis shape [box-box]), change value behind decimal if needed for new-unique null-slot object value, this is a force centered object
          770 - Unique ID for duo stacked side-by-side "square" shaped
          55.01 - Unique ID for Sandwiched center object with duo stacked shapes (5-sided dice shape), do not change the decimal, this shape should be unique enough to determine front-to-end during detangling
          Some instructables may be morphed by the encoder/decoder to remove instructable collisions, i.e., two characters having the same Steph & Michelle codes. This is done by adding 5 (mathematically) to the left of the "." (or end of code); for example, if the algo sees two different hits on -500.1, the second Kimmy code with the same Steph & Michelle code will go from -500.1 to -505.1 without changing the version number nor the extractable instructable. This should be done by the encoder & decoder automatically, without question. This is an anti-user-error feature.
      Trailing zeros always show right displacement (if an object is shifted 2 slots to the right side of the grid, 2 zeros should trail the left side of the decimal in the object code)
      If reading L2R (left-2-right): Place beside another (ie: 111 or 222)
      If reading T2B (top-2-bottom): add top-down rows and place L2R beside another (ie: 6 or 9 or 619 or 770)



[check the Full Tech PvtPpr (private paper) for "Breaking Everything Down" & "Building Everything Back" sections]

Finalizing the Point [CONCLUDE]



      The process does seem to work and can perform the various actions, i.e.: superpositioning text on the fly, recording state changes, parity checking using MFT formulas, decodable graph output. The idea, at least, is a success. With more tweaking & testing, more possibilities should open up later on, but at the moment we are able to entangle and detangle some outputs. There are still errors in the process of being fixed, and some of the conceptual ideas are still being figured out for implementation in Javascript (vanilla).
      There are still a few questions left to be answered, but all the questions we directly wanted to know were answered: "Do we need to know what we are typing? [no]", "Do we need to send this data? [not now]", "Can something represent data instead? [yes]", "Can this be done on the fly? [yes]". We also made a few more discoveries along the way, for example: Can we send near-dataless data? Yes! Can we send the data at will? Yes! Can we interlace codes with our near-dataless data? Yes! Can we decode our encoding & every time? Yes & not yet every time. Can we detect, find & correct multiple errors? Debatable. Can we have single-byte parity checks? YES! Can we have mathematical-only parity checks? YES!
      To end this on a high note: yes, my hypothesis seems to be correct, or at least loosely verified under the best of conditions. Encoding does seem to be quick enough, even with a complex encoding function, to handle on-the-fly encoding for most devices, yet is potentially strong enough to be used as supplemental or redundant data storage. Whether or not the proposed modularity creates more trustlessness is still to be proven. I honestly do believe this could be a new storage design to loosen what data is being stored and where. Working with the live example and test-beds will get us to a fully functioning version of this concept as a single webpage. The parity check demo and basic encoder demo are within the files here on GitHub, as the "testing version" or last update of the active version being worked on.


Special thanks to Jake La`Doge for assisting with the self-correcting methods and directing me to the "Futurama Theorem", which ended up being the glue to getting this concept system to work dynamically.

Special thanks to The Mota Club (on Telegram) for helping with motivation and spelling corrections for this paper and concept.


Note:: There may be spelling errors still.