
New Paradigm: Point at things PAT and they do their IoT thing securely (turn on/ play music, feed dog) #2

Open
konacurrents opened this issue Dec 23, 2023 · 6 comments



konacurrents commented Dec 23, 2023

The Semantic Marker™ when combined with SMART - the Semantic Marker™ Augmented Reality of Things, supports a new programming paradigm:

Pointing at things to securely invoke their functionality

PAT = Point at Things

This includes scanning and searching (i.e., scanning in situ in the physical world, or searching among virtual items). The 2D optical vision marker - the Semantic Marker™ - is just the tool: precise naming using an image.

And these don't involve a special language or the spoken word. For example, you walk into a room with hundreds of lights. What is the voice command for turning on the 50th light, and which one is the 50th? How is the naming convention arranged (column- or row-major, big- or little-endian, etc.)?

Now if a user could somehow just point at the desired item, this 50th light, they wouldn't need to know the naming convention: just turn on the one I'm pointing at. Much like the physical light switches of old, a direct connection to the light is made (once the appropriate light switch for the 50th light is found).

  1. Aside from putting an image on every item, a printout may be available that shows all the switches and their corresponding Semantic Marker™. Pointing at the 50th light switch is then really pointing at the printout (or an on-line page) denoting that 50th light.
  2. Using the context of multiple optical markers is also valuable. This means that if two markers are seen, describing that the left or right marker is the more valuable one, or combining them to unlock a key, becomes possible.
  3. Using internal memory of previous scanning (pointing) events. This means scanning a mode optical marker, and then scanning another generic marker. The mode is used to instantiate that generic Semantic Marker™.
  4. Security is vital. Bottom line: the username and password should be hidden (even from a tool like Wireshark that decodes internet messages). SMART Buttons usually require end users to instantiate them with their own credentials, the username and password. But in a friendly environment (no intruders, etc.) new SMART Buttons can be created that include the instantiated values (e.g., the username and password).
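Item 3 above (memory of previous scans) can be sketched as a tiny stateful scanner. This is only an illustration; the `PATScanner` class, the `mode:` payload prefix, and the marker names are all invented here, not part of the Semantic Marker API:

```python
# Illustrative sketch: a scanner that remembers a previously scanned
# "mode" marker and uses it to instantiate the next generic marker.
# All names and payload formats below are invented for this example.

class PATScanner:
    def __init__(self):
        self.mode = None  # last mode marker seen, if any

    def scan(self, marker: str):
        """Return the action for a scanned marker payload, or None."""
        if marker.startswith("mode:"):
            # A mode marker only updates state; nothing is invoked yet.
            self.mode = marker.split(":", 1)[1]
            return None
        if self.mode is not None:
            # A generic marker is instantiated in the remembered mode.
            return f"{self.mode}/{marker}"
        return marker

scanner = PATScanner()
scanner.scan("mode:lights")      # sets the mode, invokes nothing
print(scanner.scan("switch50"))  # -> lights/switch50
```

Scanning a different mode marker later simply replaces the remembered mode, so the same generic marker can invoke different behavior depending on what was pointed at first.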

Important

Because of the unique Language Addressability of SMART, it enables a powerful inheritance capability for extensible and adaptable applications, all based on pictorial images. Security is supported since not all the secrets are out in the open - such as encoded in the optical marker. Instead, additional parameters are used to instantiate the SMART button.
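A minimal sketch of this instantiation idea, with invented names: the marker encodes only a public template, and the secret parameters are supplied locally at instantiation time, so they are never printed in the marker itself:

```python
# Illustrative sketch (invented class and field names): the optical
# marker carries only public data; credentials come from the user.

from dataclasses import dataclass

@dataclass
class SmartButtonTemplate:
    # Everything here is safe to encode in the optical marker.
    device: str
    command: str

@dataclass
class SmartButton:
    template: SmartButtonTemplate
    username: str  # supplied by the end user, not by the marker
    password: str

    def invoke(self):
        t = self.template
        # The secrets travel only over the (encrypted) transport.
        return {"device": t.device, "command": t.command,
                "auth": (self.username, self.password)}

template = SmartButtonTemplate(device="light50", command="on")
button = SmartButton(template, username="alice", password="secret")
print(button.invoke()["device"])  # -> light50
```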

Messaging is the key

SMART relies on a robust and extensible internet messaging capability. These are described throughout the Semantic Marker API document. This includes the following:

  • Message Language Design (JSON, Barklets, BNF, IDL, SQL)
  • Message Transport Capabilities (MQTT, WebSocket, RPC, DDS, CORBA, Telegram, HTTP)
  • Message security (transport encryption, passwords, tokens, user accounts, namespaces)
  • Application hooks for support of messages (e.g., publish & subscribe)
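The publish & subscribe hook in the last bullet can be illustrated with a minimal in-process dispatcher. No real broker is involved; the `MessageBus` API and the topic names are invented here, merely in the style of MQTT:

```python
# Illustrative in-process publish/subscribe dispatcher (invented API,
# MQTT-style topics, no network transport).

from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register an application hook for a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every hook on that topic.
        for callback in self._subscribers[topic]:
            callback(topic, payload)

bus = MessageBus()
received = []
bus.subscribe("home/lights/50", lambda t, p: received.append(p))
bus.publish("home/lights/50", "turn on")
print(received)  # -> ['turn on']
```

A real deployment would put a transport (MQTT, WebSocket, etc.) and the security layer from the bullets above between `publish` and the subscribers.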

References to real-world examples

The concepts described above can be found in everyday use:

  • At the Costco store, they scan a special barcode (an optical marker) at the cash register. That code denotes that the next set of items scanned (and added to the register) is on the bottom level of the user's cart. They might also record that 17 items are in one area of the cart. These are used to verify that a customer's cart is the same at the exit as it was at the cash register. Reading the receipt tells the verifier what to look for (the bottom 17 items). (Item 3 above.)
  • During college football, teams are not allowed to radio information to the quarterback (unlike in the NFL). To convey plays for the quarterback to call, most teams deploy a couple of players on the sidelines who hold up signs encoding the play. These are in code, and the hope is that the other team cannot decipher the codes (visible to all) and deduce the resulting play. To add to the confusion, two coded signs are shown, and the quarterback was told ahead of time that, this time, the left optical marker is the one to use. (Item 2 above, where the positional relationship of markers is important.)
  • TV technology always has some kind of guide or index. The user can look for the weather station and find out its channel (on broadcast TV) or which stream (with streaming). This is similar to Item 1 above. But there is an important difference:

Tip

Other than a known common entity (and its avatar) across services, such as AccuWeather for the weather station, how else is a user to find the weather station? Aside from reading words throughout the guide, what if there were images that could be used? These could be similar to European signage, or other common images (a stop sign, etc.). Or there could be actual photographic images that denote the concept of the weather station - perhaps an image with lightning or rain, plus a question mark. The Semantic Marker™ provides for these human-recognized images, or Photo Avatars.

@konacurrents
Owner Author

Perfect mapping

Versus almost every other paradigm today, especially AI.

There is no partial text matching, no wrong speech recognition, no wrong face or object recognition.

attack at dawn

vs

attract at dawn, Monday

Perfect recognition

The Semantic Marker™️ optical visual marker is perfectly recognized, or nothing is recognized; no partial results.

Text links are exact but limited

Current hyperlinks to other endpoints have been useful - and they are exact. Both the tool that references the link and the tool that is invoked when traveling to the marked location are exact.

But the calling tool has to have infrastructure to support describing this link. Thus web pages have href, word processors have hyperlink metadata, PDFs have hyperlinks, etc.

There is no text recognition, as these tools have a special hyperlink design (one that hides the metadata, the hyperlink, but that, if touched, can usually invoke that link).

Outside of 1988 home-grown hypermedia - without an indirect mediator - all links will go to the specified endpoint.

SMART buttons of 2023 support that customized indirection. Be it a changing web page address, a presentation language, or a full IoT messaging capability, it requires this indirect mediator.
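A sketch of that indirect mediator, with invented names and URLs: the printed marker carries a stable name, and the mediator maps it to whatever endpoint is current, so the binding can change without reprinting the marker:

```python
# Illustrative indirection table: stable marker name -> current endpoint.
# Class name, marker name, and URLs are all hypothetical.

class Mediator:
    def __init__(self):
        self._bindings = {}

    def bind(self, marker_name, endpoint):
        # Rebinding later does not invalidate printed markers.
        self._bindings[marker_name] = endpoint

    def resolve(self, marker_name):
        return self._bindings.get(marker_name)

mediator = Mediator()
mediator.bind("feed-dog", "https://example.com/v1/feeder")
# Later the endpoint moves; the printed marker stays valid:
mediator.bind("feed-dog", "https://example.com/v2/feeder")
print(mediator.resolve("feed-dog"))  # -> https://example.com/v2/feeder
```

Without the mediator, the marker would have to encode the endpoint directly, and every endpoint change would require reprinting it.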

@konacurrents konacurrents changed the title New Paradigm: Point at things and they do their IoT thing securely (turn on/ play music, feed dog) New Paradigm: Point at things PAT and they do their IoT thing securely (turn on/ play music, feed dog) Dec 24, 2023
@konacurrents
Owner Author

konacurrents commented Dec 27, 2023

PAT uses a Light Saber

FullSizeRender.mov

@konacurrents
Owner Author

konacurrents commented Dec 27, 2023

Brainstorm on 3D holder of scanner with ATOM

image

Maybe with an M5 display.

image

@konacurrents
Owner Author

konacurrents commented Dec 28, 2023

Blind users

The PAT Light Saber could be used by blind (sight challenged) Semantic Marker™️ users:

  • Feel around for the Braille
  • Use PAT to scan the Semantic Marker™️ just above it
  • Listen for the beep of a successful scan
  • Or add vibration (or other sensory feedback)

@konacurrents
Owner Author

JiffySoft 017

@konacurrents
Owner Author

Issue #3 shows the 3D-printed enclosure.
