Character-Cartridges-Embodied-Identity

POC for Sensemaking Systems with Emotional & Anthropomorphic Traits - and synthetic identity

New links

May 17th Towards a Conscious AI: A Computer Architecture Inspired by Neuroscience https://eecs.berkeley.edu/turing-colloquium/schedule/blum

May 17 - from the Google AI blog on Meena: "The sensibleness of a chatbot is the fraction of responses labeled 'sensible', and specificity is the fraction of responses that are marked 'specific'. The average of these two is the SSA score. The results below demonstrate that Meena does much better than existing state-of-the-art chatbots by large margins in terms of SSA scores, and is closing the gap with human performance." https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html and https://arxiv.org/pdf/2001.09977.pdf

May 12th release! Announcing ML-Agents Unity Package v1.0! ML-Agents is an open-source project that enables games and simulations to serve as environments for training intelligent agents. It includes a C# SDK to set up a scene and define the agents within it, and a state-of-the-art machine learning library to train agents for 2D, 3D, and VR/AR environments. https://blogs.unity3d.com/2020/05/12/announcing-ml-agents-unity-package-v1-0/

May 8, 2020 https://openai.com/blog/emergent-tool-use/ In terms of observing autonomous or semi-autonomous agents, our character-cartridge-infused agents could be observed in a similar environment, but instead of hide-and-seek for goal seeking, there's something more around curiosity or stress or cortisol (thanks for the reminder, Lucy).

Jake update video - April 28 https://www.youtube.com/watch?v=0VyLE_fG_7o&feature=youtu.be Explosions, and also a first cut of the room of observation / monitoring room

Jake update video - March 31st - Orb tests, switch throw, Regina and John/Jake https://youtu.be/IBpVZaGsWrI Avatars "animated" after switch throw (check out 1:45, 'big stretch')

Test video Feb 5 - Three toasters loading - Blue5 Red1 Yellow1 https://www.youtube.com/watch?v=UzgKLKAutXM

Jake Test Video - Feb 4 2020 A short demo of some concepts to discuss in call https://drive.google.com/open?id=13J85ERDewAGbgYWj1rneY7uiung7H61D

Prototype - Cartridges for Personality https://drive.google.com/open?id=1fhQMKUT-NH52kjPctRO0ZeSbR8H_ZoIj

Pad Thai Data driven avatars https://drive.google.com/file/d/0B3WOmvm7uBq-RXZ1dlRYZWg5cmM/view

Bubbleman Avatar https://youtu.be/fOfFrGsNwHo

User Story - Character Cartridges and a Martian Prometheus

Isolation often drives inspiration and innovation…

Isaac Newton and William Shakespeare, isolated by European plagues, created marvelous works of science and literature.

Mary Shelley, in the volcanic winter of 1816, took refuge with Lord Byron by Lake Geneva in Switzerland. She produced the story of Victor Frankenstein, inspired by Prometheus, the Titan who created humanity by blending clay with stolen god-fire.

Which brings us to YOU!

You are a Martian astronaut on a resupply spaceship headed to Mars. Your 23 crewmates are in a deep sleep for the next 6 months. Your daily jobs of checking on the ship and crew health take just a small portion of your time. Communication with Earth sucks (given the 10-minute delay), and watching movies and playing video games got stale after 2 months… so it's time for a little inspiration and innovation - Prometheus style!

The ship has a VR training simulator for Martian construction tasks. But with some tinkering, you’ve figured out how to build a VR lab to compose virtual characters.

The creation process should be fun, and the virtual characters will help provide entertainment – and possibly companionship – on the long voyage ahead…

Virtual Lab Layout

As you begin to build your virtual lab, you’ve decided a few things:

• Décor & Design - As a fan of all things 1980’s – you’re making your virtual lab with an 80’s vibe. VHS video tapes, 80’s fonts and colors, and general 80’s feel. • These Characters will not talk, at least initially. But they will move, see and ‘feel’ – and you will be able to observe the characters in action. • Character “Cartridges” will be used as building blocks for identity and personality • Types of cartridges initially will include simple things like “values” and “needs” and personality facets (like assertiveness) – but latter will explore compositions • About 50 cartridges will be available initially - but option to unlock (from special glass case) hundreds more, as the user interface evolves. • Observation Area - for early testing, you want to be able to make a bunch of characters (e.g. 2-12) put them in an area – see how different characters behave • Stimulus- the observation area needs something for interaction. Could be a ‘thing’, animal, virtual TV playing content, science experiment, or something else that would result in differing responses from variety of characters.

VR Character Build Process

• The VR user (you!) enters the virtual lab, ready to create several characters.
• You OBSERVE a couple of lifeless avatars, which appear to be dormant or sleeping.
• You see Character Cartridges (color coded) on a table in front of you - RED cartridges for VALUES, ORANGE for NEEDS, and a selection of BLUE for personality TRAITS/FACETS.
• You DECIDE which one you want to ANIMATE, by pointing the switch (or pushing a button) to select the character in focus.
• You GRAB the cartridges you want for each character and put them in the correct color-coded TOASTER / receptacle.
• After you fill up all the slots, you THROW THE BIG LEVER - animating and bringing your character to life.
• When you throw the animation lever: FLASH! A light show, a transmission of animation and/or flashing lights, and the AVATAR comes to life - eyes open, standing up straight, with the ORB of identity activating. The orb hovers in/near the chest; later, the avatar's BEHAVIOR will vary depending on the combination of cartridges. (A sketch of this lever-throw step follows this list.)
• You OBSERVE the Orb of Identity - which floats on/near the character - a representation of your character's identity, personality and traits. You also observe the character stretching, like waking from a long sleep, before stepping off the pedestal and WALKING (moving) to the "ROOM OF OBSERVATION".
• The pedestal replaces the humanoid with another sleeping, empty version, in case you want to build another (maybe with a different-colored clothing article, randomly generated, or a random name tag, so you can tell them apart - by orb and by ID tag / color of clothing).
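
A rough sketch of the lever-throw step, under the assumption that a character is simply the merged contents of its toaster slots. The `Character` shape, the `throw_lever` name, and the tag scheme are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Dict
import random

@dataclass
class Character:
    """An animated avatar; the orb of identity is just a view of these numbers."""
    tag: str                  # random tag so you can tell avatars apart
    traits: Dict[str, float]  # merged cartridge name -> strength

def throw_lever(slots: Dict[str, float]) -> Character:
    """Animate a character from the cartridges seated in the toasters.

    All slots must be filled before the big lever does anything.
    """
    if not slots:
        raise ValueError("Fill the toaster slots before throwing the lever!")
    tag = f"avatar-{random.randint(100, 999)}"  # stand-in for name tag / clothing color
    return Character(tag=tag, traits=dict(slots))

john = throw_lever({"curiosity": 0.9, "assertiveness": 0.6, "tradition": 0.2})
print(john.tag, john.traits)  # e.g. avatar-417 {'curiosity': 0.9, ...}
```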

Room of Observation – Baseline / Version 1

• After creating 1, 2 or a half dozen characters, it's time to "Observe the Observers" - see how they interact in an environment.
• The Room of Observation has less of an 80s vibe compared to the lab - at least initially.
• Maybe we're in the room with them (invisible-observer POV), or looking at them through a 1-way mirror. Either way, we observe them.
• STIMULUS - In addition to each other, the room has at least one "thing" that has meaning for the characters - and can stimulate changes in (a) cognitive-emotional state and (b) physical location in the room (the former drives the latter).
• Interpersonal - Initially, the characters don't really interact with each other, but later on they might - like in a sense of hierarchy.
• EXAMPLE 1 - Fear / Curiosity - The room has a table at one end; on the table are three items: a knife, a lit candle (fire), and a box with "open me" on the side. Cautious and incurious characters will stand at the far side of the room (fear trumps curiosity); curious and assertive souls will move towards the table. (Near term, it's simply a 'stand where you are most comfortable' equation - sketched below.)
• EXAMPLE 2 - Content Affinity - The room has four corners, each with artefacts: (i) a play pen full of dogs/puppies/kittens; (ii) science gear, tools and lab equipment; (iii) food on a table; and (iv) bamboo and rock (a quiet area).
• LOCATION OF CHARACTERS - As we change characters, we see where they go in the room - their inner workings guide their external location.

Room of Observation – Version 2 – Medium Complexity

• TBD
• Consider 'observe


Old Summary 2019

https://dreamtolearn.com/ryan/cognitivewingman/18/en

Character Cartridges - Embodied Identity

We are entering a magical convergence phase in media and technology. The next dozen years through 2030 are going to be very interesting as we begin to understand how to develop character and identity for sensemaking systems, leveraging AI.

Multiple technologies – including in mobile, AR, and deep learning - are rapidly converging to enable organizations to compose AI-powered systems only dreamt of in Sci-Fi novels and Hollywood movies.

These sensemaking (and empathetic) systems will quickly enable assistants who can play roles that include a "Cognitive Wingman" - similar to the automated intelligence seen in media: JARVIS (Iron Man); KITT (Knight Rider); HAL (2001: A Space Odyssey); Samantha (Her); TARS (Interstellar).

These systems will:
· Talk and listen
· Have identity
· Have relationships
· Be situationally aware
· Reason, understand and learn
· Understand context and remember things
· Hold state for multiple 'conversation turns'
· Behave in a manner that simulates emotional intelligence

With readily available technology, we can build alpha versions of these systems today. Mind you, many POCs are quite crappy - but it's a start, and it demonstrates feasibility. And with widely available ML tools and techniques, and our human tendency to improve on things, good stuff will happen soon.

A key component for the creation of Digital Humans (for applications extending well beyond gaming) is a sense of identity and character. Empathetic systems that embody cognitive elements need personality. The best-rendered face and eyes are still just a collection of high-resolution pixels - until we add voice, emotion, identity and soul.

Jake and Ryan and Pad Thai

https://drive.google.com/file/d/0B3WOmvm7uBq-RXZ1dlRYZWg5cmM/view In this video, a dozen humanoids are platonic representations of data/people - they march out and can be interrogated by voice.

Key embodiments of intelligence

So how do we begin to solve for that?

Well, let's begin with a brainstorm, by segmenting the things that are knowable about our acronym-heavy friends above into three big buckets. Most elements of our AI Cognitive Wingmen can tie out to being:

  1. Declared

  2. Measured

  3. Calculated

DECLARED - this one is easy. What's the name of the "other"? Is it in physical or virtual form? What's the unique identifier, if one exists? What is the stated purpose? What are the ethical and moral guardrails?

MEASURED - measured elements are a little more complicated, but they include components that are perceivable or measurable. Low battery? Riding in a car? Damaged? Lost? Perceiving joy or frustration in the environment from humans? Sensing a command to behave a certain way?

CALCULATED / INFERRED - if the system has been programmed to convey emotion, is it happy or sad (based on stimulus and interaction)? If the character is learning and evolving, what type of character is it? Introvert/extrovert? Confident/submissive? Rude/polite?

In some cases, behavior-changing variables (extroversion, curiosity, humor) may be declared - and might be modified through an admin-level declaration. In other cases, they may be programmatically inferred based on the system's experience.
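
As a sketch, the three buckets map naturally onto fields of a single record - declared fields set up front, measured fields refreshed from sensors, and inferred traits accumulated over time. The field names here are illustrative, not a spec:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class WingmanIdentity:
    # DECLARED - stated up front; some fields admin-editable later
    name: str
    purpose: str
    guardrails: List[str]                 # ethical / moral constraints
    uid: Optional[str] = None             # unique identifier, if one exists

    # MEASURED - read from sensors / the environment at runtime
    battery_low: bool = False
    riding_in_car: bool = False
    perceived_user_mood: Optional[str] = None   # e.g. "joy", "frustration"

    # CALCULATED / INFERRED - derived from the system's experience
    inferred_traits: Dict[str, float] = field(default_factory=dict)  # e.g. {"extroversion": 0.7}
```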

I'm not sure if this method of organizing is the best - but it's a place to start.

Encapsulation

I grew up in the late 70s and early 80s. Stranger Things indeed. At the time, the state of the art for electronic tech (when not playing D&D) was Atari, Intellivision and ColecoVision. The beauty of Atari and similar systems is that once you decided on the game, you could grab the physical game cartridge and slam it into the console, then flip on the system. 80% of the time it would fire up; the other 20% you'd need to re-seat the cartridge (jiggle and retry), which usually worked.

Now, as we approach 2030, might we learn something from the 1980s in terms of loading one (or multiple) cartridges into the system? Will they be fully baked characters, or character components? Who will build the characters? How will the identities evolve?

It's going to be a fun decade - I hope my generation can create something as engaging as the developers and engineers of the 1970s/80s did.


PART 2 - Proof of Concept - The Observers

https://dreamtolearn.com/internal/doc-asset/EJ3GFL3OYKWZQ3CNB03JSD911/IMG_9517.jpg

If we imagine an experiment:

Three different "Character Types" are created from either (1a) a Character Template archetype, or (1b) a bootstrap from a real person or blend of people (e.g. JFK plus Ronald Reagan).

The CHARACTERS (2) can be thought of as "Observing Bots" - the watchers. No chat, no dialog required, but they do watch. They have emotional states (3) that are impacted by the Data/Traffic/Chatter/Context of the Situation (4), and differing REACTIONS (8) in how they see the world - different lenses.

The OBSERVABLE world - the data - could be call center traffic, Twitter streams, or a Twitch channel in esports: unstructured observable data that is sliced along the time domain (6) and analyzed (7) using standard and/or custom tools like NLC, NLU, NLP tone extraction, and/or ML/DL filters.

The INTERPRETER considers the information in front of it from (7), along with the Character Type (2); the Emotional State at a moment in time (3) (e.g. is the character already upset or angry?); and the Context (4) - e.g. is it appropriate to use profanity in the workplace (no), or with friends at the pub (perhaps yes)?

The SYSTEM SUM of signals will result in some feedback on Emotional State (3), and MIGHT result - near term - in simply setting a flag if a TRIGGER (10) is reached.
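
A minimal sketch of the observer loop described above, assuming the analysis step (7) has already reduced the raw data to named signal scores. The decay rate, threshold, and per-signal lens are all made-up numbers:

```python
from dataclasses import dataclass, field
from typing import Dict

TRIGGER_THRESHOLD = 0.8  # invented value for the TRIGGER (10)

@dataclass
class Observer:
    """One watching bot: a character type (2) plus mutable emotional state (3)."""
    character_type: str
    arousal: float = 0.2                                   # rolling emotional-state scalar, 0..1
    lens: Dict[str, float] = field(default_factory=dict)   # per-signal sensitivity (8)

    def interpret(self, signals: Dict[str, float], context: float = 1.0) -> bool:
        """Sum the analyzed signals (7) through this character's lens, fold the
        result into the emotional state, and return True if the trigger flag is set."""
        weighted = sum(self.lens.get(name, 0.5) * value for name, value in signals.items())
        self.arousal = min(1.0, 0.7 * self.arousal + 0.3 * context * weighted)
        return self.arousal >= TRIGGER_THRESHOLD

grump = Observer("irritable", arousal=0.6, lens={"profanity": 1.0, "anger_tone": 0.9})
print(grump.interpret({"profanity": 0.8, "anger_tone": 0.7}))  # True: arousal ~0.85
```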

====

Part 3 - Composing Characters – Public Blog Composition (2015-2018) - Nuggets from https://dreamtolearn.com/ryan/cognitivewingman

Evolution of Storytelling: Augmented Reality, Virtual Reality & First/Shift Person - Storytelling is evolving. Within media and entertainment, the number of distribution vectors continues to grow, as do the differing forms of story experiences: non-linear choose-your-own-adventure stories on Netflix; augmented reality Star Wars stormtroopers on iPhones; virtual reality immersive games; and first-person and "shift person" stories. The sector is experiencing a period of punctuated evolution.

Foundation: Artificial Intelligence & Cognitive Computing - Cognitive Test Kitchen - Rapid advances in Cognitive Computing and Artificial Intelligence are accelerating innovation in consumer goods, enterprise, and entertainment. Using the metaphor of a well-equipped 'test kitchen', we discuss how the technology ingredients can be used for exploration and innovation - developing recipes to move from cupcakes to wedding cakes.

Skills Building - Expanding Studio Capabilities through 2050 - Agencies and studios wishing to compete in a rapidly evolving "new-media" space will need to build capabilities and culture to foster evolution. The type and composition of talent required to deliver non-traditional media in 2050 will differ greatly from present day. Storytellers will need to expand skills, create connections to technologists, and leverage technology without losing the core artistic and storytelling elements. In this section we explore 'what might be' by examining early adopters - and lessons learned from successes & failures.

Digital Humans and Digital Assistants - An Evolution of Technology & Ideas - Widely adopted conversational agents, such as Amazon's Alexa, have set the stage for more sophisticated Digital Humans and Digital Assistants. What was once pure science fiction (KITT, HAL, Ash, JARVIS, and TARS) now seems closer than ever. As the space matures to a second-generation "Cognitive Wingman", we explore questions such as: What will they look and sound like? How will they emote and behave? How much access to personal information should they have?

Architecting Empathetic Systems - Empathy & Emotional Intelligence - We explore the degree to which identity, empathy and emotional intelligence can support use cases and enhance a user's experience. Signal-extraction services such as tone and emotion analyzers, natural language understanding, custom Natural Language Classification models, and sentiment analysis, alongside standardized personality model and type mapping, can enable systems to adjust to the emotion of the user - and, in some cases, emulate emotions and emotional responses.

Characters and Content - New Worlds and New Channels - Studios and media conglomerates own characters worth billions of dollars. For example, the Marvel Cinematic Universe (MCU), purchased by Disney in 2009 for $4B, has grossed more than $11B at the box office. We explore how evolving technology and media vectors are potential opportunities for studios' character assets to unlock potential - from both user-experience and financial aspects. Is having your own augmented reality "Jarvis" as a cognitive wingman an appealing value proposition? If so, how valuable, and why?

Stepping Stones to a Master Composer
• Observe the Observing Agents (Watching the Watchers)
• Verbal Summary of Character using NLC, NLU and clustering to find the best approximation of
• Methods
  o Declared
  o Measured
  o Calculated

Bootstrapping Characters

  • Reference well established TYPES (widely understood mental models)

Arc of Believability - Character Cartridges

  1. Dumb chat bot. No memory, no 'personality'. Transactional, one-size-fits-all.
  2. Basic personality (hard-wired, some differing behaviour or voice).
  3. Wingman / Sidekick (mechanical) - KITT, Jarvis, HAL - embodied identity, but the relationship is clear: this is a robot identity with personality (non-human is OK).
  4. Total Believability (Turing-esque) - permitting suspension of disbelief.

Ingredients Exist

Applications / Use Cases

• Media, Entertainment, Gaming & Fantasy - Sophisticated VR allows users to make-believe. Multiplayer, massive communities; realistic, exciting, and immersive. Value drivers of modern cinema, plus player immersion inside plots - which will include adult entertainment. Bend physics, time & space in a Holodeck and 'virtual worlds'. Media and entertainment (AR/VR/XR) - populating characters in virtual environments.
• Decision Support - Help humans summon and engage data to help with decisions. Consumer: high-value, feature-rich options (home decorating, automobiles) help buyers compare & understand (see) options. ERP / Strategic: executives with data-on-demand, verbal command & control, BI/ERP integrations. Shared visualizations.
• Cognitive Extenders - Help executives and innovators reduce cognitive load and extend cognitive range. Better reasoning, recall, decision support, social navigation & connection making. Context-aware information augmentation. Instant context-aware data recall and visualization for decision support. Collaboration catalyst. LEARNING & COGNITIVE EXTENDERS -> helping create connections and amplify abilities (Loci). COMPREHEND & CLARIFY CONTENT -> helping to surface signal and distill information.
• Cognitive Wingman - Jarvis, KITT, HAL. Sensemaking systems understand context in order to help. Use cases include autism, eldercare, Alzheimer's & PTSD. The Cognitive Wingman is a human-assistive AI/ADA buddy embedded inside an AR headset - microphones & cameras enable sensemaking, and AR projection & audio guide - or guard. Can also be used for moment-recall by therapists and caregivers.
• Expertise Projection - Amplify and project scarce expertise. Highly skilled medical specialists projecting expertise 2,000 miles away to nurse practitioners who touch patients. Industrial: leverage expert engineers at distance to help low-skilled workers repair or deploy complex assets. Hands-free. Information overlay.
• Knowledge Map & Recall - Dark data / data exhaust. Enterprises are drowning in data. Knowledge & expertise fuel continuing innovation and digital transformation. Workers are retiring, taking key knowledge with them. AR enables knowledge capture & recall across time/space. Verbal command/control, visual delivery. Leverage spatial memory. Neural prosthetics.
• Unified Communications - The final destination for UC? As close to being present without actually being present. Project a remote attendee into an empty seat at a board meeting 3,000 miles away. Re-watch 2-year-old meetings. Look into the eyes & face of a job applicant. AR for UC 3.0 enables human communications at distance.
• Infrastructure - AR enables engineers to see into, and project onto, complex and/or aging infrastructure assets to make the best use of data, in the field, in real time. Touches Digital Twin; Decision Support; Knowledge Mapping and recall, to enable the AR-equipped user. Digital Twin / Industrial - a Digital Twin is a virtual/digital representation of a physical entity or system: a living model that evolves over time and includes structured and unstructured data. IoT, edge appliances and predictive analytics.
• Neural Adaptive - (Speculative) NLU-powered context gathering / sensemaking. Emotion and eye tracking. Neural network & deep learning powered systems to recognize patterns from biometric signals (EEG/fMRI). Education optimization. AR content & agents serving as a baseline reference for neural-adaptive AR systems.
• Education - AR opens up new ways for children and adults to interact with, consume and retain knowledge in the most efficient and effective way - for each person. Customization; interactivity; flexibility; and leveraging the spatial and visual components of AR for learners who most benefit from those methods. Key elements of the solution: 1. Augments known educational best practices (learning outcomes, learning pathways) with insights gained from ML/DL analysis combined with traditional data science; 2. Leverages dynamic segmentation and clustering (cohorts, archetypes); 3. Uses deep learning to surface key features in natural language and the knowledge set / ontology; 4. Uses machine learning (e.g. Random Forest) to surface key features in the learning path; 5. Applies Natural Language Understanding for signal extraction from students & teachers; 6. Interacts - provides natural language and visual interactions to exchange information, including but not limited to Augmented Reality, Virtual Reality and Digital Humans; 7. Learns - evolves with the content, learners and teachers to improve over time.

Composing Agents - Four Corners of Character

  1. KNOWLEDGE / CORPUS (Brains) - Knowledge. This is actually the easy part: creating a system that can tap public internet knowledge, then aggregate, curate and disseminate data, information, knowledge and wisdom (DIKW). The avatar needs to know stuff - so this knowledge needs to be available.
  2. VR / VISUAL / AUDIO (Eyes and Ears) - Sensory. In this case I'm assuming a headset worn to deliver AR or VR photons into eyeballs - but it could be a flat screen or an immersive room. The avatar needs to be seen and heard (and to also see and hear). A beautifully rendered AV piece is essential, but it also requires the other ingredients.
  3. DIALOG & CONTEXT (Story & Script) - Experience. Whether it's a 10-second interaction to discuss a shopping list or a multi-decade relationship, there is a story arc to the relationship. Dialog, scripts, and flow are needed. Done well, they will produce what seems to be emotional intelligence - and moments of "Aha" (serotonin shots).
  4. AVATAR (Heart and Soul) - Authenticity. Relationship. Empathy. When the other ingredients are composed, this is where the magic happens - when the eggs, flour and sugar become a wedding cake. It's where the user feels there is an "other" being interacted with: an "I believe" moment sufficient to overcome periodic trespasses and errors of logic. A system that can remember, hold state, know context and react with a reasonable level of emotional intelligence. (A toy composition of these four corners is sketched after this list.)
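
As a toy sketch of the four corners composed into one agent - with canned facts standing in for the corpus, a list standing in for dialog state, and a mood string standing in for the avatar's emotional layer:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Agent:
    """The four corners composed into one agent; every part is a toy stand-in."""
    corpus: Dict[str, str]                            # 1. KNOWLEDGE - canned facts
    turns: List[str] = field(default_factory=list)    # 3. DIALOG - state across turns
    mood: str = "neutral"                             # 4. AVATAR - emotional layer

    def respond(self, utterance: str) -> str:
        """One conversational turn that touches every corner."""
        fact = self.corpus.get(utterance.lower(), "I don't know that yet.")
        self.turns.append(utterance)                  # remember, hold state
        return f"[{self.mood}] {fact}"                # 2. SENSORY corner would render/speak this

agent = Agent(corpus={"what's on the shopping list?": "Milk, eggs, flour, sugar."})
print(agent.respond("What's on the shopping list?"))  # [neutral] Milk, eggs, flour, sugar.
```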

Characters in Non-Linear Storytelling

• Story and Narrative Generation
• Scene and Event Auto-creation
• Game Level Spontaneous Generation
• Game Difficulty Auto-Tuning
• Personalization & Segmentation

KNA - Knowledge Nexus Arrays are:

• Universal
• Objective (consensus)
• Portable / Shareable
• Efficient

KNA - Knowledge Nexus Arrays should:

· Talk and listen
· Have identity
· Have relationships
· Be situationally aware
· Reason, understand and learn
· Understand context and remember things
· Hold state for multiple 'conversation turns'
· Behave in a manner that simulates emotional intelligence

Voice and Speech - how the agent speaks and conveys emotional state and tone, and conversely, the ability to understand a human's tone of voice (in addition to the transcript).

Theme Parks

  • "Animate" Characters
    o Conversations
    o Continuity
    o Emotion & Identity
  • Content
    o Stories. Moments. Connections. Frameworks for 'day-long interactions'
  • Personalization
    o Sensemaking systems "know you"
    o Flavors of people
    o Role playing and fantasy. Imagine.
  • Brand Amplification
    o Surface more characters (e.g. MCU)
    o Monetization options
    o Venue / crowd management
    o Dynamic load balancing

Storytelling for Knowledge Transfer (and Character Bootstrapping?)

Appendix / Other

Six Thinking Hats (de Bono)

  1. Managing Blue – what is the subject? what are we thinking about? what is the goal? Can look at the big picture.
  2. Information White – considering purely what information is available, what are the facts?
  3. Emotions Red – intuitive or instinctive gut reactions or statements of emotional feeling (but not any justification).
  4. Discernment Black – logic applied to identifying reasons to be cautious and conservative. Practical, realistic.
  5. Optimistic response Yellow – logic applied to identifying benefits, seeking harmony. Sees brighter side of situations.
  6. Creativity Green – statements of provocation and investigation, seeing where a thought goes. Thinks creatively, outside the box.
