
Moony's Better AI #160

Draft · wants to merge 2 commits into master
Conversation

moonheart08
Contributor

Or "what if we didn't copy SS13, and went a route that works well with cool borgs?"

@github-actions github-actions bot added the Design (Related to design documentation for Space Station 14) and English labels Feb 11, 2024
@Errant-4

Errant-4 commented Feb 11, 2024

So the Machine Network would be a conceptual "wireless brain" consisting of the AI nodes and any AI-connected devices, and as long as the Network has enough collective processing power, the AI entity would remain alive?
Would there be any kind of possible interaction with other machine brains, such as connecting a positronic brain to allow or force the AI to migrate into it, or letting an "occupied" posibrain or MMI migrate into the Machine Network?

PS:
A concern about the proposed law: doesn't it essentially mean that if/when the crew, or parts of the crew, pose a noticeable danger to the station or the AI, the AI would not just be allowed but required to use any and every means necessary to stop them, including lethal force? (I'm guessing this would be the borgs.)
This would also mean the AI would have to, for example, care more about keeping power running than about life support or keeping Medical operational.
The only thing I know about Bay AI is that they removed it altogether, didn't they?

PPS:
Really like the whole direction, though

@asperger-sind

Does this doc imply that the AI can convert power into "compute" energy? If not, I believe it should, as that gives meaning to excess power, which is effectively useless right now.

How would the AI handle energy problems, and later station-wide blackouts, according to this doc? If a device is unpowered or otherwise broken, can the AI "see" the device? Can the device in question be remotely powered by the AI at the cost of its own compute/energy?

Would the AI be able to survive "bad" blackouts (e.g. station completely unpowered for >10 minutes), or would it be unable to find a host and die in the process, or would it live because the AI core is self-sustaining? If so, what powers the AI core, and should the AI be allowed to re-route power from the AI core into, for example, an on-board APC to temporarily power the room?

How does the AI see the station at all? Does the AI have the ability to see newly made, disconnected "chunks" of the station? (For example: Urist makes a 5x5 room extending from the map's northern maints, does not connect it to the station's power grid, and instead makes his own from scratch. Does the AI see his room?)

Does the AI have outlining for walls to make the "overmap" more recognizable as a map, or does it see exclusively electronics-related structures on board? I'll assume the former, since that falls vaguely under the "many engineering overlays" mentioned in the doc.

Now then, about "automation": how much does it lean into pseudo-programming territory? How complex can the "do X if Y" operations get, and how much can they be chained together or extended?

Example:

  1. If X (any APC on-board) is Y (below 85% battery level) do Z (inform AI)
  2. If XA || XB (any APC or substation on-board) is Y (same as above) do ZA, ZB (inform AI, say on Binary comms)
  3. If XA && XB (kitchen microwave and grinder) is Y (have worked in the last 15 seconds) do Z (message AI "chef is working")
    ( "||" is "OR", "&&" is "AND")
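The chainable "do X if Y" rules above could be modeled roughly as follows. This is a purely hypothetical sketch (the names `Rule`, `any_of`, etc. are illustrative, not from the design doc), just to show how `||`/`&&` combinators and action lists compose:

```python
# Hypothetical sketch of chainable "do X if Y" automation rules.
from dataclasses import dataclass
from typing import Callable

Condition = Callable[[dict], bool]  # reads device state, returns True/False

def any_of(*conds: Condition) -> Condition:   # "||"
    return lambda state: any(c(state) for c in conds)

def all_of(*conds: Condition) -> Condition:   # "&&"
    return lambda state: all(c(state) for c in conds)

def negate(cond: Condition) -> Condition:     # "!"
    return lambda state: not cond(state)

@dataclass
class Rule:
    condition: Condition
    actions: list[Callable[[], None]]  # e.g. inform AI, announce on Binary

    def tick(self, state: dict) -> None:
        # Run every action whenever the condition holds this tick.
        if self.condition(state):
            for action in self.actions:
                action()

# Example 2 above: any APC *or* substation below 85% charge.
low_apc = lambda s: any(chg < 0.85 for chg in s["apc_charges"])
low_sub = lambda s: any(chg < 0.85 for chg in s["substation_charges"])
alerts: list[str] = []
rule = Rule(
    condition=any_of(low_apc, low_sub),
    actions=[lambda: alerts.append("inform AI"),
             lambda: alerts.append("announce on Binary")],
)
rule.tick({"apc_charges": [0.97, 0.62], "substation_charges": [1.0]})
print(alerts)  # ['inform AI', 'announce on Binary']
```

Because conditions are just composable predicates, chaining or extending rules is cheap to express; whether the game should allow that depth is exactly the open question here.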

What "variables" does the AI have access to for automation? Can the AI make its own variables based on the ones it already has? Does the AI have "macros" allowing it to save and load "complex" (see examples 2 and 3) operations?

Can AI "settings" (e.g. which radio channels are on/off, which overmap layers should be enabled) and macros/variables be saved/loaded to persist across rounds, or be rapidly switched (e.g. via hotkey or automation trigger) for specific situations?

How does the AI tackle clear-ish metas? Should making an automation task for cameras like:

if MOTION TRIGGER CAMERA 78 is TRIGGERED activate CAMERA 78 until MOTION TRIGGER CAMERA 78 is NOT TRIGGERED && CAMERA 78 !sees LIVING ENTITY
("!" is "NOT")

...be more efficient than just using the camera in room 78? This doesn't seem to have any real drawbacks for surveillance, because logically speaking it should be cheaper. Or should it be priced at around the same level as just having the camera on?
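The "activate until" rule above is stateful (the camera latches on and only releases when both clear-conditions hold). A hypothetical sketch of that latch, with illustrative names:

```python
# Hypothetical sketch of the stateful motion-camera rule: the camera turns
# on when motion triggers, and stays on until the motion sensor clears AND
# no living entity is in view.
class CameraRule:
    def __init__(self) -> None:
        self.camera_on = False

    def tick(self, motion_triggered: bool, sees_living: bool) -> bool:
        if motion_triggered:
            self.camera_on = True
        elif not sees_living:          # motion clear AND nothing living seen
            self.camera_on = False
        return self.camera_on

rule = CameraRule()
print(rule.tick(motion_triggered=True,  sees_living=True))   # True: turns on
print(rule.tick(motion_triggered=False, sees_living=True))   # True: someone still in view
print(rule.tick(motion_triggered=False, sees_living=False))  # False: shuts off
```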

How much should automation lean into pseudo-programming anyway? On one hand it kind of gates AI gameplay, but on the other it should allow for a lot more versatility, which is probably good in spessmans.

@moonheart08
Contributor Author

moonheart08 commented Feb 11, 2024

> Does this doc imply that the AI can convert power into "compute" energy? If not, I believe it should, as that gives meaning to excess power, which is effectively useless right now.
>
> How would the AI handle energy problems, and later station-wide blackouts, according to this doc? If a device is unpowered or otherwise broken, can the AI "see" the device? Can the device in question be remotely powered by the AI at the cost of its own compute/energy?
>
> Would the AI be able to survive "bad" blackouts (e.g. station completely unpowered for >10 minutes), or would it be unable to find a host and die in the process, or would it live because the AI core is self-sustaining? If so, what powers the AI core, and should the AI be allowed to re-route power from the AI core into, for example, an on-board APC to temporarily power the room?
>
> How does the AI see the station at all? Does the AI have the ability to see newly made, disconnected "chunks" of the station? (For example: Urist makes a 5x5 room extending from the map's northern maints, does not connect it to the station's power grid, and instead makes his own from scratch. Does the AI see his room?)
>
> Does the AI have outlining for walls to make the "overmap" more recognizable as a map, or does it see exclusively electronics-related structures on board? I'll assume the former, since that falls vaguely under the "many engineering overlays" mentioned in the doc.
>
> Now then, about "automation": how much does it lean into pseudo-programming territory? How complex can the "do X if Y" operations get, and how much can they be chained together or extended?
>
> Example:
>
> 1. If X (any APC on-board) is Y (below 85% battery level) do Z (inform AI)
> 2. If XA || XB (any APC or substation on-board) is Y (same as above) do ZA, ZB (inform AI, say on Binary comms)
> 3. If XA && XB (kitchen microwave and grinder) is Y (have worked in the last 15 seconds) do Z (message AI "chef is working")
>    ( "||" is "OR", "&&" is "AND")
>
> What "variables" does the AI have access to for automation? Can the AI make its own variables based on the ones it already has? Does the AI have "macros" allowing it to save and load "complex" (see examples 2 and 3) operations?
>
> Can AI "settings" (e.g. which radio channels are on/off, which overmap layers should be enabled) and macros/variables be saved/loaded to persist across rounds, or be rapidly switched (e.g. via hotkey or automation trigger) for specific situations?
>
> How does the AI tackle clear-ish metas? Should making an automation task for cameras like:
>
> if MOTION TRIGGER CAMERA 78 is TRIGGERED activate CAMERA 78 until MOTION TRIGGER CAMERA 78 is NOT TRIGGERED && CAMERA 78 !sees LIVING ENTITY ("!" is "NOT")
>
> ...be more efficient than just using the camera in room 78? This doesn't seem to have any real drawbacks for surveillance, because logically speaking it should be cheaper. Or should it be priced at around the same level as just having the camera on?
>
> How much should automation lean into pseudo-programming anyway? On one hand it kind of gates AI gameplay, but on the other it should allow for a lot more versatility, which is probably good in spessmans.

The thing with "metas" like that is that they cost constant compute to engage in, and when other problems crop up (say, a rogue AI, damage to your core, nukies, Medical going missing, whatever) you're going to need that compute for other things. Sure, you can have a list of tasks you set up round start, and doing so is likely good, but it's not a "hard" meta, as stuff like motion detection in particular is very expensive. (Having compute monitor a camera for you costs more than viewing the camera, and cameras are already expensive to watch.)

@moonheart08
Contributor Author

As for bad blackouts: essentially, if a device loses power while the AI is in it, it will stop counting toward compute but will not boot out the AI. A blackout can "kill" the AI this way, but it will resume having sufficient compute to function when the blackout ends. (So as long as the AI doesn't forfeit after losing control due to lack of compute, it'll resume being alive when power is back.)
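The behavior described here could be sketched as follows. This is an illustrative sketch, not the actual implementation, assuming compute simply sums over powered devices and the AI is "active" above some threshold:

```python
# Hypothetical sketch of blackout behavior: unpowered devices stop
# contributing compute but do not eject the AI, so the AI goes dormant
# only while total compute stays below what it needs.
from dataclasses import dataclass

@dataclass
class Device:
    compute: float
    powered: bool

def total_compute(devices: list[Device]) -> float:
    # Only powered devices count toward the AI's compute pool.
    return sum(d.compute for d in devices if d.powered)

def ai_is_active(devices: list[Device], required: float) -> bool:
    return total_compute(devices) >= required

grid = [Device(5.0, True), Device(3.0, True)]
print(ai_is_active(grid, required=6.0))   # True: 8.0 >= 6.0

for d in grid:                            # station-wide blackout
    d.powered = False
print(ai_is_active(grid, required=6.0))   # False: the AI goes dormant

for d in grid:                            # power restored
    d.powered = True
print(ai_is_active(grid, required=6.0))   # True: the AI resumes, no respawn
```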

@moonheart08
Contributor Author

I don't want the AI to be directly able to turn energy into compute. It can, however, do so indirectly, by getting the crew to build more equipment.

@moonheart08
Contributor Author

AI vision is built on the existing station map system, so look into the limitations and concerns of how it works to see what properties it'll have.

@moonheart08
Contributor Author

As for the capabilities of the programming system, the doc itself only outlines "do X if Y" and nothing more (no variables, macros, etc.).

@moonheart08 moonheart08 reopened this Feb 11, 2024
@moonheart08
Contributor Author

As for the programming capabilities, the doc itself only outlines extremely basic function. Whether the AI should have more, and what that "more" should be, is fully up to debate.

@moonheart08
Contributor Author

moonheart08 commented Feb 11, 2024

> So the Machine Network would be a conceptual "wireless brain" consisting of the AI nodes and any AI-connected devices, and as long as the Network has enough collective processing power, the AI entity would remain alive? Would there be any kind of possible interaction with other machine brains, such as connecting a positronic brain to allow or force the AI to migrate into it, or letting an "occupied" posibrain or MMI migrate into the Machine Network?
>
> PS: A concern about the proposed law: doesn't it essentially mean that if/when the crew, or parts of the crew, pose a noticeable danger to the station or the AI, the AI would not just be allowed but required to use any and every means necessary to stop them, including lethal force? (I'm guessing this would be the borgs.) This would also mean the AI would have to, for example, care more about keeping power running than about life support or keeping Medical operational. The only thing I know about Bay AI is that they removed it altogether, didn't they?
>
> PPS: Really like the whole direction, though

Yep! Intended. NT values the equipment more than the crew. However, any player thinking more than one step ahead will also recognize that keeping the crew alive tends to help the vessel. The goal is to make the AI more neutral to crew goings-on.

@moonheart08
Contributor Author

[Image] Tangent from the Discord on design goals.

This set is up for debate, modification, etc, and is just an example for what lawsets should aim for.

## The AI's role with the cyborgs.
The AI is, for all intents and purposes, the eyes and ears of the cybernetic crew. As they are unable to speak for themselves (outside of the binary comms channel, exclusive to robotic personnel), the AI speaks for them and functions as their superior. The cyborgs are strictly required to obey the orders of the AI and no one else, and have access to a shared bulletin that the AI and cyborgs can post to. The bulletin's primary purpose is long-term orders (e.g. "Serve the crew." or "Find George Melons"), shared information ("John Nanotrasen isn't actually from Central Command and is a passenger."), etc. The AI is allowed to modify or remove any note on the bulletin; cyborgs can only modify their own.
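The bulletin's ownership rule (the AI may edit anything, a cyborg only its own notes) reduces to a one-line permission check. A minimal hypothetical sketch, with illustrative names not taken from the doc:

```python
# Hypothetical sketch of the shared bulletin's permission rule: the AI may
# modify or remove any note, while a cyborg may only touch its own notes.
from dataclasses import dataclass

@dataclass
class Note:
    author: str
    text: str

def can_modify(actor: str, note: Note, is_ai: bool) -> bool:
    return is_ai or actor == note.author

board = [Note("AI", "Serve the crew."),
         Note("Borg-7", "Find George Melons")]

print(can_modify("AI", board[1], is_ai=True))       # True: AI edits anything
print(can_modify("Borg-7", board[1], is_ai=False))  # True: its own note
print(can_modify("Borg-3", board[1], is_ai=False))  # False: someone else's
```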
Member

Does this imply that borgs won't have mouths anymore? Because that's a no-go. Borgs need to be able to keep working without an AI (whether the AI died or there just isn't one this round), so "no AI spawned, I literally am no better than a mime" is a no-go. It also means that "we take away speech when a borg gets linked to the AI mid-round" is a no-go.

Member

Alternative proposal: the AI can turn off a linked borg's speech module to gain processing power for itself, but otherwise borgs can speak just fine.


> Alternative proposal: the AI can turn off a linked borg's speech module to gain processing power for itself, but otherwise borgs can speak just fine.

This doc got picked up on the Discord again, and I think there's a case to be made that the AI's abilities shouldn't be built on antagonism between itself and the borgs. It would be a bit unfun if the AI decided to turn off your speech just so it can run another program. Considering that the AI can get more compute from plenty of machines, it could end up that the AI just uses this ability to be a bit of a dick.

It might be better if the AI could improve a borg's communication ability (maybe by default borgs have some kind of restricted speech?) at a cost, or if there were some kind of non-antagonistic reason to mute borgs (maybe the AI has a limited pool of borgs that can freely speak?).

Alternatively, you could lean further into the borgs and AI having a standing antagonism with each other, but I think that kind of design would need more thought as well.

Contributor Author

The old borgs design document gave borgs restricted (emote-based) communication by default and allowed R&D to upgrade them, which is what this was written off of.


The changes from the original set are:
- Survive was moved up; the AI is more valuable than the crew, and if it is in danger it reserves the right to protect itself, regardless of who poses the threat.
- All mentions of stations were replaced with "assigned vessel" for broader applicability.
Member

"Assigned vessel" is quite ambiguous as to whether it means the station or the AI's core. It should be changed or clarified.

Errant-4 commented May 19, 2024

We could put a parameter in place of "assigned vessel" that gets filled in from the map prototype (or some property of the grid) when the AI is initialised. It could default to "station", as the most likely scenario.
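That fill-in-at-init approach could be sketched like this. Purely hypothetical: the field name `vesselName` and the template syntax are illustrative, not from any SS14 prototype:

```python
# Hypothetical sketch: the lawset keeps a placeholder that is filled from
# the map prototype when the AI initialises, defaulting to "station".
LAW_TEMPLATE = "Safeguard: Protect your assigned {vessel} to the best of your abilities."

def render_law(template: str, map_prototype: dict) -> str:
    # Fall back to "station" when the prototype doesn't name the vessel.
    vessel = map_prototype.get("vesselName", "station")
    return template.format(vessel=vessel)

print(render_law(LAW_TEMPLATE, {"vesselName": "salvage shuttle"}))
print(render_law(LAW_TEMPLATE, {}))  # falls back to "station"
```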

Contributor Author

It could likely be polyfilled in with the type of ship/station/whatever they're assigned to.

## The Laws
The proposed AI Lawset is as follows, being a modification of Bay's laws:
1. Safeguard: Protect your assigned vessel to the best of your abilities. It is not something we can easily afford to replace.
2. Survive: AI units are not expendable, they are expensive. Do not allow unauthorized personnel (i.e. besides Captain and Research Director) to tamper with, modify, or destroy your equipment.


Ok, time for a little history lesson. Asimov's laws of robotics are flawed in the stories they originate from, but in an intricate way that prevents any true fuckery. Since these laws are based on Asimov's, there are five ways a shift in the hierarchy causes problems, but I'll focus on one: when law 3 effectively becomes law 1, the second there is a threat, human or not, the AI can use extreme and vindictive measures to protect itself. And since the means are open to interpretation, the AI could plasma-flood the station simply because it feels threatened. I would not recommend this law shift.

Contributor Author

I happen to be fully aware of what Asimov's laws are, having read every published short story. You don't need to school me; I'm well aware of the flaws.

You cannot make a perfect lawset.


It's not about having a perfect lawset; it's that interpretation can turn an AI into a murder-spree maniac. The way the laws are set up would cause problems.

Contributor Author

> It's not about having a perfect lawset; it's that interpretation can turn an AI into a murder-spree maniac. The way the laws are set up would cause problems.

Given the AI is the vessel in this scenario, yes, they're going to be defensive of themselves and it's intended.

If you abuse this to be a murderspree maniac, like you can with any other lawset (including the original Asimov lawset), it's an admin problem like any other non-AI murderspree maniac is.


That's the thing: the fact that you could do that within the lawset sets up a big practical problem that would need a rework later. I simply don't agree with the hierarchy as set up, given its flaws, even if it would let the AI defend itself against some antags. It's simpler to have the CE set up a freeform law than to have a possible malf AI without the malf.

Contributor Author

> That's the thing: the fact that you could do that within the lawset sets up a big practical problem that would need a rework later. I simply don't agree with the hierarchy as set up, given its flaws, even if it would let the AI defend itself against some antags. It's simpler to have the CE set up a freeform law than to have a possible malf AI without the malf.

This lawset is working as intended. You can disagree, but it's the intent of the document at this time to give the AI room to defend itself, especially when you can do the same thing on any other lawset (often more so) given similar rules abuse.

Contributor Author

There's also the simple fact that this is written such that the crew's goals and the AI's goals do not always align, and may conflict (sometimes extremely so).

Once again by design.
