Replies: 7 comments 27 replies
-
The existence and limitations of the Oracle imply some things about AI in this universe, but nothing is firmly stated, and I think there are plans to change up the stuff around that.
-
Actually, why doesn't everything have AI in 3014? We are working on self-driving cars now, so I would assume that after a thousand years we would have good driving/piloting AI. Why don't we?
-
Can the title be amended to "AI development in-universe"? Every time this pops up in my feed, I think it's an umpteenth attempt to use real-world AI to develop this game...
-
It doesn't really explain the lore, but from a meta perspective, technology in-game generally follows a more restricted path than it probably would in real life. That is to say, technology generally doesn't get developed unless there's a reason for it to be developed, and those reasons don't arise unless there's a story reason for them to arise. In real life, societies tend to be very broad scientifically speaking, with people haring off in all directions. This tends to mean that, like a well-balanced game of Civilization (Civ 6 or other modern, tightly balanced civ-style games), everyone is always at roughly the same technological level. That's great for strategic balance and well-tuned combat challenges, but boring for telling stories, and terrible for writing good science fiction, where contrasts between civilizations are a particularly valuable tool for exploring concepts and differences.

This also ties into the idea that technological progress is not linear. Whatever progress humanity might make in the present day toward widespread, general-use AI, there's no guarantee we continue to make it. Whether humanity simply runs into a roadblock and never develops it further, or we have some kind of crisis in which we choose to utterly and permanently excise AI technology from our society, the practical result is the same: AI is not particularly advanced within humanity, and is generally viewed with mistrust. Firefly/Serenity seems to be an example of the former, whereas the Dune series is the latter. Likewise, another civilization (like the Alphas or Korath) having advanced AI is no guarantee that humanity would. From a meta perspective, our current level of AI plus another thousand years of development would result in an AI that renders human participation in the world utterly irrelevant.
That goes double for something like piloting a spaceship, where processing high volumes of information, high-speed analysis of that data, long-range physics modelling, and fast reactions are essential. In such an environment, having a human at the controls is a severe liability with little gain. To paraphrase Joker and EDI from the Mass Effect series, the only benefit a human brings to space combat is the capability of randomly making sub-optimal decisions that defy any analysis of what the best choice would be. In other words, they can succeed by being bad: making choices so bad that the opposing AI doesn't believe they'd actually do something that bad.

Unlike Mass Effect (where ship weapons generally have a low rate of fire but are high-speed and high-impact), our typical weapons are almost entirely high-volume, rapid-fire weapons with enough built-in inaccuracy that dodging, or any decision-making about a particular shot, is irrelevant: the statistical spread of shots forms a cone wide enough that you can't dodge a meaningful fraction of them unless you somehow dodge all of them. As such, making a sub-optimal choice that puts you where the AI doesn't expect you to be is useless. (It is, indeed, ironic that the very measure intended to make our game AI appear more human and less precise has the effect of devaluing human participation and making dodging and personal skill less meaningful.)

Obviously, it's pretty hard to make a game that encourages human participation when the AI is good enough that humans would just sit back and let it do its thing, so the baseline assumption is that AI is not capable of sentient-entity-level performance as a general rule, and that for whatever reason its development is avoided, suppressed, or otherwise not happening, intentionally or otherwise.
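The claim that a wide shot cone makes per-shot dodging irrelevant can be sanity-checked numerically. This is a minimal Monte Carlo sketch under assumed, made-up parameters (spread, target width, dodge distance are all hypothetical, not taken from the game's actual weapon stats): when the lateral spread of incoming fire is large relative to the target, sidestepping changes the expected number of hits only marginally.

```python
import random

def expected_hits(n_shots, spread_sd, target_halfwidth, dodge_offset, trials=20000):
    """Monte Carlo estimate of expected hits from a volley.

    Each shot lands with a Gaussian lateral error (std dev spread_sd)
    around the aim point; the target sits dodge_offset away from that
    aim point and is hit if the shot lands within target_halfwidth of it.
    """
    hits = 0
    for _ in range(trials):
        err = random.gauss(0.0, spread_sd)
        if abs(err - dodge_offset) <= target_halfwidth:
            hits += 1
    return n_shots * hits / trials

random.seed(42)
# Spread is 5x the target's half-width: holding still vs. dodging
# half a standard deviation sideways barely changes expected hits.
staying = expected_hits(100, spread_sd=10.0, target_halfwidth=2.0, dodge_offset=0.0)
dodging = expected_hits(100, spread_sd=10.0, target_halfwidth=2.0, dodge_offset=5.0)
print(f"expected hits while holding still: {staying:.1f}")
print(f"expected hits while dodging:       {dodging:.1f}")
```

With these assumed numbers, both scenarios take roughly similar damage from a 100-shot volley, which matches the argument above: against a wide enough cone, outguessing any individual shot buys you almost nothing.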
This could be something that is explored at greater length, or it could simply be left as a vague idea: most cultures have either chosen not to develop it, or had some sort of traumatic experience pushing them away from it.
-
As potentially relevant, the Android description:
Now, there are a couple of caveats here. On the one hand, this "lack of sufficient autonomy" is in regard to replacing an engineer or highly skilled crew member, not merely driving a car down a street or some other tightly defined, invariable task. The other note is that it mentions economic reasons, and that no manufacturer licensed them. They could have been miracle machines for all we know, but if no manufacturer was willing to license them, they still wouldn't be used.

Now, I'm not saying this is the case, but if a strong Luddite movement arose in which large segments of the population intentionally destroyed robots and AI wherever found (and by large, I mean something big, like >30% of the population, maybe even a majority), then no one is going to invest in AI, and robots will fall into disuse (or at least secretive use). For instance, look at the reception of AI right now. Some people love it, some people hate it, and a not-insignificant portion of the population is vehemently and aggressively opposed to it as an existential threat to jobs, careers, and even humanity's existence. And that's for what amounts to a really good chatbot. A not-insignificant number of people are fixating on the existential threat of general-purpose AI that doesn't even exist yet (and might never) to the point of ignoring all the problems and existential threats that humanity is facing right now.

So it isn't unlikely that, if someone in Endless Sky did develop a true general-purpose AI at some point in humanity's history, the response was to kill the person(s) responsible, bomb their facility into a crater, and then track down and melt into slag every electronic device and storage medium that any of those people had come within a km of, not to mention every even potentially networked device that anyone in said facility might possibly have even thought about networking with. (This is basically the situation in Dune, albeit in Dune there actually were some historical events to use as justification for such fear and paranoia.)
-
#8828 shows some navigation AI in the works; the idea being that, given their limited numbers, the Uhai may use AI to support their crews at some point, but it's still not there yet.
-
ES ships are already ridiculously undercrewed by normal sci-fi standards, which somehow allows for indefinite life support. I wouldn't be surprised if the purpose of the crew in most ES ships was to keep everything running while the autopilot handled the actual flying at the (human) pilot's command.
-
It seems that humans do not have very advanced AI. They can make drones, but that's about it. I don't see why they don't have larger AI systems; to me, ships seem to be controlled fairly simply. Maybe they can't get the AI complex enough to cover a whole ship.
I would like to know about AI lore, if there is any.
If there isn't any, I would say everyone was too busy fighting and such to work on AI. I also had the idea of a storyline where you help some Deep scientists work on AI.
What do other people think about this?