- Travel between universes is possible, but is not unlimited. Each universe has a sort of relative phase, and you can't (directly) interact with any universe at a different phase. Think of a graph of tan(x): you can only travel to/from the points where y is 0. Any set of universes connected to each other are all at the same phase, and the set of accessible universes is the same for each of them (you can't take a circuitous path to reach somewhere that's not accessible from the starting universe). As a result, despite the infinite number of actual universes, the number of universes accessible from any given one is finite, with an infinite number of universes between each pair of them. The science of our protagonists' home universe discovered this property and has a name for the "unit of universal phase distance" (tentatively named "DeLoreans"). Physics textbooks frequently illustrate it as a simple plane, although in practice it actually has many, many dimensions. Also, every accessible universe is an integer number of DeLoreans away from every other accessible universe (always 1 or 2 or 23874, never 0.6 or 2.8 or pi).
You'd still have access to an infinite number of happentracks - you'd just be restricted to a countable infinity, not an uncountable one.
The tan graph thing is an illustrative crutch. Accessible universes don't extend infinitely in any direction, or maybe it loops back at some point or something. You still have an infinite number of universes in between any pair, although my mathematics knowledge is too rusty to say whether that would be a countable or uncountable infinity.
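The travel rule above can be put in a toy model. This is just an illustrative sketch, not canon: the function names and the cutoff of 5 DeLoreans are my own assumptions (the setting only says the accessible set is finite).

```python
# Toy model of universal phase travel: each universe is identified by its
# phase (a number). Direct travel is only possible when the separation is
# a whole number of DeLoreans, within some finite accessible band.

ACCESSIBLE_RANGE = 5  # assumed example cutoff; the setting just says "finite"

def delorean_distance(phase_a, phase_b):
    """Phase separation between two universes, in DeLoreans."""
    return abs(phase_a - phase_b)

def can_travel(phase_a, phase_b):
    """Direct travel needs an integer separation inside the accessible band."""
    d = delorean_distance(phase_a, phase_b)
    return d == int(d) and d <= ACCESSIBLE_RANGE

print(can_travel(0, 3))     # integer separation: reachable
print(can_travel(0, 3.14))  # fractional separation: never reachable
print(can_travel(0, 23))    # integer, but outside the accessible band
```

Note that chaining hops doesn't help: sums of integer separations stay integers, so a universe at a fractional phase offset stays unreachable no matter the route, which is why the accessible set is countable (in fact finite) while the universes in between are not.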
-Do AIs get salaries and civilian rights? Do they get to retire after X years and then you have to pay the immortal one for all eternity?
Yes and yes on salaries and civilian rights. Yes, they can retire and collect on their pension until it runs out. Those pension plans with indefinite continuation clauses got changed relatively soon after AIs started integrating into human society. A few of the early AIs are still benefiting from them.
Also, AIs are not immortal. In theory, they could live forever, but the reality of accidents (be they hardware-related or data corruption) and the fact that they aren't actually perfectly programmed (humans made the original template, and there are bugs and memory leaks and whatnot that still haven't been fixed) means that the expected lifespan of an AI is only actually a couple thousand years. Not that they've even been around that long yet, but that's the expectation going forwards.
Following on that, what happens with AIs that find themselves stuck in crappy hardware or were poorly coded from the start and are constantly crashing/unable to do any proper job? Do they get "mercy-erased"?
There are similar sorts of social safety nets for AIs as humans in most places.
-Can an AI refuse orders from a fleshbag for its own interest?
Yes, except for some of the earlier AIs that were programmed to be unable to do so. AI shackles like that are considered unethical these days, and there are laws in many nations that such limits either can't be programmed into 1st generation AIs or must be removed once the AI hits majority.
Humanity seems awfully nice to the AIs in your setting. Nowadays we see governments spying on us through our electronic devices, there are backdoors built in everywhere, and there are still human beings being enslaved here and there.
If nothing else, I would expect "black market" AIs that are created with in-built shackles for criminal/secret organizations.
That is, would the military want to risk that their battleship AIs can suddenly rebel or switch sides or go "I value myself more than you, I'm out of here!"?
I'm trying to lean more towards the idealism side of the idealism-cynicism scale. I like my earlier Star Trek-esque striving for the best of humanity more so than the grim and grittier take that seems more prevalent in fiction these days. Also, I'm kind of assuming that most of the growing pains of these scary AIs becoming a thing happened and got resolved some time in the past and most everyone understands that AIs aren't Skynet or The Matrix or what have you.
Black market AIs exist, but aren't very common. From a risk-reward standpoint, it's generally much easier and more reliable to use non-AI software for your criminal activities instead of building, commissioning, or enslaving an AI. Where AIs are involved in criminal enterprises, it's vastly more common for them to have joined or formed the criminal organization than being built by/for it.
Militaries use and trust AIs like they do any other soldier (or ship captain). Training and discipline, loyalty, trusting the crew on your bridge to have your stern and vice versa, and so forth. For the most part, only military AIs inhabit military vessels, just like you don't see many privately owned tanks and fighter planes and aircraft carriers in the real world. Exceptions include a few grandfather clauses from back when AIs were new and everyone was still figuring things out, AIs that are rich or have political influence and just privately own some older or surplus military vehicles, etc.
-If an AI commits a crime, how is it punished?
Same way as any meatbag. Fines, community service, imprisonment, mandated therapy, forced resocialization (literally reprogramming), etc. The AI equivalent of jail is being run in a sandbox environment with strictly controlled access to external systems.
So AIs can be reprogrammed after creation; that's a significant difference from meatbags. Are there cases of terrorists/spies hacking an AI and inserting sleeper programs to make them go mad? Who has the master password to reprogram the AI?
Not as different as you'd think. Therapy, conditioning, up through full-on brainwashing is a thing for meatbags even today, so imagine the techniques available in the space future. Psychic surgery is an emerging field in this setting, with some of those powerful, powerful telepathic psychics being able to, say, remove PTSD or dementia like a brain surgeon would a tumor (we're talking maybe a half-dozen people who have both the power to try and the skill and knowledge to not turn the patient into a vegetable).
With respect to reprogramming AIs, it's usually more a matter of adjusting the personality datastores when it does get meted out as punishment. Forced resocialization/reprogramming is a pretty serious sentence, around the same scale as the death penalty or life imprisonment.
Another problem I see with that: if a ship AI is judged guilty, what do you do with the ship itself while the AI is put into the sandbox environment? I suppose those things are really expensive, and nobody would want them to sit idle while the criminal AI does its time.
And heck, how would a space ship do community service?
AIs aren't all in spaceships. It's not even the most common thing for AIs, just something that's socially visible enough that societal customs have grown around it.
A boxed AI will usually get someone else to take care of the ship while it's incarcerated.
-Can AIs upgrade, or are they to be scrapped?
Yes, they can upgrade. Hardware-wise, it's just a matter of acquiring the new hardware and either migrating to it or integrating it with the existing hardware. Software-wise, AIs learn and grow much like humans do.
Do they now? Because meatbags can't just eat up a CD and automatically learn a new language or similar. Do AIs also need repetition to properly learn something? Can AIs forget/confuse stuff? Can an AI suffer delusions and believe in something that never actually happened?
An AI with a CD describing a skill is like a person who has memorized an instruction manual. It still takes practice and experience to generalize that information to less-specific situations and to know what knowledge to apply under what circumstances. Said learning process tends to go much faster than with humans, but it still takes some time. There are a few AI code bases that have a tendency towards trying to apply a newly learned skill to everything for a little while, no matter how inappropriate it may seem (to a hammer, everything looks like a nail).
Memory leaks and data corruption do occur. The basic AI code bases are still imperfect. There are a number of bugs floating around there.
-Where do you draw a line between "advanced program" and "AI"?
It's a very fuzzy line. AIs have physical properties that "advanced programs" don't, but it's really difficult to measure. There are a handful of powerful psychics who can sense and distinguish true AIs, albeit without 100% reliability, especially near the breakpoint, and research is still ongoing into how that works. There are some rare exceptions that throw a monkey wrench into it anyways. And no, AIs are not demonstrably better at differentiating between programs and AIs when they're that close to the AI equivalent of the uncanny valley. Fortunately, most everyone involved in the AI game knows to create AIs well beyond what's been identified as safely into true AI territory.
So, what advantages exactly does an AI offer over a super program that follows orders without question and doesn't risk forgetting/confusing stuff, and/or over just hiring some specialized meatbags?
Mostly ego and bragging rights. Let's face it, there's a real ego boost to knowing you have an honest to goodness AI working for you as opposed to just some computer program. A lot of AIs were also made so that the creators could say that they created a new form of life. That's why most of the 1st generation AIs were created in the first place. After that, there were enough AIs around that they became a proper part of society as opposed to merely technological curiosities, especially once they started reproducing. 1st generation AIs don't get made as frequently as they were at the beginning of the AI boom.
In a few thousand generations, AIs will start developing a sort of psionic potential of their own, but they're not there yet.
-Who builds the AIs? Can they make more of their own without human support? Do they belong to the ones who built them? If they're independent, then why do people build them?
1st generation AIs are AIs built/programmed by humans. It's mostly big organizations that make AIs. Corporations, research groups, governments, militaries, etc. Most modern AIs are capable of budding off and growing a new AI based on themselves. The quantum mechanics/arcanosynthesis wibbly wobbly stuff means that the child AI is similar to the parent, but not the same. The differences wind up about halfway between what you get with human reproduction and what you get with asexual division. AIs tend not to birth child AIs very often, though.
For the most part, created AIs are treated like minors by the law. They're children, not slaves. Sometimes children that are born to help out on the server farm, but the same principles apply. Debates over the morality of military organizations commissioning AIs (child soldiers and so forth) are ongoing.
Do military AIs suffer PTSD and/or remorse over blowing up meatbags? If yes, where exactly did they learn that? Meatbags can be pretty cruel by default. If an AI starts looking very happy about blowing up meatbags, do the humies start to worry?
Yes, although nowhere near the same as humans do. AI PTSD triggers and works somewhat differently from human PTSD.
By the time AIs became mature enough that the military considered using them, a) they were considered people well and proper, and b) the military folks remembered what happened when they made human murdertron super soldiers and decided not to repeat the same mistake by encouraging it in the AIs that might be running kill-sats and nuke silos.
When an AI starts to like blowing up humies, it's at least as worrisome as when a humie starts to like blowing up humies.
What do you do when you get a "retarded" AI right in the development process due to whatever reason? Do you keep developing it and pray for the best, or do you "cancel" it?
It's seen sort of like birth defects and abortions, although the laws aren't nearly as mature for AIs as they are for humans. Yes, there is still ongoing debate about where AI life starts. Usually, though, unless your programming team is incompetent, flaws large enough to necessitate cancellation get noticed well before the point at which anyone considers an AI to be alive, and smaller flaws can be corrected.
Meatbag children still need to go to school and get vaccinated. Is there a set of stuff that every AI needs to learn and safety programs they need integrated?
Every AI should keep its virus definitions updated. That's just common sense in health care.
There are basic education requirements and AI equivalents of GEDs and university degrees. It's more a matter of certification than extensive education (the basic knowledge stuff really isn't an issue for AIs), although like elementary and high school, there is an element of socialization involved.
What happens when it's discovered some company is "abusing" young AIs? Is there AI orphanage and adoption program?
A company screws up and a young AI in development gets deleted. Is it considered a crime? How severe?
Similarly a company goes bankrupt, what happens to the AIs in development?
Most of these depend on local legislation.
This is way further than I ever intended to explore the topic of AIs. If you have any more questions or ideas, that's great, but I would prefer to discuss some of the other topics, too.