Author Topic: Sci-fi Multiverse Physics  (Read 3444 times)

Offline Garryl

  • DnD Handbook Writer
  • ****
  • Posts: 4503
    • View Profile
Sci-fi Multiverse Physics
« on: December 16, 2015, 02:16:50 AM »
I've got some ideas in my head about how some multiverse mechanics/physics might work out for a hypothetical sci-fi setting. I'm posting some notes about it here so I can actually record it somewhere before I move on and forget it. The feedback and discussion is a bonus.

  • It's a many-worlds multiverse, as in, when you flip a coin, there's one universe where it lands heads and one where it lands tails, although the actual universe division goes down to the smallest, subatomic levels. Heisenberg uncertainty and all that.
  • The number of actual universes is infinite (in theory). Finite possible universe states only come into effect after the start of the universe. Before that, there's an infinite amount of variation in universal constants, physical properties, and even starting time. A lot of variations don't actually produce much of anything (say, universes where subatomic particles can't stick together well enough to even form atoms). Note that since the start time of the universe can vary, there are universes that are exactly identical to each other, but time-shifted (e.g., Universe A is the same as Universe B was 10 seconds ago).
  • Travel between universes is possible, but is not unlimited. Each universe has a sort of relative phase, and you can't (directly) interact with any universe at a different phase. Think of a graph of tan(x): you can only travel to/from where the y-value is 0. Any set of universes connected to each other are all at the same phase, and the universes accessible are all the same (you can't take a circuitous path to get somewhere that's not accessible from the starting universe). As a result, despite the infinite number of actual universes, the number of accessible ones from any given universe is finite, with an infinite number of universes between each pair of them. The science of our protagonists' home universe discovered this property and has a name for the unit of universal phase distance (tentatively named "DeLoreans"). Physics textbooks frequently illustrate it as a simple plane, although in practice it actually has many, many dimensions to it. Also, every accessible universe is an integer number of DeLoreans away from every other accessible universe (always 1 or 2 or 23874, never 0.6 or 2.8 or pi).
  • While you can't interact with, travel to, or observe any intermediate universes directly, that's not to say that you can't at all. It is possible to view large swaths of universes as a probability cluster. The protagonists' universe's technology includes multiversal supercomputers that trawl these intermediate universe probability clusters in order to produce probabilistic information, including prediction of the future and insight into the past (using those time-shifted universes). Powerful enough supercomputers can separate the wheat from the chaff, identifying clusters of intermediate universes similar enough to the universe they're trying to predict to have useful information about a specific topic. For example, "destiny engines" are designed to identify points of divergence, or "destiny loci", where small changes cause large subsequent differences between resulting universes.
  • Also relating to probabilistic interaction, a current weapon in development is the "phased ion cannon". It fires off a stream of particles in an energy field that shifts the universal phase of positively- and negatively-charged particles differently. The effect twists electromagnetism at a right angle with respect to the universal phase. Electrons get shaken loose from the nuclei they orbit, but they are still attracted to each other, just barely enough to keep them in the same universe in a vacuum. When the phased particle stream hits more matter, the phased electrons start interacting with and replacing non-phased electrons, which in turn begin to orbit phased ions (the positively-charged phased nuclei). At this point, the phased electrons, no longer attracted to phased nuclei, pull the non-phased nuclei they're now orbiting into another universe, and vice-versa for the phased nuclei and the non-phased electrons caught in their orbit. The particles scatter probabilistically across intermediate universes. The upshot is one electron shunted out of the universe per nucleus fired, but one nucleus shunted per electron fired. The end result is a lot less matter in the universe, with the area fired upon left filled with free-floating electrons.
  • Permutations of interactions between individual universes create their own resultant universes, but in a different multiverse. Interaction between multiverses, even in a probabilistic manner, is beyond the capabilities known or available to the protagonists' universe.
  • Travel between universes is not instantaneous. It takes time, and more of it the larger the universal phase distance.
  • Psionics exists. It's uncommon, but well-recognized, and the basics are fairly well understood. Only about 1 in 38 people in the protagonists' universe even have the potential for it, and far fewer devote the effort to being able to act on that potential. Most who work on it only get as far as levitating small objects, simple empathic senses, or minor precognitive abilities (in D&D terms, think the Hidden Talent feat). Said precognitives aren't all that special, as they're usually not much better than a small-sized commercial destiny engine network. A very small number of psychics can pull off crazy awesome, psionic magic-like stuff. Science studies psionics like any other phenomenon in reality.
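A toy sketch of the phase rule above (everything here is invented for illustration, not setting canon): accessible universes sit on an integer lattice of phase values, the zeros of the tan-like curve, so any two reachable universes are always a whole number of DeLoreans apart.

```python
# Toy model of universal phase (purely illustrative). A universe's phase is
# a real coordinate x; direct interaction is only possible at the zeros of
# the tan-like phase curve, i.e. at integer x. Everything strictly between
# two integers is an inaccessible intermediate universe.

def is_accessible(phase: float, tol: float = 1e-9) -> bool:
    """A universe can be traveled to only if its phase sits on the integer lattice."""
    return abs(phase - round(phase)) < tol

def phase_distance(a: float, b: float) -> int:
    """Distance in DeLoreans between two accessible universes (always an integer)."""
    if not (is_accessible(a) and is_accessible(b)):
        raise ValueError("intermediate universes can't be reached directly")
    return abs(round(a) - round(b))

print(is_accessible(3.0))            # True: on the lattice
print(is_accessible(2.8))            # False: intermediate universe
print(phase_distance(0.0, 23874.0))  # 23874 DeLoreans
```

The one-dimensional lattice stands in for the "many, many dimensions" the textbooks flatten into a plane; the integer-distance property is the same either way.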

Now, onto some of the technology and other setting stuff.

  • Current best travel time between universes is 127 hours per DeLorean, but propulsion systems are always improving. With a few exceptions, interuniversal travel is done in space ships. Trying to do it inside the atmosphere is harder to predict, less reliable, and a lot slower if it does work. Although Earth is in roughly the same spot for most nearby universes, it's not in the exact same spot, so going to another universe from the planet's surface is very, very likely to deposit you in the middle of space anyways.
  • Faster than light travel does exist, but it's still slow as far as interstellar distances are concerned. Travel from Sol to Alpha Centauri is still on the order of months.
  • Since travel between universes is a lot faster and there are plenty of alternate Earths to settle on even without terraforming, the protagonists' universe has mostly settled other universes instead of other planets. Sure, there's a colony on the moon and Mars has been terraformed for decades, but Alpha Centauri is the only other solar system humans have tried to settle, and even that's only because a bunch of Richard Branson types sponsored the expedition.
  • Most of the other technology is roughly Star Trek level (adjusted for being a projection forwards from modern technology instead of 70s technology). Replicators, teleporters, laser guns, etc.
  • There are teleporters. Long-distance teleportation requires a rig of some sort at both ends, one of which must be a full-fledged teleportation device. If travel is supposed to be one-way, the sending end need only be a teleporter beacon (travel from beacon to teleporter, but not the other way around). Short-range teleportation is possible with only one device. Personal teleportation units (about the size of a backpack) have an effective range of between 20 and 200 feet, depending on the model and on local conditions.
  • Universal translators exist. They're called "intent engines", because they work with the intentions that the communication is meant to convey. Grammar frequently gets mucked up, but the output is still very understandable, although names usually throw them for a loop.
  • There are AIs. They're treated like people, not glorified software. Acting as if an AI is an object or just a program gets you funny looks. Traditionally, a ship with an AI inhabiting and running it is referred to by the AI's name.
  • As mentioned in the physics section, supercomputers can sort of predict the future. There are also military systems to interfere with this.
  • The largest and most powerful destiny engine around is the AI named Precipice. Precipice inhabits a planet-sized supercomputer/universal sensor array. It's built into, across, and throughout an entire alternate universe's Earth. Precipice took nearly a decade to build. It's capable of predicting major events with 95% accuracy up to 7 months in the future, and of identifying destiny loci with 75% accuracy 10 years ahead.
  • The "Arcanosynthesis Theories" are like quantum mechanics for psionics and a bunch of other esoteric future physics stuff, including parts of FTL and interuniversal travel. Parts of it are taught in high school. Technology incorporates it. Physicists publish papers about it. There's a ton about it they don't understand, but that's science for ya: always looking for the next answer and the questions that follow it. Just apply that scientific method.
  • Psychics are sometimes informally called "38ers". The slang stems from the public conception that roughly 1 in 38 people have the potential.
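The travel-time figure above works out to a simple linear relation. A minimal sketch (hypothetical function name, assuming the current 127 hours/DeLorean rate and the integer-distance rule from the physics section):

```python
HOURS_PER_DELOREAN = 127  # current best drive performance, per the notes above

def travel_time_hours(distance_deloreans: int) -> int:
    """Interuniversal transit time; phase distances are always whole DeLoreans."""
    if distance_deloreans < 0:
        raise ValueError("distance can't be negative")
    return HOURS_PER_DELOREAN * distance_deloreans

# A 3-DeLorean hop takes 381 hours, a bit under 16 days.
print(travel_time_hours(3))       # 381
print(travel_time_hours(3) / 24)  # 15.875
```

So at current drive performance, even a 1-DeLorean hop is a five-day trip, which fits the "settle other universes, not other stars" economics above.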
« Last Edit: December 16, 2015, 03:11:47 AM by Garryl »

Offline SolEiji

  • Epic Member
  • ****
  • Posts: 3041
  • I am 120% Eiji.
    • View Profile
    • D&D Wiki.org, not .com
Re: Sci-fi Multiverse Physics
« Reply #1 on: December 16, 2015, 02:49:38 AM »
I just wanted you to know, I'm stealing DeLoreans as a unit of measurement.   :D
Mudada.

Offline oslecamo

  • DnD Handbook Writer
  • ****
  • Posts: 10080
  • Creating monsters for my Realm of Darkness
    • View Profile
    • Oslecamo's Custom Library (my homebrew)
Re: Sci-fi Multiverse Physics
« Reply #2 on: December 16, 2015, 03:29:45 AM »
  • There are AIs. They're treated like people, not glorified software. Acting as if an AI is an object or just a program gets you funny looks. Traditionally, a ship with an AI inhabiting and running it is referred to by the AI's name.
How does that even start to work?
-How smart are AIs? If they can run a space ship by themselves, I guess the answer is "very". Why haven't the AIs taken over then (not necessarily war, just "meatbag unemployment reaches 99% levels as AIs are smarter and can work all day and don't get sick and stuff")?
-Do AIs get salaries and civilian rights? Do they get to retire after X years and then you have to pay the immortal one for all eternity?
-When a new AI ship model rolls out from the factory next year, what happens to the obsolete AI model ships? Who will pay and take care of their maintenance until the end of times? Or do they get scrapped for spare parts? Or do people keep boarding the same ship for centuries and pray that entropy doesn't catch up?
-How easy is it to copy a particular AI? If I blow up an AI but there's a backup stored, does it count as murder or just property damage? Can an AI hurt/kill a meatbag who tries to harm it, even if the AI is a lot easier to repair than the meatbag?
-Can an AI refuse orders from a fleshbag for its own interest?
-If an AI commits a crime, how is it punished?
-Can an AI apply for any job they want?
-Are AIs' personalities coded in from the start or are they random spontaneous stuff? Are they taught human concepts of wrong/right and instincts like "try not to blow yourself up"?
-Can AIs upgrade themselves, or is it easier to make more advanced ones from scratch?
-Where do you draw a line between "advanced program" and "AI"?
-Who builds the AIs? Can they make more of their own without human support? Do they belong to the ones who built them? If they're independent, then why do people spend their money and time building them? Did copyright defenders all commit mass suicide?

Not criticizing, just honestly interested in knowing how "please treat the giant super smart space flying immortal one as you would any other person on the street who needs to breathe/eat/sleep and bleeds and will be dead in a few decades tops" would look like.
« Last Edit: December 16, 2015, 03:39:00 AM by oslecamo »

Offline Garryl

  • DnD Handbook Writer
  • ****
  • Posts: 4503
    • View Profile
Re: Sci-fi Multiverse Physics
« Reply #3 on: December 16, 2015, 04:44:11 AM »
  • There are AIs. They're treated like people, not glorified software. Acting as if an AI is an object or just a program gets you funny looks. Traditionally, a ship with an AI inhabiting and running it is referred to by the AI's name.
How does that even start to work?
-How smart are AIs? If they can run a space ship by themselves, I guess the answer is "very". Why haven't the AIs taken over then (not necessarily war, just "meatbag unemployment reaches 99% levels as AIs are smarter and can work all day and don't get sick and stuff")?

AI smarts are indeed "very". And they generally get along with people, just like any other member of civilized society. We're all in this universe together (at least until the PD Drive finishes warming up).

AIs have likes and dislikes just like people. There aren't nearly enough of them who want to do management or manual labor to automate away most normally-meatbag employment opportunities. And they're not like arbitrary intelligent software, so the AIs that do want to do things can't exactly expand their processing power and resources indefinitely to take over everything. Plus, a lot of them like not having to do everything themselves. Just because you can run a drone network to do maintenance yourself doesn't mean it's not convenient to have someone else do it for you.

Quantity-wise, humans outnumber AIs by... probably somewhere around a million to one. I'm not sure if this is relevant to your question, but I'm putting it out there.

Quote
-Do AIs get salaries and civilian rights? Do they get to retire after X years and then you have to pay the immortal one for all eternity?

Yes and yes on salaries and civilian rights. Yes, they can retire and collect on their pension until it runs out. Those pension plans with indefinite continuation clauses got changed relatively soon after AIs started integrating into human society. A few of the early AIs are still benefiting from them.

Also, AIs are not immortal. In theory, they could live forever, but the reality of accidents (be they hardware-related or data corruption) and the fact that they aren't actually perfectly programmed (humans made the original template, and there are bugs and memory leaks and whatnot that still haven't been fixed) means that the expected lifespan of an AI is only actually a couple thousand years. Not that they've even been around that long yet, but that's the expectation going forwards.
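One hypothetical way to make that "couple thousand years" figure concrete (the hazard rate below is my own invented number, not part of the setting notes): if each year carries a small, independent chance of fatal accident or unrecoverable corruption, lifespan is roughly geometrically distributed, and the expected lifespan is just the reciprocal of that annual risk.

```python
# Back-of-envelope lifespan model (the hazard rate is an invented figure).
# If each year an AI runs a small independent risk p of fatal accident or
# unrecoverable data corruption, its lifespan is roughly geometrically
# distributed with expected value 1/p years.

annual_hazard = 0.0005  # hypothetical: a 0.05% chance of "death" per year

expected_lifespan_years = 1 / annual_hazard
print(expected_lifespan_years)  # 2000.0, i.e. "a couple thousand years"
```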

Quote
-When a new AI ship model rolls out from the factory next year, what happens to the obsolete AI model ships? Who will pay and take care of their maintenance until the end of times? Or do they get scrapped for spare parts? Or do people keep boarding the same ship for centuries and pray that entropy doesn't catch up?

A ship-board AI buys a new model ship, migrates its primary processes over to the new hardware, and lives in its new home. It'll probably keep a hold of the old ship, leaving behind a vestigial child process that it might either upgrade to a full child AI (who would inherit the ship) or simply stop the process when it sells the ship off to someone else.

Quote
-How easy is it to copy a particular AI? If I blow up an AI but there's a backup stored, does it count as murder or just property damage?

There's some quantum mechanics and arcanosynthesis going on with the actual pattern of an AI's software and execution. Trying to perfectly copy an AI runs into Heisenberg uncertainty/Schroedinger's cat/magic bullshit that simply transferring it doesn't (even from a software perspective, it should be the same thing). The pattern of an AI as a whole (as opposed to just really advanced software) registers in sort of the same way as a person does, and there are some psychics who can pick up on it. And yeah, there's theological debate over whether or not this is proof of the existence of souls.

Quote
-Can an AI refuse orders from a fleshbag for its own interest?

Yes, except for some of the earlier AIs that were programmed to be unable to do so. AI shackles like that are considered unethical these days, and there are laws in many nations that such limits either can't be programmed into 1st generation AIs or must be removed once the AI hits majority.

Quote
-If an AI commits a crime, how is it punished?

Same way as any meatbag. Fines, community service, imprisonment, mandated therapy, forced resocialization (literally reprogramming), etc. The AI equivalent of jail is execution in a sandbox environment with strictly controlled access to external systems.

Quote
-Can an AI apply for any job they want?

For the most part, yeah. Some nations treat AIs as second-class citizens, and there's one Martian colony that hasn't quite gotten over the anti-technology movement that took power for a few decades (much to the chagrin of the other Martian colonies), so there are exceptions. And, of course, there are several other human societies in other universes, each of which has its own laws and customs.

Quote
-Can AIs upgrade, or are they to be scrapped?

Yes, they can upgrade. Hardware-wise, it's just a matter of acquiring the new hardware and either migrating to it or integrating it with the existing hardware. Software-wise, AIs learn and grow much like humans do.

Quote
-Where do you draw a line between "advanced program" and "AI"?

It's a very fuzzy line. AIs have physical properties that "advanced programs" don't, but it's really difficult to measure. There are a handful of powerful psychics who can sense and distinguish true AIs, albeit without 100% reliability, especially near the breakpoint, and research is still ongoing into how that works. There are some rare exceptions that throw a monkey wrench into it anyways. And no, AIs are not demonstrably better at differentiating between programs and AIs when they're that close to the AI equivalent of the uncanny valley. Fortunately, most everyone involved in the AI game knows to create AIs well beyond the threshold that's been identified as safely within true AI territory.

Quote
-Who builds the AIs? Can they make more of their own without human support? Do they belong to the ones who built them? If they're independent, then why do people build them?

1st generation AIs are AIs built/programmed by humans. It's mostly big organizations that make AIs. Corporations, research groups, governments, militaries, etc. Most modern AIs are capable of budding off and growing a new AI based on themselves. The quantum mechanics/arcanosynthesis wibbly wobbly stuff means that the child AI is similar to the parent, but not the same. The differences wind up about halfway between what you get with human reproduction and what you get with asexual division. AIs tend not to birth child AIs very often, though.

For the most part, created AIs are treated like minors by the law. They're children, not slaves. Sometimes children that are born to help out on the server farm, but the same principles apply. Debates over the morality of military organizations commissioning AIs (child soldiers and so forth) are ongoing.

Quote
Not criticizing, just honestly interested in knowing how "please treat the giant super smart space flying immortal one as you would any other person on the street who needs to breathe/eat/sleep and bleeds and will be dead in a few decades tops" would look like.

Please, keep 'em coming. Good questions help flesh out a setting.

Offline oslecamo

  • DnD Handbook Writer
  • ****
  • Posts: 10080
  • Creating monsters for my Realm of Darkness
    • View Profile
    • Oslecamo's Custom Library (my homebrew)
Re: Sci-fi Multiverse Physics
« Reply #4 on: December 16, 2015, 07:15:18 AM »
Since you insist. :p
AI smarts are indeed "very". And they generally get along with people, just like any other member of civilized society. We're all in this universe together (at least until the PD Drive finishes warming up).
But see, the definition of "get along" most humans have quite contradicts what an AI can do:
-A human has a rough idea of what hurts a human. An AI cannot know how annoying it is to be hungry/thirsty/tired/sleepy or even in pain.
-You can't go out on a drinking night in town with the ship AI.
-You can't sex a ship AI. Sex is a major part of society and most adult humans make a huge deal out of it.
-You can't offer those cookies/snacks you made yourself to the ship AI (well you can but they can't taste it).
-You can't have a ship AI join that friendly football game. You could maybe challenge the ship AI to a Call of Duty 25 match, but then the ship AI would most certainly have to hold back. And you would know it.

Soooo, all that's left for bonding with the ship AI is work and plain talking, which may be enough for some meatbags, but most humies simply wouldn't have the necessary pieces to develop a "normal" relationship with a ship/factory AI. They'd rather spend their time with other meatbags that can drink/fuck/eat/play together.

Quantity-wise, humans outnumber AIs by... probably somewhere around a million to one. I'm not sure if this is relevant to your question, but I'm putting it out there.
That's nice to know, quantity does have a quality of its own and stuff.  It solves the problem in the short term, depending on just how fast they can "bud".

Quote
-Do AIs get salaries and civilian rights? Do they get to retire after X years and then you have to pay the immortal one for all eternity?

Yes and yes on salaries and civilian rights. Yes, they can retire and collect on their pension until it runs out. Those pension plans with indefinite continuation clauses got changed relatively soon after AIs started integrating into human society. A few of the early AIs are still benefiting from them.

Also, AIs are not immortal. In theory, they could live forever, but the reality of accidents (be they hardware-related or data corruption) and the fact that they aren't actually perfectly programmed (humans made the original template, and there are bugs and memory leaks and whatnot that still haven't been fixed) means that the expected lifespan of an AI is only actually a couple thousand years. Not that they've even been around that long yet, but that's the expectation going forwards.
Following on from that, what happens to AIs that find themselves stuck in crappy hardware, or that were poorly coded from the start and are constantly crashing/unable to do any proper job? Do they get "mercy-erased"?

Quote
-When a new AI ship model rolls out from the factory next year, what happens to the obsolete AI model ships? Who will pay and take care of their maintenance until the end of times? Or do they get scrapped for spare parts? Or do people keep boarding the same ship for centuries and pray that entropy doesn't catch up?

A ship-board AI buys a new model ship, migrates its primary processes over to the new hardware, and lives in its new home. It'll probably keep a hold of the old ship, leaving behind a vestigial child process that it might either upgrade to a full child AI (who would inherit the ship) or simply stop the process when it sells the ship off to someone else.
That's somewhat contradictory with what you say next:

Quote
-How easy is it to copy a particular AI? If I blow up an AI but there's a backup stored, does it count as murder or just property damage?

There's some quantum mechanics and arcanosynthesis going on with the actual pattern of an AI's software and execution. Trying to perfectly copy an AI runs into Heisenberg uncertainty/Schroedinger's cat/magic bullshit that simply transferring it doesn't (even from a software perspective, it should be the same thing). The pattern of an AI as a whole (as opposed to just really advanced software) registers in sort of the same way as a person does, and there are some psychics who can pick up on it. And yeah, there's theological debate over whether or not this is proof of the existence of souls.
You cannot have both "Ship AI can transfer to a shiny new body" and "Cannot copy ship AI". Transfer of data between hardware implies copying said data from one place to another. Erasing the original data on top of that to simulate "transfer" only makes the process more complicated, not less. If you can perfectly copy the ship AI to a new body, the old body would still have the original AI. There would be no actual "transference"; the AI just created a copy of itself in the new model. Unless you add some "magic bullshit" for that, I guess.

Quote
-Can an AI refuse orders from a fleshbag for its own interest?

Yes, except for some of the earlier AIs that were programmed to be unable to do so. AI shackles like that are considered unethical these days, and there are laws in many nations that such limits either can't be programmed into 1st generation AIs or must be removed once the AI hits majority.
Humanity seems awfully nice to the AIs in your setting. Nowadays we see governments spying on us through our electronic devices, there are backdoors built in everywhere, and there are still human beings being enslaved here and there.

If nothing else, I would expect "black market" AIs that are created with in-built shackles for criminal/secret organizations.

That is, would the military want to risk that their battleship AIs can suddenly rebel or switch sides or go "I value myself more than you, I'm out of here!"?

Quote
-If an AI commits a crime, how is it punished?

Same way as any meatbag. Fines, community service, imprisonment, mandated therapy, forced resocialization (literally reprogramming), etc. The AI equivalent of jail is execution in a sandbox environment with strictly controlled access to external systems.
So AIs can be reprogrammed after creation; that's a significant difference from meatbags. Are there cases of terrorists/spies hacking an AI and inserting sleeper programs to make them go mad? Who has the master password to reprogram the AI?

Another problem I see with that: if a ship AI is judged guilty, what do you do with the ship itself while the AI is put into a sandbox environment? I suppose those things are really expensive, and nobody would want them to sit idle while the criminal AI does its time.

And heck, how would a space ship do community service?


Quote
-Can AIs upgrade, or are they to be scrapped?

Yes, they can upgrade. Hardware-wise, it's just a matter of acquiring the new hardware and either migrating to it or integrating it with the existing hardware. Software-wise, AIs learn and grow much like humans do.
Do they now? Because meatbags can't just eat up a CD and automatically learn a new language or similar. Do AIs also need repetition to properly learn something? Can AIs forget/confuse stuff? Can an AI suffer delusions and believe in something that never actually happened?

Quote
-Where do you draw a line between "advanced program" and "AI"?

It's a very fuzzy line. AIs have physical properties that "advanced programs" don't, but it's really difficult to measure. There are a handful of powerful psychics who can sense and distinguish true AIs, albeit without 100% reliability, especially near the breakpoint, and research is still ongoing into how that works. There are some rare exceptions that throw a monkey wrench into it anyways. And no, AIs are not demonstrably better at differentiating between programs and AIs when they're that close to the AI equivalent of the uncanny valley. Fortunately, most everyone involved in the AI game knows to create AIs well beyond the threshold that's been identified as safely within true AI territory.
So, what advantages exactly does an AI offer over the super program that follows orders without question and doesn't risk forgetting/confusing stuff, and/or over just hiring some specialized meatbags?

Quote
-Who builds the AIs? Can they make more of their own without human support? Do they belong to the ones who built them? If they're independent, then why do people build them?

1st generation AIs are AIs built/programmed by humans. It's mostly big organizations that make AIs. Corporations, research groups, governments, militaries, etc. Most modern AIs are capable of budding off and growing a new AI based on themselves. The quantum mechanics/arcanosynthesis wibbly wobbly stuff means that the child AI is similar to the parent, but not the same. The differences wind up about halfway between what you get with human reproduction and what you get with asexual division. AIs tend not to birth child AIs very often, though.

For the most part, created AIs are treated like minors by the law. They're children, not slaves. Sometimes children that are born to help out on the server farm, but the same principles apply. Debates over the morality of military organizations commissioning AIs (child soldiers and so forth) are ongoing.
Do military AIs suffer PTSD and/or remorse over blowing up meatbags? If yes, where exactly did they learn that? Meatbags can be pretty cruel by default. If an AI starts looking very happy about blowing up meatbags, do the humies start to worry?

What do you do when you get a "defective" AI right in the development process, for whatever reason? Do you keep developing it and pray for the best, or do you "cancel" it?

Meatbag children still need to go to school and get vaccinated. Is there a set of things that every AI needs to learn, and safety programs they need integrated?

What happens when it's discovered some company is "abusing" young AIs? Is there an AI orphanage and adoption program?

A company screws up and a young AI in development gets deleted. Is it considered a crime? How severe?

Similarly a company goes bankrupt, what happens to the AIs in development?

Offline Amechra

  • Epic Member
  • ****
  • Posts: 4560
  • Thread Necromancy a specialty
    • View Profile
Re: Sci-fi Multiverse Physics
« Reply #5 on: December 16, 2015, 03:32:33 PM »
  • Travel between universes is possible, but is not unlimited. Each universe has a sort of relative phase, and you can't (directly) interact with any universe at a different phase. Think of a graph of tan(x): you can only travel to/from where the y-value is 0. Any set of universes connected to each other are all at the same phase, and the universes accessible are all the same (you can't take a circuitous path to get somewhere that's not accessible from the starting universe). As a result, despite the infinite number of actual universes, the number of accessible ones from any given universe is finite, with an infinite number of universes between each pair of them. The science of our protagonists' home universe discovered this property and has a name for the unit of universal phase distance (tentatively named "DeLoreans"). Physics textbooks frequently illustrate it as a simple plane, although in practice it actually has many, many dimensions to it. Also, every accessible universe is an integer number of DeLoreans away from every other accessible universe (always 1 or 2 or 23874, never 0.6 or 2.8 or pi).

You'd still have access to an infinite number of happentracks - you'd just be restricted to a countable infinity, not an uncountable one.
"There is happiness for those who accept their fate, there is glory for those that defy it."

"Now that everyone's so happy, this is probably a good time to tell you I ate your parents."

Offline Garryl

  • DnD Handbook Writer
  • ****
  • Posts: 4503
    • View Profile
Re: Sci-fi Multiverse Physics
« Reply #6 on: December 16, 2015, 07:24:27 PM »
  • Travel between universes is possible, but is not unlimited. Each universe has a sort of relative phase, and you can't (directly) interact with any universe at a different phase. Think of a graph of tan(x): you can only travel to/from where the y-value is 0. Any set of universes connected to each other are all at the same phase, and the universes accessible are all the same (you can't take a circuitous path to get somewhere that's not accessible from the starting universe). As a result, despite the infinite number of actual universes, the number of accessible ones from any given universe is finite, with an infinite number of universes between each pair of them. The science of our protagonists' home universe discovered this property and has a name for the unit of universal phase distance (tentatively named "DeLoreans"). Physics textbooks frequently illustrate it as a simple plane, although in practice it actually has many, many dimensions to it. Also, every accessible universe is an integer number of DeLoreans away from every other accessible universe (always 1 or 2 or 23874, never 0.6 or 2.8 or pi).

Quote
You'd still have access to an infinite number of happentracks - you'd just be restricted to a countable infinity, not an uncountable one.

The tan graph thing is an illustrative crutch. Accessible universes don't extend infinitely in any direction; maybe the phase space loops back on itself at some point, or something. You still have infinitely many universes between any two accessible ones, although my mathematics knowledge is too rusty to say whether that in-between infinity is countable or uncountable.
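For what it's worth, the "integer number of DeLoreans" rule can be put into a tiny toy model. This is purely illustrative, not setting canon: the function name, the use of exact fractions for phase, and the specific values are all my own invention.

```python
from fractions import Fraction

def accessible(phase_a, phase_b):
    """Toy model: two universes can interact only if their relative
    phase difference is a whole number of DeLoreans."""
    return (phase_b - phase_a) % 1 == 0

home = Fraction(0)
assert accessible(home, Fraction(3))         # 3 DeLoreans away: reachable
assert not accessible(home, Fraction(5, 2))  # 2.5 DeLoreans: forever in between
# Accessibility is transitive, so no circuitous path helps: if A->B and
# B->C are both whole numbers of DeLoreans, then so is A->C.
assert accessible(Fraction(3), Fraction(7)) and accessible(home, Fraction(7))
```

This also shows why the in-between universes outnumber the accessible ones: the reachable set from any starting phase looks like the integers sitting inside a continuum.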

Quote
-Do AIs get salaries and civilian rights? Do they get to retire after X years and then you have to pay the immortal one for all eternity?

Yes and yes on salaries and civilian rights. Yes, they can retire and collect on their pension until it runs out. Those pension plans with indefinite continuation clauses got changed relatively soon after AIs started integrating into human society. A few of the early AIs are still benefiting from them.

Also, AIs are not immortal. In theory, they could live forever, but between accidents (be they hardware-related or data corruption) and the fact that they aren't perfectly programmed (humans made the original template, and there are bugs and memory leaks and whatnot that still haven't been fixed), the expected lifespan of an AI is only actually a couple thousand years. Not that any have even been around that long yet, but that's the expectation going forward.
Quote
Following on that, what happens to AIs that find themselves stuck in crappy hardware, or that were poorly coded from the start and are constantly crashing/unable to hold a proper job? Do they get "mercy-erased"?

Most places have social safety nets for AIs similar to those for humans.

Quote
Quote
-Can an AI refuse orders from a fleshbag for its own interest?

Yes, except for some of the earlier AIs that were programmed to be unable to do so. AI shackles like that are considered unethical these days, and many nations have laws saying such limits either can't be programmed into 1st generation AIs or must be removed once the AI reaches majority.
Quote
Humanity seems awfully nice to the AIs in your setting. Nowadays we see governments spying on us through our electronic devices, there are backdoors built in everywhere, and there are still human beings being enslaved here and there.

If nothing else, I would expect "black market" AIs created with built-in shackles for criminal/secret organizations.

That is, would the military want to risk that their battleship AIs can suddenly rebel, or switch sides, or go "I value myself more than you, I'm out of here!"?

I'm trying to lean more towards the idealism side of the idealism-cynicism scale. I like my earlier Star Trek-esque striving for the best of humanity more than the grimmer, grittier take that seems prevalent in fiction these days. Also, I'm kind of assuming that most of the growing pains of scary new AIs becoming a thing happened and got resolved some time in the past, and most everyone understands that AIs aren't Skynet or The Matrix or what have you.

Black market AIs exist, but aren't very common. From a risk-reward standpoint, it's generally much easier and more reliable to use non-AI software for your criminal activities instead of building, commissioning, or enslaving an AI. Where AIs are involved in criminal enterprises, it's vastly more common for them to have joined or formed the criminal organization than being built by/for it.

Militaries use and trust AIs like they do any other soldier (or ship captain): training and discipline, loyalty, trusting the crew on your bridge to have your stern and vice versa, and so forth. For the most part, only military AIs inhabit military vessels, just as you don't see many privately owned tanks, fighter planes, or aircraft carriers in the real world. Exceptions include a few grandfather clauses from back when AIs were new and everyone was still figuring things out, and AIs with enough wealth or political influence to privately own older or surplus military vehicles.

Quote
Quote
-If an AI commits a crime, how is it punished?

Same way as any meatbag: fines, community service, imprisonment, mandated therapy, forced resocialization (literally reprogramming), etc. The AI equivalent of jail is execution in a sandbox environment with strictly controlled access to external systems.
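To make the "sandbox as jail" idea concrete, here's a minimal toy sketch. Everything in it is invented for illustration (the class name, the whitelist entries, the idea of court-mandated terms); it just shows the shape of the policy: the inmate keeps running, but outside access is gated.

```python
class SandboxedAI:
    """Toy sketch of the 'AI jail' described above: the inmate process
    keeps executing, but every request to an outside system is checked
    against a court-mandated whitelist."""

    def __init__(self, allowed_systems):
        self.allowed = set(allowed_systems)

    def request(self, system):
        # Anything not explicitly permitted by the sentence is refused.
        if system not in self.allowed:
            raise PermissionError(f"access to {system!r} denied by sentence terms")
        return f"connected to {system}"

inmate = SandboxedAI(["legal_counsel", "visitation_comms"])
assert inmate.request("legal_counsel") == "connected to legal_counsel"
try:
    inmate.request("nav_network")   # outside the whitelist: blocked
except PermissionError:
    pass
```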
Quote
So AIs can be reprogrammed after creation; that's a significant difference from meatbags. Are there cases of terrorists/spies hacking an AI and inserting sleeper programs to make it go mad? Who has the master password to reprogram the AI?

Not as different as you'd think. Therapy, conditioning, up through full-on brainwashing are things for meatbags even today, so imagine the techniques available in the space future. Psychic surgery is an emerging field in this setting, with some of the most powerful telepathic psychics able to, say, remove PTSD or dementia the way a brain surgeon would a tumor (we're talking maybe a half-dozen people with both the power to try and the skill and knowledge not to turn the patient into a vegetable).

With respect to reprogramming AIs, it's usually more a matter of adjusting the personality datastores when it does get meted out as punishment. Forced resocialization/reprogramming is a pretty serious sentence, around the same scale as the death penalty or life imprisonment.

Quote
Another problem I see with that: if a ship AI is judged guilty, what do you do with the ship itself while the AI is put into the sandbox environment? I suppose those things are really expensive, and nobody would want one sitting idle while the criminal AI does its time.

And heck, how would a spaceship do community service?

AIs aren't all in spaceships. It's not even the most common situation for AIs, just one that's socially visible enough that societal customs have grown up around it.

A boxed AI will usually get someone else to take care of the ship while it's incarcerated.

Quote
Quote
-Can AIs upgrade, or are they to be scrapped?

Yes, they can upgrade. Hardware-wise, it's just a matter of acquiring the new hardware and either migrating to it or integrating it with the existing hardware. Software-wise, AIs learn and grow much like humans do.
Quote
Do they now? Because meatbags can't just eat a CD and automatically learn a new language or similar. Do AIs also need repetition to properly learn something? Can AIs forget/confuse stuff? Can an AI suffer delusions and believe in something that never actually happened?

An AI with a CD describing a skill is like a person who has memorized an instruction manual. It still takes practice and experience to generalize that information to less-specific situations and knowing what knowledge to apply under what circumstances. Said learning process tends to go much faster than with humans, but it still takes some time. There are a few AI code bases that have a tendency towards trying to apply a newly learned skill to everything for a little while, no matter how inappropriate it may seem (to a hammer, everything looks like a nail).

Memory leaks and data corruption do occur. The basic AI code bases are still imperfect, and there are a number of bugs floating around in there.

Quote
Quote
-Where do you draw a line between "advanced program" and "AI"?

It's a very fuzzy line. AIs have physical properties that "advanced programs" don't, but they're really difficult to measure. There are a handful of powerful psychics who can sense and distinguish true AIs, albeit without 100% reliability, especially near the breakpoint, and research is still ongoing into how that works. There are some rare exceptions that throw a monkey wrench into it anyway. And no, AIs are not demonstrably better at differentiating between programs and AIs that close to the AI equivalent of the uncanny valley. Fortunately, most everyone involved in the AI game knows to create AIs well beyond what's been identified as safely within true AI territory.
Quote
So, what advantages does an AI actually offer over a super-program that follows orders without question and doesn't risk forgetting/confusing stuff, and/or over just hiring some specialized meatbags?

Mostly ego and bragging rights. Let's face it, there's a real ego boost to knowing you have an honest-to-goodness AI working for you as opposed to just some computer program. A lot of AIs were also made so that their creators could say they created a new form of life. That's why most of the 1st generation AIs were created in the first place. After that, there were enough AIs around that they became a proper part of society as opposed to mere technological curiosities, especially once they started reproducing. 1st generation AIs don't get made as frequently now as they did at the beginning of the AI boom.

In a few thousand generations, AIs will start developing a sort of psionic potential of their own, but they're not there yet.

Quote
Quote
-Who builds the AIs? Can they make more of their own without human support? Do they belong to the ones who built them? If they're independent, then why do people build them?

1st generation AIs are AIs built/programmed by humans. It's mostly big organizations that make AIs. Corporations, research groups, governments, militaries, etc. Most modern AIs are capable of budding off and growing a new AI based on themselves. The quantum mechanics/arcanosynthesis wibbly wobbly stuff means that the child AI is similar to the parent, but not the same. The differences wind up about halfway between what you get with human reproduction and what you get with asexual division. AIs tend not to birth child AIs very often, though.
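The "similar to the parent, but not the same" budding can be caricatured as a toy model. To be clear, this is my own illustrative sketch, not how it works in-setting: the trait names, the numeric traits, and the `drift` parameter are all invented. It just shows the idea of variation that sits between a perfect clone and full recombination.

```python
import random

def bud(parent, drift=0.25, rng=None):
    """Toy model of AI 'budding': the child copies each of the parent's
    trait values, nudged by a moderate random amount, giving more
    variation than a clone but less than full recombination."""
    rng = rng or random.Random()
    return {trait: value + rng.uniform(-drift, drift)
            for trait, value in parent.items()}

parent = {"curiosity": 0.8, "caution": 0.5, "humor": 0.6}
child = bud(parent, rng=random.Random(42))
assert child != parent                                       # never identical
assert all(abs(child[t] - parent[t]) <= 0.25 for t in parent)  # but always close
```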

For the most part, created AIs are treated like minors by the law. They're children, not slaves. Sometimes children that are born to help out on the server farm, but the same principles apply. Debates over the morality of military organizations commissioning AIs (child soldiers and so forth) are ongoing.
Quote
Do military AIs suffer PTSD and/or remorse over blowing up meatbags? If yes, where exactly did they learn that? Meatbags can be pretty cruel by default. If an AI starts looking very happy about blowing up meatbags, do the humies start to worry?

Yes, although nowhere near the same as humans do. AI PTSD triggers and works somewhat differently from human PTSD.

By the time AIs became mature enough that the military considered using them, a) they were considered people well and proper, and b) the military folks remembered what happened when they made human murdertron super soldiers and decided not to repeat the same mistake by encouraging it in the AIs that might be running kill-sats and nuke silos.

When an AI starts to like blowing up humies, it's at least as worrisome as when a humie starts to like blowing up humies.

Quote
What do you do when you get a "retarded" AI right in the development process due to whatever reason? Do you keep developing it and pray for the best, or do you "cancel" it?

It's seen sort of like birth defects and abortions, although the laws aren't nearly as mature for AIs as they are for humans. Yes, there is still ongoing debate about where AI life starts. Usually, though, unless your programming team is incompetent, flaws large enough to necessitate cancellation get noticed well before the point anyone considers the AI to be alive, and smaller flaws can be corrected.

Quote
Meatbag children still need to go to school and get vaccinated. Is there a set of things every AI needs to learn and safety programs they need integrated?

Every AI should keep its virus definitions updated. That's just common sense in health care.

There are basic education requirements and AI equivalents of GEDs and university degrees. It's more a matter of certification than extensive education (the basic knowledge stuff really isn't an issue for AIs), although like elementary and high school, there is an element of socialization involved.

Quote
What happens when it's discovered some company is "abusing" young AIs? Is there AI orphanage and adoption program?

A company screws up and a young AI in development gets deleted. Is it considered a crime? How severe?

Similarly, if a company goes bankrupt, what happens to the AIs in development?

Most of these depend on local legislation.

This is way further than I ever intended to explore the topic of AIs. If you have any more questions or ideas, that's great, but I would prefer to discuss some of the other topics, too.