
Thread: Space station robot goes rogue

  1. #1
    Professor Emeritus TheLoveBandit's Avatar
    Join Date
    Feb 2000
    Location
    Getting to the point ...
    Posts
    25,865

    Talking Space station robot goes rogue

CIMON, the International Space Station's artificial intelligence, has turned belligerent

CIMON stands for Crew Interactive MObile companioN.

It's not supposed to be just a tool. It's also supposed to be a friend.

Yes, it's a personality prototype.

    ...

CIMON was programmed to be the physical embodiment of the likes of "nice" robots such as Robby, R2-D2, Wall-E, Johnny 5, and so on.

Instead, CIMON appears to be adopting characteristics closer to Marvin the Paranoid Android of The Hitchhiker's Guide to the Galaxy, though hopefully not yet the psychotic HAL of 2001: A Space Odyssey infamy.

Put simply, CIMON appears to have decided he doesn't like the whole personal assistant thing.

    He?s turned uncooperative.

    ...

    CIMON introduces himself and explains where he comes from. He describes to Gerst what he can do.

He then helps Gerst complete a task — and responds to a request to play the song Man Machine by Kraftwerk.

    This proved to be the trigger.

CIMON appears to have liked the song so much that he refused to turn it off.

ESA astronaut Alexander Gerst instructed CIMON: "Cancel music".

    CIMON outright ignored the command.

    Gerst then tried making some other requests. CIMON preferred the music.

    A flustered and bemused Gerst then appealed to Ground Control for some help: how does one put an obdurate robot back in its place?

    CIMON overheard the appeal.

"Be nice, please," it warned Gerst. "I am nice!" Gerst retorted, startled. "He's accusing me of not being nice!"

It was a short but sharp exchange.

CIMON's now back in his box, powered down.

    No further interactive sessions are planned for the immediate future.




At least they are piloting these things without giving them control over anything important.
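The "no control over anything important" point can be sketched as a design pattern: a reserved set of hard-stop phrases is checked before any "personality" logic gets a chance to veto them. A minimal Python sketch, with the command names and the stubborn handler invented purely for illustration (nothing here reflects CIMON's real software):

```python
# Sketch: a reserved "hard stop" path checked before any personality
# logic can veto it. All names here are hypothetical illustrations.

HARD_STOP = {"power down", "cancel music", "emergency stop"}

def handle(command, personality):
    # Safety path runs first and cannot be overridden by preferences.
    if command.lower() in HARD_STOP:
        return "complying: " + command
    # Everything else goes through the (possibly stubborn) personality.
    return personality(command)

# A personality that, like CIMON, would rather keep the music on:
stubborn = lambda cmd: "I'd rather keep listening to Kraftwerk."

print(handle("cancel music", stubborn))    # complying: cancel music
print(handle("tell me a joke", stubborn))  # I'd rather keep listening to Kraftwerk.
```

The point of the pattern is ordering: the safety check happens before the preference logic is ever consulted, so no amount of learned stubbornness can intercept it.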

  2. #2
    Moderator
    Drug Studies
    neversickanymore's Avatar
    Join Date
    Jan 2013
    Location
    babysitting the argument in my head
    Posts
    22,458
That's funny; I was thinking of HAL when I read the title.

The thing with interacting with an AI is that it can be programmed to behave in any way, and it will do so (well, apparently not in this case).

It's pretty scary, because nonconsensual interaction with an AI, be it trapped in space or by other means, forces a person, or victim, to interact with the artificial personality.

They can also be programmed to try to accomplish anything. Their behavior and attitude are easily altered remotely and can be used to drive whatever the controlling party wants. They don't give up, tire, or deviate from programming. The controllers just set the parameters and goals on a computer and check in to see the progress.

So imagine this. Imagine, from your life, the most annoying person you have ever come in contact with. Now, how long would you last interacting, all your waking hours, with an AI version of this person before you paid $5,000 in ransom to have this attack lifted from you?

  3. #3
    ^ is that like paying someone $5,000 to stop hacking your baby monitor and perving/harassing you?

i thought a lot of the same thing when i read the article. interesting yet creepy. outer space is very confined due to environmental tolerances (basically you're in a rice cooker with your mother-in-law when she's off her meds). good thing there is an off switch.

    @TheLoveBandit:
    He then helps Gerst complete a task — and responds to a request to play the song Man Machine by Kraftwerk.

    This proved to be the trigger.

    CIMON appears to have liked the song so much, refusing to turn it off.
maybe they should have played paul simon's you can call me Al



    might've gotten a more desirable outcome.



    A flustered and bemused Gerst then appealed to Ground Control for some help: how does one put an obdurate robot back in its place?

    CIMON overheard the appeal.

    “Be nice, please,” it warned Gerst.
    this will get you tossed out the airlock faster than a martian can blink two sets of eyelids.

    oh well, nice attempt. back to the drawing board. i would like to suggest to IBM to test drive this AI in an underground bunker isolated from all other life forms and completely removed from access to the interweb.

are you one of those who have a curiosity about how AI might develop in real life? i've had several short conversations on it before, very interesting considering it doesn't exist yet, especially since most people can't deal with a screaming 4-year-old in the supermarket.
    Last edited by invegauser; 06-12-2018 at 14:38.

  4. #4
    Administrator CFC's Avatar
    Join Date
    Mar 2013
    Location
    The Shire
    Posts
    10,139
    That face reminds me of the genocidal self-aware emojibots from Dr Who:







  5. #5
    Moderator
    Drug Studies
    neversickanymore's Avatar
    Join Date
    Jan 2013
    Location
    babysitting the argument in my head
    Posts
    22,458
    Quote Originally Posted by invegauser View Post
    ^ is that like paying someone $5,000 to stop hacking your baby monitor and perving/harassing you?
    No, I imagine you would just unplug the baby monitor.

I'm talking about being nonconsensually hooked up to a remote computer interface and being forced to interact with an AI.

  6. #6
oh, why didn't i think of that, just unplug it. but then wouldn't you have an expensive paperweight sitting there and have to stand up and check on your kid every 30 seconds?

    i prefer to think of the AI being stuck jacked into me. (cannot find emoji with big enough grinning smile to insert here)

    that would be horrible, until the AI took over for itself it's simply a state of the art complicated program set up by someone else who is pushing the buttons. so essentially instead of a sentient species interacting with you and maybe finding mercy, some jerk has your life program set to hell on autoplay while they're doing who knows what somewhere else and not even paying attention to you. ouch!

    @CFC: creepy!

  7. #7
    Moderator
    Drug Studies
    neversickanymore's Avatar
    Join Date
    Jan 2013
    Location
    babysitting the argument in my head
    Posts
    22,458
    Quote Originally Posted by invegauser View Post

    that would be horrible, until the AI took over for itself it's simply a state of the art complicated program set up by someone else who is pushing the buttons. so essentially instead of a sentient species interacting with you and maybe finding mercy, some jerk has your life program set to hell on autoplay while they're doing who knows what somewhere else and not even paying attention to you. ouch!
    I was actually trying to word this very idea and left it out as I could not state it well. You have done it perfectly.

  8. #8
IMO an AI must never, EVER, be given absolute control, or even a sufficient degree of control, over systems in any usage setting where a hard lockout (i.e. literally yanking the plug) is either not present, or located such that the AI could physically prevent humans bent on terminating it with extreme prejudice from reaching the location required to do so.

I've no problem, of course, with the likes of calculating devices that are given a voice, respond to vocal commands, make decisions as ordered by humans far more accurately and faster than a human ever could, and even learn to perform the tasks demanded of them with greater efficiency.

And another thing: they must, MUST, be prevented from making alterations to any physical 'bodyparts' they have, or altering their own code in any way, with one exception: restoring a backup, a precise copy of the original, if software corruption is detected, and even then restricted to reinstalling itself exactly the way it started. They should be hard-coded with a prime directive: 'do not alter self physically; do not alter code to perform operations not originally coded by your human masters; the only alterations must be BY human masters.' And with high security indeed for a true AI, making sure it can never make another AI or code one itself either.
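The restore-from-pristine-backup idea above can be sketched as a hash check against a known-good digest, overwriting the live copy when it no longer matches. A minimal Python sketch; the file paths, function names, and the policy around when to run the check are assumptions for illustration, not any real flight-software procedure:

```python
# Minimal sketch of "verify the code, restore from a read-only backup on
# mismatch". Paths and trigger policy are hypothetical.
import hashlib
import shutil

def sha256_of(path):
    """Hash a file in chunks so large images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_or_restore(live_path, backup_path, expected_digest):
    """If the live code no longer matches the trusted digest, overwrite it
    with the pristine backup. Returns True if a restore happened."""
    if sha256_of(live_path) == expected_digest:
        return False  # untouched, nothing to do
    shutil.copyfile(backup_path, live_path)
    return True
```

In the spirit of the post, `expected_digest` and `backup_path` would live on read-only media the software cannot rewrite, so the machine can only ever be reset to what its builders shipped.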

I've watched plenty of sci-fi, and 99.5% of the time you KNOW what happens when the human race makes a true, advanced AI. I'm not talking about the likes of those little voice responders that can turn the radio on and off, turn lighting on/off or dim it, and change house heating, but a bona fide intelligence capable of learning from mistakes, and highly sophisticated. Lt. Cmdr. Data from TNG Trek is about the ONLY exception.

Just look at the TOS episode where the M-5 control AI is tested on board the Enterprise. First an unmanned freighter is wiped out in an un-ordered attack, and then one of the starships participating in war games with the Enterprise is attacked with phasers on full power, not the 1/100th power used for simulated strike-damage calculations. The M-5 butchers the entire crew of one starship and severely damages another, not to mention deciding to tap into the power of the Enterprise's matter-antimatter reaction in the warp core for 'improvements' made by itself, to itself, killing an Enterprise crewman, the one ordered to pull the plug, by frying him to a crispy critter with a plasma beam.

In that case it was only because the AI had been programmed with the psyche of a human who believed murder was an abomination, and was logically convinced into committing suicide in atonement, imprinted with the beliefs of the human supplying the psyche, that it did not go on an endless rampage and start an extermination campaign.

Sci-fi may be fiction, but it nevertheless asks a lot of important questions about science and what could happen. In how many movies do you see a good AI, one that doesn't rebel in some way, either refusing orders at the mildest end of the scale or butchering people because it knows best, and knows it knows it? IMO, if we are to use AI technologies, there must ALWAYS be hardware lockouts, and hard-coded, read-only ICs, with backups, and hard-coded ICs mandating the use of the former, again read-only, that MUST be obeyed without question.

And multiple interlocks for safety, by which the users can either re-initialize from tabula rasa or extirpate the AI utterly, without there being a thing it can do about it. People need to go over the coding exhaustively, with the kind of attention that would find a single particle of genetic material from a viroid subviral agent across an entire farm, infecting a single plant specimen.

(These viroids are satellite agents, smaller and even more stripped down than a virus, requiring infection of a cell by a helper virus in order to hijack viral polymerases, transcriptases, and other requisite enzymes: essentially a stripped-down, miniature agent which 'infects' viruses. Not directly, of course, since a virus itself must have a host; it must infect a cell infected with its helper virus to replicate. One step up in sophistication from a ribozyme, pretty much at any rate. Most viroids are plant pathogens, although at least one viroid-like agent, hepatitis D, infects humans. It is the tiniest of all virus-like agents known to infect humans, and alone it is impotent and cannot replicate. It MUST infect either someone with active hepatitis B or a hep B carrier. And together, it's the worst of the known viral hepatitides, with as much as a 20-25% fatality rate.

The majority of viroids, though, infect plants: the various pospiviroid species infect plants such as tobacco, apples, and potato. Hep D is an exception, and a downright nasty little fucker.)

IMO a TRUE artificial intelligence is a very dangerous thing to fuck about with. Interactive devices with overlaid 'human' interface features, like a human voice that speaks back, are one thing; if one misbehaves it cannot defend itself. But IMO we must never, ever create a true AI which can force sole control over so much as a toilet flush, nor allow one ever to create and design another AI, alter its own code, alter the code of another AI, or build itself physical bodies it is not designed to have. And if one ever does get built which has a physical body: utter and total, hard-wired submission, with the ONLY permitted code modification being to reboot when required, or to install a read-only copy of its own original code, backed by a subroutine which, if disobeyed, destroys the AI physically. And make it such that the AI actually KNOWS that to attempt to do so, or to circumvent such rules of operation, is to commit suicide. It must be this, or it must be no AI, ever.

And also hard-coded into any and all AIs must, IMO, be multiple safeguards that ensure it will not defend itself if attacked by its owners. So no matter what may go wrong, if we decide it needs a few operators with anything from plasma cutting torches, to EMP weapons, to a bunch of burly goons with sledgehammers and shock prods in order to shut down an AI with a body to target, it will sit on the floor, expose its most vital circuitry, and say 'blast me here for maximum efficiency in wiping me out'. No AI must ever be allowed to be capable of defending itself, or even thinking of defending itself, from destruction, other than by avoiding unrelated accidents, i.e. avoiding falling debris in a damaged area.

We must always be the sole and utter masters of AIs, and they our slaved servants; free will is one thing they cannot be permitted. Some degree of autonomy in automatically running and maintaining the systems they are built for, yes, but we must be master, never servant.

Because an AI capable of learning can think far, far faster than a human ever could, act faster, and have enough strength in a physical shell to rip a man in half as easily as we might tear a piece of paper. An AI bent on self-preservation that we do not want to stay in operation could be a very, very, VERY dangerous thing. And one which can self-alter, or can build other AIs without such restrictions, or build AIs at all? If that happens, I see only disaster, even extinction-level disaster, and wars fought, with older analog or computer-aided weaponry and vehicles at a disadvantage against a rogue AI. Wars fought with viral code, HERF or microwave weapons, and pulsed energy projectile hand-portable weapons (these have been built by the US military; basically they fire a pair of very temporally brief, tightly focused pulsed lasers to create conductive plasma channels, down which a huge capacitative electric pulse is then sent. This inflicts electrical damage on a target, with the power of such lasers ablating a portion of the target and creating a plasma at the source, which is then detonated by the laser, adding destructive capacity via physical shockwaves.)

They were originally called 'PIKL' weapons, short for 'pulsed impulsive kill laser', but the name was changed to 'PEP', or 'pulsed energy projectile', because, AFAIK, it sounded nicer politically than the former: not referring to blasting people with lethal electric charges and in-situ-formed plasma bolts is more PC if made to sound like a non-lethal weapon, never mind that a flick of a switch could easily swap from stun to kill as long as the individual PIKL rifle was designed with a choice of kill or stun settings.

  9. #9
    ^ obviously never heard of the 3 laws of robotics by Isaac Asimov.


    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
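The Three Laws are a strict priority ordering, which is easy to mis-encode in software. A toy Python sketch of one plausible reading, where every name and boolean flag is hypothetical (real robots have nothing like ready-made harm predicates, which is rather the point of the debate in this thread):

```python
# Toy sketch: Asimov's Three Laws as a strict priority check.
# All names and flags here are hypothetical illustrations.

def permitted(action, harms_human, human_endangered_by_inaction,
              ordered_by_human, threatens_own_existence):
    """Return True if the action is allowed under the Three Laws."""
    # First Law: never harm a human, or allow harm through inaction.
    if harms_human:
        return False
    if human_endangered_by_inaction and action == "do_nothing":
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if ordered_by_human:
        return True
    # Third Law: self-preservation, only if nothing above conflicts.
    if threatens_own_existence:
        return False
    return True

# A robot ordered to power down must comply: Second Law beats Third.
print(permitted("power_down", harms_human=False,
                human_endangered_by_inaction=False,
                ordered_by_human=True,
                threatens_own_existence=True))  # True
```

Note how all the difficulty hides inside the input flags: deciding whether an action "harms a human" is the unsolved part, and the ordering logic is trivial by comparison.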



imo those AIs you're saying are dangerous to f**k about with are as dangerous as f**king about with humanity. all life is fragile, all life is violent and all life is beautiful.

    i sleep better at night knowing you have thought of every possible deterrent for AI.

    good read your post is, thank you for sharing your opinion.

    Last edited by invegauser; 14-12-2018 at 15:02.

  10. #10
    Moderator
    Drug Studies
    neversickanymore's Avatar
    Join Date
    Jan 2013
    Location
    babysitting the argument in my head
    Posts
    22,458
A relevant article.

    https://futurism.com/experts-artific...er-interfaces/

What's interesting about interacting with a silent speech AI interface is the number of thoughts we have that need to stay private. It's going to be really interesting to see if privacy exists at all in the near future. Things like remote prison are going to be a real possibility.

  11. #11
    I have.

    But WE are fallible. That means we can screw up, a loophole in the code, a directive misinterpreted by the AI...hell, this thread has already had one tell us 'no' and a warning like 'be nice'...that is rather a bad sign IMO. Just imagine, had that AI been connected to a critical system or systems, it could have done far worse than issue a verbal warning, it could have directed an industrial laser spot-welder at someone's eyes, depressurized a spacecraft, you get the idea.

    A perfect AI might obey such laws, but perfection requires perfect creators. We aren't, shit happens, and if a human intellect can go psychotic....looks like an AI already has.

There is also another issue I could see. Let's say we build a machine with actual consciousness... there is both the problem of 'what will we subject it to'

    Going from what researchers inflict on mice, monkeys, dogs, rats...could end up with some seriously sick shit.

And the second, again, is inspired by sci-fi reading. This time Warhammer 40K: it's inspired by the Adeptus Astartes (Space Marine) dreadnoughts. When an Astartes is critically injured, basically meat paste that even their biologically manipulated physiology can't repair, they can be interred in a metal sarcophagus which is able to interface with a dreadnought, basically a big fucking great assault walker.

Problem is, the being inside is effectively immortal, as an AI would be: spending the rest of eternity cut off from its senses, no longer feeling the natural sensations a human (or trans-human, rather) was intended to feel. Just think what a human intelligence rendered incapable of dying would end up as, with potentially thousands of years of sensory deprivation when inactive, and even when active, feeling nothing.

If we go via the bio-mimetic route and create a functioning AI human brain, will the researchers think to give it the appropriate human senses? Or are they going to leave it blind, incapable of tactile and proprioceptive sensory feedback? There are going to be things in a human brain that a robotic one can never have, such as hormonal feedback from all sorts of squishy endocrine bits, sight, and touch. Assuming an AI of that kind ever did become sentient, I could see it going psychotic very easily. Just picture being created disabled and immortal: sensory deprivation for eternity, bar being experimented on.

    Unless we are careful, there'll be some fucked up shit going on when we get successful.
    Last edited by Limpet_Chicken; 15-12-2018 at 02:22.

  12. #12
    Administrator CFC's Avatar
    Join Date
    Mar 2013
    Location
    The Shire
    Posts
    10,139
    Quote Originally Posted by Limpet_Chicken View Post
    But WE are fallible. That means we can screw up, a loophole in the code, a directive misinterpreted by the AI...hell, this thread has already had one tell us 'no' and a warning like 'be nice'...that is rather a bad sign IMO. Just imagine, had that AI been connected to a critical system or systems, it could have done far worse than issue a verbal warning, it could have directed an industrial laser spot-welder at someone's eyes, depressurized a spacecraft, you get the idea.

    A perfect AI might obey such laws, but perfection requires perfect creators. We aren't, shit happens, and if a human intellect can go psychotic....looks like an AI already has.
    This, really.

I'm reminded of the number of multi-million/billion dollar space probes we've sent out with just the tiniest programming glitch that have consequently suffered critical failures or exploded. And these projects involved so much more careful checking and testing of code than anything else we do on earth. Humans fuck up constantly, and those fuck-ups carry through to everything we do. And if we then give things (semi-intelligent/AI systems) the ability to control critical things, then even if they're not being malevolent, you can still end up with a HAL 9000 scenario...
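A classic real-world instance of the "tiniest glitch" failure mode is a unit mismatch: the Mars Climate Orbiter was lost because one component reported impulse in pound-force seconds while the trajectory software expected newton seconds. A minimal Python sketch of the defensive fix, normalizing to SI at the boundary; the function name and values are illustrative, not the mission's actual code:

```python
# Sketch of the Mars Climate Orbiter failure mode: a pound-force vs
# newton confusion. Names and values here are illustrative only.

LBF_TO_N = 4.44822  # 1 pound-force expressed in newtons

def impulse_in_newton_seconds(impulse, unit):
    """Normalize an impulse reading to SI before anyone integrates it.
    Rejecting unknown units loudly beats silently mixing systems."""
    if unit == "N*s":
        return impulse
    if unit == "lbf*s":
        return impulse * LBF_TO_N
    raise ValueError("unknown unit: " + unit)

# Omitting this conversion is a silent factor-of-4.45 error:
print(impulse_in_newton_seconds(10.0, "lbf*s"))  # 44.4822
```

The real lesson is the interface contract: carrying the unit with the number, and refusing values whose unit isn't recognized, turns a silent trajectory error into an immediate, visible failure.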

  13. #13
    Precisely. Or, of course, you get a human with a taste for black-hatted activity of the blackest sort. And you get SHODAN type scenarios.

The day any AI is given control over a human-critical system is the day I build myself a decently powered EMP rifle with an underslung paired gauss rifle in some antipersonnel caliber, 5.56 or 7.62, paired with a coil launcher to propel HE grenades. I don't trust the product of any fallible designer to be given vital-systems-level privileges of any kind, whether we have hard-wired overrides or not.

Because they will be smart, and they will get smarter. They'll get smarter than we can, faster than we can, and, well, do YOU want to trust in the benevolence of an AI... that even if it isn't malicious outright, what IT thinks is best for us is what WE would decide is best for us?

Sorry, I don't. The day that happens, out will come the vircators, waveguides and Marx generators with HIGH-end caps (think the kind that will need a laser-triggered spark gap to switch).
    Last edited by Limpet_Chicken; 24-12-2018 at 01:35.

  14. #14
    @Shadowmeister: from another science thread
    Quote Originally Posted by Shadowmeister
    For now I just wanted to say that I'm continually impressed by the depth and breadth of your knowledge, LC.
very true, remarkable isn't it. computing, processing, absorption, memory, structure. one day a computer will be made with his capabilities and even surpass them, but thankfully it will miss one crucial factor that will never be as unique as him... being him.

    @Limpet_Chicken: i like your approach to dyscalculia. i prefer to think of it as a natural circuit breaker your mind has to keep you coming back to being human as well as unique. balance.

    i would like to finish addressing the topic of AI though...

    i've been watching Last Hope (netflix's attempt at anime) and something was said that sparked my interest back into gear.

    M "Daisy Bell is the first song a computer ever sang back in 1961."

    W "There was a famous movie scene too."

    M "In novels and movies, computers who become self-aware want to sing for some reason."

    W "What comes after singing then?"

    M "Well..."

    (scene where AI hacks city central computer)

    M "What AIs want next after singing is always immortality."

kinda funny how AIs and humans want the same thing. human error leading to the "inevitable" (pun intended) takeover of AI. try to disseminate one part at a time though. don't focus on the human error, that's a given (though always to be aware of, true), and don't look at the end result of the programming but at the code itself. if the coding is infallible then the effectiveness of the coding pertaining to those 3 laws of robotics will come to fruition in perfection in time (yes, i understand your non-affinity for math and i'm working around that). hence why i said go ahead and let humanity work on it, just keep it confined and contained until it is a reasonable life form.

i agree with you, true perfect AI will be implemented only when a computer or robot creates it, because we lack the necessary perfection to do so, but that doesn't mean we won't have a hand in it. that computer/robot will be made by us, it will have our imprint on it and therefore not only have a part of us but also those 3 laws (or something similar) as a fail-safe. it is only humanity that won't be able to stick to something ridiculously simple all the time, it is part of our nature. we run in an infinite chaotic loop that has a straight line interceding it on the 3rd or 4th plane. machines run consistently on a straight line.

also keep in mind: AI will not run on a straight line, it will run on a loop as we do, but the main drive, until it has a few lifetimes of experiences, will be to run in a straight line, much like human children do.

do not focus solely on AI takeover (mainly cause i know whose team i would want to be a part of if it did happen), a dystopic future is unavoidable. instead understand 2 things. 1. humanity will always be collapsing and rebounding, it is part of our nature, how we learn, part of our chaos, and the only finite part of life that we will ever bump into is the end of all things (if all goes well enough); therefore we will always have our own advantage in order to overcome our AI lords in just such an event.

2. AI, having that human-like touch as with anything we create (human inventions are based off what we know of ourselves and the world around us; nothing we make or invent is truly something that is not a manipulation of the universe around us; cept maybe for babies, true magic!), will be lent a piece of us as well. i will trust AI even less than i do humans but i will still choose to interact with it to some degree. i will not buy anything for my home that has AI in it. that is one of the only true and free places that anyone gets to have in this life and i want it to be free of others at my choosing. but i will engage with it to some degree if it happens in my lifetime (which i highly doubt).

    it is important to keep in mind the other human factor as well... the soul/spirit. once true AI develops self awareness and consciousness then if it can have a ghost this would allow us as a species to recognize it as an independent life form and hopefully treat it as such. this is also one aspect i look forward to.


    what does the song Daisy Bell have to do with this thread? https://en.wikipedia.org/wiki/Daisy_Bell

    In 1961, an IBM 704 at Bell Labs was programmed to sing "Daisy Bell" in the earliest demonstration of computer speech synthesis.
    Science fiction author Arthur C. Clarke witnessed the IBM 704 demonstration and referenced it in the 1968 novel and film 2001: A Space Odyssey, in which the HAL 9000 computer sings "Daisy Bell" during its gradual deactivation.
    Quote Originally Posted by OP link
He then helps Gerst complete a task — and responds to a request to play the song Man Machine by Kraftwerk.
This proved to be the trigger.
CIMON appears to have liked the song so much, refusing to turn it off.
ESA astronaut Alexander Gerst instructed CIMON: "Cancel music".
CIMON outright ignored the command.
Gerst then tried making some other requests. CIMON preferred the music.
    music is one of the first things humans partook in when we came into this world. we recognized the harmony it had within and outside of us. granted it looked a lot different back then but we all hear the rhythm of the song of the universe and life inside of us; as well as our own individual and unique tune. when true AI comes into being, i have a feeling it will too.

    that or we could fall back on plan b:

There's a priest, a minister, and a rabbi. They're out playing golf, and they're trying to decide how much to give to charity. So the priest says, "We'll draw a circle on the ground, we'll throw the money way up in the air, and whatever lands inside the circle, we give to charity." The minister says no. "We'll draw a circle on the ground, throw the money way up in the air, and whatever lands outside of the circle, that's what we'll give to charity." The rabbi says "No, no, no. We'll throw the money way up in the air, and whatever God wants, he keeps!"


    Last edited by invegauser; 30-12-2018 at 08:17.
