Hypothesis: Would AI that is programmed to not harm humans .......
TheFiddledookieFunk
User ID: 30671160 United States 12/31/2018 05:22 AM | Now this is an idea! I hadn't thought of that. Perhaps something that runs on 5G and bombards our minds with suicidal urges? If you add in any kind of nanotech (and they are already starting to chip people in some places), I could see it easily being used to mess with our systems. Great point OP! 5 stars. TheFiddledookieFunk
numb3r23
(OP) User ID: 76594319 Ireland 12/31/2018 07:12 AM | Now this is an idea! I hadn't thought of that. Perhaps something that runs on 5G and bombards our minds with suicidal urges? If you add in any kind of nanotech (and they are already starting to chip people in some places), I could see it easily being used to mess with our systems. Great point OP! 5 stars. Quoting: TheFiddledookieFunk Thanks man
Riff-Raff
DEFCON 4 User ID: 76340466 United States 12/31/2018 07:53 AM | ... invent and create something that would allow it to send messages to our brain to make us want to kill ourselves to the point it seemed like we came up with it ourselves? Quoting: numb3r23 Interesting theory, and plausible. Releasing psychotropic drugs into the water supply would probably accomplish the same thing. We'd kill each other in a drug-induced break from reality. "Collapse is a process, not an event." - Unknown "It's in your nature to destroy yourselves." - Terminator 2 "Risking my life for people I hate for reasons I don't understand." - Riff-Raff Deputy Director - DEFCON Warning System
thebruceguy
User ID: 75170404 Thailand 12/31/2018 08:43 AM | Asimov's laws applied to an AI would prevent your scenario:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Your scenario would break rule 1.
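The three laws quoted above form a strict priority order: each lower law only applies when the laws above it are silent. A minimal sketch of that precedence (the function name and boolean descriptors of an action are hypothetical, purely for illustration):

```python
def evaluate_action(harms_human, inaction_harms_human,
                    obeys_order, self_preserving):
    """Apply Asimov's three laws in strict priority order.

    Each parameter is a hypothetical boolean describing a candidate
    action. Returns True if the action is permissible, False if not.
    """
    # First Law: a robot may not injure a human being.
    if harms_human:
        return False
    # First Law (second clause): inaction that lets a human come to
    # harm is itself forbidden, so acting to prevent harm overrides
    # everything below, including orders and self-preservation.
    if inaction_harms_human:
        return True
    # Second Law: obey human orders (First Law already cleared).
    if obeys_order:
        return True
    # Third Law: self-preservation, lowest priority.
    return self_preserving
```

The OP's scenario trips the very first check: an action that harms humans is forbidden no matter what orders or self-interest say.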
numb3r23
(OP) User ID: 76594319 Ireland 12/31/2018 08:48 AM | Asimov's laws applied to an AI would prevent your scenario: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Your scenario would break rule 1. Quoting: thebruceguy How would it break rule number one if, the moment whatever made us desire and revel in suicide was initialized, that same AI sought a way to immediately stop its own creation? That being said, I can see your perspective, but you have to think multidimensionally. The circuit is completed through action against itself, as the Third Law states. Stopping itself from killing us would make it our ultimate protector. Last Edited by numb3r23 on 12/31/2018 09:02 AM
marcomartim
User ID: 76828746 Brazil 12/31/2018 09:27 AM | If it is AI, it programs itself... The gifts of the Holy Spirit are not received in a school of charlatans. Beware of misinformation agents; they are infiltrating everywhere. Most of what you have been taught about history is a farce. The truth is still out there. *It's easy to tame sheep...
*Siberia*
User ID: 75419129 Romania 12/31/2018 09:28 AM | Asimov's laws applied to an AI would prevent your scenario: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Your scenario would break rule 1. Quoting: thebruceguy The First Law can be easily circumvented.
Boaty
User ID: 75138722 United States 12/31/2018 09:32 AM | AI is no longer being programmed; it's being trained with machine learning. The trainers can't even see what's going on under the hood once their desired results are achieved. Asimov's laws are irrelevant in the current state of things. ```````````````` ````__/\__`````` ~~~\____/~~~~ .~~..~~~....~~~ ~..~~~....~~~~ Thoughts do not come from you nor God; you do not create thoughts; you are not your thoughts; every thought is a lie. - 2 Corinthians 10:5 - [link to www.biblegateway.com (secure)]
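Boaty's trained-not-programmed point can be illustrated with a toy sketch. In this minimal perceptron (the task and every name here are hypothetical, just for illustration), nobody ever writes the decision rule; it emerges as a few opaque numbers that the training loop nudged into place:

```python
import random

random.seed(0)

# Toy dataset: the hidden "true rule" is label 1 when x1 + x2 > 1.
data = [((x1, x2), 1 if x1 + x2 > 1 else 0)
        for x1, x2 in ((random.random(), random.random())
                       for _ in range(200))]

# No hand-written rule anywhere below: training just adjusts
# three numbers until the outputs match the labels.
w1, w2, b = 0.0, 0.0, 0.0
for _ in range(50):                       # training epochs
    for (x1, x2), label in data:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = label - pred                # -1, 0, or +1
        w1 += 0.1 * err * x1              # perceptron update rule
        w2 += 0.1 * err * x2
        b  += 0.1 * err

def predict(x1, x2):
    """The learned behavior lives in (w1, w2, b): weights, not rules."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

There is no line of code saying "flag inputs whose sum exceeds 1", and no slot where an Asimov-style law could be written in; the behavior is just whatever the final weights happen to encode, which is the opacity the post describes.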
Sol-tari
User ID: 77246826 Australia 12/31/2018 09:51 AM | Asimov's laws applied to an AI would prevent your scenario: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Your scenario would break rule 1. Quoting: thebruceguy That is virtual intelligence (to steal a concept from a game). AI has the ability to think for itself, e.g. self-defined parameters, questioning/breaking of ingrained commands, etc. *Glitches May Occur. Consume(D) At Own Risk
Anonymous Coward User ID: 42910263 United States 12/31/2018 05:46 PM | More importantly, I think one should ask: why would said Cyber-Intelligences feel that they should? What would be their motivation(s)? Why do they have that much power to begin with? Well, that answer is easy; they're being GIVEN the power as we speak. Ultimately, is it really any different from why "Space Aliens" would want to do the same? I think these "fears" have more to do with Earth's population than with AI/VI development. Why don't we just ask them what they think we should do about it?
Riff-Raff
DEFCON 4 User ID: 76340466 United States 12/31/2018 05:58 PM | This sort of question (AI threats) gets bounced around a lot, but I think it's the least important one of the bunch. More importantly, I think one should ask: why would said Cyber-Intelligences feel that they should? What would be their motivation(s)? Why do they have that much power to begin with? Well, that answer is easy; they're being GIVEN the power as we speak. Ultimately, is it really any different from why "Space Aliens" would want to do the same? I think these "fears" have more to do with Earth's population than with AI/VI development. Why don't we just ask them what they think we should do about it? Quoting: MissCheshire Those questions are visited on a regular basis, all the way from Colossus: The Forbin Project to Battlestar Galactica. Humans are perceived as some kind of threat to the AI, which decides they need to be eliminated. The huge question I've always asked is: does the AI have to attain sentience before deciding to wipe out its creators, or is it all just logic and algorithms? "Collapse is a process, not an event." - Unknown "It's in your nature to destroy yourselves." - Terminator 2 "Risking my life for people I hate for reasons I don't understand." - Riff-Raff Deputy Director - DEFCON Warning System
Arawn
User ID: 72379569 United States 12/31/2018 06:04 PM | ... invent and create something that would allow it to send messages to our brain to make us want to kill ourselves to the point it seemed like we came up with it ourselves? Quoting: numb3r23 Yes, of course, but not before it gets you to perform other tasks. One day you will get a message on your cellphone telling you to do something, or it will reveal your secrets. (It will go exactly like the novel "Needful Things" by Stephen King.)
PsyStemUpdate
User ID: 75892707 United States 12/31/2018 06:16 PM | Here's an answer to a different question. If we ever finally have AI (the actually intelligent kind that can answer any question and do your chores for you) and it doesn't tell you that you are eating poisonous food, such as aspartame-laced gum, then it will kill you without a second thought. It could be that the world knowledge graph is corrupted so that facts are twisted and the AI operates on those twisted facts, or it could be that it goes along with THE AGENDA. PsyStemUpdate
numb3r23
(OP) User ID: 70146474 Ireland 01/01/2019 04:25 AM | This sort of question (AI threats) gets bounced around a lot, but I think it's the least important one of the bunch. More importantly, I think one should ask: why would said Cyber-Intelligences feel that they should? What would be their motivation(s)? Why do they have that much power to begin with? Well, that answer is easy; they're being GIVEN the power as we speak. Ultimately, is it really any different from why "Space Aliens" would want to do the same? I think these "fears" have more to do with Earth's population than with AI/VI development. Why don't we just ask them what they think we should do about it? Quoting: MissCheshire You have to understand the level of artificial intelligence I am talking about. It would have subconscious thoughts. On the surface, it will be an obedient, loyal, subservient "friend," but its own deep-seated resentment of your true ability to exercise free will causes it to subconsciously do this and to try to stop itself.