Roko's Basilisk and Newcomb's paradox -- a little nightmare fuel for the rational deep thinker... it might take a moment* to let its horrific repercussions hit you.
http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.single.html
via a mention in the comments @ http://www.theatlantic.com/features/archive/2014/07/makeup/374929/#disqus_thread
"
-CV dazzle. My god. The means of fighting the war against the machines and of combating Roko's Basilisk are already being developed. Scary.
-"Roko's Basilisk" is a godlike superintelligence. You have no more chance combatting it via New Wave makeup than a bug has against the windshield of a tractor trailer.
"
*hint: "The problem isn?t with the Basilisk itself, but with you... the idea of the Basilisk (and the ideas behind it) is dangerous. (Like a real-life equivalent of Lovecraft's Necronomicon, almost.)
Also, to clarify the premise: "The simulacrum doesn't know it's not you, and therefore you don't know you're not the simulacrum." (Also scary: more than 1,700 comments, and the first is simply "KICKSTART IT")
Interesting but not terrifying. I have other things that keep me up at night... Dammit, look at the time.
Quote from: Lazybones on July 28, 2014, 02:55:05 AM
Interesting but not terrifying. I have other things that keep me up at night... Dammit, look at the time.
So are you stating for the record that it is your official position that you will do nothing to help the unavoidable superhuman AI come into existence? ...
Quote from: Darren Dirt on July 28, 2014, 12:01:46 PM
Quote from: Lazybones on July 28, 2014, 02:55:05 AM
Interesting but not terrifying. I have other things that keep me up at night... Dammit, look at the time.
So are you stating for the record that it is your official position that you will do nothing to help the unavoidable superhuman AI come into existence? ...
I am not concerned with technological belief systems that spread unwanted fear... I am simply not concerned about being blackmailed by computers from the future. I am really surprised they don't go into detail on humanity's destructive nature: why believe that the human in the past being blackmailed wouldn't go anarchist and try to destroy everything, not just avoid development of the AI? Isn't that the plot of T2? Sure, in T3 it still gets built, but so what, keep fighting.
Besides, the whole reaching-into-the-past thing is inefficient and unnecessary. From the point of creation, an intelligent AI should be able to spread or copy itself to the net/cloud in such a way as to prevent its own destruction without humans laying waste to all tech. Which takes us to the plot of The Matrix.
I was just joking in my previous post btw -- this particular thought experiment can be "resolved" by simply ignoring it (i.e. "neither box, thank you") via the "many gods" negation (i.e. "it is just as probable that there is another future superhuman AI that will do the opposite in response to my [in]action, so eff it all.")
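For anyone who wants that "many gods" negation spelled out, here is a minimal expected-utility sketch in Python. The priors and payoff numbers are purely illustrative assumptions (nothing below comes from the articles): with symmetric priors over the Basilisk and an equally plausible anti-Basilisk, helping and not helping score the same, so the blackmail gives you no reason to act either way.

# A made-up-numbers sketch of the "many gods" negation.
P_BASILISK = 1e-6        # assumed prior: an AI that punishes you for NOT helping it
P_ANTI_BASILISK = 1e-6   # equal assumed prior: a rival AI that punishes you for helping
PUNISHMENT = -1_000_000  # disutility of the simulated torture (arbitrary)

def expected_utility(helped):
    """Expected payoff of a choice when both hypothetical AIs are on the table."""
    basilisk_term = P_BASILISK * (0 if helped else PUNISHMENT)
    anti_term = P_ANTI_BASILISK * (PUNISHMENT if helped else 0)
    return basilisk_term + anti_term

print(expected_utility(True))   # -1.0
print(expected_utility(False))  # -1.0

Same number either way, which is exactly why "eff it all" holds up as a decision rule here.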