<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: AI Researchers On AI Risk</title>
	<atom:link href="http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 18:31:19 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: Anatoly Yakovlev</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-209524</link>
		<dc:creator><![CDATA[Anatoly Yakovlev]]></dc:creator>
		<pubDate>Mon, 08 Jun 2015 15:07:03 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-209524</guid>
		<description><![CDATA[Thank you! Your summary is very dense and to the point. I&#039;ve only gotten through the first reference so far. Looking forward to digesting the rest in the very near future. I love the &quot;self&quot;-assigning fractals, btw (edited &lt;5 mins after post ;).

One point that is probably already addressed here, but worth reiterating nevertheless, is: &quot;what happens when an AI gets stolen, taken advantage of, and then used maliciously, which may cause a runaway?&quot; Do all the necessary &quot;hooks&quot; exist to &quot;unplug&quot; the runaway convicts, especially when these programs could be getting sponsored by &quot;(Even dumb human) investors&quot; @SUT?
(Now edited 7 times, still within &lt;17 mins of the original post)

http://www.brightandassociates.com.au/wordpress/a-twitter-brief-summary-of-the-chaos-theory-of-careers/

AI - Rules!
1. 
 2. 
  3.
   4.
    5.
     6.
      7.
       8.
        9.
         10?

~AY]]></description>
		<content:encoded><![CDATA[<p>Thank you! Your summary is very dense and to the point. I&#8217;ve only gotten through the first reference so far. Looking forward to digesting the rest in the very near future. I love the &#8220;self&#8221;-assigning fractals, btw (edited &lt;5 mins after post ;).</p>
<p>One point that is probably already addressed here, but worth reiterating nevertheless, is: &quot;what happens when an AI gets stolen, taken advantage of, and then used maliciously, which may cause a runaway?&quot; Do all the necessary &quot;hooks&quot; exist to &quot;unplug&quot; the runaway convicts, especially when these programs could be getting sponsored by &quot;(Even dumb human) investors&quot; @SUT?<br />
(Now edited 7 times, still within &lt;17 mins of the original post)</p>
<p><a href="http://www.brightandassociates.com.au/wordpress/a-twitter-brief-summary-of-the-chaos-theory-of-careers/" rel="nofollow">http://www.brightandassociates.com.au/wordpress/a-twitter-brief-summary-of-the-chaos-theory-of-careers/</a></p>
<p>AI &#8211; Rules!<br />
1.<br />
 2.<br />
  3.<br />
   4.<br />
    5.<br />
     6.<br />
      7.<br />
       8.<br />
        9.<br />
         10?</p>
<p>~AY</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '209524', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dangerous AI and publicly available computing resources</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-208927</link>
		<dc:creator><![CDATA[Dangerous AI and publicly available computing resources]]></dc:creator>
		<pubDate>Sat, 06 Jun 2015 10:59:35 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-208927</guid>
		<description><![CDATA[[&#8230;] human intelligence. (Here, for example, is a collection of statements by scientists working in the field of AI on the risks associated with the prospects of these [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] human intelligence. (Here, for example, is a collection of statements by scientists working in the field of AI on the risks associated with the prospects of these [&#8230;]</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '208927', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Goya</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-208826</link>
		<dc:creator><![CDATA[Goya]]></dc:creator>
		<pubDate>Sat, 06 Jun 2015 01:42:48 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-208826</guid>
		<description><![CDATA[Assume it is done right. By the end of this century, math and science teaching will not exist anymore: not at Harvard, MIT, Cambridge, ..., anywhere. The only way not to lose is not to play.]]></description>
		<content:encoded><![CDATA[<p>Assume it is done right. By the end of this century, math and science teaching will not exist anymore: not at Harvard, MIT, Cambridge, &#8230;, anywhere. The only way not to lose is not to play.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '208826', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: James</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-207276</link>
		<dc:creator><![CDATA[James]]></dc:creator>
		<pubDate>Sun, 31 May 2015 22:51:16 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-207276</guid>
		<description><![CDATA[&lt;blockquote&gt;the machine will develop a mind of its own, start bootstrapping itself to deity-level omniscience and take over the world reducing us to its slaves or else dead&lt;/blockquote&gt;

Could you explain why the commonly cited risk scenarios (e.g. the paperclip one) seem unlikely to you? I find them pretty convincing.]]></description>
		<content:encoded><![CDATA[<blockquote><p>the machine will develop a mind of its own, start bootstrapping itself to deity-level omniscience and take over the world reducing us to its slaves or else dead</p></blockquote>
<p>Could you explain why the commonly cited risk scenarios (e.g. the paperclip one) seem unlikely to you? I find them pretty convincing.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '207276', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Link for May 2015 - foreXiv</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-207271</link>
		<dc:creator><![CDATA[Link for May 2015 - foreXiv]]></dc:creator>
		<pubDate>Sun, 31 May 2015 21:42:12 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-207271</guid>
		<description><![CDATA[[&#8230;] artificial intelligence, but don&#8217;t the actual AI researchers think this is a bunch of hooey? Only if you cherry pick, notes Scott A. In particular, if you&#8217;re willing to trust the survey skills of people with [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] artificial intelligence, but don&#8217;t the actual AI researchers think this is a bunch of hooey? Only if you cherry pick, notes Scott A. In particular, if you&#8217;re willing to trust the survey skills of people with [&#8230;]</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '207271', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Fantastica &#124; What We&#8217;re Watching: Ex Machina</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-207019</link>
		<dc:creator><![CDATA[Fantastica &#124; What We&#8217;re Watching: Ex Machina]]></dc:creator>
		<pubDate>Sun, 31 May 2015 00:16:39 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-207019</guid>
		<description><![CDATA[[&#8230;] Then it was some researchers disagreeing with them. Then there are the bloggers talking about the researchers not being talked about. Nick Bostrom released his book Superintelligence recently, and the web is abuzz. So the release of [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] Then it was some researchers disagreeing with them. Then there are the bloggers talking about the researchers not being talked about. Nick Bostrom released his book Superintelligence recently, and the web is abuzz. So the release of [&#8230;]</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '207019', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: skyPickle</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-206880</link>
		<dc:creator><![CDATA[skyPickle]]></dc:creator>
		<pubDate>Sat, 30 May 2015 15:30:35 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-206880</guid>
		<description><![CDATA[Why does everyone speak of AI as a single entity? Is there any doubt that AIs will spawn multiple copies, if for no other reason than redundancy, safety, and backup?

And when there are multiple AIs, why do we assume they will converge on the same conclusions for a given set of data? One AI may choose to shut off all electricity to the Middle East to stop conflict. Another might simply release drones full of Prozac.

And if there are multiple AIs with different agendas, why would they not come into conflict? Humans would be a trivial non-threat, and they would focus on subverting each other.

I predict that within a short period of the appearance of self-evolving AI, there will be an AI conflict.]]></description>
		<content:encoded><![CDATA[<p>Why does everyone speak of AI as a single entity? Is there any doubt that AIs will spawn multiple copies, if for no other reason than redundancy, safety, and backup?</p>
<p>And when there are multiple AIs, why do we assume they will converge on the same conclusions for a given set of data? One AI may choose to shut off all electricity to the Middle East to stop conflict. Another might simply release drones full of Prozac.</p>
<p>And if there are multiple AIs with different agendas, why would they not come into conflict? Humans would be a trivial non-threat, and they would focus on subverting each other.</p>
<p>I predict that within a short period of the appearance of self-evolving AI, there will be an AI conflict.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206880', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Houshalter</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-206799</link>
		<dc:creator><![CDATA[Houshalter]]></dc:creator>
		<pubDate>Sat, 30 May 2015 08:29:20 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-206799</guid>
		<description><![CDATA[That seems extremely unlikely. Consciousness isn&#039;t static. If I froze your brain, that&#039;s not &quot;you&quot;; it may contain all of your information, but it&#039;s not &lt;i&gt;doing&lt;/i&gt; anything.

So yes, somewhere along pi there is a perfect description of all the information in my brain. But that&#039;s really not meaningful; there&#039;s also a copy of every possible combination of bits. Until you actually feed those bits into a Turing machine and execute a program that does &lt;i&gt;computation&lt;/i&gt;, they don&#039;t matter.

But there is no interesting computation going on inside pi. It&#039;s just a static mathematical object, one which hasn&#039;t even been computed beyond a few trillion digits or so.]]></description>
		<content:encoded><![CDATA[<p>That seems extremely unlikely. Consciousness isn&#8217;t static. If I froze your brain, that&#8217;s not &#8220;you&#8221;; it may contain all of your information, but it&#8217;s not <i>doing</i> anything.</p>
<p>So yes, somewhere along pi there is a perfect description of all the information in my brain. But that&#8217;s really not meaningful; there&#8217;s also a copy of every possible combination of bits. Until you actually feed those bits into a Turing machine and execute a program that does <i>computation</i>, they don&#8217;t matter.</p>
<p>But there is no interesting computation going on inside pi. It&#8217;s just a static mathematical object, one which hasn&#8217;t even been computed beyond a few trillion digits or so.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206799', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Houshalter</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-206797</link>
		<dc:creator><![CDATA[Houshalter]]></dc:creator>
		<pubDate>Sat, 30 May 2015 08:19:19 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-206797</guid>
		<description><![CDATA[&gt;it’s much harder for the human mind to use parallelism efficiently and correctly

I don&#039;t think that&#039;s correct. In AI applications that make extensive use of GPUs, like deep learning, the programmer just calls a matrix multiplication library that handles everything. They don&#039;t need to know anything about parallel computing; it just works.

And there is still plenty of room at the bottom. Current implementations use large numbers of floating-point multiplications. It&#039;s been shown that these work just fine with many fewer bits of precision. And I think these operations are wasteful and could possibly be replaced with something cheaper to compute.

Implementing these functions exactly also takes many transistors, when we don&#039;t really need that kind of accuracy. One paper has shown that if you can tolerate 1% errors, you can reduce the number of transistors by up to 10,000. We may even be able to go back to analog.

Nor do you need general-purpose computing and all the overhead it requires.

Of course there is no point in doing this until we get the algorithms to work and show that they scale well. But once that happens, within a few years we could speed them up thousands of times by implementing them in hardware.]]></description>
		<content:encoded><![CDATA[<p>&gt;it’s much harder for the human mind to use parallelism efficiently and correctly</p>
<p>I don&#8217;t think that&#8217;s correct. In AI applications that make extensive use of GPUs, like deep learning, the programmer just calls a matrix multiplication library that handles everything. They don&#8217;t need to know anything about parallel computing; it just works.</p>
<p>And there is still plenty of room at the bottom. Current implementations use large numbers of floating-point multiplications. It&#8217;s been shown that these work just fine with many fewer bits of precision. And I think these operations are wasteful and could possibly be replaced with something cheaper to compute.</p>
<p>Implementing these functions exactly also takes many transistors, when we don&#8217;t really need that kind of accuracy. One paper has shown that if you can tolerate 1% errors, you can reduce the number of transistors by up to 10,000. We may even be able to go back to analog.</p>
<p>Nor do you need general-purpose computing and all the overhead it requires.</p>
<p>Of course there is no point in doing this until we get the algorithms to work and show that they scale well. But once that happens, within a few years we could speed them up thousands of times by implementing them in hardware.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206797', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Houshalter</title>
		<link>http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/#comment-206792</link>
		<dc:creator><![CDATA[Houshalter]]></dc:creator>
		<pubDate>Sat, 30 May 2015 08:06:09 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=3644#comment-206792</guid>
		<description><![CDATA[There is a theoretical model of AI called AIXI. AIXI cannot deal with self-reference. It assumes that it&#039;s an observer existing outside of the universe, looking in on us. It&#039;s vulnerable to beliefs in an afterlife, and to hijacking its own reward signal (e.g. drugs). It can&#039;t comprehend that it is just a computer that actually exists in the world it is observing.

I think it&#039;s interesting that humans have the same issues to some extent. It may actually explain our lack of intuition about these things. We don&#039;t actually think of ourselves as just physical objects. We feel like we are somehow beyond that, some kind of ghost or soul.

No matter how much evidence we are presented with, this intuition doesn&#039;t seem to go away. Just as AIXI could look at its own logic circuits but still believe it exists outside the universe.]]></description>
		<content:encoded><![CDATA[<p>There is a theoretical model of AI called AIXI. AIXI cannot deal with self-reference. It assumes that it&#8217;s an observer existing outside of the universe, looking in on us. It&#8217;s vulnerable to beliefs in an afterlife, and to hijacking its own reward signal (e.g. drugs). It can&#8217;t comprehend that it is just a computer that actually exists in the world it is observing.</p>
<p>I think it&#8217;s interesting that humans have the same issues to some extent. It may actually explain our lack of intuition about these things. We don&#8217;t actually think of ourselves as just physical objects. We feel like we are somehow beyond that, some kind of ghost or soul.</p>
<p>No matter how much evidence we are presented with, this intuition doesn&#8217;t seem to go away. Just as AIXI could look at its own logic circuits but still believe it exists outside the universe.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '206792', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
</channel>
</rss>
