<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: If The Media Reported On Other Dangers Like It Does AI Risk</title>
	<atom:link href="http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 22:42:31 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: Matt C</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-143535</link>
		<dc:creator><![CDATA[Matt C]]></dc:creator>
		<pubDate>Sun, 07 Sep 2014 14:51:13 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-143535</guid>
		<description><![CDATA[Thanks for the reply, Daniel.

Part of what I'm doing is trying to understand what the AI-is-coming folks mean by "AI".

What you're describing seems much more possible, and much less powerful and dangerous, than an actually conscious and near-omniscient AI.

I also don't see how this AI can be described as friendly or unfriendly, labels I see attached to these conversations a lot.

I'd describe what you're talking about as improved business planning software. I am sure there are biz software packages existing today that will give you estimated profit projections based on revisions to your biz inputs.

Of course these are only as good as the programming and the data (and the willingness to listen to them), but it seems believable they may get quite good over time.

I don't think the software used by shoe manufacturers will be the same as the software used by crop farmers, though it might have some pieces in common. If you agree that what you're talking about is plausibly the descendant of biz planning software, that would point to a variety of AIs, not one that gets control over everything.

I don't think anyone is going to hook up their entire production apparatus to the output of a planning package like this. Even if the software calculates that my highest return is in switching to sorghum and finally buying the newest model of the Caterpillar automated tractor, I'm not going to let it ship out the orders for me; I'm going to want to review the options myself first.

I suppose there would be some decisions I would trust directly to the machine, and over time the zone of trust is likely to increase. I think the trusted zone is going to have to prove itself continually in order to stay where it is and/or expand, though.

I suppose you can imagine a disaster scenario where humans have trusted more and more of their production capability to software decision making, and one day it all suddenly goes berserk. Something like the analog of a flash crash from automated high-frequency trading. Unless we have ceded complete control to our planning software (let's not do that), even this sounds more like an expensive ugly mess than a global catastrophe.

Interested to hear where you disagree.]]></description>
		<content:encoded><![CDATA[<p>Thanks for the reply, Daniel.</p>
<p>Part of what I'm doing is trying to understand what the AI-is-coming folks mean by "AI".</p>
<p>What you're describing seems much more possible, and much less powerful and dangerous, than an actually conscious and near-omniscient AI.</p>
<p>I also don't see how this AI can be described as friendly or unfriendly, labels I see attached to these conversations a lot.</p>
<p>I'd describe what you're talking about as improved business planning software. I am sure there are biz software packages existing today that will give you estimated profit projections based on revisions to your biz inputs.</p>
<p>Of course these are only as good as the programming and the data (and the willingness to listen to them), but it seems believable they may get quite good over time.</p>
<p>I don't think the software used by shoe manufacturers will be the same as the software used by crop farmers, though it might have some pieces in common. If you agree that what you're talking about is plausibly the descendant of biz planning software, that would point to a variety of AIs, not one that gets control over everything.</p>
<p>I don't think anyone is going to hook up their entire production apparatus to the output of a planning package like this. Even if the software calculates that my highest return is in switching to sorghum and finally buying the newest model of the Caterpillar automated tractor, I'm not going to let it ship out the orders for me; I'm going to want to review the options myself first.</p>
<p>I suppose there would be some decisions I would trust directly to the machine, and over time the zone of trust is likely to increase. I think the trusted zone is going to have to prove itself continually in order to stay where it is and/or expand, though.</p>
<p>I suppose you can imagine a disaster scenario where humans have trusted more and more of their production capability to software decision making, and one day it all suddenly goes berserk. Something like the analog of a flash crash from automated high-frequency trading. Unless we have ceded complete control to our planning software (let's not do that), even this sounds more like an expensive ugly mess than a global catastrophe.</p>
<p>Interested to hear where you disagree.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '143535', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Daniel H</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-143348</link>
		<dc:creator><![CDATA[Daniel H]]></dc:creator>
		<pubDate>Sat, 06 Sep 2014 03:12:10 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-143348</guid>
		<description><![CDATA[(I’m only responding to the first part of your comment here, before the “TL;DR: BRO DO YOU EVEN SOLOMONOFF?”, because I feel I have more useful things to say about that.)

I believe Eli’s saying something slightly different than what you think. Specifically, I think he’s saying something close to, “We could (theoretically at some point in the future) write software that, given a utility function, will optimize for it; we don’t actually need to know the laws of physics for it to do this,” while I think you’re reading us as needing to also give the program the actual laws of physics for the world we run it in.

This still leaves the human doing the hard parts of identifying what they want and of writing the program, and it leaves the AI nonconscious. In general, when people use the term AI here, they don’t mean something conscious, but instead something that can solve problems.

The danger that people are worried about comes from thinking humans are likely to get the second problem I mentioned right, but fail at the first. Thus, we have a program that will optimize for what we tell it we want, but unfortunately it won’t actually optimize for what we really want. Another potential failure mode is where we tell it correctly what we want, but in the process of becoming powerful enough to do it the program in effect “forgets” what it’s trying to do. Both options leave us with a really powerful probably-nonconscious entity that does something we don’t want, where the typical example of such an entity is the “paperclipper”.]]></description>
		<content:encoded><![CDATA[<p>(I’m only responding to the first part of your comment here, before the “TL;DR: BRO DO YOU EVEN SOLOMONOFF?”, because I feel I have more useful things to say about that.)</p>
<p>I believe Eli’s saying something slightly different than what you think. Specifically, I think he’s saying something close to, “We could (theoretically at some point in the future) write software that, given a utility function, will optimize for it; we don’t actually need to know the laws of physics for it to do this,” while I think you’re reading us as needing to also give the program the actual laws of physics for the world we run it in.</p>
<p>This still leaves the human doing the hard parts of identifying what they want and of writing the program, and it leaves the AI nonconscious. In general, when people use the term AI here, they don’t mean something conscious, but instead something that can solve problems.</p>
<p>The danger that people are worried about comes from thinking humans are likely to get the second problem I mentioned right, but fail at the first. Thus, we have a program that will optimize for what we tell it we want, but unfortunately it won’t actually optimize for what we really want. Another potential failure mode is where we tell it correctly what we want, but in the process of becoming powerful enough to do it the program in effect “forgets” what it’s trying to do. Both options leave us with a really powerful probably-nonconscious entity that does something we don’t want, where the typical example of such an entity is the “paperclipper”.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '143348', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt C</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-142965</link>
		<dc:creator><![CDATA[Matt C]]></dc:creator>
		<pubDate>Thu, 04 Sep 2014 02:57:08 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-142965</guid>
		<description><![CDATA[I don't understand this very well.

It sounds like you're saying, if you can describe everything that is important to you in the form of a utility function, and then express everything in the world in the form of data that can be applied to a utility function, we have reason to believe we can eventually write software that will complete those calculations.

Is that right? If so, it sounds like humans still have to do all the hard parts, and that software doesn't sound like anything that would (presumably) be conscious the way AIs are typically described as being.

But I may have missed your meaning.

> TL;DR: BRO DO YOU EVEN SOLOMONOFF?

No, I never have before.

I did look him up on Wikipedia, and from there skimmed "The Time Scale of Artificial Intelligence: Reflections on Social Effects", which was quite readable for a layperson. I particularly appreciated his attempts to define milestones.

In that paper, he talks about a general theory of problem solving (Milestone B). To assume that such a thing is even possible seems like a reach to me. I can't help wondering if he means something different than what I take the phrase to mean, but his points 1-4 seem to be talking about the same kind of stuff.

I wish he had put milestones marking out progress on this particular step.

I note he thought such a thing was most likely to appear in 2 to 25 years. That was in 1985. Is there still optimism that a general theory of problem solving (and learning) is going to be formulated?]]></description>
		<content:encoded><![CDATA[<p>I don't understand this very well.</p>
<p>It sounds like you're saying, if you can describe everything that is important to you in the form of a utility function, and then express everything in the world in the form of data that can be applied to a utility function, we have reason to believe we can eventually write software that will complete those calculations.</p>
<p>Is that right? If so, it sounds like humans still have to do all the hard parts, and that software doesn't sound like anything that would (presumably) be conscious the way AIs are typically described as being.</p>
<p>But I may have missed your meaning.</p>
<p>&gt; TL;DR: BRO DO YOU EVEN SOLOMONOFF?</p>
<p>No, I never have before.</p>
<p>I did look him up on Wikipedia, and from there skimmed "The Time Scale of Artificial Intelligence: Reflections on Social Effects", which was quite readable for a layperson. I particularly appreciated his attempts to define milestones.</p>
<p>In that paper, he talks about a general theory of problem solving (Milestone B). To assume that such a thing is even possible seems like a reach to me. I can't help wondering if he means something different than what I take the phrase to mean, but his points 1-4 seem to be talking about the same kind of stuff.</p>
<p>I wish he had put milestones marking out progress on this particular step.</p>
<p>I note he thought such a thing was most likely to appear in 2 to 25 years. That was in 1985. Is there still optimism that a general theory of problem solving (and learning) is going to be formulated?</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '142965', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: no one special</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-142379</link>
		<dc:creator><![CDATA[no one special]]></dc:creator>
		<pubDate>Tue, 02 Sep 2014 13:59:48 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-142379</guid>
		<description><![CDATA[Is there any kind of linkspam or overview of the academic papers that support the LW view of AI?]]></description>
		<content:encoded><![CDATA[<p>Is there any kind of linkspam or overview of the academic papers that support the LW view of AI?</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '142379', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: MugaSofer</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-141519</link>
		<dc:creator><![CDATA[MugaSofer]]></dc:creator>
		<pubDate>Mon, 01 Sep 2014 08:47:35 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-141519</guid>
		<description><![CDATA[And ... you don't feel that a foomed AI would be that dangerous unless it had access to nanotech?]]></description>
		<content:encoded><![CDATA[<p>And … you don't feel that a foomed AI would be that dangerous unless it had access to nanotech?</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '141519', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael R</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-141207</link>
		<dc:creator><![CDATA[Michael R]]></dc:creator>
		<pubDate>Sun, 31 Aug 2014 14:08:58 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-141207</guid>
		<description><![CDATA[Fair question. If I'd been alive at the time of the Manhattan Project, and lived as a civilian and not a scientist, yes, I probably would have doubted the power of such a thing, assuming I had heard of it. That really was a case of extraordinary claims being matched by extraordinary evidence.

But look at the difference between the two risks. The US govt. was so sure the atom bomb would work that it devoted extraordinary resources to the project, employing tens of thousands of people for years to bring about the Trinity Test.

Even according to MIRI, AI is somewhere between decades away and never. Maybe if the US govt. started a project similar to the Manhattan Project, I would sit up and take notice. Until then, yawn.]]></description>
		<content:encoded><![CDATA[<p>Fair question. If I'd been alive at the time of the Manhattan Project, and lived as a civilian and not a scientist, yes, I probably would have doubted the power of such a thing, assuming I had heard of it. That really was a case of extraordinary claims being matched by extraordinary evidence.</p>
<p>But look at the difference between the two risks. The US govt. was so sure the atom bomb would work that it devoted extraordinary resources to the project, employing tens of thousands of people for years to bring about the Trinity Test.</p>
<p>Even according to MIRI, AI is somewhere between decades away and never. Maybe if the US govt. started a project similar to the Manhattan Project, I would sit up and take notice. Until then, yawn.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '141207', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Anonymous</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-141017</link>
		<dc:creator><![CDATA[Anonymous]]></dc:creator>
		<pubDate>Sun, 31 Aug 2014 01:43:09 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-141017</guid>
		<description><![CDATA[Would you also not be afraid of nuclear weapons in the time period between the idea first being proposed and the first successful test?]]></description>
		<content:encoded><![CDATA[<p>Would you also not be afraid of nuclear weapons in the time period between the idea first being proposed and the first successful test?</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '141017', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael R</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-141006</link>
		<dc:creator><![CDATA[Michael R]]></dc:creator>
		<pubDate>Sun, 31 Aug 2014 00:16:20 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-141006</guid>
		<description><![CDATA[My reply to your question is this:

Why wouldn't a demon hurt us?
Why wouldn't malevolent aliens hurt us?
Why wouldn't Nazgul hurt us?

In order for people to take a risk seriously, the risk has to exist.

Until you can post a link to an AI that can converse with me like HAL from 2001, I'm not going to be any more afraid of AI than I am of frickin Lord Voldemort.]]></description>
		<content:encoded><![CDATA[<p>My reply to your question is this:</p>
<p>Why wouldn't a demon hurt us?<br />
Why wouldn't malevolent aliens hurt us?<br />
Why wouldn't Nazgul hurt us?</p>
<p>In order for people to take a risk seriously, the risk has to exist.</p>
<p>Until you can post a link to an AI that can converse with me like HAL from 2001, I'm not going to be any more afraid of AI than I am of frickin Lord Voldemort.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '141006', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eli</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-140775</link>
		<dc:creator><![CDATA[Eli]]></dc:creator>
		<pubDate>Sat, 30 Aug 2014 09:14:11 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-140775</guid>
		<description><![CDATA[My core question to people who disbelieve in AI risk is this: if humans have such an extremely easy time harming ourselves or each other through both miscalibrated goal systems and sheer incompetence, why wouldn't a suboptimally programmed AGI harm us?

Any argument against AI Risk that did not pose any problem for Hitler Risk is worthless.]]></description>
		<content:encoded><![CDATA[<p>My core question to people who disbelieve in AI risk is this: if humans have such an extremely easy time harming ourselves or each other through both miscalibrated goal systems and sheer incompetence, why wouldn't a suboptimally programmed AGI harm us?</p>
<p>Any argument against AI Risk that did not pose any problem for Hitler Risk is worthless.</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '140775', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Eli</title>
		<link>http://slatestarcodex.com/2014/08/26/if-the-media-reported-on-other-dangers-like-it-does-ai-risk/#comment-140757</link>
		<dc:creator><![CDATA[Eli]]></dc:creator>
		<pubDate>Sat, 30 Aug 2014 07:52:16 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=2688#comment-140757</guid>
		<description><![CDATA[If by "an AI" you mean "computational optimization process which can optimize for any Turing-computable (or even Turing-semicomputable) utility function in any Turing-computable (or more likely: Turing-semicomputable) possible world, by learning which possible world it's in and deciding which actions to take via probabilistic reasoning"... then yes, we do know what we're talking about.

But actually building one is quite difficult, and existing ML/narrow-AI applications range from having very little to do with "an AI" to having almost nothing to do with "an AI".

TL;DR: BRO DO YOU EVEN SOLOMONOFF?]]></description>
		<content:encoded><![CDATA[<p>If by "an AI" you mean "computational optimization process which can optimize for any Turing-computable (or even Turing-semicomputable) utility function in any Turing-computable (or more likely: Turing-semicomputable) possible world, by learning which possible world it's in and deciding which actions to take via probabilistic reasoning"… then yes, we do know what we're talking about.</p>
<p>But actually building one is quite difficult, and existing ML/narrow-AI applications range from having very little to do with "an AI" to having <i>almost nothing</i> to do with "an AI".</p>
<p>TL;DR: BRO DO YOU EVEN SOLOMONOFF?</p>
<p><a href="javascript:void(0)" onclick="report_comments_flag(this, '140757', '3412210cfd')" class="report-comment">Report comment</a></p>
]]></content:encoded>
	</item>
</channel>
</rss>
