<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Poor Folks Do Smile&#8230;For Now</title>
	<atom:link href="http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/feed/" rel="self" type="application/rss+xml" />
	<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/</link>
	<description>In a mad world, all blogging is psychiatry blogging</description>
	<lastBuildDate>Fri, 24 Jul 2015 20:46:30 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.3</generator>
	<item>
		<title>By: &#187; Superintelligence Bayesian Investor Blog</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-127248</link>
		<dc:creator><![CDATA[&#187; Superintelligence Bayesian Investor Blog]]></dc:creator>
		<pubDate>Mon, 28 Jul 2014 20:09:28 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-127248</guid>
		<description><![CDATA[[&#8230;] about the risks of competitive pressures driving out human traits (discussed more fully/verbosely at Slate Star Codex)? If WBE and AGI happen close enough together in time that we can plausibly [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] about the risks of competitive pressures driving out human traits (discussed more fully/verbosely at Slate Star Codex)? If WBE and AGI happen close enough together in time that we can plausibly [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Multiheaded</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-121663</link>
		<dc:creator><![CDATA[Multiheaded]]></dc:creator>
		<pubDate>Mon, 14 Jul 2014 21:18:51 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-121663</guid>
		<description><![CDATA[Yay for emergent moral realism!]]></description>
		<content:encoded><![CDATA[<p>Yay for emergent moral realism!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ghatanathoah</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-121646</link>
		<dc:creator><![CDATA[Ghatanathoah]]></dc:creator>
		<pubDate>Mon, 14 Jul 2014 20:07:50 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-121646</guid>
		<description><![CDATA[I&#039;ve changed my mind about moral questions in the past.  I&#039;ve (somewhat more rarely) also changed my mind about questions of aesthetics.  In these cases the reason for my change has not been conformity or sociability.  It has been due to an increased understanding of what exactly my values are and what they imply.  It has also been due to other people pointing out things that I valued about something that I did not notice before.  Another big thing has been mistakenly believing a heuristic I used was a terminal value, and realizing my mistake.

Some of the changes in the values of our society are probably due to fashion or drift.  But others are due to genuine moral progress.  Moral progress, in this case, is understood as having a more coherent, rigorous, and complete understanding of what exactly our values are.

So I think you&#039;re partly right.  There may be some changes in the future that will horrify us, and we will be right to be horrified.  But this is because these are changes to our fundamental values.

However, there will also be changes that will horrify us, but for which we&#039;ll be wrong to be horrified.  These changes won&#039;t be to our fundamental values.  They will be the adoption of new heuristics to help implement those values.  They will also be new, more clear and coherent understandings of our most fundamental values.  These will appear horrifying, but that is because we mistake them for a change in Fundamental Values, when really they are just clarifications or changes in instrumental heuristics.  These changes are &quot;moral progress.&quot;]]></description>
		<content:encoded><![CDATA[<p>I&#8217;ve changed my mind about moral questions in the past.  I&#8217;ve (somewhat more rarely) also changed my mind about questions of aesthetics.  In these cases the reason for my change has not been conformity or sociability.  It has been due to an increased understanding of what exactly my values are and what they imply.  It has also been due to other people pointing out things that I valued about something that I did not notice before.  Another big thing has been mistakenly believing a heuristic I used was a terminal value, and realizing my mistake.</p>
<p>Some of the changes in the values of our society are probably due to fashion or drift.  But others are due to genuine moral progress.  Moral progress, in this case, is understood as having a more coherent, rigorous, and complete understanding of what exactly our values are.</p>
<p>So I think you&#8217;re partly right.  There may be some changes in the future that will horrify us, and we will be right to be horrified.  But this is because these are changes to our fundamental values.</p>
<p>However, there will also be changes that will horrify us, but for which we&#8217;ll be wrong to be horrified.  These changes won&#8217;t be to our fundamental values.  They will be the adoption of new heuristics to help implement those values.  They will also be new, more clear and coherent understandings of our most fundamental values.  These will appear horrifying, but that is because we mistake them for a change in Fundamental Values, when really they are just clarifications or changes in instrumental heuristics.  These changes are &#8220;moral progress.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Robin Hanson</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2894</link>
		<dc:creator><![CDATA[Robin Hanson]]></dc:creator>
		<pubDate>Tue, 09 Apr 2013 21:42:56 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2894</guid>
		<description><![CDATA[Maybe you and I should do a podcast, and then we could get the transcript right. :)

I&#039;m happy to admit that things could get very strange on a timescale that is short, even when compared to ordinary human lives. If we try hard we might predict the next era. Even if that lasts a short absolute time duration it is the best basis for guessing the eras after that. And yes, we should probably try to let the next eras figure out how to deal with subsequent eras, and focus our efforts on dealing with the next era. 

We should have little confidence that very particular unusual features of our world will last to the very distant future, unless we try to coordinate to take control of the universe and change. I don&#039;t think we are up to doing that well now, but maybe someday, as we get better at coordinating. 

Knowing what is a particular unusual feature and what is a general robust feature requires knowing something about a natural distribution over possible features. Given such a distribution you can make predictions about what is unlikely or likely. Calling those &quot;anti-predictions&quot; seems a bit misleading to me. 

Human minds are now where most useful knowledge is held and reasoning is done. Future minds are quite unlikely to be designed from scratch without reference to humans. Instead, they will be designed initially to work well with humans and to inherit investments made in dealing with humans. Afterward, there will continue to be big gains from working within existing frameworks and standards rather than trying to start all over from scratch. In this way human mind design should have a lasting future legacy. And yes that includes aspects of love.]]></description>
		<content:encoded><![CDATA[<p>Maybe you and I should do a podcast, and then we could get the transcript right. <img src="http://slatestarcodex.com/wp-includes/images/smilies/simple-smile.png" alt=":)" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>I&#8217;m happy to admit that things could get very strange on a timescale that is short, even when compared to ordinary human lives. If we try hard we might predict the next era. Even if that lasts a short absolute time duration it is the best basis for guessing the eras after that. And yes, we should probably try to let the next eras figure out how to deal with subsequent eras, and focus our efforts on dealing with the next era. </p>
<p>We should have little confidence that very particular unusual features of our world will last to the very distant future, unless we try to coordinate to take control of the universe and change. I don&#8217;t think we are up to doing that well now, but maybe someday, as we get better at coordinating. </p>
<p>Knowing what is a particular unusual feature and what is a general robust feature requires knowing something about a natural distribution over possible features. Given such a distribution you can make predictions about what is unlikely or likely. Calling those &#8220;anti-predictions&#8221; seems a bit misleading to me. </p>
<p>Human minds are now where most useful knowledge is held and reasoning is done. Future minds are quite unlikely to be designed from scratch without reference to humans. Instead, they will be designed initially to work well with humans and to inherit investments made in dealing with humans. Afterward, there will continue to be big gains from working within existing frameworks and standards rather than trying to start all over from scratch. In this way human mind design should have a lasting future legacy. And yes that includes aspects of love.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Peter McCluskey</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2715</link>
		<dc:creator><![CDATA[Peter McCluskey]]></dc:creator>
		<pubDate>Mon, 08 Apr 2013 18:22:01 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2715</guid>
		<description><![CDATA[I agree that music is mostly expensive signaling, and would become insignificant after long periods of unrestrained Malthusian competition.

But emotions such as love consist of states of mind plus some signaling. It&#039;s not obvious to me that simply having a state of mind corresponding to a positive emotion causes inefficiency.

The signaling part of love serves a purpose related to exchange of information that results in descendants. I expect that the far future will contain exchanges of information that create minds that are sort of descendant-like, and that there will be something like signaling used to decide what to exchange. That will provide pressures which maintain something arguably similar to love, but not necessarily similar enough to satisfy what people want today.

Robin may believe in unrestrained competition, but that&#039;s not the only way to reach the total utilitarian goal of maximizing the amount of experience in the universe. If we&#039;re willing to accept some minor slowdown in the rate of expansion (no &lt;a href=&quot;http://hanson.gmu.edu/filluniv.pdf&quot; rel=&quot;nofollow&quot;&gt;Burning the Cosmic Commons&lt;/a&gt;), it ought to be possible (but hardly easy) to coordinate so that those who spend 99% of their time working don&#039;t get to colonize more solar systems than those who spend 90% of their time working. I approve of such coordination.]]></description>
		<content:encoded><![CDATA[<p>I agree that music is mostly expensive signaling, and would become insignificant after long periods of unrestrained Malthusian competition.</p>
<p>But emotions such as love consist of states of mind plus some signaling. It&#8217;s not obvious to me that simply having a state of mind corresponding to a positive emotion causes inefficiency.</p>
<p>The signaling part of love serves a purpose related to exchange of information that results in descendants. I expect that the far future will contain exchanges of information that create minds that are sort of descendant-like, and that there will be something like signaling used to decide what to exchange. That will provide pressures which maintain something arguably similar to love, but not necessarily similar enough to satisfy what people want today.</p>
<p>Robin may believe in unrestrained competition, but that&#8217;s not the only way to reach the total utilitarian goal of maximizing the amount of experience in the universe. If we&#8217;re willing to accept some minor slowdown in the rate of expansion (no <a href="http://hanson.gmu.edu/filluniv.pdf" rel="nofollow">Burning the Cosmic Commons</a>), it ought to be possible (but hardly easy) to coordinate so that those who spend 99% of their time working don&#8217;t get to colonize more solar systems than those who spend 90% of their time working. I approve of such coordination.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Anonymous</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2686</link>
		<dc:creator><![CDATA[Anonymous]]></dc:creator>
		<pubDate>Mon, 08 Apr 2013 13:50:34 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2686</guid>
		<description><![CDATA[I think the related conversation that would be more interesting is Robin Hanson&#039;s &lt;a href=&quot;http://lesswrong.com/lw/2zj/value_deathism/&quot; rel=&quot;nofollow&quot;&gt;pro-value deathism&lt;/a&gt; stance vs. Vladimir Nesov.]]></description>
		<content:encoded><![CDATA[<p>I think the related conversation that would be more interesting is Robin Hanson&#8217;s <a href="http://lesswrong.com/lw/2zj/value_deathism/" rel="nofollow">pro-value deathism</a> stance vs. Vladimir Nesov.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Deiseach</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2572</link>
		<dc:creator><![CDATA[Deiseach]]></dc:creator>
		<pubDate>Sun, 07 Apr 2013 19:41:01 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2572</guid>
		<description><![CDATA[It strikes me that in a way, we already &lt;em&gt;have&lt;/em&gt; &quot;ems&quot;.  This &#039;me&#039;, this &#039;Deiseach&#039;, that is typing right now exists only on the internet; here is where she lives and moves and has her being.  Real-space or meat-space me is very, very different.  There&#039;s four (four and a half, if I stretch it a bit) &quot;emulated mes&quot; out there in the ether.

So I&#039;ve created a copy of &#039;me&#039; with the bits I want or am willing to exhibit to other people, cranked up some attributes I hope will seem attractive or appealing, suppressed others (for this &#039;em&#039;; other &#039;ems&#039; have some or all of the suppressed bits on show), and maybe faked having some that I don&#039;t have in reality. I&#039;ve done the kind of tweaking we have discussed with my &quot;em&quot;.

Scott&#039;s &quot;em&quot; on here may be the nearest to a full-copy Scott.  I certainly do not consider that &quot;Deiseach&quot; is a separate life-form or successor to me, and if Scott or any other person decided to ban &quot;Deiseach&quot; from coming on here, or I decided to &#039;pull the plug&#039; on her, I don&#039;t and wouldn&#039;t think of it as the destruction of a new type of human life.

All of us on here are &quot;ems&quot; in some form or another; I would be very surprised if anyone said &quot;No, this &#039;me&#039; is the exact same as the &#039;me&#039; you&#039;d meet if you went round to my house right this minute&quot;.  Is it really so big a jump from this kind of &quot;em&quot; to the kind I&#039;ve considered, the model tweaked to a certain spec for a certain job and hired out or sold by the flesh original?  On the other hand, I think we&#039;re still very far from &quot;Deiseach&quot; being a &#039;real&#039; person and a New Human.  I think we will continue to be a long way from those kinds of new persons, not just because of tech, but because of human attitudes.]]></description>
		<content:encoded><![CDATA[<p>It strikes me that in a way, we already <em>have</em> &#8220;ems&#8221;.  This &#8216;me&#8217;, this &#8216;Deiseach&#8217;, that is typing right now exists only on the internet; here is where she lives and moves and has her being.  Real-space or meat-space me is very, very different.  There&#8217;s four (four and a half, if I stretch it a bit) &#8220;emulated mes&#8221; out there in the ether.</p>
<p>So I&#8217;ve created a copy of &#8216;me&#8217; with the bits I want or am willing to exhibit to other people, cranked up some attributes I hope will seem attractive or appealing, suppressed others (for this &#8216;em&#8217;; other &#8216;ems&#8217; have some or all of the suppressed bits on show), and maybe faked having some that I don&#8217;t have in reality. I&#8217;ve done the kind of tweaking we have discussed with my &#8220;em&#8221;.</p>
<p>Scott&#8217;s &#8220;em&#8221; on here may be the nearest to a full-copy Scott.  I certainly do not consider that &#8220;Deiseach&#8221; is a separate life-form or successor to me, and if Scott or any other person decided to ban &#8220;Deiseach&#8221; from coming on here, or I decided to &#8216;pull the plug&#8217; on her, I don&#8217;t and wouldn&#8217;t think of it as the destruction of a new type of human life.</p>
<p>All of us on here are &#8220;ems&#8221; in some form or another; I would be very surprised if anyone said &#8220;No, this &#8216;me&#8217; is the exact same as the &#8216;me&#8217; you&#8217;d meet if you went round to my house right this minute&#8221;.  Is it really so big a jump from this kind of &#8220;em&#8221; to the kind I&#8217;ve considered, the model tweaked to a certain spec for a certain job and hired out or sold by the flesh original?  On the other hand, I think we&#8217;re still very far from &#8220;Deiseach&#8221; being a &#8216;real&#8217; person and a New Human.  I think we will continue to be a long way from those kinds of new persons, not just because of tech, but because of human attitudes.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jaimeastorga2000</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2539</link>
		<dc:creator><![CDATA[jaimeastorga2000]]></dc:creator>
		<pubDate>Sun, 07 Apr 2013 17:21:41 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2539</guid>
		<description><![CDATA[I don&#039;t believe in moral progress, and so I find myself disagreeing with Dr. Hanson about value drift. It was to the past&#039;s perdition that they let their values drift into our values, and it will be just as much to our perdition if we let our current values drift into some weird future values none of us can currently imagine. It won&#039;t be because the future is &quot;better&quot; in any meaningful sense; it will be just because it has gone wrong, from our perspective.

Now, if you took a person from today and you sent them into the future, they might end up agreeing with those values eventually, just like I agree with Eliezer in &lt;a href=&quot;http://lesswrong.com/lw/xl/eutopia_is_scary/&quot; rel=&quot;nofollow&quot;&gt;expecting Benjamin Franklin to end up coming around to our way of life after a sufficiently long period of adjustment living in the 21st century.&lt;/a&gt; However, such a change in values will be because of humans&#039; social nature, and not because one set of values is objectively better; I expect that if you sent someone to the past, in such a way that they did not know it was the past and were thus not filled with ideas that it was an antiquated way of life their culture had grown out of, they would likewise come to accept its values eventually, provided their quality of life did not greatly diminish. So, for example, if someone today were sent to medieval Europe to live among the noble caste, surrounded by noble friends, and not already set on the idea that his native time was a huge improvement on his present one, I expect that eventually he would come around to the idea that feudalism and the lot of serfs is the appropriate way of things.]]></description>
		<content:encoded><![CDATA[<p>I don&#8217;t believe in moral progress, and so I find myself disagreeing with Dr. Hanson about value drift. It was to the past&#8217;s perdition that they let their values drift into our values, and it will be just as much to our perdition if we let our current values drift into some weird future values none of us can currently imagine. It won&#8217;t be because the future is &#8220;better&#8221; in any meaningful sense; it will be just because it has gone wrong, from our perspective.</p>
<p>Now, if you took a person from today and you sent them into the future, they might end up agreeing with those values eventually, just like I agree with Eliezer in <a href="http://lesswrong.com/lw/xl/eutopia_is_scary/" rel="nofollow">expecting Benjamin Franklin to end up coming around to our way of life after a sufficiently long period of adjustment living in the 21st century.</a> However, such a change in values will be because of humans&#8217; social nature, and not because one set of values is objectively better; I expect that if you sent someone to the past, in such a way that they did not know it was the past and were thus not filled with ideas that it was an antiquated way of life their culture had grown out of, they would likewise come to accept its values eventually, provided their quality of life did not greatly diminish. So, for example, if someone today were sent to medieval Europe to live among the noble caste, surrounded by noble friends, and not already set on the idea that his native time was a huge improvement on his present one, I expect that eventually he would come around to the idea that feudalism and the lot of serfs is the appropriate way of things.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Grognor</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2536</link>
		<dc:creator><![CDATA[Grognor]]></dc:creator>
		<pubDate>Sun, 07 Apr 2013 17:06:19 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2536</guid>
		<description><![CDATA[Poor folks do NOT smile. I&#039;ve seen them.]]></description>
		<content:encoded><![CDATA[<p>Poor folks do NOT smile. I&#8217;ve seen them.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kaj Sotala</title>
		<link>http://slatestarcodex.com/2013/04/06/poor-folks-do-smile-for-now/#comment-2520</link>
		<dc:creator><![CDATA[Kaj Sotala]]></dc:creator>
		<pubDate>Sun, 07 Apr 2013 15:01:25 +0000</pubDate>
		<guid isPermaLink="false">http://slatestarcodex.com/?p=370#comment-2520</guid>
		<description><![CDATA[&lt;i&gt;You can’t just remove love from a human brain like that. There’s no one love module.&lt;/i&gt;

Right, but even ordinary old-fashioned selective breeding is very successful in influencing various traits in a relatively quick time, as discussed e.g. &lt;a HREF=&quot;http://www.overcomingbias.com/2012/12/breeding-happier-livestock-no-futuristic-tech-required.html&quot; rel=&quot;nofollow&quot;&gt;here&lt;/A&gt; and &lt;a HREF=&quot;http://lesswrong.com/lw/28k/the_psychological_diversity_of_mankind/&quot; rel=&quot;nofollow&quot;&gt;here&lt;/A&gt;. Those breeders don&#039;t even have access to any fancy self-modification techniques that uploading would enable. Robin himself has argued that a world of ems could lead to much stronger evolutionary pressures, and we already seem to have a bunch of existing variation in how likely it is for different people to experience love...]]></description>
		<content:encoded><![CDATA[<p><i>You can’t just remove love from a human brain like that. There’s no one love module.</i></p>
<p>Right, but even ordinary old-fashioned selective breeding is very successful in influencing various traits in a relatively quick time, as discussed e.g. <a HREF="http://www.overcomingbias.com/2012/12/breeding-happier-livestock-no-futuristic-tech-required.html" rel="nofollow">here</a> and <a HREF="http://lesswrong.com/lw/28k/the_psychological_diversity_of_mankind/" rel="nofollow">here</a>. Those breeders don&#8217;t even have access to any fancy self-modification techniques that uploading would enable. Robin himself has argued that a world of ems could lead to much stronger evolutionary pressures, and we already seem to have a bunch of existing variation in how likely it is for different people to experience love&#8230;</p>
]]></content:encoded>
	</item>
</channel>
</rss>
