This is the bi-weekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the SSC subreddit or the SSC Discord server – and also check out the SSC Podcast. Also:
1. The DC SSC meetup group is celebrating its second anniversary and wants to throw a party. It’s Saturday April 13 at 4 PM on 616 E Street NW, Washington DC. Anyone who reads this blog, even a little, is welcome to attend. Please see this link for very slightly more information.
2. Speaking of which, Claire helps arrange and guide a lot of meetups, and she would like meetup attendees to take a survey about their experiences.
3. New ad up for the Seattle Anxiety Specialists psychotherapy group. I was pretty impressed by their site and by talking to one of their therapists, so if you’re in their demographic (anxious people in Seattle), check them out.
4. Comment of the week is Simon Jester clearing up my confusion about the degree to which "cultural Marxism" is vs. isn't a real thing. And if that's not enough Marxism for you, no_bear_so_low on the subreddit offers this guide to figuring out where communists are coming from.
It seems to me that it would be rather nice to see Edmund do a good deed, Lear die happy knowing that Cordelia lives and Cordelia look forward to a prosperous life at the end of the play.
Shakespeare was grimdark before 90s comic book/graphic novel writers, fantasy novelists and game designers even existed 🙂

What is the status of the "Cyc" project?
What do AGI researchers think of Cyc?
From the comment on the last link:
I find this very revealing, and also a pretty striking (if somewhat unintentional) refutation of everything that came before it. I’m sure there’s variety among different hunter-gatherer cultures, but I remember hearing about this “teasing of successful hunters” thing before. I think this goes a long way toward explaining why hunter-gatherer cultures are so technologically stagnant and can go thousands of years without changing or innovating much. In an environment where any form of innovation or success is actively punished, well, of course no one’s going to accomplish much of anything beyond just surviving. (And sure they still have art, culture, music and all the hallmarks of humanity, but their output of this stuff is comparatively small as well.)
And it’s hard for me not to draw some parallels to certain modern leftist subcultures.
I don’t believe human nature is inherently hierarchical. I mean, for most of history we’ve been hunter-gatherers, which–if anything–suggests we have an innate tendency toward anti-hierarchy. It does seem like even in most modern cultures people have an innate tendency to resent successful people and to try to keep others’ status in check through things like gossip, shaming, etc. But I do think that “hierarchical” cultures (i.e. ones where success is not actively punished and may even be rewarded) tend to be more successful and innovative and become more powerful for obvious reasons. And so those are the ones that spread. Granted, “capitalism” doesn’t necessarily equal “cultures that reward innovation and success” but there seems to be a correlation. And the proposed alternative, it seems, is a cultural norm that involves actively and constantly beating down (and sometimes outright killing) anyone who tries to accomplish anything beyond the bare minimum.
Certainly such a cultural norm is possible; it’s existed before. But that sounds like a pretty awful world to me. Why would I want that? And it’s hard for me to imagine a communist world that doesn’t involve some variation of this “punish the successful” norm. I guess there’s some comfort in this if you’re an unsuccessful person (and yes, many people are unsuccessful for reasons that are not their fault), but even as a relatively low-status person myself this seems like a pretty grim and petty form of comfort. And while relatively non-complex, small scale societies like hunter-gatherers can at least survive this way, I suspect it would be pretty difficult to maintain a complex industrial society while discouraging any form of innovation.
If I were to provide a snarky definition of capitalism/libertarianism based on this, it would be, “The radical idea that we should stop murdering people for being too smart.”
I think both modes have failure states. Hyper-egalitarianism stunts innovation, but extreme competitiveness can undermine the interpersonal social structures that make a lot of the wealth accumulation possible.
I guess instead of “people naturally love hierarchies” or “people naturally hate hierarchies”, it is more like people have instincts that tell them that some hierarchies are okay and some are not; or that some people deserve higher status, but if they try to take too much, they need to be punished. Also that some kinds of achievements can translate to higher status, and some cannot.
(Now, these instincts may be miscalibrated for the modern era. They may interfere with various other instincts; for example, we want to support our allies, so our answer to "how much status a person deserves for doing X" may depend on whether we perceive the person as a friend or an enemy. Also, let's avoid the naturalistic fallacy: understanding why we have these instincts and how exactly they work is a description of the natural state, not a recipe for a moral society.)
For example, I suppose that the hunter-gatherer society that teases their successful hunters, also has some kind of chieftain, and you probably wouldn’t tease the chieftain in the same way. Just because they don’t give high status to successful hunters specifically, doesn’t mean they don’t give higher status to anyone else. Perhaps successful hunters are the relative “nerds” of such tribes, and chieftains are the relative “jocks”.
My guess is that most (neurotypical) people have an instinct for how much status anyone "deserves", which is correlated with "how much could this person hurt me (in the ancient evolutionary environment) if I didn't treat them with sufficient respect". If someone is treated more respectfully than they "deserve", the average person feels an urge to slap them down, to prevent a greater conflict in the tribe between its powerful members and its respected-but-not-quite-powerful members. Unopposed respect would be perceived as a claim to power, and respecting people who don't "deserve" it could be perceived as joining their (most likely losing) side.
For example, you respect your leaders, e.g. the people who have demonstrated the ability to gain followers, and who could simply tell those followers to kick your ass. (You may disrespect enemy leaders, but that's because they are enemies anyway, and because you do not expect them to take action against you specifically.) You respect physically strong people, or successful sportsmen, which is a proxy for physical strength. Less respect goes to skilled or smart-but-not-powerful people; they may be useful allies, but they are not very dangerous as enemies if you have the strong people on your side.
(The modern leftist subcultures definitely have their own hierarchies whom they respect. It’s not the smartest people; it’s the leaders, whether they happen to be teachers or students.)
In (hypothetical, pure) capitalism, a person is considered powerful if they have money, because that means they have the power to decide how that money will be used. In a hunter-gatherer / feudal / communist society, a merely rich person would be low-status, because their property can easily be taken away by force. In a hunter-gatherer society, you need physical strength and a few strong loyal allies. In a feudal society, you need an army. In a communist society, you need power within the communist party. In crony capitalism, you need to have money and friends in the government.
I think I would argue that existentialism is the philosophical movement most nearly represented in Lear, and that its closest dramatic cousins are the works of Beckett. It's speaking to the same things that Helen hears in Beethoven at the start of Howards End ("panic and emptiness", etc.) It's about the transience and precariousness of the human spark in a world where God is absent or uncaring and all authority is a sham. That said, I think it is marked out from other works in that thematic vein by the value and beauty Shakespeare finds in that spark before it goes out.
Dramatically, structurally, does it have problems? You bet. I’ve never seen a production that really overcame them, and I don’t think it is a match for the best of his other works (notably Othello and Macbeth) strictly as a play. But on the page, I think it’s the best of the lot (and the one I most want to direct, because obviously I’m a complete egomaniac).
I am a high school senior, and I have to choose where to go to college. I live in California, and applied only to California public schools, although I got rejected or waitlisted from all of the schools I really wanted to go to. So now I have to choose between:
* UC Merced, a fairly good school, but not one I’m enthused about
* Community College for 2 years, then transferring into a UC (it’s a lot easier to get in this way)
* Skipping college entirely and probably getting a job through Triplebyte
I don’t yet know exactly what I want to do with my life (although I’m 90% certain I’ll get a STEM career) so my main priority is aiming to get lots of opportunities. I think a large, prestigious school would be best for that, but I don’t know if spending 2 years in community college is worth it.
Anyways, advice is appreciated.
Your decision should factor in your financial situation. Imo, the community college option is a good one, as it will cut your expenses for those years and the quality of education will likely be similar. Make sure your courses transfer and meet requirements for the university you want your degree to be from.
If you already qualify for the job you want, coding or whatever, that may be even better. Four years of experience won't open all the same doors as a degree, but it is something.
Is that your bearded dragon, btw?
My parents can probably pay most of the cost of college, although it’s still good to save money.
The profile pic is of a Tuatara (https://en.m.wikipedia.org/wiki/Tuatara), a type of reptile native to New Zealand which is taxonomically separate from lizards.
If UC Merced was the best place that would take you, you're probably decently capable, but not a star, based on what you have accomplished in life so far. That means you would do well to build skills and credibility before setting off into the working world. UC Merced, as an actual UC school, is probably a good enough place to do both, and you'll emerge with the standard B.Sc. credential that most STEM employers are looking for in entry-level employees. Unless you really hate school or a four-year degree presents an overwhelming financial burden, I think your best bet is to go to UC Merced.
As you can see, I specialize in distinctly idiosyncratic career advice.
As for what to study, let me assume you are not headed for engineering school; you’re going to do this through Arts & Sciences, or whatever it’s called. Since you aren’t quite sure what you want to do, but probably STEM, I suggest you keep four doors open: mathematics, statistics (assuming it’s distinct from math), your favorite science, and computer science. Carefully consider the requirements, which should be heavily overlapping, and fulfill the freshman and sophomore requirements for all of them. Then at the start of junior year decide which one or two appeal to you most, and fulfill the major requirements for those. And if you decide against majoring in computer science, try to pick up at least a minor in it.
What part of California do you live in? If you live in an area with more internship or volunteer opportunities than Merced, then community college is the way to go.
Also, community college might expose you to new fields that you might never have considered. Merced would, too, but is it easy to transfer between schools at Merced (eg from Engineering to Natural Sciences, or vice versa)?
Finally, community college = saving money on dorms. Talk to your parents; would they be willing to use that saved $ to pay for you to do some sort of volunteer work abroad that relates to your field of interest? That will boost both your CV and your transfer application, and of course will be an interesting experience in its own right. See eg https://www.projects-abroad.org/university-students/
I live in Los Angeles.
LAPC was pretty good 10 years ago.
You should apply to jobs through TripleByte, but be choosy; if you get a great offer take it, otherwise go to UC Merced.
Reasoning: now is a great time to get a tech job, and that might not be true in four years. You might luck into something awesome. Secondly, interviewing is educational and motivating (at least it was for me). A couple hours of talking with interviewers about their companies and their jobs may tell you as much about whether you want to be a software engineer as four years of CS classes will. Thirdly, interviewing takes practice. Having a few interviews under your belt will give you a significant leg up on your competition when you’re applying for internships or jobs later on.
Read “The Case Against Education”, and then make an honest assessment of your personality and scholastic talents, and then run through the decision algorithm he lays out in the back of the book.
Do not borrow money that you do not have a direct, actionable, implementable plan to pay back, and to pay back even if you do not finish school.
While going to school, make sure to also actually learn things.
No matter what you do: learn how to write, learn how to do a little coding, learn math up to algebra, and learn statistics. You can learn those from the library, from online classes, and Khan.
If you want to learn things in classrooms from professors and TAs, but do not want to pay for school, you can generally just sit in classes and take advantage of the library, without being actually enrolled. For most undergrad level classes, the professors don’t care if you audit their class without showing up on their enrollment sheet, and if you are actually engaged as an active learning student, they will be overjoyed. And you can get a university library card for free or cheap. (I wish Plumber had known that when he was that age.)
If you just want to go to school for the networking, social fun, and parties, you can do all that with the students of the school you want to hang out at without actually enrolling. You don’t even have to tell your peers that is what you are doing.
If you do want the diploma, and you want the “lecture in classroom” experience, knock out the first two years in the cheapest decent small regional college to you, and then transfer to a bigger university.
Consider the Western Governors University, if you want any of the degrees they offer. You can combine that with the social life of some other university (see above).
Consider a trade school. They are always looking for students, and employers are desperate for skilled trained certified tradesmen. There is nothing wrong with the skilled trades, and they can pay very very well. You can be a tradesman, AND be a well-read scholar.
Travel as much as you can afford the time for. Travel is cheap if you let it be.
Write. A lot. Write about everything. Publish some of it. You get good at writing by writing, and no matter what you do with your career, you need to be good at writing. You don’t get good at writing from college writing classes.
Do not party hard. Do not smoke (either tobacco or cannabis), only drink in minute moderation (two standard drinks a week, tops), treat psychedelics with extreme care, and for God's sake don't take "study aid" pills unless you actually have an honest Rx that you actually need.
My experiences are about 20 years out-of-date, but I went the Community College transfer route in California (Foothill College in Los Altos, transferring to Cal Poly San Luis Obispo as a junior), and it worked very well for me.
When I was in school, California community colleges were academically solid within their scope, and lower-division (freshman/sophomore-level) classes are well within their scope. They were probably better than comparable classes at a UC or CSU: UCs and CSUs tend to have lower-division classes taught either in a lecture-hall format (~100 students/class) or by TAs (grad students with teaching responsibilities as part of their scholarship/fellowship package), while similar community college courses are more often taught by full-time faculty (professors or lecturers) in a smaller-class (20-30 students) format.
Another benefit of community colleges is that they’re a softer intro to college life than going straight to a four-year college: the classes are run the same way as four-year college classes, which can be a significant transition from high school (less hand-holding and softer requirements around homework and attendance, so there’s less feedback when you’re on track to fail due to distraction and bad study habits), and it can be easier to make that transition if you aren’t simultaneously transitioning away from living directly under your parents’ care.
And then there’s the cost factor Randy M brought up: in-state tuition for UCs and CSUs is heavily subsidized, but the after-subsidy costs are a lot more than they used to be: a year’s worth of tuition and fees as a full-time student at UC Merced is currently $11,502, compared to $1,515 at Foothill College (and I think other California CCs would be very close to that). Living expenses may also be a lot lower: at Merced, you’d need to live in the dorms or rent off-campus housing, while at your local community college you might be able to still live at home. Add it all up, and you’ll be spending ~$5k/year at CC (tuition, books, and commuting costs) vs ~$35k/year at Merced.
The financial benefits are magnified if you’re still figuring out your major: if you start down one path and then change your mind, you might wind up adding half a year to a year to your college career since you’ll have spent some time taking courses that don’t count towards your new major. It’s a lot more cost-effective to do that extra year at a CC than a four-year school.
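To put rough numbers on that, here is a minimal back-of-the-envelope sketch (in Python, purely for illustration) using the approximate figures above: ~$5k/year at a community college and ~$35k/year at Merced. The exact totals will obviously depend on financial aid, housing, and how long the degree actually takes.

```python
# Rough cost comparison using the approximate per-year figures quoted above:
# ~$5k/year at a California community college (living at home),
# ~$35k/year at UC Merced (tuition, fees, and living expenses).
CC_PER_YEAR = 5_000
UC_PER_YEAR = 35_000

def total_cost(years_at_cc: int, total_years: int = 4) -> int:
    """Total cost of a degree whose first `years_at_cc` years are at a CC."""
    return years_at_cc * CC_PER_YEAR + (total_years - years_at_cc) * UC_PER_YEAR

print(total_cost(0))                  # four years at Merced:            ~$140,000
print(total_cost(2))                  # two years at CC, then transfer:   ~$80,000
print(total_cost(3, total_years=5))   # extra "changed my major" year at CC: ~$85,000
```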
Whether you go to Merced or community college, if you go the college route you’ll probably need to declare a major up front. STEM covers a lot of ground, but the three main buckets are computer stuff (Computer Science, Software Engineering, Computer Engineering, and Electrical Engineering), classic engineering (Mechanical Eng, Aerospace Eng, Civil Eng, Chemical Eng, etc), and science (Physics, Chemistry, Biology, etc). I’d try to pick which of those buckets you’re most likely to want to pursue, then declare a major that’s relatively central to the cluster.
For computer stuff, I’d recommend Computer Engineering as a starting point, which has a big overlap with Computer Science and Software Engineering (both programming-based, with the latter sacrificing some theory and algorithm coursework to make room for a deeper treatment of software architecture and development processes), but also has a fair amount of coursework in embedded/real-time systems, chip layouts, and circuit design.
For classic engineering, I’d recommend Mechanical Engineering. It’s not the most technically rigorous of the classic engineering majors (that would be Aerospace), but it’s pretty far up the hierarchy, and it has both good employment prospects in its own right and a lot of overlap with other majors that should make switching easier if you change your mind.
Not sure about the best starting major for the science bucket. Maybe others can chime in.
One suggestion—try before you buy.
Audit a class or two at your local community college, if only for a couple of weeks, assuming they don’t object. Audit a class or two at UCLA. Try hanging around UCLA talking with students, and do the same, so far as you can, at the community college–presumably most students live at home, but there is probably socializing at lunchtime and such. That should give you some feel for both the academic and the social side of life in both places. Auditing at Merced would be better still, but it’s a fair ways from L.A.
Here’s my shoot-from-the-hip advice:
Make some mistakes. Be poor and hungry for a while. Meet lots of weird and dangerous people. (Although you live in LA, so maybe you’ve already done that one.) Experience life outside your home state. Make friends with types you don’t get a lot of in LA. Ride public transportation to work. Work a blue-collar job for at least 6 months. Stay off the internet as much as you can.
Do those things for a couple years — in other words, build up your character — before you go to college or start your career in earnest.
Well, I think by now we have given this high school senior every possible type of advice except for, “Join the navy. See the world.” We’re saving that one for bean, maybe?
Don’t join the navy.
Do see the world.
I think that old advice dates from when, up until the mid/late 20C, the *only* way that the "average joe" was going to be able to travel far from their home country was to enlist in the navy or the merchant marine service.
Today, travel is so much cheaper, no military service needed.
But then you don’t get the GI bill or veteran status.
Nor do you get half-jokingly dry-humped all the time, or made to slurp a grape or whatever out of some fat guy’s navel when you cross the equator, and those experiences surely build character.
Besides, I can’t imagine there’s any better time to join the Navy than right now.
If you’re desperate for the experience, there’s probably somebody outside the military that still does line-crossing ceremonies.
At one point even airlines did it (and I’ve got the certificate from Pan Am to prove it), but I doubt any do now.
Looks like some cruise ships do a form of the ceremony, though obviously milder, and limited to volunteers.
https://blog.onlinevacationcenter.com/2018/12/11/equator-crossing-rituals-on-cruise-ships/
If you ask the SSC peanut gallery for advice and they give you every kind of advice, are you still better off than if you’d just asked the Internet?
STUDENT: I asked for advice.
MASTER: Did you get any?
STUDENT: I got every imaginable type of advice! Most of them contradictory.
MASTER: What conclusion may we draw from this?
Honestly, you need to ask people who are currently doing what you want to do, or currently managing people who do what you want to do. I don’t know jack squat about software engineering.
If you want to get into accounting, skipping college is not a good option. And if you aren’t getting into a top-tier school, you have an uphill battle getting into a good job, which means you need to land good internships.
The ideal first step would be to figure out what job you want to do and check to see how heavily its industry is regulated. If it’s regulated pretty heavily (e.g. accounting, medicine, clinical psychology) then anticipate needing to get a degree at some point. Also have a clear idea of which industries (still) offer viable career paths without degrees (e.g. some software engineering, the arts if you’re extremely talented and driven, skilled trades, military, sales & retail, etc.)
Of course, as long as we’re talking ideal, the ideal thing would be to be 18 years old and have a realistic vision of what you want to do, instead of the unfocused, borderline satirical, hormone-drenched cartoon that I and I suspect most others had at that age.
That's why I'm suggesting what I think we might need: a sort of Rumspringa for the English.
@Epistemic_Ian
You could apply for an apprenticeship instead.
1. Look at the careers that interest you and see what kinds of majors they typically look for, as well as the skill sets. This can be as broad or as narrow as you see fit, but I recommend focusing on a relatively broad range of career options that have overlapping skill sets, job growth prospects, and decent entry-level pay.
2. Do 2 years at the cheapest school whose credits you can transfer, to get the basic core credits out of the way
3. Transfer to a maybe not so cheap school that has a decent reputation for the program you need related to what you picked in #1
___________
Not the best approach necessarily but a good approach if you *must* take the uni route. The worst thing is to do tons of course work that you don’t need at a uni you can’t afford.
If you want a STEM career, go to UC Merced.
The path to graduate school begins with doing research with a professor during undergrad and getting a recommendation from them. This path will be closed off if you skip school or go to a community college.
The path to one of the big FAANG tech companies begins with a degree from a good school, like a UC.
A big part of life is getting the right credentials and knowing the right people so that you’ll even be able to try to do something. Big organizations are risk-averse and would rather let many good candidates pass by because they don’t have a degree from a good school, than risk hiring someone who would screw up and cause significant problems.
Find what you want to do and get really good at it. Then get the right credentials that prove to people that you are really good at it and not just good at debating it on the internet.
Take the minimum number of humanities courses. The main thing it broadens is the hole in your wallet. History, philosophy, etc. are best learned on your own from books. It’s relaxing to read that sort of thing anyway, so you don’t need the discipline of school to force you to do it.
Is that really true? Merced is the newest UC campus, and I don’t know that its reputation is yet very well established as a place that STEM employers want to hire out of. It is no accident that the original poster is reluctant to go there.
I think the poster's plan is to attend community college for two years, then transfer to a four-year school. If he has decent grades in community college, he is almost certain to get into a better UC than UC Merced. Eg in 2017 Berkeley admitted 18% of freshman applicants, but 27% of transfer applicants. UCLA admitted 28% of transfer applicants, and the transfer acceptance rates for the other UC campuses were in the 50s and 60s. http://www.dailycal.org/2017/07/06/uc-berkeley-releases-2017-18-admissions-data/
I taught high school seniors for many years, and in my experience, most don’t know enough about fields of study to really know what the best choice for them is. Eg: I have had 3-4 students tell me, “I like math, but also like politics. Which should I major in?” They had no clue that much poli sci is highly quantitative. So, a student who has only gotten into a low tier UC campus is probably better off going to a community college and taking a wide range of courses, including the humanities and social science (and sciences – I took a very interesting geophysics class in college, and a very interesting forestry class, as well. Had I taken them as a freshman, I might well have changed my major).
Taking the minimum number of humanities and taking a wide range of courses at a junior college aren’t mutually exclusive. Unless a lot has changed since I went to college in California, you are required to take a whole bunch of humanities to get a degree. I think that languages are the only humanities department I didn’t take a class from, and that was just because I sweet talked my way out of it by convincing the front office that C++ was a language.
I think that is true of the CSUs, but not so much the UCs. Here are the breadth requirements of UC Merced's School of Social Sciences, Humanities and Arts:
Lower Division General Education Requirements
CORE 1: The World at Home
WRI 10: College Reading and Composition
Two Natural Science / Engineering Introductory Courses with or without Laboratory, Field or Studio *
Mathematical / Quantitative Reasoning Course
Humanities, Arts or Foreign Language Course *
Social Science Course *
Upper Division General Education Requirements
Four Non-Major Upper Division General Education Courses *
Here are UC Merced’s School of Engineering requirements:
General Education Requirements [At Least 42 Units]
School of Engineering students are required to complete the following list of general education courses:
Lower Division General Education Requirements:
CORE 001: The World at Home [4 units]
WRI 010: College Reading and Composition [4 units]
MATH 021: Calculus I for Physical Sciences and Engineering [4 units]
PHYS 008: Introductory Physics I for Physical Sciences [4 units]
BIO 001: Contemporary Biology [4 units] or equivalent*
MATH 032: Probability and Statistics [4 units]
CSE 020: Introduction to Computing I [2 units] and
CSE 021: Introduction to Computing II [2 units] or
ME 021: Engineering Computing [4 units]
Additional General Education Requirements:
General Education Electives (selected from a list of acceptable courses; at least three units should be a recognized upper division writing course):
Humanities or Arts [4 units]
Social Sciences [4 units]
An upper division writing course [3 units]
Either 3 Service Learning units, or 3 additional Humanities or Arts or Social Sciences units; these units can be upper division or lower division [3 units]
No, that’s basically what I had to take as well. That’s 22 units of GE, which is like 20% of the units you’ll need to graduate.
At my alma mater, the Faculty of Mathematics requires you to take 5.0 units of non-math courses (out of 20.0 for the entire degree). Two of those have to be from a list of communications courses, which look like they were put in place because some people manage to get through high school with really terrible spoken English or weak written English. There is particular emphasis on helping immigrant students, of which the faculty has lots, although I think they're mostly native-born at this point.
I don’t have a problem with the requirement. The business pages and the financial side of politics make a lot more sense with a couple of econ courses under my belt. And courses in English and history have big term papers that really level you up in writing and library research, both of which are really useful. I wish I’d taken a few more, actually.
Personally, I see little reason to go to UC Merced over a cheaper CSU, especially if you are going to go the computer science route (which I’m most familiar with). Yes, on paper, a UC is better than a CSU, but in reality, nobody is going to give you extra points for going to Merced. I do a lot of hiring, and that’s a school that wouldn’t even register on my radar.
I did the community college -> CSU route (getting CS and math degrees), and I've been asked about my university once in an interview over my 15-year career. I do think that having a college degree still helps, even in the tech sector (I've worked with companies that would pass over all resumes that didn't list a degree), but unless you are getting into a top ten university, I don't think where you get your degree from matters at all.
If you can get a good job right out of high school, you should take it, and also do night and online classes from your community college. After the first two years, you’ll have a good idea of how your career is looking, and whether or not you want to continue on to get your degree.
I haven't seen many adaptations of it, but I liked the Olivier one in the BBC adaptation of 1983.
Seeing it played you really understand why Edmund was driven to villainy; if every single time your dad met a stranger he introduced you with “Yeah, he’s my bastard, may as well get the dirty laundry out of the way first”, I don’t think it would conduce to filial respect 🙂
Quick question about copyright, IP, etc.
I have recently started watching Elementary and noticed that, while characters often refer to real-life websites (eBay, Instagram, etc) in dialogue, whenever we actually see a website onscreen it is a fictional one. For instance, rather than Facebook, there is a social network called Friendlounger.
Is this because of rules against showing trademarked material on screen, the fact that the websites haven’t paid for product placement (or laws against it in some countries), or something else?
Probably the former. See eg https://saperlaw.com/2007/06/13/film-clearance-basics/
All the big sites have a legal clearance process for being displayed in a movie or a TV show.
They are usually not onerous, but the production industry often will want to bypass the process entirely, because each one is different, and it’s just friction in their own production process.
There are now a small handful of webdev shops whose entire gig is making such fake sites for TV shows and movies.
Game discussion: what’s a game (you could pick a computer/video game, or a tabletop game, or whatever) that you thought was really good, that nevertheless you would never play again?
For the former: Arcanum. Interesting setting, beautiful music, just generally a great feel to it. Also, bad combat system, really broken, long, buggy. Where would I find the time? Some things are best left in high school.
For the latter: The Fungi from Yuggoth/Day of the Beast campaign (the original name makes no sense, because the Fungi don’t show up in it as far as I recall, though they do in one chapter of the revised version) for Call of Cthulhu. It’s got some great bits, I had a blast running it, and everyone seemed to enjoy playing it. But, it’s fairly linear, and requires more than a bit of railroading or editing in some places.
I’ve played all the way through Arcanum three times, most recently about ten years back, and I have been tempted to do so again. The thing holding me back the most is the march of technology, not the game system.
All of those criticisms are also completely accurate, however, and I can understand not wanting to go back through all the problems it had.
But all of those problems can be solved by the simple application of Tempus Fugit and a horde of followers.
Replay value is hurt by the brokenness, though – as I recall, magic is waaay better than tech.
It is, but you can be powerful enough with either to be unstoppable, especially when you get Dog.
I played once as a good tech run not really knowing what I was doing, once as an evil magic run to try out the alternate storylines and one final time with all spoilers in a charismatic good magic run with maximum companions where I just busted the game wide open.
Arcanum was a great video game, except for the video and game parts. The graphics were poor and the color palette bland, and the combat system was wonky, to say the least.
But I played through it a lot because the character progression was engrossing and tied in wonderfully to the environmental interaction. If you were a tech character and a battle was too hard, head to the slums, scavenge material from the trash cans for molotov cocktails, and bam-bang there you go.
Magic was slightly less interesting, maybe only because it was less novel.
The Legend of Zelda: Breath of the Wild. Love it to bits, but the only time I’ll be able to enjoy it again is if I lose most of my memory. So much of the game is about the discovery, and if I remember where everything is it takes a good chunk of the fun away.
I feel like I still barely know where everything is, and I’ve been through almost all the game.
I’ve mostly lost interest in it, though I could pick it up again after it fades from memory. I don’t really enjoy the divine beast missions, so I probably wouldn’t start over from scratch either.
You could always do a randomizer run.
Oooh, do tell. Would that involve running it on CEMU? I could use an excuse to upgrade my rig.
First off, there’s some DLC out now, as well.
Secondly, I'm seeing conflicting information on whether a randomizer is out for the game. These imply that there is one out:
https://twitter.com/zants/status/989641030365536256?lang=en
https://www.youtube.com/watch?v=En3VkH-uZqA
https://github.com/lepelog/botw-msrta-randomizer
https://www.twitchmetrics.net/c/172741635-specsnstats/streams
But most randomizers are based on patching an emulation ROM, and this list (updated last Saturday) doesn’t have one for BotW.
https://www.debigare.com/randomizers/
I played Twilight Struggle one time, am thankful for the experience, but never want to play it again. It does an amazingly good job of capturing the zeitgeist of the Cold War: everything is shit, most every option you have is shit, and you’re trying to prevent the world from exploding.
I didn't like that. Because of the requirement to use almost all your cards each turn, and because some would always be very good for your enemy, it was difficult to plan a strategy. It became a game of crisis management. Everything is awful; how can you finagle the least awful outcome? Great job of presenting the experience of Cold War strategy and diplomacy! But not a fun game to play. I'll stick with Star Wars: Rebellion, thank you. I like making and executing plans, not just handling grease fires.
As for video games, man, everybody loves Celeste, but I thought it was boring as hell. The graphics are meh, the sound's okay, the story was predictable, and I get that it's explicitly about depression, but I can't relate to that as I've never experienced depression. But what I saw in this seemed really, really on the nose. So I didn't care about the girl, didn't care about the story, and so the only good thing was the platforming itself. I was glad I got it for free one month as part of my Xbox Live Gold subscription. I rushed through it, said "well there it is" and have zero desire to play it ever again.

Tangent: HOLY CRAP Salt and Sanctuary is SO GOOD. I know it's a three-year-old game but I only got it last week when it went on sale on the Switch. It is basically 2D Dark Souls. And I don't mean, "it's hard, like Dark Souls!" I mean every game mechanic is cribbed from Dark Souls. Instead of collecting "souls" to level up at "bonfires" you collect "salt" to level up at "sanctuaries." The skill tree is extremely, extremely large so you can basically design your own classes for infinite replayability. You dodge-roll, you block, you parry, you watch your stamina meter, all the enemies and bosses are punishing but fair. If you died there's a simple fix: git gud.
I hurt my back and was laid up all weekend, so I did nothing but play Salt and Sanctuary. I’ve beaten it twice and am on my third playthrough. Cannot recommend this game highly enough. It’s available for all major platforms and is cheap (I got it for $14 on sale), and if you like Dark Souls and you like beautiful 2D games, buy this. 11/10 Honcho Points.
As someone who grew up with things like Chuckie Egg on the Spectrum, early Super Mario Brothers and Megaman games on the NES, Lemmings and Worms on the Amiga, Mario Kart on the N64 and most recently Braid* on Steam etc, it is kind of deeply alien to me that anyone should be bothered by whether a video game has a story at all. If the gameplay is engrossing enough, surely making the game also pretend to be something akin to literature would only be a distraction from enjoying the game?
*Yes I am aware that Braid was a long time ago, and that it did kind of have a story. The story did in fact kind of detract from enjoying the game as an abstract playing experience. Also, get off of my lawn.
But Super Mario has a story: Bowser has kidnapped the princess and you need to save her. Mega Man has a story: Dr. Wily has created / reprogrammed robots to enslave people and you need to stop him.
Very few games have no story at all, so to call caring about the story “deeply alien”…does that mean you don’t like the vast, vast majority of games? And I agree, gameplay is paramount, but a good story is nice too. And if the story is really good, I can overlook shitty gameplay (Mass Effect 1 I’m looking at you). A lot of people praised Celeste because they were able to relate the protagonist’s struggle so well to their own struggles with depression, but as a person who does not suffer from depression, that all falls flat for me. I’m left with only the gameplay, which I thought was “okay.”
I think this was kind of the point of Celeste: it’s really just a well-designed, smooth platformer with a tiny bit of story thrown into it. It’s not really about the story in any meaningful way, but it provides a little window dressing for some of the mechanics.
The main character sort of mirrors the player’s strategy, as well. The only way to play Celeste is to just keep throwing yourself at the room until you figure out the perfect path, and that’s exactly what the main character does: just keeps pushing until she’s made it. The joy you feel when you finally perfectly transition a hard room is the main character’s joy at reaching the top of the mountain.
And a reminder that Celeste has B and C-sides which are much harder than the normal levels, so if you wanted more of that it’s there.
Yes, I know about the B and C sides but I did not want more.
Maybe it was the lack of combat that bothered me. I love platformers, but in basically every other platformer I both have to navigate the jumping puzzles and shoot/stab bad guys. In Celeste you’re just jumping, so it only seemed like half a game.
Well there is combat: both the slime-stuff in the hotel and the charging aliens in the ruins are basically fighting you. Maybe the blocks that get mad when you dash into them too. You just always lose!
Something like Mario is an action-platformer: combining the “platformer” genre with an action game. Celeste is more of a puzzle-platformer: its other half is just trying to figure out how to use the elements of a room to get to the end. Mario doesn’t really have that: sometimes you have to navigate around a few elements, but there’s much less of a “ok first I do this, then that gets me to this bit, which lets me jump over and hit this platform”.
Good review of Twilight Struggle! It’s one of my favorite games, but it’s because I explicitly enjoy that “your options are shit, now triage this crisis” gameplay. Even at work, I tend to thrive when someone else broke something, we’re losing money hand over fist, and I have to whip up something clever and fast to solve the problem- sure it’s stressful, but if I did it well it’s very satisfying.
After a few games of TS, by the way, you do get a lot better at figuring out which "grease fires" can be somewhat ignored, and that turns the game into a bit more of a plan-execution sort of game. Learning the rhythm of the different eras helps a lot too.
Twilight Struggle is definitely a game that requires repeat plays, especially if you are playing as the Americans. The game out-of-the-box leans pretty hard against the US, and the digital version has some tweaks and additional IP to help the US win. Even so, I got rocked probably 3 or 4 times before I could finally win as the US against the AI, and the AI ain't even all that good.
Also, the game has many powerful cards (like Wargames) that a new player will have nooooooo idea about, and will entirely screw up your strategy once they hit.
I didn’t even know there was an electronic version. One of my coworkers is really into tabletop games so we play them over lunch sometimes. In case anyone was confused, I played the board game version.
I think it’s a mindset thing. I love Twilight Struggle, but it’s very divisive among my friends: I think some people are genuinely just defensively skewed, and that’s a trait that lends itself to enjoying that kind of game. I play the same way in other games, too – I’m psychotically defensive in chess, lock down sections systematically in Catan, and play football basically by aggressively marking whoever has the ball and then passing it away if I wind up with possession.
Even if you play well, you do spend a lot of your time putting out grease fires – I think that’s the nature of two-player zero sum games. The better player lights more fires, puts out enough, and knows when to abandon areas.
Worse than that, too many of the strategies you can plan, and must defend against, are based on “If I force the other guy to blow up the world, he takes the blame and I win”.
I’d probably be up for another replay, with the right players. But generally speaking, for those of us who actually remember the Cold War, they’ve got the card right there, explicitly labeled “Wargames”, so we all know what the only winning move is.
To quote Yahtzee: “I’m not sure how much of my absorption came from the game itself and how much came from it reminding me of my happy place but who gives a shit it’s Dark Souls with Symphony of the Night plugged into the gaps and I like both games so I’m having a blow job while snacking on fun sized Mars bars”.
What builds did you use? I ran a whip gish on my first playthrough because there clearly wasn’t enough Castlevania DNA in there already. I’ve seen people rush Iron Rampart IV to get through the harder sections, but that always seemed like it took the fun out of dying repeatedly.
Most video rpgs probably qualify after one or two play-throughs, at least the more story focused ones. If it’s an rpg with an open world or has a great resolution system, then it could be worth returning to periodically despite not getting much from the story sections. I played through chapter 1 of FFT last weekend, still a great game. But I didn’t finish my emulated Chrono Trigger run a couple years ago. The gameplay was good for the time, but now the only enjoyment I got was seeing it through my daughter’s eyes.
There’s plenty of really good games I probably won’t play again because I don’t have the hardware anymore, but I would play them if given the chance, so I think that’s beside the point.
As for board games, I might have a couple here–I don't know if I will return to them or not. Doomtown and Game of Thrones are both good games, but they aren't being brought out, for two reasons. One, I'm the only one who has them that I know about, so there's not a lot of excitement among my friends to put the time into learning them, and partly as a result of that, I don't have much of a cardpool, so the deck construction element doesn't hold too much interest. Cards are pretty, though.
There’s other games I own that I don’t have much interest in playing, but I’d call them “good games for their time” rather than really good games in general.
Grandia II
All Half-Life games
TES: Skyrim
For me this is most computer games I ever thought were good (Worms Armageddon, the original GTA that was entirely birds-eye-view, Myst, EA Sports NHL series, just to name a few). I liked the games themselves and enjoyed playing them when I was a yoot, but I have no interest in spending my time that way anymore. This applies to console games too.
With board and card games, all the ones I ever really liked (chess, Pente, Monopoly, poker, Scrabble, etc.) I would still play and very occasionally do if only I have the opportunity and people to play with.
I would play pente with you! That game is woefully obscure.
There’s an online chess.com-style site for it, or at least there was. I used to think I was really good because I’d always win against anyone I played IRL, but then I went on that site and found out I have no clue what I’m doing.
For computer games, I’d say Amnesia: The Dark Descent. It’s an outstanding horror game, one of the few that’s actually managed to scare me, but in actual play it basically breaks down to linear puzzle/exploration mechanics: if you know where to go and what to do, it’s trivial, and a lot of the horror payload comes from time and information constraints.
Similarly, Fallout: New Vegas. In gameplay terms it’s a mediocre shooter; the fun comes from exploring the setting and getting to know the characters. And now that I’ve hit every site on the map and spent some time playing with all the companions, there’s not much there for me. If I had a gun to my head I could probably stretch it with mods and character gimmicks (3 INT run? Luddite run? Explosives run?), but there’s no compelling reason to when I can do the same thing with more modern games.
Turok for N64 comes to mind. It was fun, but tedious, and the PVP was woeful compared to other shooters of its era. Beat it once, never felt like touching it again.
As mentioned an OT ago, Crusader Kings 2 and anything else by Paradox now falls into that category on the grounds that, first, played the way I like they become a time sink that I can’t really justify even with their high entertainment value and, second, played the way I like there is an unacceptably high chance that Paradox will force a game-breaking rule change onto me in the middle of a game.
Big long form RPGs, I just don’t have the time and inclination to sink in them the way I enjoyed them as a kid. My favorite two being Exile 3 and Planescape Torment.
I doubt I’m ever going to replay any of the Disgaea games I’ve finished. They’re just so… long.
I’m at a sloggy part in the middle of D5 right now, and just can’t be bothered to figure out which of the 1000 random slightly-useful abilities turns out to be game-breakingly useful.
How much would it cost to make a fully-automated machine that makes 10,000 Big Macs per day? Is doing so even possible today?
By fully-automated, I mean that humans are not part of the process of turning input components into finished products. They supply inputs (buns, patties, cardboard boxes, electricity, …) in bulk, they remove finished Big Macs, and they dispose of whatever waste stream is produced. They may also be needed to fix any breakdowns and to do inspections and preventative maintenance, but all of these are rare enough that no one is on duty for these purposes during regular operations. Let’s suppose the design target is 10 hours of scheduled downtime and 10 hours of unscheduled stoppages per year, not including any logistics problems with the supply of inputs or ability to remove outputs, including waste.
A burger making robot already exists.
Whatever it costs to scale that robot, I guess.
I don't think this is possible. You're talking about a machine that has 99.8% OEE. This type of machine would be considered world-class if it hit the high 80s.
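Just to show where a number like that comes from, here is a quick back-of-the-envelope check (a Python sketch, assuming the machine runs 24/7; availability is only one factor in OEE, so the real requirement would be at least this demanding once performance and quality losses are counted):

```python
# Back-of-the-envelope numbers for the stated spec: 10,000 Big Macs/day,
# with 10 h scheduled + 10 h unscheduled downtime per year, running 24/7.
HOURS_PER_YEAR = 365 * 24            # 8,760
DOWNTIME_HOURS = 10 + 10             # scheduled + unscheduled, per the spec

availability = (HOURS_PER_YEAR - DOWNTIME_HOURS) / HOURS_PER_YEAR
burgers_per_hour = 10_000 / 24
burgers_per_minute = burgers_per_hour / 60

print(f"required availability: {availability:.3%}")                           # ~99.772%
print(f"throughput: {burgers_per_hour:.0f}/h, {burgers_per_minute:.1f}/min")  # ~417/h, ~6.9/min
```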
That seems only true if the machine can only produce one burger at a time. One could imagine a larger machine with several “lines” capable of creating many burgers at a time (unless you would define each line as a separate machine, but I’m not sure that would be consistent with other usages).
The limiting factor is probably cook time for some of the ingredients.
I think ADBG is pointing to the fact that the stated criteria (10 hr scheduled maintenance/yr, 10 hr unscheduled maintenance/yr) are completely bonkers for manufacturing. This isn't a server farm, it's a factory, and that kind of reliability for manufacturing equipment isn't something the designers or operators would see in their wildest, drunkest dreams.
What would be high but plausible requirements for a piece of complex food-processing equipment?
Ah, yeah I kind of skimmed over (entirely missed) that part of the requirements.
…This has never happened before I swear.
@johan_larson
Are we to assume that this machine runs 24/7 outside of scheduled/unscheduled downtime?
I would guess you would need to increase total downtime by about 60-80x to feel comfortable. You can probably do better, but it will cost you.
Yes.
Well, the budget is a question, not a requirement.
Ran, by Kurosawa, is widely considered to be an excellent adaptation.
Kurosawa builds in a very strong live by the sword, die by the sword theme which is absent (or at least much more subtle) in the original. Which doesn’t necessarily make it worse, but that and some other lesser changes combine to make it quite different from the original.
The best adaptation of King Lear is Kozintsev's 1971 version. It has an original score by Shostakovich. It can be found on YouTube easily; it's very hard to buy a legitimate copy.
Covers the last line of your comment too, in a very moving scene that doesn’t add any lines to Shakespeare’s text.
I haven't read the play, but I did really enjoy a discussion of it by Stefan Molyneux and an English professor that can be found on YouTube.
Hello, and welcome to the twentieth instalment of my Biblical scholarship effortpost series, and the final one covering the Hebrew Bible. Last time we covered three short stories about Jewish women, one noncanonical to Jews. This time we’re going to look at the book of Daniel, a combination of stories and apocalyptic visions. In connection to that, I’ll talk a little about the apocalyptic genre.
The caveats: while I studied this in school, I’m not a real expert. I’m aiming for about a 100/200 level of coverage, but if people have more questions, I can try to answer them – with the caveat that I don’t currently have access to my books. The focus here is on secular scholarship, not on theology. I won’t be providing a significant summary due to space constraints.
The book of Daniel is divided by most scholars into two separate parts, with the general consensus theory being that the two halves were composed separately, at different times, and later edited together. I'm going to discuss them separately, because even if they were originally composed together, they're very different in character – although they share one key feature: both are messages, in their way, of hope and faith.
The first six chapters concern young Jewish noblemen taken to Babylon and educated for court service there. The stories have a folksy, humorous side to them reminiscent of Esther, another tale set at the court of a foreign ruler. Primarily, they concern Daniel, who is wiser at interpreting dreams and so forth than the Babylonians, but also a keenly observant Jew. He is a faithful part of the Babylonian system, but at the same time a Jew who has not abandoned his identity or practices, and who will not do things forbidden by the Jewish religion.
Based on historical inaccuracies in the text, the way it presents the Babylonian court, and its depiction of foreign rulers (as bumbling, foolish, or impulsive rather than evil, although possibly dangerous due to their bumbling, foolishness, or impulsivity), scholars think that it dates later – from the 4th to the 2nd century, during the Persian or Hellenistic periods. Its context is Jews in the diaspora who are simultaneously members of their immediate societies and loyal to their own people and religion. Daniel is an exemplary figure: he is loyal to his immediate master but also ultimately to God, who remains in authority over the world – including foreign rulers and their realms. The book’s message is a hopeful one: Jews can live and even thrive in the diaspora without having to compromise their religion. As noted, the stories have a folkloric character, and may originally have circulated orally.
Meanwhile, chapters 7 through 12 are apocalyptic visions narrated in the first person. Daniel is the only Jewish apocalyptic material to have survived in the Hebrew Bible; other Jewish apocalyptic material can be found in other canons, and of course, in the New Testament. We should take a moment to discuss the apocalyptic genre in general.
An apocalypse, from a Greek word meaning “revealing”, could have various meanings, and scholars argue over what exactly an apocalypse is. If you take a broad definition, you have two subcategories: one sort of apocalypse is full of the narrator being transported to otherwise unreachable realms, given visions of cosmological significance, and often shown some indication of life after death, including judgment. This sort of apocalyptic material provides information about the nature of reality.
Daniel’s apocalyptic material is the other sort: an angelic interpreter (not a uniform feature, but not unique to Daniel) explains a series of weird, symbolism-filled visions. The visions generally take a historic cast: they describe past or current events (scholars think that apocalyptic predictions are, like many prophetic predictions, a mix of actual predictions and “predictions” of past events to give legitimacy). The historical narrative in Daniel describes an understanding of history in which God is ultimately in control, but in which for the time being things are very unpleasant. Eventually, however, God will intrude into history and change the world for the better, not just by righting wrongs, but by really altering the nature of things.
Daniel also features the resurrection and judgment of the dead – while the exact nature (physical or spiritual) of this resurrection isn’t 100% clear, the text does present individual resurrection and individual judgment. It’s the clearest example of this in the Hebrew Bible, in that it is very unlikely to be metaphorical; we know beliefs like this were contemporary in Judaism and continued to exist as a school of thought.
If one focuses primarily on the more historically-themed apocalyptic materials, apocalypticism seems to be a different answer to the question answered in some of the prophetic books (similar, perhaps, to how Ecclesiastes and Job are different takes on the subject matter of Proverbs). Many prophetic works take the view that negative historical events are God’s judgment, and that if people get their act together, they will be judged in a positive way. The group needs to behave itself so it can be rewarded instead of punished.
Apocalypticism is a later development, and perhaps spawns from the history of continued disappointments and misfortunes. After the sack of Jerusalem and the Babylonian exile, God’s people were restored, but as a Persian province. Later, they fell under the rule of other empires. The horrible, traumatic event came and went… and things didn’t get very much better, unlike what many thought would happen, and then there were more traumatic events, of one sort or another. Apocalypticism responds to this and posits a world where for whatever reason, in the here and now, the bad guys are winning. God’s favoured people suffer not because they are unruly, but because the world is evil, and possibly even because they are good in an evil world. With time, however, God will put everything right, and the world will be changed – not just to a place where bad things don’t happen, but to a place where they can’t happen.
Scholars think – more on this later – that Daniel dates from the rebellion against Antiochus IV Epiphanes, who seems (stating things conservatively) to have vigorously promoted Hellenization, and whose rule was (again, speaking conservatively) understood by rigorously observant Jews as serious aggression against them. The correspondence between real-world events and the symbolic “predictions” in Daniel seems fairly obvious (the actual predictions of the future are wrong). Daniel conveys the fairly typical apocalyptic message, showing how history fits into a narrative where current suffering will be reversed when God intervenes decisively in history.
The general consensus on dating Daniel is that by looking at where the “predictions” corresponding to historical events are most detailed, and where they cease to be accurate, the “current day” corresponds to around Daniel 11:40-45, roughly 164 BCE. It was probably written down from the start; if the book of Daniel as we have it dates to this period or after, then it’s the latest composition in the Hebrew Bible.
So, the scholarly consensus is that two separate books have been combined. There’s one major problem with the two-book theory, namely that there’s some material in the first six chapters that’s at least got a whiff of apocalypticism. The clearest example is the dream of the statue, symbolically mapping to four successive empires, with the final prediction that one day a kingdom of God will be in place instead. Apocalyptic elements (and the statue isn’t necessarily apocalyptic; understanding successive empires or ages of humanity as metals of declining value appears elsewhere in the ancient world) could be explained away as later changes, made to bring the first six chapters in line with the second half – but this isn’t exactly falsifiable.
Another factor that complicates theories about the book’s provenance is that it’s in two different languages, Hebrew and Aramaic (another Semitic language which had become a regional common language due to the history of empire). The linguistic division, furthermore, doesn’t follow the 1-6/7-12 division: up to the first half of 2:4, it’s in Hebrew, then in Aramaic until the beginning of chapter eight, where Hebrew picks up again for the rest of the book. Whether the book of Daniel was originally one document or two, either there was originally some mixing of languages, or some translation has happened. There are theories about how the different parts of it were put together, and you can break it into more than two documents if you like.
So, in conclusion: the book of Daniel is made up, first, of stories about an exemplary figure who serves gentile rulers without compromising his loyalty to God, and second, apocalyptic visions. Scholars tend to think it was multiple documents originally, but the division in tone and genre is obvious even if it was written as one document. Both, however, share messages of hope: in the one case, that Jews can live a good life in the diaspora and that God remains in control, in the other case, that God would eventually change the world for the better.
Especially important is apocalypticism – because with Jewish apocalypticism begins the other major chunk of the Biblical canon, the New Testament. Once I’m back with my books, we’ll start looking at the New Testament – but before that, I’ll try to find some time to write something summing up a bit about the Hebrew Bible.
(As always, if I’ve made mistakes, let me know – hopefully within around 55 minutes so I can edit)
One note: this is the book of Daniel as it is in most current texts. In the Septuagint, and in the Catholic version, there are two more chapters (13 and 14 in Catholic numbering, but I think at the beginning in the Septuagint) that are stories about Daniel, rather than apocalyptic.
I should have mentioned these – I made some early cuts for space, and then ended up not needing the space, but by that point I was away from my books. I think you’re right with regard to the placement. There’s also some stuff inserted into one of the preexisting stories in Daniel.
The Song of the Three Young Men, IIRC, which Shadrach and Meshach and Abednego sing in the fiery furnace, closely echoing a Psalm?
Isaiah doesn’t qualify?
Isaiah is… complicated. I’m away from my books right now, but looking at the post, it’s complicated whether the relevant bits are considered apocalyptic or not. I think it’s the sort of thing scholars argue over. The stuff in Daniel is unquestionably apocalyptic.
I thought that Ezekiel qualified as ‘apocalyptic’, but I don’t remember thinking that Isaiah did.
To flesh out my thoughts:
A. Daniel is not the only prophet to report seeing a ‘vision’ of something that is revelatory.
Ranging from Isaiah’s vision of God surrounded by worshipping angels in the temple, to Ezekiel’s theophany, to Zechariah’s visions of craftsmen/horns/man-with-measuring-line, to Joel’s vision of an army of locusts, there are more than a few prophets who report revelation by ‘vision’.
B. Daniel isn’t the only prophet to report a special angelic messenger as part of the vision. He shares that with Ezekiel. Ezekiel’s vision is as unusual as Daniel’s, but the messenger has a different focus. And Ezekiel’s vision(s) with a special angelic messenger focus more on the present and future of the Temple in Jerusalem than on a symbolic presentation of the broader sweep of history.
C. Thus, the prophecies of Daniel are in a special category as ‘apocalyptic’, especially if we use this description: …[Y]ou have two subcategories: one sort of apocalypse is full of the narrator being transported to otherwise unreachable realms, given visions of cosmological significance, and often shown some indication of life after death, including judgment….Daniel’s apocalyptic material is the other sort: an angelic interpreter (not a uniform feature, but not unique to Daniel) explains a series of weird, symbolism-filled visions.
It’s possible to squeeze parts of Ezekiel’s theophany into that first category, but Ezekiel doesn’t say much about life after death. Ezekiel also has a vision of the Temple detailed enough for him to provide measurements of every part of the structure…but that’s not in the category of transported to otherwise unreachable realms.
So I think I agree, Daniel is the only ‘apocalyptic’ prophet in the Judaic canon.
It should be clarified that in my post, I’m trying to provide a sort of bland consensus view of what apocalypticism is – but there are disagreements and so forth. I’d bet there are scholars out there saying Ezekiel isn’t just reminiscent of apocalypticism, but is apocalyptic itself.
My personal bias is towards apocalypticism centred around “God’s gonna come and fix this broken world, and then things will stop sucking so bad” but that’s because that’s the motif that is most strongly expressed in the canonical early Christian stuff.
Unfortunately for this post series, the really out-there “transported on a fabulous vision” stuff is outside our scope. Enoch is the one that jumps most readily to mind – but I might be misremembering; I haven’t even glanced at it in the best part of a decade. It might be worth a look but I don’t know if I have any books covering it.
The style of Isaiah is extremely different from that of Daniel. The former consists mostly of direct words of God, and the only clear theophany/vision in chapter 6 has a relatively straightforward message of commissioning the prophet.
Daniel is full of cryptic allegorical visions about historical empires and events, which require angelic interpreters and wisdom to make any sense out of. Parts of Zechariah and Ezekiel are quite similar (which could point towards an earlier dating of Daniel) but Daniel is the apocalyptic book par excellence which kicked off a whole slew of copy-cat Jewish apocalyptic literature. The Book of Revelation is the only other canonical example (being inspired by God in my theology) but there are a bunch of other examples like 4th Esdras and Enoch that only scholars know about today. (Although, 4th Esdras was widely taken as canonical in the medieval Catholic church, before Luther and Trent).
Ecclesiastes is the bad cop to the rest of scripture’s good cop.
You think God is asking too much of you, do you? What else do you have going for you? Money? Power? Love? Food & drink? Meaningless! Vanity! You’ve got nothing.
Traditionally (I don’t recall dndrsn’s scholarly take on the subject), the same author, David’s son Solomon, wrote Ecclesiastes and (much of) Proverbs, as well as Song of Songs, which celebrates romantic love.
On the other hand, I didn’t mean to imply that it was written with that purpose. I suspect the author legitimately was going through some depression or a dark time philosophically. But it was regarded as wisdom for being honest about the human experience, which must be reconciled with any true religion.
That is, Christianity/Judaism is truer for having books like Ecclesiastes and Job that say “yeah, good people can get screwed over in life”, even if it doesn’t give a satisfactory explanation, than it would be if it pretended everything is sunshine and roses for its adherents (as it can come off as sometimes).
Ecclesiastes is here and Proverbs here, Song of Songs here. Modern scholarship has largely discarded the traditional attribution of those books.
I think Randy M is right about why it’s in the canon. The Hebrew Bible and the Christian canon both have room for multiple interpretations of why the world is the way it is, whether on a large scale (prophetic vs apocalyptic) or on a small scale (Proverbs vs Ecclesiastes).
Is Ecclesiastes a stand-alone work? In the broader sense, is anything a stand-alone work? Nothing stands entirely separately from the society in which it is produced.
Thanks!
Yeah, “bad cop” is maybe a good way of putting it for Ecclesiastes. It’s a response to the problem implicit in the message of Proverbs – the wise and good don’t always prosper, the wicked and foolish don’t always suffer, what’s up with that? And its response boils down in large part to “well, we don’t know, and we can’t know.”
What’s your take on the alleged parallel between Ancient of Days bestowing power upon the Messiah in Daniel 7 and El bestowing power upon Ba’al in Ugaritic texts?
I’m afraid this is out of my wheelhouse and I’m away from my books – bug me about this again when I start posting the New Testament stuff, and I’ll see what I can dig up. Off the top of my head, it isn’t hard to find examples of the influence of the general culture/religion of the region. This is interesting, though, because Daniel is rather later than a lot of the material that influence can be seen in.
A few thoughts:
The first section of the Book of Daniel appears to be composed from folk tales about Daniel-and-friends in the court of the Kings of Babylon. The opening chapter, and the framing narrative for chapter 2, are in Hebrew. The rest is in Aramaic, and the switch comes after the first quote from the King of Babylon in Chapter 2.
If these folk tales were mostly remembered and transmitted in Aramaic, that might be a reason why the language switches to Aramaic in that section. That might be an indication that the stories were pulled from a larger pool of Aramaic stories about foreign exiles in the royal court of Babylon. But it’s a bit strange that the opening of chapter 2 is in Hebrew, and the story switches to Aramaic after that opening.
The first chapter of the apocalyptic section is also in Aramaic. The remainder of the visions are in Hebrew. This may mean that the tradition of apocalyptic visions was also linked with the legendary-or-historic character of Daniel seen in the earlier folk tales. It might also mean that most of these stories were remembered/transmitted in Hebrew, but at least one was remembered/transmitted in Aramaic.
There’s a reference, somewhere in the prophecies of Ezekiel, to a man named Daniel. Daniel is compared to Noah and Job as examples of righteous men who were distinctive in a world full of unrighteous people. Another piece of evidence that there was a legendary-or-historical figure who had a reputation of being a righteous man in an environment that was not conducive to such behavior.
Kind of like the earlier discussion about the historicity of Moses, the historicity of Daniel is an interesting challenge. I’d argue that some of the children-of-Exile, raised in the court of the Kings of Babylon, did distinguish themselves as faithful to their Jewish heritage. The biggest set of such stories centered around Daniel.
I was finishing a grant proposal so I’m sorry I couldn’t reply to this earlier.
But I’d like to mention (as usual) that Daniel can be dated considerably earlier if you believe that divinely inspired predictive prophecy is possible. (Josephus claims that the Book of Daniel existed in 332 BC when Alexander the Great came to Jerusalem, and was shown to him as a way of appeasing him.)
In particular, in chapters 8 and 11 the book clearly predicts the Seleucid ruler Antiochus IV Epiphanes, the villain of the Hanukkah story, as well as the archetype for the later Christian concept of an “antichrist”. This dude basically made Judaism (including circumcision and keeping kosher) illegal, sacrificed a pig in the Temple, and demanded worship as the manifestation (“epiphanes”) of Zeus, so that was a pretty big deal from the Jewish perspective. He was opposed by the guerilla forces of Judas Maccabeus, whose (priestly) family later became the Hasmonean dynasty during a brief restoration of Israel’s sovereignty prior to their annexation by Rome.
The traditional Christian interpretation of the 4 kingdoms in chapters 2 and 7 (which are clearly intended to match with each other) is:
Gold Head / Lion with Eagle’s wings -> BABYLON (famous for its gold artifacts)
Silver Chest and Arms / Lopsided Bear -> MEDES and PERSIANS (dual kingdom with Persians dominant, famous for their silver artifacts).
Bronze Thighs / Leopard with 4 Wings -> GREEKS (famous for their bronze weapons; Alexander the Great’s kingdom was split between 4 of his generals after his death; the conflict between two of these [Ptolemies and Seleucids] played a major historical role in the intertestamental period)
Iron(Clay) Legs(Feet) / Strange and Terrifying Beast -> ROMANS (famous for their iron weapons and clay statues, vastly more powerful and extensive than previous empires)
Stone that becomes eternal Mountain / “Son of Man” (i.e. Human Being) who rules an eternal kingdom (GUESS WHO!)
This progression seems to fit the details of the visions considerably better than rival interpretations, but makes the predictions of the book extend past even the later scholarly date of composition. (The 69 sevens calculation also seems to work out to around the early first century, although the exact year depends on the details of how you do the calculation.)
What’s stopping someone from creating an online payment “adapter” site? I.e., I want to send my friend $50, but I only have PayPal and she only has Venmo, so I PayPal the adapter site $50 (plus some transaction fee), and it Venmos $50 to my friend.
Who would use it?
Someone who doesn’t want to set up multiple accounts and would rather have just one, instead of signing up for both PayPal and Venmo.
The problem is that in order to use the “adapter” you have to…set up another account (for the adapter service). At that point, why not just go ahead and create the Venmo account and (literally) cut out the middle man?
Such a person might well exist (but as you say, xkcd 927). However, I doubt they’d be willing to pay any fees for the privilege.
Side note: PayPal owns Venmo.
Because you only have to create one more account, not 3 or 4 or 5 more.
I don’t think Paypal/Venmo would want to take your payments, and I don’t think you could force them to. If you could, running any kind of financial services company implies some pretty significant costs, regulatory and otherwise, and it may be that so far everyone who has been willing to pay those costs did so with dreams of outcompeting Paypal/Venmo/etc rather than working with them.
I will add that several cryptocurrencies are explicitly designed to solve this problem, which I think includes Stellar and Ripple. The former is a fork of the latter and it’s not clear to me which is better or more likely to gain traction.
Yeah, this. Lots of people have had this idea for competing services (why not one front end that polls both Uber and Lyft and sees which has the best price? Why not a common front end for all the various food delivery services?), and the individual vendors fight it tooth and nail because they understand that they become disposable commodities if these aggregators succeed.
And they can fight it pretty effectively, it turns out, both legally and technically.
My understanding is that transaction fees would irk consumers too much for this to ultimately be very successful. Look at the fee a government charges you if you pay with a credit card instead of a check. That is similar to what this service would charge. Seems to me people would not be happy paying $3.00 to transfer $150. And I’m not even sure that covers profit for your app; that’s just what Mastercard and Visa charge.
From the perspective of one payment processor alone, this would look a lot like money laundering, might trip their fraud-detection systems, and would probably just generally be annoying to deal with; they would likely just ban the adapter from having an account. A lot of the value in payment systems is theoretically in network effects too, so they have an additional business incentive to not allow this.
In your specific example, though, PayPal owns Venmo, so it’s kind of weird this doesn’t already work.
The main reason you don’t see a proliferation of services like this is because margins are too low. Sure, Stripe and Venmo exist. But, to give you an example, I know the largest second party billing company on Shopify. They process literally hundreds of millions of dollars worth of transactions and they’re still reliant on VC money/have a dozen people.
Think about the math. Let’s say the adapter site charges 1% (and they probably can’t get more because their fees are on top of PayPal/Venmo). They made $.50 on your transaction. How much money would they need to move to pay one developer’s salary? $15,000,000. Which is fine if you can get those numbers: most of your costs are fixed. But it’s a huge, gaping chasm between startup and that.
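If you want to fiddle with that break-even math yourself, here’s a minimal sketch; the fee rate, average transaction size, and cost figure are all illustrative assumptions, not real pricing from any processor:

```python
# Rough break-even model for a hypothetical payment "adapter" service.
# Every number here is an illustrative assumption, not real pricing data.

ADAPTER_FEE_RATE = 0.01         # 1% markup on top of PayPal/Venmo fees
AVG_TRANSACTION = 50.0          # dollars per transfer
ANNUAL_FIXED_COSTS = 150_000.0  # e.g. one developer's fully loaded salary

revenue_per_transaction = ADAPTER_FEE_RATE * AVG_TRANSACTION  # $0.50

transactions_needed = ANNUAL_FIXED_COSTS / revenue_per_transaction
volume_needed = transactions_needed * AVG_TRANSACTION

print(f"Transactions needed per year: {transactions_needed:,.0f}")
print(f"Dollar volume needed per year: ${volume_needed:,.0f}")
# -> 300,000 transactions, i.e. $15,000,000 of volume, just to cover one salary
```

And that fixed-cost line is optimistic; the regulatory and fraud overhead mentioned above would push the break-even volume well past what one salary implies.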
Anyway, nothing is stopping it if the companies allow it. Well, a huge red tape mess of regulation is stopping it. But if there was real money in it, it could be done. There isn’t, though. While over a trillion dollars move through the mobile payments space, the people facilitating it only get a total of about 30 billion dollars. Plus it’s a strong network effect market, leading to natural monopolies.
Are there any practical/acoustical reasons (so, not tradition or visual aesthetics) why acoustic violins, violas, and cellos are still made with peg tuners and a peg box rather than machine-head tuners and a solid electric guitar-style headstock?
Classical musicians are extremely conservative when it comes to instrument modifications. There’s a decent amount of woo involved along with tradition and just “doing what works.” But also, changing the mass and/or stiffness of the tuning mechanisms (or any part of the instrument, really) will change the vibration modes and damping of the system and affect the sound to some extent.
I grew up hearing that (one parent is a classical musician) but in recent years I’ve also seen it at least somewhat debunked as overblown.
You might think it’s overblown – but remember, these people can be quite high strung.
I see what you did there
Yeah, it would be interesting to see if people could really tell the difference in a blind test. But there is a lot that goes into mastering an instrument – part of that is feel, weight and balance, etc. Something feeling weird or different could affect the person’s ability to play or, even more likely, their perception of how they play/sound.
If everyone switched to better tuning mechanisms, the world might be a better place, but getting someone who’s spent 10,000+ hours on a $100,000+ instrument, who’s already successful, to change can be hard. I know for my instrument, which I played professionally in orchestras for some time, technological advances came along now and then but were not widely adopted. We had trained ourselves to compensate for the instrument’s limitations so well that not doing that thing seemed harder than just continuing to do it.
And a student might avoid the innovation because “nobody with a big orchestra job uses that,” so not using that is one piece of the huge, often subjective and unknowable mish-mash that differentiates success from failure.
Also, “if it ain’t broke…”
Doesn’t seem totally rational, but those are the reasons as far as I can tell.
The quote is from Jack White, so not exactly a classical musician, but I think that idea or a version of it is present in the minds of a lot of musicians of any specialty/style whether consciously or unconsciously, and is a big part of music culture generally. The idea that it isn’t supposed to be easy and that working harder is somehow important, or that the more difficult something is or the more care that it takes to do it the more valuable it is.
*Note he isn’t talking about “technology” as anything electronic (obviously, the guy uses electric guitars and tons of effects), but “technology” that just makes things easier without any other benefit to the music making process (making new/unique sounds, making something impossible possible, etc.). Easier tuning gears would be a mild example of this.
Has anyone done a blind test of $10k violins vs $100k violins?
@Douglas Knight
There would be two components to that…can the player tell a difference while playing it, and can the listener tell a difference while listening?
My intuition:
Player: Top end players most likely yes. Very good players maybe.
Listener: Highly trained musicians maybe, average listeners almost certainly not.
Originally I was going to call for a moratorium on calls for blind tests until people better propagate the results of existing blind tests.
I don’t know if people can tell the difference between $10,000 violins and $100,000 violins, but it appears they cannot tell the difference between $100,000 violins and $1,000,000+ violins.
@Douglas Knight
Anecdotal experience suggests that the pricing of very expensive instruments is based on collectibility factors, not musical factors. In particular, documented proof of provenance and chain of ownership is much more valuable than how the instrument plays or sounds.
One of my friends is an accomplished violinist who made a lucky find of an amazing instrument. When he solicited value estimates from several dealers, he was shocked to learn that none were even interested in the quality of the instrument or hearing it played.
Something like this was illustrated in a cousin domain with the sale of Da Vinci’s Salvator Mundi: huge price based on a great origin story, even though it is (was?) not well regarded artistically.
“If I miss one day of practice, I notice it. If I miss two days, the critics notice it. If I miss three days, the audience notices it.” – Some famous pianist. And it matches my observations, too.
Someone without a lot of experience playing or listening to classical instruments probably would not be able to tell a $10k violin from a $100k violin, but someone with a lot of experience might be able to. Also, a performer might find the more expensive instrument easier to play (easier to get the sound quality they want, easier to play in tune, easier to project into a hall, etc.), but can sound just as good on the cheaper one with more (different) work than they want or are used to.
There are some YouTube videos where people compare very expensive and cheap musical instruments. I’ve seen flute, piano, violin. Of course it’s going to be harder to tell listening to a compressed, spat-through-speakers version than hearing it in a concert hall. Some instruments sound kinda crappy up close but sound great from a distance in a big hall – better than some that sound great up close. Acoustics can get complicated and sound quality can be subjective (although there are broad points of agreement, some maybe biological, much definitely cultural, on “what sounds good”).
Also, acymetric is right – part of the artistry of music making is the challenge and the amount of work put into it. I’m not sure this is a worthwhile mindset, or if it’s counterproductive, but that’s how it is for many.
That’s one of the things that has actually been tested in a blind test. Strads sound just as lousy far away.
I seem to recall there being different tuning systems, where the notes correspond to slightly different frequencies depending on which key you’re tuning against. Standard piano tuning uses an average of common keys, and string instruments have “standard” tuning systems probably developed along similar lines. But the key-specific tuning systems do exist, and while I don’t know how often they’re used, I’d expect classical stringed instruments to be one of the areas where it would come up the most.
With manual analogue tuning via peg box, you can tune to any values you want (within the range the string can physically support), but a machine-head tuning system would probably be built around a specific tuning system or a specified set of such.
This is definitely true, but I don’t think the type of tuners impacts the ability to tune this way. I know it comes up a lot in smaller ensembles (I can’t remember specific examples because it was a long time ago, but things like “whoever has the 3rd on this chord needs to be just a little flat” based on what key/chord was involved).
Yep, the Major third needs to be flat relative to equal temperament found on pianos. Minor third sharp. Other intervals need different adjustments because “equal temperament” is a big compromise that sounds equally out of tune in all keys.
This is called Just Intonation and orchestral instruments use it when possible (winds and strings and tunable percussion) along with choirs (naturally). When playing in an orchestra with a piano or harp or guitar, you have to go along with that instrument since it can’t adjust in real-time.
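To put rough numbers on those adjustments, here’s a small sketch comparing a few just-intonation ratios with their equal-tempered sizes, measured in cents (the interval list is just a sample):

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(ratio)

# Pure whole-number ratios for a few common intervals, vs. their equal-tempered sizes.
intervals = {
    "major third":   (5 / 4, 400),  # equal temperament is ~14 cents sharp of just
    "minor third":   (6 / 5, 300),  # equal temperament is ~16 cents flat of just
    "perfect fifth": (3 / 2, 700),  # equal temperament is only ~2 cents flat of just
}

for name, (just_ratio, et_cents) in intervals.items():
    deviation = et_cents - cents(just_ratio)
    print(f"{name}: equal temperament is {deviation:+.1f} cents relative to just intonation")
```

So a string player aiming for a pure major third plays it about 14 cents flatter than a piano would, and a pure minor third about 16 cents sharper, which matches the rule of thumb above.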
I don’t see your point here. Machine heads can also be tuned to any value you want, and what’s more is they are easier to turn and they don’t slip.
To add to Well…’s comment, ‘machine head tuners’ just means geared tuning pegs with a worm gear like on pretty much all guitars (and indeed double basses), or planetary gear tuners like on a lot of modern banjos, which are all just as capable of a theoretically infinite degree of tuning freedom, and are not locked into any specific tuning system like, say, a piano or an organ is. Indeed, you can get more accuracy, since the gear leverage turns the movement of your clumsy hand into a smaller change in the tuning.
And in fact many violinists do in fact use a type of geared fine-tuner that attaches to the tailpiece, at least on the highest string. You tune to approximately the right note using the main tuning peg, then adjust the pitch to perfection by turning the screw on the fine tuner.
That said, you can now get geared planetary tuners for the violin family – they look just like traditional friction pegs, so presumably have no effect on the resonance of the instrument that any normal human could detect.
Got it. I wasn’t familiar with that style of tuners and was assuming it was some kind of automatic self-tuner.
If the main advantage of “machine head tuners” is mechanical advantage (so each turn is a smaller adjustment in tune), I can see why they haven’t been widely adopted for the violin family: when I played viola in grade school, my instrument and all my peers’ violins, violas, cellos, and basses all had fine tuners on the base of each string below the bridge.
Well, unless I really don’t understand how geared planetary tuners work, they have unlimited tuning range (well, it’s limited by the strings), while fine tuners have a limited tuning range based on the length of the screw. There’s also the fact that most instruments past the beginner level only have a fine tuner for the highest string (the viola I use as a violist/music minor only has a fine tuner on the A string, for instance), so if those things don’t have the reported tone issues with fine tuners, they would be better in general.
Plus, ergonomically, fine-tuners are sort of impractical. At least on the cello, where you have to do this bizarre reach-around-the-bow thing.
My electric cello solves this problem: it has no fine tuners, but instead has bass-guitar-style machine head tuners on a solid bass-guitar-style headstock! Ashamedly, that thing has mostly sat in my closet for like 4 years, but each time I take it out, it’s (basically) in tune! My old acoustic cello used to go out of tune from one part of the day to the next.
I need animal protein to hit my dietary goals. From an EA perspective, what is the most effective dollar-for-dollar way to reduce animal suffering and environmental impact? Yes, those two goals are often at odds. Density also matters, too: if I had infinite calories to eat a day, I could get all my protein from peanut butter. I am on a bulk so I have more leeway in calorie budget; if I’m ever going to do it, now is the time.
I read https://slatestarcodex.com/2015/09/23/vegetarianism-for-meat-eaters/ and the comments point out greenhouse gas is much worse with beef. How much worse? Many people point out percentages, but those don’t mean anything to me, because I don’t know how much my food contributes to global warming. I have no idea of the scale. How does it compare to my total carbon output as a middle-class American? Is it like driving 10 miles? 100? Is there a better place to spend money to reduce my carbon output?
I see my options as these, and they may be incorrect:
0. Soy is out because it reduces testosterone, which would not have bothered me when younger, but at my age I don’t want to lose what I have.
1. Replace chicken with beef. (See above for questions about trade-offs.)
2. Buy chicken livers when on clearance, which they seem to be often. (100 grams of protein for a dollar for what is essentially waste material is pretty good. I am learning how to cook them so I can stomach them, too.)
3. Milk (and therefore cheese and whey protein) is better than meat. It is cheap on a dollar-per-gram basis, and decent on the protein-density front. Whey protein powders are decent prices. Is getting grass-fed whey powder worth the drop from 34 grams/$ to 25 grams/$?
4. Grass-fed better than CAFO for cows, free-range better than factory-farmed for chicken. Possibly zero ethical cost in my view, if a cow gets to live like a cow before becoming my food. Costs go up a lot, though.
5. Back off on eggs. (Scott suggests eggs are bad on a calorie-for-calorie basis, but his citation is now paywalled https://blogs.scientificamerican.com/guest-blog/want-to-kill-fewer-animals-give-up-eggs-not-meat/ ) However, the price differential on free-range eggs is pretty small. Again, if a chicken gets to live like a chicken — which I’m not sure is true, but if it is — this becomes ethically neutral for me.
6. I have no idea how other meats like pork fit in here.
7. For fish, wild caught are better than farm raised. Fish is still expensive and my wife doesn’t eat it at all, so I have to respect her by not pouring too much of the family budget into my diet. I don’t think of fish as very smart but there are probably a lot of differences between species. Is one kind of fish especially dumb, so it wouldn’t mind being in a farm?
8. Plant protein powders (pea protein, etc) are better, but can get expensive really fast. I don’t see them getting me more than 20 grams per dollar.
9. But I found wheat gluten at 93 grams of protein per dollar, which is almost too good to be true. What is the catch? Assuming I don’t have a gluten sensitivity.
I know this is all over the place, but I am looking at making marginal changes. Where should they be?
Any reason you have to take the plant proteins out of the plant first?
i.e. make dahl, falafel, beans etc. depending on what ethnic shops are nearby.
This number isn’t set in stone, but, roughly, I need to get about 1 gram of protein per 13 calories in a day. And with some no-protein fruits eating up part of my budget, that can push the ratio down to 12 or 11.
Chickpeas are great, one of the things that I wish I’d discovered 30 years ago, but the calorie:protein ratio is around 20. (Lower is better.) Kidney beans, which I don’t like as much (but can eat a bowl of, if I split them with something else) only get to around 16.
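For what it’s worth, here’s the kind of back-of-envelope check I’m doing, as a quick Python sketch; the menu and its calorie-per-gram-of-protein ratios are rough placeholder numbers (some pulled from this thread), not careful nutrition data:

```python
# Does a day's menu hit the protein target, given calorie : protein-gram ratios?
# All food values below are ballpark assumptions for illustration only.

DAILY_CALORIES = 3000   # assumed bulking budget
TARGET_RATIO = 13       # want at most ~13 calories per gram of protein overall
target_protein = DAILY_CALORIES / TARGET_RATIO  # ~231 g

menu = {
    # food: (calories eaten, calories per gram of protein)
    "chickpeas":        (600, 20),
    "kidney beans":     (400, 16),
    "fruit":            (300, float("inf")),  # effectively zero protein
    "whey shake":       (250, 5),
    "chicken breast":   (700, 5.3),
    "grains and other": (750, 30),
}

protein = sum(calories / ratio for calories, ratio in menu.values())
print(f"Target: ~{target_protein:.0f} g of protein; this menu gives ~{protein:.0f} g")
```

The point being that a 20-ratio staple like chickpeas isn’t disqualifying on its own; it just has to be offset by enough low-ratio sources elsewhere in the day.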
I don’t know how the protein measures up against fish, birds, or mammals, but you could look into mussels, clams, etc., which are primitive neurologically and a good source of nutrients.
Also, depending on where you live, wild game may be an option.
Beware of a shellfish-heavy diet if you have any family history of gout. Gout is not fun.
Good to know–thanks for the correction.
My brother gets gout every time he visits certain parts of the world, so thanks for the warning.
I haven’t liked mussels and oysters the few times I’ve tried them, but that is something I could probably change with deliberate trying. I see shrimp count, and shrimp are pretty dumb, and I really like shrimp. There are issues with wild caught shrimp so I’d try to find farmed shrimp.
The paywalled Galef piece is presumably a reprint of this from 2 months earlier. It says that eggs are more efficient (per life) than chicken meat. When he says “one of the worst” he means by comparison to red meat, not to chicken. But you should also ask yourself why you care about chicken deaths, and not, say, days of chicken lives.
By comparison with chickens, pigs are identical to cows. They’re smaller, so per life they’re less efficient than cows. But they’re also smarter. A lot of people think that farming is torture and torture matters more for smarter animals, so they prefer beef over pork for that reason.
Thank you for that chart! It helps put some numbers to things.
But you should also ask yourself why you care about chicken deaths, and not, say, days of chicken lives
This is more my concern. 10 chickens raised and killed for one day each is about equal to 1 chicken raised and killed after 10 days, assuming the suffering scales with days lived. The chart works off of “lives” but gives some numbers to get down to “animal-days”, though of course some animals are less equal than others.
A dairy cow makes milk for 3 years. I’m not sure whether they live for 3 or 6, but that still makes it produce 8K-16K calories per cow-day.
Slaughter cows live for ~2 years, which is 554 calories per cow-day.
Chicks, post-hatch, live for 42 days, which is 71 calories per chicken-day. Even if cows are twice as smart, beef is still better on the total-suffering front.
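Here’s that back-of-the-envelope as a tiny script, just so the assumptions are explicit; the per-animal calorie totals and lifetimes are rough figures implied by the numbers above, not careful estimates:

```python
# "Calories per animal-day" from rough lifetime figures.
# Lifetimes and total edible-calorie yields are loose assumptions, not real data.

animals = {
    # name: (total edible calories produced over the animal's life, days alive)
    "beef cow":        (405_000, 2 * 365),     # ~2 years to slaughter
    "broiler chicken": (3_000, 42),            # ~42 days post-hatch
    "dairy cow":       (17_500_000, 6 * 365),  # ~3 producing years out of ~6 alive
}

for name, (calories, days) in animals.items():
    print(f"{name}: ~{calories / days:,.0f} calories per animal-day")
```

Those inputs land close to the 554, 71, and 8K figures above; halving the dairy cow’s lifespan to 3 years doubles her number to the ~16K end of the range.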
See also Tomasik’s editable table, aimed at meat/day of life. The first two columns are objective and presumably correct, but the rest depends on moral choices. Columns 4 and 6 don’t do much, because he doesn’t use a big range. But column 5 does matter. For a first pass, set column 5 to all 1, to get meat/day of life. Then bring back column 5.
It seems crazy to me that Tomasik puts only a 2x range on the sentience column, while he puts a 4x range on the suffering column. Whereas surveys put a 10x ratio for pig:chicken, where he put a 1.2x ratio. [Moral value is not the same as sentience, but the point of the link is that it is.]
About 8 years ago we bought a chest freezer and started buying our meat from a small farm ~45 min drive away. This resolved the bulk of our ethical concerns about eating meat; we can see many of the animals on visits and have toured the farm. The animals are clearly not suffering outside of the slaughter process, where we are taking a little bit of a leap of faith. We also have been raising most of our own eggs for 3 years. I haven’t seen any argument that comes close to dissuading me that eating well-raised animals is ethical, outside of the important note that male chicks have a pretty crap life when culled from future egg-laying populations.
Wheat gluten is an incomplete protein and has a very low bioavailability.
I’m not planning to go vegan and will still get proteins from a variety of sources, so the incompleteness is not that high a concern. (I can’t imagine eating more than one serving of wheat gluten a day anyway.)
I don’t know much about bioavailability, and googling online I see people fighting about it back and forth, each insisting they are right, although not referencing actual studies. At worst, it seems to mean that I will only get 50% of the protein listed on the container, but at 90 grams per dollar I can afford to mark that down by a factor of 2.
I tried to read “seeing it like a communist” and I appreciated the author’s approach but I didn’t follow the logic past 1.6. The following example is given there:
“Suppose, for example, that you held that a family who has lived in a house for generations has a better claim to own it than the landlord.”
I have a _lot_ of issues with that.
First one — this example doesn’t seem logically any different from (more contrived) ‘suppose you held that you, an avid Ferrari fan, have a better claim to your neighbour’s Ferrari than your neighbour (who isn’t a car buff)’. I don’t think that the approach to communism through the Ferrari statement would be palatable to most, so is the tenant example just a nice dressing on a wrong idea?
Second one — suppose for some reason that we accept the tenant-landlord thing. We declare that starting next month all families that have ‘lived in the house for generations’ will own that house. What will happen? I’d imagine a lot of long-term tenant contracts being urgently terminated, and all landlords adding a rule that no single tenant can rent the same space for ‘generations’, even if the arrangement is otherwise perfectly fine for everyone involved. So accepting the premise in the tenant example leads us exactly where we were, only with a stupid rule added on top. Is communism just a bunch of stupid rules on top of everyday life then? (tracks my experience in 1980s Russia).
So what I would really like is for someone to help me find a better version of the ‘tenant’ example, which does not devolve to the ‘Ferrari’ example. My thinking is that a better version already exists and I just don’t know about it…
My understanding (and I am not a communist) is that communists discount the value of ownership over long-term usage.
There are essentially two ways to live in a house. One is to purchase the entire house, and hence have “free” future use forever. The other is to pay the owner a usage fee and live in it for a set time based on that usage fee. Monthly is traditional, but other timelines exist.
It is entirely possible to live in a house on a month-to-month basis for so long that you have paid more in rent than the value of the house, especially if your tenancy is very long term or the house relatively inexpensive.
In that case, it can seem unfair (I have paid $400,000 in lifetime rent to live in a $300,000 house!), and in fact there are rent-to-own contracts and other forms of lien ownership (the mortgage is the most common).
But there are substantial risks to ownership that don’t exist with long-term rental, and you are in fact paying for them. For example, the housing market may crater, in which case the renter can just find another cheaper rental or negotiate a lower rate. The owner has to eat the loss. In addition, the owner needs to upkeep the house, etc.
Now, it’s possible those aren’t perfectly priced because of market distortions, but the main communist thesis is “the guy who uses a thing should own the thing, not the guy who created/purchased the thing” versus the main capitalist thesis of “the guy who owns a thing should continue to own a thing regardless of usage.”
There are a lot of reasons this collapses when instituted in the real world.
This is only tangentially related (in that I don’t think it is related to communism), but isn’t that only true if you see the house as an investment intended to pay out value? So, if you treat a house primarily as a consumable good (like you would a rented residence, or, say, a car) and less like an investment, the drop in value is less of a problem? Even a house which loses significant value has some value for the owner, whereas a rental never does (for the tenant).
Is it interesting that this does not apply as well to things like trademarks (and maybe other IP)?
No. Even if you are a homeowner and see it as entirely a consumable, you are getting a passive income in equity gain that you can consume to fund other consumption. This is a very common pattern in America, although I don’t know about Europe as much. Losing that passive income is a blow when the market crashes even if housing is entirely “consumption” on your part.
Depending on the “capitalist/communist” view, it certainly may. There are differing views on that, of course.
No, I understand how equity works and how it is used.
I am questioning whether treating a home purely as an asset/investment/income stream when comparing home-ownership to renting makes any sense at all. The housing market cratering is only a risk for home owners if they are leveraging the value of that house for other things (I understand this is common, I’m suggesting maybe it shouldn’t be as common). If the value of your house craters, and you aren’t leveraging it or relying on value of equity, you can just keep making your mortgage payments and will still eventually fully own a building that you can live in. Right?
To illustrate: I would suggest that someone who owns a home/pays a mortgage for 10 years and then defaults on the mortgage might well still be better off than someone who rents for 10 years and then gets evicted, or at least no worse off financially unless their finances were all predicated on the assumption that they would have a house and the house would retain or increase in value.
@acymetric
Yes, although you would be paying way too much for it. You would end up paying substantially more for housing over that period than the renter would, likely enough more to offset the benefit gained by ownership at the end, depending on the collapse.
It would depend on the underlying cost of the house.
If they had perfectly zero equity so that the default caused the repossession but no other effects (bankruptcy), then they would likely be exactly the same as a renter, perhaps slightly worse due to the credit hit.
But the reason they would be better off afterwards is almost certainly due to equity, rather than any underlying magic that ownership gives.
This seems like it would only be true under two conditions:
1) You live somewhere where housing prices are extremely high (big cities/nearby suburbs, mostly)
2) Rent never increases
Increase in rent over 10 years is likely going to exceed the increase in property taxes during that span, and of course (assuming you didn’t get a variable rate mortgage) the mortgage payment would stay the same. Ownership actually protects you from risk in that sense (the risk of ever rising housing costs).
No, what I mean is that say there are two men. Man 1 rents, man 2 buys a house in the same region.
For the first year, their payments are roughly comparable. Over time, man 1 pays gradually more than man 2 as rents rise if housing prices are going up. Man 2 has a great deal.
But if housing prices crash, the opposite happens. Man 2 is locked into his mortgage payment while Man 1 can renegotiate based on the lower rates or move out of the blighted area. This can be substantial if the event is severe, like 2008.
In an imaginary world where both pay $1000 a month for ten years before a 50% housing price loss, Man 2 is now tens of thousands of dollars behind Man 1 even when we are looking at buying houses, because of his negative equity if he tried to sell.
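A minimal sketch of that imaginary world, with made-up numbers (the house price, interest rate, and the no-taxes/no-maintenance/no-transaction-cost simplifications are all assumptions for illustration):

```python
# Toy renter-vs-buyer comparison after a 50% housing crash.
# House price and rate are chosen so the mortgage payment is ~$1000/month;
# taxes, maintenance, and selling costs are ignored for simplicity.

HOUSE_PRICE = 197_000
ANNUAL_RATE = 0.045
MONTHLY_RATE = ANNUAL_RATE / 12
N_MONTHS = 30 * 12

# Standard fixed-payment amortization formula.
payment = HOUSE_PRICE * MONTHLY_RATE / (1 - (1 + MONTHLY_RATE) ** -N_MONTHS)

# Pay the mortgage for 10 years, then the market drops 50%.
balance = HOUSE_PRICE
for _ in range(10 * 12):
    balance += balance * MONTHLY_RATE - payment

house_value_after_crash = 0.5 * HOUSE_PRICE
owner_equity = house_value_after_crash - balance

print(f"Monthly payment: ${payment:,.0f}")
print(f"Remaining mortgage balance: ${balance:,.0f}")
print(f"Owner's equity after the crash: ${owner_equity:,.0f}")
# The renter paid roughly the same ~$1000/month and walks away owing nothing,
# so the would-be seller ends up tens of thousands of dollars behind.
```

Under these particular made-up numbers the owner comes out roughly $60,000 underwater if he has to sell, which is the “tens of thousands of dollars behind” gap described above.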
The author is not endorsing that claim (hence the “suppose”), they’re giving it as an example of a non-capitalist perspective. The intention isn’t to make you think about what would happen if we bolted the perspective onto a capitalist system, but rather convince you that capitalist definitions of property etc. are not necessarily correct — they just seem natural because they are the default in our current society.
It doesn’t work under 90% of communist frameworks either; you have to rewrite it along the lines of “suppose your family had lived in a house for generations and done the vast majority of maintenance and upgrades on the house”. Under Marx, a family of renters who lived in a house where someone else was performing the labor to keep the house up would be damn dirty capitalists stealing from the electrician/plumber/landscape architect, and those people would have a higher claim to the house than the renters who were just consuming its shelter.
1.6 really stood out to me too: as the path by which communists internally justify ‘revolution’. Specifically, the author hints that that arrangement may not be ‘just’ because the property chain that led to the landlord owning the property might be tainted by some past act of injustice, and because the notion of property rights is arbitrary anyway. Combine that with the idea that capitalism doesn’t necessarily allocate resources to the needy, and you start to get a sense of “well, if they don’t rightfully own that, the rules of why they own it are arbitrary, and the proletariat need it, the proletariat should just take it”. I think, perhaps unwittingly, the author has highlighted why communism fails to create societies we would consider desirable: it’s based on a politics of envy that gives incentives to perpetuate further injustice (taking ‘property’ by force and the violence that will inevitably follow that course of action). I think this dovetails nicely with their point about capitalism creating a sort of perverse meta-social incentive towards maximizing production/consumption, by suggesting communism’s perverse meta-social optimization is maximizing taking from others based on collective perceived need (insert obligatory reference to ‘the greater good’).
I think the strongest area of argument for 1.6 is actually land ownership, because land is a finite resource that exists naturally, is generally unreasonably difficult to manufacture, and that is largely parceled up by arbitrary ideas of who got there first and or who took it by force/money sometime in the past. The current paradigms of state ownership, or continued ownership by whoever currently owns it can be easily shown to be unfair on some level. It’s also easy to take land arguments to absurd endpoints, such as a few wealthy landlords owning all land on earth and everybody else has to pay rent in the dystopian capitalist future, which could be argued by pointing to housing issues on the West coast as real life evidence of where we could be headed.
The current Israel/Palestine conflict also makes another good modern example, since it has largely occurred in recent history, such that a large volume of accurate historical evidence is available on the topic (a bunch of people decided to move to the Middle East and kick out the people living there based on some notion of original cultural ownership), but is mature enough that the answer isn’t super obvious as to what a just solution is. Do the original (and now dead) Israelites have a valid claim to that land? Can their descendants claim that right for themselves? Did the Zionists have sufficient ground to claim they are valid descendants? What about the inhabitants that lived there before the original Israel? Does the Palestinian claim have more validity because it was more recent? If recency counts, what about the descendants of the Israelis currently living there? What about the Palestinians who’d like to kick Israel out but who never lived in Israel because they are descendants of the people who got kicked out by the Zionists?

There isn’t really enough historical evidence to create a strong chain of who ‘owns’ that land, and the records of exchange are fraught with violence, so I don’t think anyone can claim to have an objective or neutral answer: your personal opinion is inevitably tainted by your biases. If you side with Israel, you are likely assuming that the biblical Israel had a just claim to the land, that claiming past ownership is valid justification for current ownership generations removed, and that the Zionists that created the modern Israel had sufficient cultural/genetic ties to the original Israel to justify creation of the new one. Alternatively, if you side with the Palestinians, you are likely basing it on recent ownership being a more valid claim than vague historic/cultural claims, while ignoring the fact that most of the Israeli population is removed from the original Zionists that formed Israel and thus now has a stronger recency claim than the generationally removed Palestinians, and possibly on some notion of debt incurred from injustice being an inheritable attribute. I don’t think there’s ever going to be an objectively correct answer to the problem, which is why it tends to default to some variation on ‘might makes right’. Currently the Israelis have the might so they have right, and the Palestinians would like to have the might to make it their right instead. Likewise, capitalists have the might and thus the right in modern capitalist societies, and communists would like to have the might to make it their right instead.
Ultimately I think the author is right that there exists a large space of possible economic models, which works like some sort of optimization problem; what I think they miss is that not all models are going to be stable, and they treat being non-capitalist as axiomatically a good thing (which they don’t state but seem to heavily imply). While I would agree that it’s not clear capitalism optimizes for human health/happiness, and it certainly doesn’t optimize for making people identically equal, I think it lands at a pretty good spot of making everyone involved relatively materially wealthy compared to alternative systems, while being stable (at least as far as the near-term historical record has shown, and certainly when compared to past instances of attempts to shift into a communist/socialist part of the function-space). It’s also not clear that communism would optimize for human health/happiness either, though obviously proponents think it does. I think they also overstate their case that people can’t envision other economic modes: they invoke Soviet Russia as an example but gloss over the suffering and loss of human life as well as the ultimate collapse of that economy. “It takes a lot of death to get it, but it can rival capitalism in a few areas of accomplishment for a little while before it implodes” is hardly a resounding endorsement of alternative economic models, let alone communism. As well, China is a glaring instance of some weird communist/capitalist/authoritarian economic state bastard child that seems to be stable for the moment and looms large in many Western minds as a rival power. I think it is, however, obvious to most Westerners that given a choice they would prefer our current system to what China has. Ditto on historical economies such as feudalism or hunter-gatherer (however egalitarian it might have been, people like their toilet paper and antibiotics).
This is not a fair reading; feudalism < capitalism < communism is a pretty standard communist view.
Fascism is the word for that economic state.
I don’t think that’s necessary. I certainly don’t feel it describes my views. I simply feel that both groups have a strong enough claim that they should be allowed to live there. In the same way that I, for example, think that both wanting to drive white Americans back to Europe and wanting to restrict Native Americans to the reservations would be wrong. As Israel is the side with the power to prevent the Palestinians from doing so, I must either try to persuade them to relent (which seems hopeless to me) or support the Palestinians against them. If the power were to shift and the Palestinians chose to use that power to drive out the Jews in turn, then I would have to persuade or oppose them instead.
Under whose ownership laws? If you use the Israelis’, then most Palestinians aren’t getting what they want (the “Right of Return” is the biggest sticking point in negotiations in almost all cases). If you use the Palestinians’, you are taking the real property of millions of Israelis.
This is only one of a HUGE number of issues with the idea that “they both can live there”.
My favored solution would be that the Palestinians are given housing in the general area they were driven from in exchange for giving up the claim to recover specific property. That wouldn’t give the Palestinians everything they want, but it would give them a lot and it would be the parts that I consider most important.
But that solution is suicidal for the Israelis…
Yes, but it’s not what you consider important that determines whether this solution turns into Holocaust v2.0
Your proposed solution results in about ten million Palestinians having the opportunity to live in poverty next to people who are and will remain conspicuously an order of magnitude richer than they are, when said Palestinians have been basically trained from birth to blame Israelis for everything bad about their life. That’s going to be pretty damn important to them in ways that don’t go away just because you point out that their street address is approximately what it “should” be.
You would also give the Palestinians the same subsidies and benefits given to Haredi Jews, obviously.
If you are destroying your economy to give free money to religious fanatics, other people will point out that that’s fine, as long as you do the same for their preferred group.
There are roughly an order of magnitude more Palestinians than there are Haredi, and the Haredi have a much better relationship with the people of Israel than “we have graciously agreed to stop trying to kill you all, so long as you keep giving us lots of money”.
So who is this “you” that is imagined to be giving the Palestinians these benefits, and how much is it going to cost them, and how much are you planning to kick in yourself?
That would be the price for having Palestinians peacefully living among their much richer neighbors.
This wouldn’t work, but then there is probably no peace solution that would work while keeping Israel as a Jewish state (as opposed to a country that just happens to have a very big Jewish population), short of genocide.
I don’t find that a plausible outcome at all. Even when undergoing enormous suffering at Israeli hands, support for that among the Palestinians is still strictly limited. And it would certainly decrease if they were treated better. The minority of the Arab population that was given equal rights by Israel, the best indicator of what the Palestinians might be like if treated well by Israel, almost unanimously supports coexistence.
But I meant what was important in determining whether the Palestinian demand to be able to return to their homeland had been met. There are absolutely other factors that would be important in determining whether a peace agreement would work out. Ensuring social mobility, changing Palestinian culture, limiting the number allowed to return at any one time to give them time to integrate into Israeli society (and the other way around), and so on.
And it would certainly decrease if they were treated better.
Would it? What was the reaction when control of the Gaza Strip was returned to them, at great internal cost to Israel?
Did things move towards peace, or was it a new place to shoot rockets from?
Was Israel willing to let things move further towards peace? My impression was that Israel was mainly motivated by fear that the Palestinians would start pushing for a one-state solution on the grounds that the majority of people directly ruled by Israel were Arabs. I never got any impression that the withdrawal from Gaza constituted any change in Israel’s attitudes towards the Palestinians, but rather that it was a continuation of the policy that Israel has always had of forcibly confining the Palestinians to ghettos in the name of promoting Jewish ethnic supremacy.
A better version of the “tenant” example would be a peasant owing rent and services to a feudal lord.
I do not endorse handing over ownership of rented housing to tenants; the economic consequences of that would be disastrous and would only increase inequality, as you have rightly observed. But my support for the continued existence of rental housing is based on purely consequentialist grounds. I do not believe that landlords have an intrinsic moral right to the real estate they own.
I did get to the peasant owing rent and services to the lord example myself. This is what I thought next: the lord is the lord because he or his progenitors conquered the land. Either by making it habitable (conquering the nature) or by literal conquering from some other holders. The first case looks very much like the ‘tenant’ argument — the peasant comes after the land is conquered and now he owes rent so it’s not interesting.
In the second case, the peasant pays rent to the lord because the alternative is for the lord to conquer again. He proved he could do it before, so presumably he can do it again. The rent is once again an arrangement that is preferable to both parties because it prevents them from expending resources unnecessarily.
Underpinning property rights is violence, always. If you, as you write, “do not believe that landlords have an intrinsic moral right to the real estate they own”, you are basically calling for war. If you win this war, you will presumably have another way of allocating property rights, and you will have to defend it with violence. So why is your way superior?
@Wanderer2323
This is not my view. I evidently did not express my views well, sorry. I’ll try again.
My view is that laws establishing property rights are justified only on consequentialist grounds. That means insofar as they lead to a better society than would exist without them. I realize that “better society” is in itself an unclear concept, but at this late hour (it is midnight here in Europe) I am not going to attempt to explain/invent my whole political philosophy. Certainly current society, with its property arrangements, is better than perpetual war.
I was not particularly convinced by the comment on cultural marxism. Among other things, the promoters of the idea of Gramscian damage point out that the intermediate goal is to damage our society. Which means that intellectual movements which don’t directly promote marxism can still be an instance of cultural marxism as long as they support that goal. For instance, the critique of patriarchy which Simon Jester mentioned would still undermine existing cultural norms.
The comment looks to be working with either a straw man or weak man definition of cultural marxism. For whatever reason, when I look for definitions of the phrase, post-modernism is not included as an example. I didn’t look very hard, which is why I can’t rule out the weak man option.
Might make sense to start by proposing your steelman definition of cultural marxism?
Well, the link would be a start, but I’m not sure I can adequately steelman it myself. Nor am I required to in order to point out that the comment laid out a false description of it. The link does NOT use the phrase cultural marxism, but refers to the efforts of communists like Antonio Gramsci, which Scott referenced in the initial post.
The following is an example the link provided.
“…in the 1930s members of CPUSA (the Communist Party of the USA) got instructions from Moscow to promote non-representational art so that the US’s public spaces would become arid and ugly.”
This is closer to what Simon presented as the second framing, but it’s outside of it. And it’s clearly outside what he was arguing against, but is a straight example of the sort of argument believers are actually making.
Er, you can hardly convince people that someone’s definition of a word is wrong without telling us what the right definition would be. Wikipedia addresses the matter here.
Funny, I thought modern art was a CIA plot.
It would be funny, and about what I’d expect from the Cold War, if the CIA and the KGB were both promoting it to screw with each other.
@Nornagest
You mean to foil each other. The object of the screwing there would be the American people, though.
Cultural marxism is marxist tools applied to culture.
Marx believed that economics was everything and that culture was the superstructure the capitalist class built to continue their dominance over the working class. For example, he believed religion was built by the capitalists to keep the masses believing that the hierarchy was created by God rather than by the capitalists to serve their class interests. Thus it is impossible to understand religion without understanding class conflict. The same paradigm applies to literally everything. Class conflict was the decoder ring for all of society; you had to have it to understand what was really going on. Marx was the ultimate conflict theory believer: everything was about the conflict between classes.
Cultural Marxism rejects that the conflict is about class and keeps everything else. So for a feminist cultural Marxist, societal rules are created by men to keep women down; for a gay cultural Marxist, they are created by straight people to keep gays down; for a black cultural Marxist, they are created by white people to keep minorities down; and an intersectional cultural Marxist would say that they are created by white cis straight males to keep everybody down. Thus they are just as convinced by conflict theory; they just believe in a different conflict.
Thus a real Marxist looks at cultural Marxists and sees nothing in common. Who cares if a person is male or female, black or white, gay or straight? What matters is: are they a capitalist or a worker?
It’s the difference between ideology and meta-ideology, with ‘cultural Marxism’ being a claim that the meta-ideology is largely the same between Marxism and (parts of) Social Justice.
Yes, this is also how I see it.
Take Marxism, remove the first part (the economic theories up to the conclusion that capitalists are oppressing the proletariat), and keep the second part (class fight, revolution, glorious new society). Now you can replace economic oppression with racism, sexism, heterosexism, cissexism, ableism, etc.
For a Marxist, removing the economic theory would ruin the entire thing, because the idea is that Marx was the Einstein of economics. But the college kids today are not economics nerds; they want to talk about revolution. And for the trust fund kids this is a wonderful opportunity to LARP fighting heroically against oppression.
By this logic, PETA are vegetarian Marxists, Rush Limbaugh is a small-government Marxist, etc. This just seems like a clever way to call your outgroup Marxists.
I think a lot of the memes your link cites are more associated with postmodernism than Marxism. As a rule of thumb, if ‘class’ isn’t mentioned, it’s not Marxist.
You’re correct that they both “damage our society,” given a certain definition of societal health. But basically any ideas that are both significant and wrong will do that. It doesn’t make them the same movement necessarily.
Nor does an adversary’s support for a movement necessarily make it wrong. In the USSR, the authorities could have said that individualism, capitalism, civil rights, democracy, free speech, etc. were ideas promoted by the US in order to weaken the USSR’s society, and it probably would have been true.
(As an aside, I’ve heard the claim that the CIA pushed nonrepresentational art. Dunno if it’s true, just funny that both sides are blaming the other for this. Now my new headcanon has Rothko pocketing checks from both the CIA and the KGB)
They may not represent orthodox Marxist beliefs, but they are exactly the sort of thing believers are claiming the Soviets pushed for us to believe. I don’t think the claim that AIDS originated as an anti-black race bomb would qualify as Marxist, but the KGB spread the idea anyway. I suppose Stalinism and Marxism aren’t quite the same thing, but that would turn into an argument that the label is just misleading, not that it’s not a real phenomenon.
@ADifferentAnonymous
I thought “postmodernism” was a bunch of architects in the ’60s and ’70s who were saying “Hey, let’s start building things other than glass boxes again”, and that the “cultural marxism” mentioned upthread was a group of Weimar Republic era German academics who wrote impenetrable word salad (and also claimed that American jazz music is bad for some reason) that may mean pretty much whatever the reader decides to infer (like the writings of Eliezer Yudkowsky, except even more opaque because it has to be translated from German).
From Wikipedia both terms also seem related to deeply academic literary criticism and not much relevant to anyone not writing term papers.
So what about them?
An interesting article about postmodern architecture.
Pinning down an exact definition of postmodernism is tough, but Simon Jester seemed to think it covered a large swath of what people mean by “cultural Marxism”. So I’m going with my “I know it when I see it” definition… Sorry if it sounded like I had a better idea what I’m talking about.
As for the actual self-described ‘cultural Marxists”, the Frankfurt School, I suspect they have basically nothing to do with what the term evokes to most people today.
@ADifferentAnonymous
What does “cultural Marxism” evoke to most people?
As far as I can tell from context, it seems to be used to mean capitalist modernity and social liberalism, which seems a strange use of the word “Marxism”. Or to put it another way: I’m very doubtful that the Gillette and Nike corporations are advocating any “Dictatorship of the Proletariat”, so a translation would be appreciated, please.
Phlinn’s link has a pretty good set of examples (scroll til you see the bulleted list).
@ADifferentAnonymous
"Phlinn’s link has a pretty good set of examples (scroll til you see the bulleted list)"
So “Cultural Marxism” is a set of beliefs supposedly spread by Soviet agents long ago that leave the U.S.A. vulnerable to radical Islamists, or that the author otherwise doesn’t like?
I now declare that fans of new Doctor Who and Star Trek: Discovery are “Cultural Anti-free Televisionists” and that ‘movement’ must be opposed!
A mixture of:
a) Pretty much what the wikipedia article describes, the conspiracy theory that some/most/all of the progressive policy ideas (feminism, affirmative action, gay rights, etc) that have been ascendant in the last fifty years are not what they seem (the result of lots of people thinking those are good policies) but rather conspiracies led by (one or more of the usual right-wing bogeyman suspects). This seems to be a recurring idea that has waxed and waned in popularity over the last thirty years but never gotten really mainstream-popular; the specific conspirators vary over time (sometimes Jews, sometimes Russkies, etc) but the rest of it is pretty static.
b) It seems like some people who generally don’t like social justice activism see “cultural marxism” used to describe it, and think “Yeah, that’s a good term for that thing I don’t like!” and use it, just based on the meanings of those two words, without being aware of its conspiracy theory origins. I think this is what Aapje meant above when he said that cultural Marxism is, “a claim that the meta-ideology is largely the same between Marxism and (parts of) Social Justice.”
@dick
No, what I meant is that ideologies can be similar or dissimilar in methodology and similar or dissimilar in their object-level aims.
For example, a person who considers black people degenerates who need to be cast out of their whitish community is in one way 100% opposed to a person who considers white people degenerates who need to be cast out of their blackish community. However, the basic mechanisms between these ideologies can be nearly identical.
So then a claim that these people are 100% opposed makes sense, but a claim that they are similar also makes sense. It’s all about what you consider relevant. Do you consider it relevant who they oppose? In that case, a person who is critical of black culture and advocates that blacks assimilate into white culture, including through intermarriage and mixed living, is closer to the white separatist than to the black separatist. Or do you focus on the meta-ideology: giving up on a race as unsalvageable and seeking segregation? In that case, the white and black separatists are birds of a feather, with the pro-integration dude being less similar to each of them than they are to each other.
Aapje:
I think a lot of diametrically opposed political movements work this way–most famously, communism and fascism.
@Aapje – Perhaps I phrased it badly, but that’s what I meant: that you’re using the phrase to mean a similarity between SJ activism and marxists, as opposed to using it to mean the thing you would see described if you googled that phrase (a literal conspiracy to change culture, by people who are Marxists).
@dick
All ideologies are a conspiracy in that they cause people to coordinate behavior by interpreting the world in a certain way and seeing certain solutions as valid. Very often, both the interpretation and perceived efficacy of the solutions are (partly) not shown to be correct or are even (partially) shown to be incorrect by science. Rejecting an ideological interpretation or opposing the ideology-based solutions often results in what people perceive as sanctions, ranging from withholding approval to being shunned to violence.
Pointing out this subjectivity is an attack on those ideologies, as it will undermine their claims of objectivity, correctness, universality, etc.
Of course, pointing out the subjectivity selectively can itself be used as a rhetorical and ideological weapon.
However, the denial that one’s own ideology has subjective elements and results in coordinating that subjectivity is no less a rhetorical and ideological weapon.
The accusation of ‘conspiracy’ often primarily reflects the Overton Window of that person. Falsehoods that fall outside of the Overton Window and thus are considered a threat, are called a conspiracy. Falsehoods that fall within the Overton Window are honest mistakes by people who are not at all conspiracy thinkers.
It’s a double standard.
Pretty much all ideologies can be called a literal conspiracy to change culture. Christians want others to believe in God, start praying, etc. Feminists obviously want to change culture to change how women are treated. Etc, etc. It’s quite common for people with ideologies to explicitly discuss and coordinate how to change culture.
So the only interesting question is whether ‘Marxist’ is a good label. The difficulty there is that ideologies/culture can shift, but the labels can remain. The English of today have a very different culture from the English in Shakespeare’s time. Yet we label both the same.
The opposite also happens, where people try to get rid of a negative stigma by rebranding. See Blackwater, nay Xe Services, nay Academi.
The question of what labels are not (too) pejorative, but also not whitewashing, fairly describing the ideology and/or history and/or approach, etc is a complicated one. It is extra difficult when talking about meta-ideologies (or other kinds of similarities), because we don’t have a very good vocabulary to distinguish between similarities in methodology vs similarity of concerns vs similarity in ingroups/outgroups vs …
Aapje:
This is a really cool, interesting comment. I suppose ideologies evolve partly on whether they’re effective at enabling cooperation among people following the ideology–a kind of “green beard” effect, but also something a bit like having a shared language or a shared culture that makes it easy to figure out how to coordinate.
There’s a time for “X can be seen as a type of Y if you squint” semantic word games, and that time is not when you’re trying to disambiguate two meanings of the same phrase.
Imagine a group of self-proclaimed feminists that don’t actually believe in feminism. They’re going around talking about smashing the patriarchy and equal pay for equal work, but they’re not actually doing it to achieve those goals, they have some other secret goal in mind, e.g. destroying Western democracy to pave the way for Communism. Make sense?
That’s a literal conspiracy – a group of people plotting in secret to do something bad. And that is what “cultural marxism” used to mean before it got adopted as a sneer word against progressives generally – a literal, actual conspiracy of people-who-are-Marxists who pretend to be social justice activists but are actually fighting for something else. That is very much not the same thing as calling social justice activists Marxists because they resemble Marxists in some way when you squint.
@dick
You are proving my point here.
It’s actually a fairly widespread belief that many mainstream feminists have real goals (like benevolent sexism) that are different from their purported goals (like gender equality). Christina Hoff Sommers believes that she actually favors real/proper (‘equity’) feminism. In her view, the majority of self-proclaimed feminists don’t actually believe in feminism.
What is real, honest feminism to you, may be seen as a conspiracy by others and vice versa.
Ultimately, “plotting in secret to do something bad” is as subjective a judgment as they come. What you see as bad, others see as good and vice versa. So where you will see a conspiracy to do bad, others see a conspiracy to do good & vice versa.
Although of course the bias extends to how things are framed. Plotting to do good (in the view of the beholder) is called activism or other positive words. Plotting to do bad (in the view of the beholder) is called a conspiracy or other negative words.
Key in moving past tribalism is recognizing this kind of manipulative framing, where very similar behaviors are judged very differently, depending on whether they are in service of a ‘good’ or ‘bad’ goal.
PS. Note that in a strict sense, very few people seem to actually be principled. Most people seem to let their intuitions override their principles.
I didn’t accuse anyone of “plotting in secret to do something bad”, that was just me telling you what “conspiracy” means. I didn’t claim that cultural marxism is a conspiracy, that is again just me reporting what the phrase means. I understand you disagree with that definition, that’s why I said there are two definitions of it.
I know English isn’t your first language, but I promise you, “conspiracy” implies secretiveness. An ideology is not a conspiracy, and two people disagreeing about what “feminist” means is not a conspiracy. A secret plot to destroy America is a conspiracy.
I have two very different thoughts on this comment. They both run parallel to yours.
Firstly, my question is not ‘what a person purports to believe’ but ‘what will she vote for’. If a ‘post-modernist critic of patriarchy’, as Simon_Jester puts it, is given a ballot, will she vote for socialist policies or against them? From what I’ve seen, it’s going to be 100% socialism. So then why should I care that she as a post-modernist is supposedly following an ideology that is a ‘reaction’ to marxism? Her actions (voting socialist policies in) further marxist goals, therefore she is a marxist, QED.
The second thought is looking at Scott’s question and Simon_Jester’s answer like this:
Scott: why does anyone labeling {societal phenomena} as ‘cultural marxism’ get dismissed as a conspiracy theorist crackpot?
Simon_Jester: because of lack of evidence for any connection to marxism and insufficient rigour in distinguishing between different leftist ideologies.
Fair enough. Now then.
Me: why then does anyone labeling {different societal phenomena} as ‘nazism’ _not_ get dismissed as a conspiracy theorist crackpot? Nazi-labeling suffers from similar issues: no connection of the accused to the ‘nazi party’ and really no attempt to analyze whether the accused has anything to do with nazism, politically speaking.
To summarize, the second thought is that Simon_Jester does not answer Scott’s question at all. The reasons he gives cannot explain most of the observable reality.
Your first thought argues that it’s okay to call someone a Marxist as long as they vote for the same candidate that a Marxist would vote for, and your second thought argues that it’s terrible to call someone a Nazi just because they voted for the same candidate that a Nazi would vote for.
If I am wrong for entertaining both of these thoughts, does that mean that Simon_Jester is also wrong for entertaining both thoughts that are opposed to these?
I don’t think he did. You can still go reply to his comment, FYI…
@Wanderer2323
The few I’ve encountered who’ve called themselves “Marxists” mostly seem to me to fall into two broad groups:
The first group is grey-haired (usually 70+ years old) and not much distinguishable, in terms of near goals, from those who call themselves “progressives”. They may be found occasionally protesting in front of Chase Bank branches in nicer neighborhoods, and except for the buttons they wear and the bumper stickers on their cars they’re hard to tell from the senior citizens who organize church bake sales (and sometimes they are the same!). They are outnumbered by another group that calls themselves “Marxists”:
The second group is both young and old (but mostly younger), whose immediate goals are indistinguishable (to me) from those of people who usually call themselves “Anarchists”. They talk a lot about “the revolution”, “direct action”, and “consciousness raising” (by which they seem to mean: start a protest or join a protest, break some windows, and when the cops come and put handcuffs on everyone near the broken windows, those who didn’t break the windows themselves will hate the cops and their “consciousness” will be raised).
This type hates “liberals” and “moderates” and describes as “fascist” almost everyone not in black hoodies with bandanas on their faces; often they came up through the “punk rock scene” (they’re the mirror reflection of right-wing “skinheads” in that). This type doesn’t vote and will say things like “If voting changed anything they’d make it illegal”. Often they’re white but have “dreadlock” hair, and seem to avoid soap (same as the anarchists). Look for them at “bookstores” near college campuses with posters of Lenin, Mao, and sometimes even Stalin.
Speaking of Stalin, here are his words on “liberals”, “progressives”, and moderate “socialists”:
So it’s pretty clear that most of “The Left” is “going up against the wall” “come the revolution” as well.
Anyone who thinks Avakian, Sanders, and Pelosi are the same really isn’t paying attention!
I should add that, except for favoring red bandanas instead of black, “Revolutionary Communists” look and smell the same to me as the “Long Haul” anarchists, but their man Stalin had this to say:
I think a big “mosh pit” for the anarchists, RCP, and skinheads to schedule fights with each other and leave windows and gravestones alone may be helpful in terms of the rest of us being left in peace.
To be fair to Stalin (and there’s a phrase I thought I’d never say!), social democracy and fascism in Europe evolved around the same time and for some of the same reasons: they were both trying to stake out a viable alternative to Gilded Age-style capitalism (in either aristocratic or republican flavors, but they look pretty similar through Stalinist eyes) and Soviet-style communism, and they both took kind of a smorgasbord approach to policy. Especially for a guy as paranoid as Stalin was, it’s not too much of a stretch for an observer in the mid-Thirties to think they might end up in some of the same places.
As it happened, of course, fascism ended up sitting on top of a giant pile of skulls, and social democracy didn’t, but it wouldn’t have been obvious that would happen. At the time, everyone had a militant wing, and the only real example of fascist atrocity, if you read books from the era, was the “Abyssinians” (viz. the Second Italo-Ethiopian War, now overshadowed first by the Spanish Civil War and then by WWII, but at the time quite a hot topic).
NPR piece on a proposed change in regulations on light bulbs. The whole article is written from the assumption that the right way to raise energy efficiency is to ban lower-efficiency technology. This misses a big point–when users get a choice, they can decide whether the energy efficiency is worth other tradeoffs–like having worse color rendering, or being less effectively dimmable. Nobody making a top-down decision about what to ban and what to allow knows enough to make those tradeoffs for all the individual users–which may include people who really, really need the spectrum they get from an incandescent (now halogen) bulb, or very dimmable bulbs, or just really like the appearance of the incandescent bulbs, or maybe some other thing I don’t even know about. (FWIW, I’ve switched over to LED bulbs pretty much exclusively in my house. They’re just better IMO. But that’s a choice people should be able to make themselves.)
This is an interesting example of a blind spot–I suspect the reporter had never even thought of the issue in this way, so there was no awareness of that perspective in his article.
Well, clearly there’s a point at which it shouldn’t be the individual’s choice, correct? I mean, if someone wants to fuel their hummer with leaded gas and can find/afford it, I’m happy that they don’t have that choice to make.
It then turns into the tradeoff between individual user’s wants and the benefit to society for restricting those wants. Assuming that the quote from the article where an expert says “Now we’re going to have to generate about 25 large coal-burning power plants’ worth of extra electricity if this rollback goes through” is even remotely in the ballpark of being true, I’m totally fine continuing to ban the inefficient lightbulbs.
That’s “Think of the children” slippery slope. It’s equating real, measurable risks with the mere chance of harm. Which sounds reasonable, until you actually measure trade-offs and realize you’re usually killing somebody down the road.
The libertarian way of solving this is “I’m free to do anything that doesn’t harm somebody else”. Allowing you to drive drunk is out. Smoke in cafes, out. Use leaded gasoline, out. What they have in common is they’re clearly, statistically correlated to measurable deaths. You can pull out the number yourself, and in 20 minutes reach a reasonable estimate.
Consumption of electricity isn’t, not by a long shot. There are a number of maybes involved, unknowns (what’s the source of the electricity?), scale issues, taking people at their word, etc. Regrettably, it’s very bad Bayesian evidence.
Combine this with the knowledge that there are memeplexes that survive by promoting climate hysteria unrelated to actual evidence, and the updating value of “incandescent bulbs kill people” becomes effectively zero.
There is a lot of ambiguity between “real, measurable risks” vs “mere chance of harm”. Climate change is a real risk and banning incandescent light bulbs doesn’t harm anyone. Unfortunately for libertarians, most things you do affect others. We can’t just pretend otherwise.
Certainly it does. It harms those who prefer incandescent bulbs over the alternative.
Fortunately for libertarians, the price system transmits the overwhelming majority of such effects back to those who cause them.
My having someone cut my lawn is a cost to him, but I can only get him to do it by offering a price which at least balances that cost. If the price he charges is much more than enough to balance the cost, I can probably find someone else to do it at a lower price.
The system isn’t perfect, for a variety of reasons explored in the economic literature—I don’t have to pay for my neighbors not enjoying the sound of the lawn mower, or get paid for the fact that my neatly mowed lawn provides a benefit to the neighbors. But it covers most of the costs and benefits of our actions. So situations where someone takes an action which imposes net costs, or fails to take an action which would produce net benefits, due to the existence of externalities, are the exception, not the rule.
The alternative is to have decisions made through the political process. In that process, individual actors, voters, politicians, lobbyists, bureaucrats, rarely bear most of the cost, or receive most of the benefits, of their action. Hence there is little reason to expect them to take those actions that produce net benefits, refrain from those that produce net costs. The same problem that is the exception on the private market is the rule on the political market.
Which suggests that shifting decisions to the political market is likely to increase, not decrease, the problem.
A more detailed version of the argument is here.
This reminds me of the whole plastic straw debacle. 99.999% virtue signaling, and 0.0001 actual result. And I’m pretty sure I left out a few 9s.
For an incandescent bulb ban to make a significant difference, you have to go through a number of gates:
– how much of climate change is due to electricity consumption
– how much is due to consumer consumption
– how much is due to lighting
– what is the difference between people buying more efficient bulbs because they’re, well, more efficient, and what a ban would achieve.
Each one is about 1-2 orders of magnitude.
Now, express what’s left in currency*, and subtract the cost of the ban: inconvenience, enforcement, the cost of using much more expensive replacements where replacements are needed, etc. Is the difference worth us having this conversation? I strongly doubt it.
*If you balk at the idea of expressing climate change in currency, I’d guess you’re using the sanctity/degradation moral foundation. I can’t debate you here, since we’d be debating one of your primary goals. If you’re aware of that, great, we all need primary goals. If you’re not, you might want to think a bit about it.
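To make the shape of that gates argument concrete, here is a minimal back-of-envelope sketch in Python. Every fraction in it is an illustrative placeholder rather than a real figure; the only point it shows is that multiplying several small shares together leaves very little for a ban to act on.

# All fractions below are illustrative placeholders, not real data;
# substitute your own estimates for each "gate".
share_emissions_from_electricity = 0.3  # placeholder: emissions tied to electricity generation
share_electricity_residential = 0.3     # placeholder: electricity used by households
share_residential_for_lighting = 0.1    # placeholder: household electricity going to lighting
extra_effect_of_ban = 0.2               # placeholder: savings a ban adds beyond voluntary LED adoption

affected = (share_emissions_from_electricity
            * share_electricity_residential
            * share_residential_for_lighting
            * extra_effect_of_ban)
print(f"Rough fraction of total emissions a ban could touch: {affected:.2%}")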
> Unfortunately for libertarians, most things you do affect others.
I describe myself as libertarian only very reluctantly, because it gives a lot of information in one label. But I’m really really not – I just think that 1. economics is mostly libertarian and 2. Chesterton’s fence (and the USSR) says to avoid creating rules unless the need is very obvious, because you can never guess all the side effects.
That’s it. There are a lot of problems that are best solved by non-libertarian solutions, like technical inspections for cars (and I’d include here many things others here wouldn’t, like police, minimum pensions, minimum health care etc). Commons are generally solved too slowly or not at all with free markets.
I thought Chesterton’s fence was about not eliminating rules… I’m not sure it has much to say about creating new ones.
Whether or not you want to measure the badness of global warming in dollars, the cost of preventing it or mitigating it is mostly measured in dollars–you’ve got to retire that coal-fired power plant 20 years early and pay to replace it with a nuclear plant, and that costs a big pile of dollars.
If banning lightbulbs didn’t hurt anyone, then you wouldn’t need to ban them. The only reason to ban them is that some people want to continue using them, and those people are being hurt by your decision–at the very least, you’re taking away a choice they wanted to make. Some people find that LED bulbs don’t render colors right, and among them are (for example) artists and crafters who care a whole lot about getting their colors right. Dimmable LED bulbs also don’t dim as far down as incandescent bulbs–for most people, this isn’t very important, but there are surely people for whom this is a significant issue.
There’s nothing in the world easier than declaring that nobody really needs or wants (or should want) whatever you’re trying to ban. But it’s nonsense–if you propose banning something, you ought to at least be able to acknowledge that some people will be hurt by the ban. Maybe the ban is still worthwhile–some gun hobbyists are seriously inconvenienced by the restrictions on fully automatic weapons, but there’s a pretty good case for this ban being worth the inconvenienced people. But you have to make that case.
In the case of incandescent bulbs, that case is extremely weak.
Or alternatively, you could be a utilitarian who cares about reducing human suffering.
Currency, in such contexts, is an imperfect measure of utility. If someone says “the effect of three degrees of warming is equivalent to reducing world income by 3%,” what he means is that it has the same effect on utility as reducing world income by 3% would be expected to have. It isn’t a precise statement, since one can imagine different ways of reducing world income that would have different effects, but it is a statement about utility.
I suppose in the same way, I could say:
And you would not find this objectionable? Since really, I’m doing the same thing: “global warming of 3 degrees would be the equivalent utility loss of losing x number of houses for poor people”.
I do think this sort of comparison can be a sleight of hand, though. Because what are the utility-consequences of more houses for poor people? Is it positive? Perhaps negative, if there are trade-offs involved? The same issue goes for measuring utility in dollars: it’s pretty easy to imagine a scenario where aggregate dollars decrease, but aggregate utility increases.
It’s the same old ground we’ve tread over in the past: the problems with aggregating utility using a standard quantifiable measure.
I’ve read your chapter on this by the way. My take is that there’s no reason to assume that Marshall Improvements will be evenly distributed, and every reason to suspect (based on the actually-existing distribution of property) that they won’t be.
As long as everyone benefits there is net gain to utility, even if most of the benefit goes to rich people. To avoid that you need some reason to think that the net benefit will be combined with a redistribution that leaves the poor people, or whoever has a high marginal utility for money, absolutely and not just relatively worse off.
I do think it is the case that poor people will regularly be made worse off, even in net-positive Marshall Improvements. The reason is simple: due to the lopsided distribution of wealth, rich people can “win” Marshall Improvements over poor people orders of magnitude more often than they lose.
When stepping back and talking out the implications of your theory, removing the abstraction, I doubt the average person would agree that Marshall Improvements are even a measure of utility. If we take Marshall Improvements as the guideline for maximizing utility, then a rich person could pay $100 a day to kick a homeless man every day for the rest of his life, assuming the homeless man couldn’t raise sufficient funds to counter him. This isn’t merely an “imperfect measure” of utility; it’s a non-measure that grinds against all of our evolutionarily derived instincts about what constitutes human well-being.
It feels like a good time to reference your book on Price Theory, Chapter 15:
I’m glad you acknowledge that the utility logic of Marshall Improvements would break when presented with the scenario of one rich man and one poor man. However, the explanation (Marshall’s explanation, but repeated approvingly by yourself) of this problem going away with “large and diverse groups of people” doesn’t make sense. If it’s 1 rich man and 2 poor men, the problem still persists. If the wealth disparity is great enough (as in our real world), then you could easily imagine one billionaire vs. 10,000 near-penniless people in Africa, and the problem still remains.
Really, what you seem to be hinting at here (but not explicitly stating) is that only with significant wealth equality among the participants can a net gain in Marshall Improvements be taken as indicative of a net utility gain. But since we don’t have that, and instead have the “one poor and one rich” disparity maximized over an entire population, I think the objection remains far more convincing than the argument.
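To make the kicking example above concrete, here is a minimal Python sketch. The $100 figure comes from the comment; every other number is a hypothetical placeholder. It only illustrates how a willingness-to-pay test can come out positive while a utility-weighted sum of the same amounts comes out negative.

# $100 is taken from the example above; the other numbers are hypothetical placeholders.
kicker_pays = 100       # what the rich man will pay per day to land the kick
victim_can_pay = 20     # the most the budget-limited poor man can offer to prevent it

net_willingness_to_pay = kicker_pays - victim_can_pay  # positive, so the kick "passes" the dollar test

# Weight each dollar by a (hypothetical) marginal utility of money:
mu_rich, mu_poor = 1, 50
net_utility = kicker_pays * mu_rich - victim_can_pay * mu_poor

print(net_willingness_to_pay, net_utility)  # 80 and -900: passes the dollar test, fails the utility test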
Smoking in cafes only harms those who voluntarily enter that cafe. The libertarian policy is to allow the cafe owner to decide whether to allow smoking. If people want to avoid second-hand smoke, there will be a lot of demand for non-smoking cafes, so there will be enough cafes that ban it.
When your mandate is to optimize for only one parameter, it’s easy to ignore the others. It’s obvious with light bulbs, when they were pushing for terrible compact fluorescents for years. You see it with clothes washing machines where they take much more time and also have mold issues — the regulators don’t have a mandate to preserve the cycle time, only water consumption. You see it with dishwashers, same thing, only apparently Whirlpool has said it can’t meet the proposed standards at any cycle time. Many probably remember the earlier mandated low-flow toilets you had to flush twice.
Re: Optimizing for one thing
I have LEDs all over my house, love them generally, but many of them are NOT designed to last the 50k hours claimed. Oh, I’m sure the light source will last that long, but I’ve already had a decorative bulb break off in my hand while cleaning it. An exposed bulb in a fixture is going to take more physical wear and tear over its lifetime and they are not building them beefy enough to do so. Right now my LED ‘fake filament’ for that light fixture is just hanging in open air instead of enclosed in a protective bulb.
While I agree that pricing in externalities is always far superior to banning something, what’s always bothered me about the incandescent light bulb ban in particular is that in many cases they aren’t wasteful.
In any climate in which you would already have the heating system turned on, an incandescent light bulb is no more wasteful than electric heat. All of the light bulb’s “waste” is simply heat energy. Said heat displaces the need for other heat generation from your heating system.
In cold weather, the inefficiency of an incandescent light bulb isn’t 95%, it’s zero (or at the most the delta between the efficiency of heating via non-electric sources and an electric one).
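As a rough illustration of that point, here is a small Python sketch. The wattage split and both energy prices are hypothetical placeholders; the structure is what matters: in heating season the bulb’s waste heat displaces furnace output, so the effective penalty shrinks to zero with electric resistance heat, or to the price gap between electricity and the cheaper heating fuel otherwise.

# Hypothetical numbers throughout; only the structure of the argument matters.
bulb_watts = 60.0
light_fraction = 0.05                  # rough share emitted as visible light; the rest is heat
heat_kw = bulb_watts * (1 - light_fraction) / 1000

price_electricity_per_kwh = 0.15       # placeholder $/kWh
price_gas_heat_per_kwh = 0.05          # placeholder $/kWh of delivered furnace heat

penalty_resistance_heat = 0.0          # the bulb's heat displaces electric heater output one-for-one
penalty_gas_heat = heat_kw * (price_electricity_per_kwh - price_gas_heat_per_kwh)

print(f"Extra cost per bulb-hour with electric heat: ${penalty_resistance_heat:.3f}")
print(f"Extra cost per bulb-hour with gas heat: ${penalty_gas_heat:.3f}")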
Interesting point, I’d never thought of that!
Sure, but in hot weather you need to spend even more energy to move the heat outside.
But lightbulbs are just as likely to be used in air-conditioned environments. In the aggregate, it’s still inefficient. Also, light bulbs are likely to be high up or on the ceiling, which is an inefficient way to heat a room (since heat rises).
However, maybe we let individual homeowners make that decision? If you’re living somewhere with electric heat but no air conditioning (as I am now), maybe you can just decide for yourself what makes sense.
Electric heat can include heat pumps.
If everything is from resistive heat, or it is cold enough that the marginal BTU of heat energy needs to come from resistive heat, then this objection to your objection goes away.
If you want people to use less of something, tax it more.
It is too bad the populace rejects this.
This is true for most things, most of the time, on the margin, but it’s not entirely useful.
There are two cases where it fails: one is durable goods with low marginal costs (like light bulbs); the other is exclusive alternatives (long-haul trucks vs long-haul trains, for example).
The problem with durable goods is that the uncertainty over lifetime, and the relatively small costs, often makes a collectively reasonable decision (spend an extra $10,000/year on light bulbs, save $1000/month on electric bills) individually unreasonable (what if you spend the $100 that’s “your share” on light bulbs, but move out in 6 months, or get the ones where the bugs weren’t worked out and half of them quit working after 3 months?)
A breakpoint example is commuting methods:
I have a 4-mile commute to work, and it takes me 15-20 minutes to drive. Increasing the cost of gas by $2.50 a gallon wouldn’t get me to stop driving. Having a bus route that got me there in under 30 minutes would. But if everyone drives, there’s not enough demand for buses to have such a route, so everyone drives. There are a lot of things that work this way: there’s a breakpoint at which behaviors switch, and cost isn’t the key driver. For these kinds of items, subsidies or regulations will get more of the desired behavior more cheaply than taxes will.
The trick isn’t that you tax the light bulbs. It’s that you tax the electricity. Then natural preferences and optimizations can kick in.
The problem never was (to my knowledge) about the waste generated from light bulbs. It was about the energy “wastage”. Given that, provide incentives for people to economize their energy usage.
The irritations caused by alternatives — high initial cost, high premature failure rate, low-quality light (color, flicker), poor performance in cold, delay before turning on, size/shape limitations, poor dimmability, etc — were not worth the energy savings. The government didn’t want people to pay the extra money to not deal with these problems, so they had to resort to a ban.
Which government are you referring to? Those are definitely problems with LEDs historically, but they seem to have gotten much better in the last couple years or possibly be solved, and I still see incandescents on the shelves.
@dick
Where is this? Last time I went shopping for bulbs it was all LEDs. Which was really irritating, because I have a fixture way up at the top of a vaulted ceiling that I use for about five hours per year that had gone out. I wanted to replace it with, preferentially, an incandescent, but barring that a CFL, because I have to leave the bulb behind (per my lease) when I move out; I don’t want to leave an expensive LED bulb there if I can help it. The next lessee can do that shit if he wants.
Any hardware store in Oregon, amazon.com, lowes.com, etc. My local Home Depot has both, but waaay more selection on LEDs. Also they’re not THAT expensive, if you just buy an LED you’re wasting like 8 bucks. (And if your plans change and you stay longer, potentially not having to replace that annoying bulb again, since even cheap LEDs last a lot longer)
The United States government. For a long time there’s been an effective ban on incandescent bulbs (a watts/lumen limit that such bulbs cannot achieve) in standard sizes. You still find them because though Congress would not repeal the ban, they made another law refusing to allow spending of Federal money on enforcing the ban.
Around the time of the ban, GE made a big production about how they were coming up with some new-tech incandescent filament that would meet the new standards. Shortly thereafter they dropped all such research and outsourced their remaining lighting business (which they’re now trying to sell, apparently). I assume the new-tech filaments never existed and GE got some political favor for pretending they did. (Note how the treehugger blog is trying to pretend CFLs don’t have the problems they have; this was before LEDs were at all practical for most home lighting)
The limits were scheduled to apply to decorative bulbs, three-ways, and also to be tightened to the point where halogen reflectors wouldn’t qualify either; it appears that at least some of these measures have been suspended.
Even if currently everybody drives, an entrepreneur can estimate how many people would switch to taking the bus if it was available, and then, if enough people do so, start a bus line.
They do that with cigarettes in the US.
As others have pointed out, the effects of energy use aren’t limited to the end-user. So why would we want them to be the only ones who get to decide whether to use incandescent lightbulbs or not?
This is starting to verge on culture war, but: as others have pointed out, if the goal is to limit energy use, it makes more sense to raise energy prices than to ban random things you think use too much energy and hope it’ll all work out. When CAFE standards came online, people switched from station wagons to SUVs; when gas hit four bucks a gallon, people switched from SUVs to Priuses.
People are generally pretty good at conserving resources when being wasteful hits them in the pocketbook, but if you’re really concerned that users won’t be able to do the math, you can do things like put estimated yearly energy prices on the packaging.
And, again as others have pointed out, the goal isn’t to simply limit energy use, but to prevent wasteful energy use.
Incandescent lightbulbs were targeted because the utility cost/benefit ratio for them was so off. Nothing random about it.
No one buys anything thinking that it’s wasteful. You can point at anything and say that the utility cost/benefit ratio is off, but that’s basically an aesthetic statement, not a substantive one.
What the buyer thinks is irrelevant to the question of whether society deems it waste.
Society isn’t an agent.
I mean, all utility, human goals, and normative statements are derived from subjective feelings (aka “aesthetics”). I could also say that “banning lightbulbs is bad” is an empty aesthetic utterance.
A human is just a collective concentration of cells; it can’t actually “deem” anything.
Or alternatively, we could recognize that separate entities acting in concert (e.g. through democratic consent) have agent-like power.
Guy in TN:
So, how do you decide whether it’s wasteful for me to (say) keep incandescents in some room of my house where I really want to be able to dim the light from the switch down a lot lower than dimmable LEDs will go? I can see how you might be able to make that judgment on a case by case basis talking with me, but there’s no way a ban on incandescent bulbs can do so.
Raising electricity prices by enough to force me to internalize the externality forces me to adjust my choices based on the size of the externality. (In this case, coal-powered electricity should cost a whole lot more and nuclear-generated electricity a whole lot less than what they cost in our world.)
Fine. We’re probably going to run into axiomatic differences sooner than later, and I’m not trying to say anything too radical here anyway.
Thing is, “society” does not often ban stuff just because it’s wasteful, and when it has (see CAFE example above), it’s ended up with perverse incentives more often than not. It does often ban stuff because it has clear negative externalities, but “uses too much electricity” is a lot more subjective than “interferes with eggshell formation in condors” or “all the neighbors are in the hospital with PCB poisoning”. I just don’t buy a compelling societal interest in obsoleting incandescent lightbulbs here. I can buy one for lowering total energy use, but the way you do that is by making energy use more expensive, i.e. taxing it: that captures stuff like setting your thermostat to 75 instead of 65 in the summer, which is going to make more difference than any number of light bulbs.
I mean, I totally agree that incandescents aren’t worth it for me — I use LED bulbs for most everything. I just don’t want to make that decision for everyone.
@albatross11
It’s the same sort of cost/benefit decision I might make in regards to lead in gasoline, nuclear power plants, or fluoride in the water, just on a more miniature scale.
Making such a judgement on a case-by-case basis would be superior if there were no costs associated with analyzing and voting on each individual action. But since the costs associated with this would be astronomical, we instead make blanket decisions on such matters.
Externalities, which are defined in strictly monetary terms, are not the concern. It’s the harms your action has on third parties which are the concern.
Externalities are not strictly a money thing, but that’s less of an issue in context than this:
The harm does not come from using light bulbs. It comes from using electricity. So tax the electricity and let revealed preference sort out what people want to use their expensive electricity for.
But waste is harm. All resources are physically limited, so if you use a resource in a wasteful way, no one else can use that resource in a more useful manner.
Take gasoline. The goal of mandating mpg requirements isn’t just to reduce the total amount of gasoline a person can use. That could have been achieved by either 1. capping how much gasoline a person could buy per year or 2. raising the gasoline tax.
Reducing gasoline consumption wasn’t strictly the goal, because driving (and in effect, using gasoline) has benefits. But there’s only so much gasoline on the planet, and we want to use it wisely. We want people to keep using gasoline, we just want them to do it in a less wasteful way.
Letting the “market decide” does not achieve this objective. Raising the gasoline tax, especially if raised enough to mirror the effects of an mpg mandate, would achieve this objective, but it has significant negative societal effects that mpg requirements do not. So, an mpg mandate is the logical choice.
And it’s not like we’re pretending that mpg requirements are costless (they aren’t!) or that there are no unintended consequences (of course there are; that’s just life!). We’ve soberly taken all this into consideration and decided that an mpg mandate is better than a high gasoline tax, because it reduces waste.
From my perspective this is almost 100% wrong. We don’t want people to keep using gasoline as such — the whole point of this exercise is to limit carbon emissions and other pollution, and that’s linear in gas. That doesn’t mean that we want to tax it into oblivion, because we also have a competing interest in people getting around and we don’t have any good alternatives, but we only care about “waste” — which, again, is a perniciously slippery concept anyway — insofar as it affects the amount of gasoline people are using.
It seems like you’re thinking about this as a moral issue — that burning gas or using electricity is virtuous as long as you only use as much as you need (as determined, apparently, by “society”). Maybe this is a common perspective, but it looks like an appallingly bad basis for policy to me.
There is almost no way that a MPG mandate is less socially damaging than an equivalent gasoline tax, and the ACTUAL MPG laws are far, far worse.
>Maybe this is a common perspective
Fossil Fuels are Unclean. Nuclear is Unclean. Incandescent is Unclean.
It’s Deuteronomy all over again.
Yes, it’s competing interests. We have the question: how can we minimize the amount of gasoline we use without decreasing the level of driving?
A high gasoline tax achieves the goal of reducing gasoline use, but it fails the goal of maintaining the same level of driving. An mpg mandate, however, achieves both goals (and may even increase the amount of driving, as people are willing to travel more).
Likewise, we have the competing interests of: “how can we reduce the amount of energy we use for lighting, without significantly raising the cost of your home energy bill?”
It’s not clear an energy tax would achieve the second goal, since there is a minimum amount of lighting necessary for a home. Most likely, the tax would cause energy usage to decline, but energy bills would increase a lot. A lightbulb ban, however, achieves both goals: the amount of energy used for lighting decreases, and your home electricity costs only go up by the cost of the new bulbs (minimal).
@Lambert
“Maximize economic value” is a normative goal. Don’t mock me for bringing morality into the picture: OP did the very moment he said the word “should”.
If you are upset at all about the lightbulb ban, or the mpg requirements, it’s ultimately because it offends your core moral objectives. So please, go on and attempt to explain why you think these policies are “wrong” without referencing your moral values.
Don’t feel like this conversation’s going anywhere, but I want to comment on this before I flounce:
…which, to the extent that it does, means it’s self-defeating. If Alice is limited by her gas budget, and so she drives half again as much in her 30 MPG Camry as she did in her now-illegal 20 MPG Explorer, then your MPG mandate has done exactly nothing w.r.t. Alice’s carbon emissions.
If the goal is to reduce carbon emissions, then holding miles driven constant as a criterion of policy makes no sense. We don’t want to cripple people’s ability to get from A to B, but a policy that nudges people into e.g. living closer to work is just as good as one that nudges them into buying more efficient vehicles — better, actually, since there’s fewer externalities involved. The biggest advantage of Pigovian taxes in this context is that they do that sort of encouragement in a very general, flexible way. They’re also inherently regressive, but that’s manageable if you aim for them to be revenue-neutral.
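A quick arithmetic check of the Camry/Explorer example, with a made-up baseline mileage: if miles driven scale up by the same factor as fuel economy, the gallons burned (and hence the emissions) are unchanged.

# The baseline mileage is hypothetical; only the ratios matter.
miles_explorer = 10_000                  # annual miles in the 20 MPG Explorer
gallons_explorer = miles_explorer / 20   # 500 gallons

miles_camry = miles_explorer * 1.5       # "drives half again as much"
gallons_camry = miles_camry / 30         # also 500 gallons

print(gallons_explorer, gallons_camry)   # identical, so the mandate changed nothing for Alice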
Again, reducing carbon emissions is not the only goal. Another goal is to reduce waste of resources. Viewing this through the lens of externalities/Pigovian taxes misses the point- the purpose of these regulations is not to maximize economic value.
I realize that e.g. the incandescent ban exists largely because its backers believe the bulbs to be wasteful. I just don’t think that can be justified on any grounds other than “well, society thinks it’s wasteful, so it must be bad”, which should be self-evidently stupid.
Even from the perspective of conserving resources, it makes more sense to tax the resource than to ban uses that you or “society” believe to be wasteful. If they really are wasteful, they’ll go away anyway. If they don’t go away despite getting substantially more expensive, then it’s very likely that they’re serving roles that you haven’t accounted for in your analysis. Allowing for this case is a feature, not a bug. And in the meantime you’ll discourage all sorts of marginal uses that you couldn’t enforce with heavy-handed bans and mandates or can’t make a convincing case for.
Again, there’s the competing goal that we do want the resource to continue to be used. We just want it to be used in a different fashion. A tax reduces the good kind of use as well as reducing the bad. A ban does not.
That such behaviors currently exist should be evidence that they don’t actually go away. It’s like saying “if driving drunk was bad, people wouldn’t drive drunk”.
And what’s your alternative: “if I think it’s good, it must be good”?
My hot take is that democracy is not, in fact, stupid.
Conserving a resource means using it less. Not using it in an officially approved fashion, not using it virtuously, using it less. Discouraging “good” uses as well as “bad” ones is what we should want to do.
Democracy may or may not be stupid, but justifying a policy by saying that it is a policy definitely is.
I disagree that conserving resources is the primary goal. We could achieve such an outcome by simply banning the use of electricity, or gasoline.
That we don’t do this, and that hardly anyone is advocating for it, should be evidence that we have other goals as well.
I think the argument “it’s what most people want” is a more convincing one than “it’s what I want”, which appears to be your unspoken alternative.
Okay, if that’s what we’re down to, then I’m out.
Notice the asymmetry. If a property owner chose to dispose of a resource that he owned in a way you disagreed with, you aren’t asking for a justification behind his reasoning. You just assume he has his personal reasons for his weird behavior.
But if the government exercises its legal control over the resource, all of a sudden you need justifications. Reasons! Explanations! Rationales! “They just feel like it” isn’t enough anymore.
If your whole argument is that anything the government feels like doing is fully justified on its face, you could have said that a while ago and spared both of us a lot of trouble.
Albartoss’s OP argument wasn’t merely that the public came to the wrong decision. It was that the public shouldn’t have input on this matter at all.
The debate here is one over process, not object-level outcomes. I’m not saying whatever the government does is “just”: the public consensus is often wrong, and needs to be updated. But the public can and should have a debate about whether we should deem this “waste” or not.
Humans can sort-of meet the criterion of having an ordered set of preferences over what’s possible, which is most of what you need to model them as agents making decisions based upon some mix of morality and economics (your choice of morality and economics).
Governments are so far from meeting this criterion that it makes no sense to model them as agents. You can kind of squint and gouge out an eye and imagine companies as agents if you think of them as replicators in some sort of evolutionary game, but companies are a lot smaller and die pretty often.
But governments die so rarely that doesn’t make sense either. You’re not even guaranteed to be mathematically able to create a scheme that orders the preferences of a group of people in a vaguely reasonable way (this is essentially Arrow’s impossibility theorem). Thinking of large bodies of people, especially governments, as agents doesn’t even work in theory, much less in practice.
Governments may have “agent-like power” but in some sense so does a hurricane.
@Guy in TN
Mandating high MPG encourages people to live further from their work, which can also be considered a waste of resources.
Laws like this are illiberal in the sense that they very much dictate that a certain way of living is legitimate, while another way of living is illegitimate. Some externalities are OK to burden others with, while other externalities are not.
This doesn’t make these laws necessarily wrong or bad, but it does mean that you will make an enemy out of people with a different preferred lifestyle and/or culture. The more laws you make like these, the more you are enforcing and dependent upon cultural homogeneity.
@Aapje
Of course, there are unexpected feedback effects, as with all laws. But there’s no economic rule that says the counter-effect must always outweigh the effect. The law does have the ability to change human behavior; people will not necessarily revert to expending the same amount of resources as they did before.
Well, yes. I am not a liberal, and do not support liberalism, in the classic sense of the word. I think liberalism’s claim to “neutrality” is largely a myth. There’s no escaping using the law to shape human behavior at the content level; even liberalism does this, just under a different set of rules.
The old quote from Anatole France describes my thoughts on the matter well: “In its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets and steal loaves of bread.“
A principle that explains this is the following:
(*) If Alice has the right to do an action A, and an action B doesn’t make anyone worse off than action A does, then she should have the right to do B instead.
(*) implies that if she has the right to dispose of her property in any one way that neither helps nor hurts anyone other than her, then she should have the right to dispose of it in any other way that doesn’t hurt anyone else.
While few people state (*) explicitly, I think most libertarians would agree with it, and most others would agree with it at least in the limited form of
(**) If Alice has the right to do an action A, and an action B doesn’t make anyone worse off than action A does, then she should have the right to do B instead, unless we have a good reason not to allow her.
Thus property owners don’t need justification for disposing of their property in any way that doesn’t hurt others, while the government at least needs to justify restricting it.
Most people would have just responded to you with “because they are the property owner, duh, having the right to dispose of your property is part of what owning it means”. My post explains what I think implicitly underpins this conception of property (along with a few similar principles, such as that if you have the right to destroy something now, you should also have the right to destroy it later, without anyone taking it in the meantime, if this doesn’t leave anyone worse off than if you destroyed it now).
But these actions do affect other people, and perhaps make them worse off. There are limited resources on the planet: a resource you use is a resource that I can’t.
If property ownership didn’t affect anyone else, we wouldn’t even be having this conversation: you could dispose of all the resources you want, and I wouldn’t even be made aware of it.
If this sounds crazy, let me flip it into a scenario you might agree with:
Let’s imagine that the United States goes full U.S.S.A., with the government now being the property owner of all land and resources.
The owner of this property, the government, decides that it wants to take all of the remaining oil reserves, and blast them into space so that they can never be used again. The now-prominent Libertarian Party, ahead in the polls for the next election, notes that they had future plans to re-privatize the oil sector of the economy, letting people drive cars and power factories again.
Would you say this action by the property owner (the government) did not make you, a non-property owner, worse off?
Obviously if you consume some common resource, that may make people worse off than if you don’t consume it. However, it doesn’t affect other people whether you consume a resource in one way or another way (at least in that no one else can use that resource either way). Thus, (*) says that if you are allowed to consume it in one way, you should also be allowed to consume it in another way.
This doesn’t say anything about what resources you should be considered to “own”, i.e. what resources you should be allowed to consume at all; that’s another topic. However, pretty much all existing and proposed economic systems allow some people to consume resources in some ways in some cases, so if one agrees with (*) (or (**)), it is going to have far-reaching consequences about what other ways of consuming them should also be allowed (at least unless there is a good reason they shouldn’t).
In your USSA example, the government’s action doesn’t make me any worse off than if the government makes the oil reserves unavailable in some other way. Points of contention are whether the government should own all natural resources in the sense of having the right to destroy them, and whether it’s a good idea for the government to do so (even if it has the right to do so).
——
Actually, there is another relevant difference between the government and the individual (besides my argument in my previous post). We are discussing politics. What an individual should have the right to do is politics, but what he chooses to do (among the things he has the right to do) is not politics. What the government should have the right to do is politics, and what the government should do is also politics; thus the two are not clearly delineated. Some of us say that individuals have some natural rights, but few would say that the government has natural rights as an entity distinct from the people.
Let’s compare it to a corporation (in its conventional capitalist conception). We can talk about what a corporation should or shouldn’t have the right to do. ‘Rights’ here mean that someone who is not a shareholder of the company has a basis to complain if the company does something it doesn’t have the right to do, but not if the company has the right to do whatever it does. However, if we are the shareholders, we have a basis to talk about what the company should do, and it wouldn’t make sense to say that we can’t object to something it is doing because it has the right to do it. The company’s leaders are the representatives of the shareholders; the company has no rights separately from the shareholders. To a shareholder, the company’s rights only matter in that we (as a corporation) are not allowed to do something the company doesn’t have the right to do, not in the sense that we can’t complain about the company doing something because it has the right to do it.
In the government, every citizen is a “shareholder”. As such, it makes no sense to say that it’s OK for the government to do something because it has the right to do it. When talking about government policy, the “rights” of the government only matter in as much as if we believe in some form of natural rights, human rights, or constitutional limitations, then the government may not violate these. At the other extreme, in an individual, only the individual himself is a “shareholder”.
They could use the oil reserves to power the economy, instead of destroying them and sending us back to the bronze age. So of course the manner in which they “consume” the oil affects you. Your life may depend on it!
The idea that the manner in which a property owner uses resources affects non-property owners is such a common and simple phenomenon that I’m kind of amazed it’s up for debate. Allow me to point to some other straightforward examples:
-A child’s well-being depends on how a property owner (his mother) maintains her house
-A person with a chronic illness’s well-being depends on how another property owner (a pharmaceutical research company) allocates their resources
-A hunter’s well-being depends on how well his hunting buddy maintains his weapons
And on and on.
It seems like your common failure mode here is to analyze people as if they were isolated individuals, and not a part of society. Which is to say, not acknowledging that the way you live your life, and consume your resources, necessarily either helps or hurts others. Any thought experiment that relies on something along the lines of “assume your life doesn’t affect other people…” should be non-applicable out of the starting gate.
I’m struggling to see how the second part of your post ties into what we’re talking about.
I mean, it’s true that we’ve created a system where government decisions differ from private decisions in that they require some sort of democratic process, while decisions regarding private property are somewhat autocratic. I’m not sure how the question of harm relates to this, though.
If the government that destroyed the oil was an absolute monarchy (i.e., with the authority of an unregulated property owner) instead of a democratic collective, that wouldn’t make the problem go away.
My admittedly snarky post to Nornagest was to draw attention to his double standard regarding rights: the legal right of the government to regulate light bulbs needed to be vigorously justified, but the legal right of the individual to use light bulbs in the manner he sees fit was taken as a default “given”.
It’s an obnoxious, but effective, rhetorical strategy given the crowd: Assume libertarianism, unless non-libertarianism can be proven after overcoming highly isolated demands for rigor.
I wasn’t precise enough in that sentence. What I intended was that the government’s action doesn’t make me any worse off than if the government makes the oil reserves unavailable in some other way that also doesn’t help anyone in any way.
Yes, the way people use their property often affects others. However, often you have a multitude of choices, each of which benefits or hurts other people to exactly the same extent. Then if one of them is legal, then the rest should be legal, too (according to (*)), as should any choice that leaves some people better off than these legal choices, and no one worse off.
In particular, in virtually any existing or proposed economic system, there will be many situations where you are allowed to obtain some resource by giving something (money, product, service etc.) in exchange, and then you are allowed to consume the resource (eat it, burn it, whatever) in some way that only benefits yourself and neither benefits nor hurts anyone else, other than to the extent you benefit someone by paying the aforementioned price. Then it should also be legal for you to consume it in any other way that hurts no one (and may or may not help someone). For example, to go back to the lightbulb topic, let’s say that I have the right to use a given amount of electricity for purposes that you consider non-wasteful, but which only benefit me, as long as I pay the bill. Then if I use part of that amount of electricity for incandescent lightbulbs, use less for other purposes so that I use the same in total, and pay the same amount of money for it, I don’t leave anyone worse off than in the former situation, so the latter should be allowed too.
——
To put the second part of my post another way: let’s say the government has the right to do X. The government is the representative of the people, so this means that we, the society as a collective, have the right to do X. Then we still have to decide whether we actually want to do it, based on whether we have a good reason to do X. We decide that, in large part, through various forms of policy discussion, such as this one.
Now let’s say Alice has the right to do X. Then Alice has to decide whether she wants to do X, by considering whether she has a good reason to do it. How she decides that is her business.
So yes, a government policy takes justification beyond “the government has the right to do it”, while for letting Alice do something, it’s enough justification (for people other than Alice) if she has the right to do it.
This is one justification for the asymmetry. My other justification, the one I gave first, is separate from this, and assumes that Alice has some property that we would allow her to dispose of in some way that neither hurts nor benefits anyone else (again, such situations are common in most economic systems, assuming she has already paid for the property). Then Alice doesn’t hurt anyone (compared to the action we would allow her to do) by disposing of it in another way that similarly doesn’t affect anyone, while the government makes her worse off if it prevents her from doing so than if it doesn’t.
But no two actions benefit or hurt people to exactly the same extent. The choice of which lightbulb to use has all kinds of downstream effects, from how you participate in the economy, to environmental impacts, to sleeping habits (re: light dimming), as others have pointed out.
Three objections:
1. At its most basic, the initial acquisition of the resource deprives someone else of access to it (makes them worse off).
2. Once the resource is acquired, the ongoing maintenance of your property in the resource necessarily requires the threat of violence (as do all laws).
3. Because we live in a society, there are inevitable downstream effects on others based on the manner in which you consume your resource.
At every step along the way, from the initial acquisition, to the holding, to the consumption, you are conceptually treating the man as an island. But at every step along the way, his actions actually affect other people, in both positive and negative ways.
Perhaps it has some unpredictable, probably tiny downstream effects on others, just like allegedly a butterfly might cause a tornado. But that’s not a reason to prevent butterflies from flapping their wings unless we have a reason to think that they are more likely to cause a tornado than prevent one. Also, such downstream effects are largely through voluntary actions of myself or others. E.g. if I make choice A, it might make me participate in the economy in a way that makes someone else better off than if I make choice B, but as long as I would have the legal right to make choice A but participate in the economy as if I made choice B, I should also have the right to make choice B.
Light dimming is an effect on the one who uses the light bulb, not on others.
Yes, I meant that it doesn’t hurt anyone other than that. In other words, the process of consuming the resource doesn’t make anyone worse off than if it just disappeared into thin air. Anyway, I was talking about different ways of consuming the resource such that the choice between the different ways doesn’t affect others, because either choice only affects others through whatever I pay for the resource and, yes, through the fact that the resource disappears.
In any economic system, if I have the right to consume a resource right now, and I do consume it right now, that doesn’t require force, regardless of how I consume it. To be able to save that resource for later, without anyone taking it, is a different question, and yes, that requires force.
Yes, it looks like our attitude towards society is very different. For you, everything is interconnected, everything we do is the business of everyone else, and it’s fine for the collective to order around its constituent individuals in any way it pleases. We are cogs in a machine, and it’s the machine that matters.
My attitude is that while we unfortunately depend on society in some ways, we should prefer voluntary interactions when possible, and we should keep the obligations of an individual down to a small, well-defined, discrete set. We are individual agents negotiating and interacting in a variety of ways.
With the terminology of this recently linked blog post about an alternative political compass, you are a coupler and I’m a decoupler.
If you recognize that every action you take affects others, then the question changes to simply the scale of the net harms or benefits. Any axiomatic deduction that relies, at some point in the logical chain, on an action that affects only yourself no longer holds.
Net voluntarism doesn’t change from your proposed system to mine. In your theory, the property owner should be the decision-maker, and non-property owners should be forcibly excluded from having any decision-making power. In my theory, the public should have decision-making power, and the property owner should be forcibly excluded from exercising his autocratic control.
Someone’s getting forcibly controlled, either way.
If you had said “this doesn’t affect others very much”, that would be a possibly true statement. I would probably try to persuade you by talking about utility-efficient resource allocation, or the importance of the law for setting social norms. But since we’d be talking about scale instead of a binary, you would have ample room to maneuver: it’s a highly defensible rhetorical position. I don’t think we should be voting on all human behaviors, after all: some things (such as choice of relationship), while they do affect others, are too highly personal for democratic input to be valuable.
This is in contrast to the position of “this doesn’t affect others at all”. Which, well, I’m going to be charitable and hope is not a standard tenet of “decoupled” ideology.
@Nornagest
How much do you have to raise the taxes on electricity before you stop using incandescents? The new equilibrium will be way worse for nearly everyone. Including the incandescent libertarian fetishist.
The best way to ban things is to ban them; imagine that. But we’d be banning them to save electricity, right? So the better question is how much you have to raise the taxes on electricity before the new usage levels are equivalent to those you’d get from banning incandescents. Which is a lot lower than you need to keep everyone from using incandescents, yet equivalent in terms of emissions and better in terms of overhead than banning them. Which is the point. I’m not sure how much clearer I can make this; a toy version of the comparison is sketched below.
If your goal here isn’t to conserve electricity but rather to keep everyone from using things you don’t like, I concede that raising taxes on resources they incidentally use isn’t a good way to get there, but I also assert that it’s a bad goal and you should feel bad for proposing it.
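For whatever it’s worth, here is a toy version of the tax-vs-ban comparison. The household figures and the constant-elasticity demand curve are assumptions chosen purely for illustration, not anything anyone in this thread has claimed:

# Toy comparison of banning incandescents vs. taxing electricity.
# All figures and the constant-elasticity demand form are made up.

baseline_other_kwh = 900.0     # monthly kWh for everything that isn't lighting
baseline_incand_kwh = 100.0    # monthly kWh spent on incandescent lighting
led_equivalent_kwh = 20.0      # kWh for the same light from LEDs
elasticity = -0.3              # assumed price elasticity of electricity demand

# A ban swaps incandescent use for LED use and leaves everything else alone.
ban_total = baseline_other_kwh + led_equivalent_kwh          # 920 kWh

# A tax at rate t scales *all* electricity use by (1 + t) ** elasticity.
def taxed_total(t):
    return (baseline_other_kwh + baseline_incand_kwh) * (1 + t) ** elasticity

# Brute-force the smallest tax that saves at least as much as the ban does.
t = 0.0
while taxed_total(t) > ban_total:
    t += 0.01
print(round(t, 2))   # ~0.33 with these made-up numbers: a finite tax that matches
                     # the ban's savings without driving incandescent use to zero

Under these assumptions a finite tax matches the ban’s total savings while leaving the remaining incandescent uses, presumably the most valued ones, in place.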
@10240
Right. But you don’t have the legal right to use incandescent lightbulbs. That’s the point of this debate: what a person’s legal rights ought to be, not how a person ought to behave under a given regime of legal rights.
Falling back onto “well, my actions need no explanation” would be more reasonable if you actually had the legal right to the actions in the first place.
Yes, I meant more whether she should have to explain why she wants to do something, rather than whether she has to explain it. That is, when discussing whether to ban something, the burden is on those who want to ban it to justify why they want to ban it, rather than on those who want to do the thing others want to ban to explain why they want to do it, and thus don’t want it to be banned. Or if Alice has a property, she doesn’t have to justify consuming it, but if the government wants to consume it instead, preventing Alice from consuming it, that very much takes justification.
This discussion of rights gets confusing, because in many places we haven’t distinguished between different meanings of rights, such as legal rights or natural rights. Anyway, we could look at the situation at the time when incandescent light bulbs weren’t yet banned, and whether the law is good or bad didn’t suddenly change when it was enacted.
My house has dimmer switches, and whatever non-halogen bulb I put in most recently gives a nasty flicker if you adjust the switch.
LEDs in street lights, car lights and pocket lights (think reading lights) are, I think, way too bright; the glare of the former is very strong and hard on the eyes in the dark.
The latter are too bright to use as reading lights, e.g. in bed when I don’t want to wake a baby or a spouse.
For the reading lights, you might want to try a bit of translucent tape over it to spread everything out.
Philosophers have a curious way of reinventing the concept of a spectrum…
I was just reading the SEP entry on vagueness
and came across this gem: “Twilight governs times that are borderline between day and night. But our uncertainty as to when twilight begins, shows there must be borderline cases of borderline cases of ‘day’. Consequently, ‘borderline case’ has borderline cases. This higher order vagueness seems to show that ‘vague’ is vague (Hu 2017).”
A scientist would approach the question of twilight by saying that there is a gradual and continuous decrease in the amount of light. He might also say that there is a definite time at which the sun disappears below the horizon, though he might have some clarifications to make about refraction of light in the atmosphere.
Many aspects of nature are best described by continuous scales. Language often finds it convenient to bin them into discrete categories. The boundaries of these categories are arbitrary and people won’t always agree on them. There’s no “hierarchy of vagueness”; there’s just the arbitrariness of binning continua into discrete categories. I feel as though I must be missing something, because the SEP entry on vagueness is quite long. Would anybody better versed in these matters care to explain why I’m being stupid?
I wouldn’t worry too much about it. Are you familiar with “A Human’s Guide to Words” from the sequences? https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb
Yes, I enjoyed that sequence. In fact I think it was reading that that made me want to read up more on the philosophy of language.
You might also enjoy reading about fuzzy logic, which is just boolean logic that allows truth values in between 0 and 1.
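For the curious, a minimal sketch of the standard (Zadeh) fuzzy connectives; the membership values here are invented just to show the flavor:

# Standard (Zadeh) fuzzy connectives: AND = min, OR = max, NOT = 1 - x.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Made-up degrees of membership for some moment in the evening.
twilightish = 0.75    # how "twilight" it is
darkish = 0.5         # how "dark" it is

print(f_and(twilightish, darkish))   # 0.5
print(f_or(twilightish, darkish))    # 0.75
print(f_not(twilightish))            # 0.25, the degree to which it is not twilight

If you restrict the values to 0 and 1, these collapse back to ordinary boolean logic.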
That does sound interesting. It also sounds like some weird compromise between classical logic and quantum logic…
I’m having a hard time figuring out what mistake you think philosophers are making here. Your first sentence suggests you think they don’t understand that brightness (and many other properties) comes in a spectrum. But I’m guessing/hoping you don’t actually believe this.
Your last paragraph suggests that vagueness is obviously a matter of semantic indecision. At least, that’s how a philosopher would characterize your view. Here’s David Lewis:
The general idea that vagueness is in some way a linguistic issue stemming from the fact that for ordinary purposes no one needs to settle completely precise boundaries for concepts is easily the most popular view of vagueness in philosophy. So yes, philosophers are aware of that too.
But then you seem to claim this means that there is no hierarchy of vagueness, so that’s what philosophers are confused about. But why? As the article points out, just as people haven’t settled on a precise boundary for ‘day’, they also haven’t settled on a precise boundary for ‘twilight’, one of its borders. And similarly they haven’t decided on a precise boundary on when something is a borderline case of being twilight. That’s all that is required for there to be a hierarchy of vagueness, and it immediately falls out of your own view once it’s more carefully spelled out.
So where’s the confusion?
First I should say that my tongue was somewhat in my cheek. I’m aware that “the philosophers” isn’t an entity that exists, and I expect there are philosophers with whom I would share complete agreement on this.
However, I don’t think I agree with you that a hierarchy of vagueness falls out of my own view. Perhaps a hierarchy is consistent with my way of thinking, but it seems an unhelpful one that makes the issue more confusing than it is. The fact is that there’s a spectrum of light levels and humans use different words for different points in that spectrum. You could instead describe this as there being three categories (light, twilight, dark) with two boundaries (light/twilight, twilight/dark), each with two boundaries of its own (light/(light/twilight), (light/twilight)/twilight, twilight/(twilight/dark), (twilight/dark)/dark), and so on ad infinitum, but at the end of the day you’ve reinvented a spectrum in an infinitely confusing way.
In what sense are humans using words for points in that spectrum? Do you mean to tell me that day, night, and twilight refer to points in the cycle and not ranges?
Indeed. Ranges would have been a better word.
(Though we also have words to refer to points, such as “noon”.)
I think I see what’s going on here.
You’re thinking of the vagueness hierarchy as a theoretical posit designed as an alternative means of explaining the same data one could explain using spectra. But philosophers simply aren’t proposing it as an alternative to spectra. What they care about is figuring out the semantics and logic of natural language expressions.
One important feature of the semantics of most natural language predicates is that they are vague. This means there are borderline cases, things which (according to the most popular views) the predicate neither determinately applies to nor determinately fails to apply to because of semantic indecision.[1].
But there is semantic indecision not just about the border between what the predicate applies to and what it fails to apply to, but also about the border between what the predicate applies to and what it is borderline of, and so on. The hierarchy just falls out. The fact that for many purposes you can avoid thinking about this by using different predicates is irrelevant if what you care about in the first place is developing an accurate semantics and logic for natural languages.
[1] What exactly you think this semantic indecision comes from will depend on your metasemantics: your view of what facts make it the case that a word has its particular meaning.
Thanks for trying to explain. I still don’t quite get it.
It seems to me that you will get a hierarchy ‘falling out’ if and only if you are talking about something that is fundamentally a spectrum that is described as a series of ranges by a natural language. (Are there counter-examples to this?) If that’s the case then I just don’t see what insight the hierarchy way of thinking about it brings.
It’s not supposed to give an insight into the nature of spectra. It’s supposed to give an accurate semantics and logic for the natural language predicates that pick out ranges with imprecise borders. Philosophers and semanticists care about doing that kind of thing for its own sake.
(There are also a minority of philosophers who think there is “vagueness in the world”, not just in language, so they have metaphysical reasons to care about this matter as well).
Obviously if you’re doing some practical task that requires extreme precision, you will just switch to using more precise predicates rather than thinking about infinite hierarchies of vagueness.
Twilight has several definitions, used by astronomers and navigators for different scenarios. Civil twilight, nautical twilight, and astronomical twilight are all measured by the position of the apparent geometrical center of the Sun below the horizon, and all have a range of expected levels of light in the pre-sunrise or post-sunset scenarios. The visibility of celestial objects (of known magnitudes) is different during the three ranges of twilight. The visibility of the horizon (especially for sailors at sea) is different between nautical and astronomical twilight.
Sunset has a single clear definition, with beginning and end related to the position of the apparent geometrical center of the Sun in the sky.
I don’t mind using twilight (in the colloquial sense) as an example of vagueness. I do mind when educated people don’t appear to notice that there is a precise technical definition of twilight, for those who use twilight to define a period of time.
I’m not sure I understand your point. There aren’t many people who, when somebody says “what lovely twilight” would respond “actually according to the technical definition of twilight it doesn’t start for another one and a half minutes”. The fact that technical definitions exist does not mean that colloquial use isn’t vague.
Having said that, I suppose if one was faced with a situation in which vagueness was becoming a problem, that might be a good time to switch to technical definitions.
Maybe I’m being a pedant.
I do recognize that the colloquial, vague usage of twilight is connected to the time period between sunset and sky-dark-enough-to-see-stars (or, for sailors, sky-dark-enough-that-the-horizon-isn’t-visible).
The technical usage is a better way of describing that range of time.
This doesn’t argue against borderline cases, or borderline cases at the edge of each borderline case. It also doesn’t argue against the concept of ‘vague’ having some degree of vagueness.
Still, I find it surprising that twilight was used as an example.
I think a better example would be the words “several” or “many”, which are quantifiers of indeterminate size.
“several” and “many” are actually really interesting examples because they don’t refer to continuous variables but discrete ones (i.e. the number of something). Does this mean the hierarchy terminates at some finite point or does it still go on to infinity?
The problem is that twilight is the product of a single, well defined underlying physical cause which has an equally well defined beginning and end: A portion of the sky has passed into the shadow of the Earth and is no longer illuminated by the sun. Once the entire sky is no longer illuminated by the Sun, it is done.
(1) When the sun first touches the horizon, a portion of the sky has fallen into the penumbra (only illuminated by a portion of the sun’s disk) and consequently begins to scatter less light. (2) Once the upper limb of the sun passes below the horizon, part of the sky has passed into the umbra and is not illuminated by the sun at all. (3) After passing below the horizon by a certain amount, the entirety of the sky is in either the penumbra or umbra. (4) After passing below the horizon further, the entire sky is in the umbra and no scattering of sunlight by the atmosphere is visible from the viewing location. (Interplanetary dust continues to scatter light producing the zodiacal lights (“false dawn”). Another phenomenon, Gegenschein, is caused by the reflection of sunlight by interplanetary dust near the anti-solar point.)
Twilight runs from (2) to (4) and is the time between when the sky begins to be and is completely covered by the umbra of the Earth. (Astronomers seem to define the instant at (2) as “sunset,” while a looser definition is everything between (1) and (2), and an even looser definition is any cool non-blue color in the sky caused by scattered sunlight.)
For convenience, the time when (4) occurs is approximated by when the geometric center of the sun has passed 18 degrees below the horizon, using whatever idealized, roundish model of the Earth astronomers are using now.
The point remains: twilight is a bad example because it is a well-defined phenomenon with well-defined discontinuities that mark its beginning and end. The fact that astronomers have a reasonably accurate approximation, so that they can calculate the effect 10,000 years in the future, is entirely beside the point. (And also, everyone I know would give you a funny look for saying it is a beautiful twilight while the sun is still clearly visible, but they are not going to make a big deal about it; nobody but an astronomer is running a stopwatch. Clouds or trees or something obscuring your view of the horizon are in all cases an adequate excuse for being mistaken.)
Bugger the philosophers.
It’s daytime till the sun goes down.
Civil Twilight till it’s 6 degrees under.
Nautical till it’s 12 under.
Astronomical till 18.
Then it’s night.
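Those cutoffs translate directly into a tiny classifier, if anyone wants to play with them (the elevation angle is in degrees, negative when the Sun is below the horizon; this sketch ignores refraction and the width of the solar disk):

def sky_phase(solar_elevation_deg: float) -> str:
    """Classify the sky using the standard twilight cutoffs quoted above."""
    e = solar_elevation_deg
    if e >= 0:
        return "day"
    if e >= -6:
        return "civil twilight"
    if e >= -12:
        return "nautical twilight"
    if e >= -18:
        return "astronomical twilight"
    return "night"

print(sky_phase(10))    # day
print(sky_phase(-3))    # civil twilight
print(sky_phase(-15))   # astronomical twilight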
Something I have just been thinking of which I don’t think counts as culture war, since it is a general issue with examples from both sides.
A Question in Scientific Ethics
You are a researcher with a theory and a clever way of testing it. You do the test and it supports the idea. Unfortunately, you believe that publishing your results will have undesirable real world effects. Should you publish?
The first case I am thinking of is the Card-Krueger research on the minimum wage. Basic economic theory implies that raising the minimum wage will reduce employment opportunities for the relatively unskilled workers who receive the minimum wage—they are now more expensive, so employers hire fewer of them. Card and Krueger thought of an interesting possible exception to that conclusion, a reason why, under some circumstances, a small increase in the minimum wage might produce no reduction in such employment, might even produce a small increase. They tested that conjecture, taking advantage of a situation where one state had raised its minimum wage and an adjacent state had not, and found no reduction in employment.
Assume, what I suspect but am not certain is the case, that the authors saw their result as a special case, an exception to a general rule that in most cases held. Further assume, what is possible but might not be the case, that they believed that a large increase in the minimum wage would have unambiguously bad effects. Finally assume that they believed that the publication of their article, by undercutting the economic argument against increases in the minimum wage, would make large increases more likely—as pretty clearly turned out to be the case.
Should they publish?
Consider another example of the same question. A researcher believes that IQ is in large part heritable and has a substantial effect on life outcomes—higher IQ correlates with higher wages, less unemployment, more stable marriage, … . The logic of Darwinian evolution implies that populations that have evolved in different environments will have different distributions of heritable characteristics, a conclusion supported in the case of easily observed characteristics such as skin color. He concludes that races as conventionally defined may well differ in the distribution of IQ, thinks up some ingenious experiment to test this conjecture, performs it, and finds support for the conjecture.
Further assume that he believes publishing these results will have a bad effect. People who are racially prejudiced will jump from “the distribution of IQ is different in different races” to “all blacks are stupid,” act on that belief, making the world a worse place. Because differences in heritable characteristics provide a possible explanation for differences in outcomes by race, people will use his result as an excuse to ignore differences that are actually due to discrimination, past injustice, or some other causes, thus reducing the political pressure to do something about those problems.
Should he publish?
For a third example, consider a researcher in some issue related to climate who finds evidence suggesting either that climate change will be slower than currently believed or that its results will be less bad, perhaps evidence that a particular result will actually be good. Assume, however, that he still believes that climate change is a serious problem and we should be doing more than we are doing to prevent it. Publishing the result of his research will reduce the political pressure to do so.
Should he publish?
Obviously this isn’t a new issue, although the more familiar examples involve scientific research that can be used for military purposes.
My own conclusion is that all three should publish, but I thought I would put the question out for other people’s responses before offering mine.
I would also say he should.
One of the things that I find as I interact with more people is that evidence changes very few minds. Humans are more rationalizing animals than rational ones. If people use his data to do something that he feels is bad, the odds are very good that they would’ve done the bad thing anyway and used something else as justification.
It’s better to have the truth out there so someone else can find other important truths without repeating your effort than worry about it being misused.
I agree, and I’d like to add that the few people who understand and believe the paper, rather than using it to support a preconception, are likely to draw the same larger conclusions as the author.
Definitely not necessarily. Smart people can come to different conclusions with similar data, especially when discussing the policies that should be followed because of that data.
I think there are extreme circumstances where the scientist should suppress the data, but they’re incredibly rare–like if you figure out how to make a civilization-destroying plague in your kitchen sink, you ought not to publish that because it’s very likely someone will destroy civilization with it. But in nearly all real-world cases, including the ones you mentioned, the scientist should publish:
a. In the long run it almost has to be better for us to know more about the world, rather than less. So it seems like we’re talking about a tradeoff here between a short-term social or political gain and a long-term gain in knowledge available in the world.
b. Nobody is smart enough to know all the ways that some new piece of information or idea will be used by others. The finding you think is mostly unimportant except for its tendency to help the wrong side politically may turn out to be the seed for something really useful that’s not at all related to your political concerns.
c. To the extent this is known to happen, it makes it easy to dismiss scientific findings as biased–everyone knows that climate scientists won’t publish papers questioning the impact of AGW, or that psychologists won’t publish papers demonstrating significant gender differences, so when people try to make an argument for what we should believe based on scientific consensus, that argument falls apart. Since science is one of the only really good ways we have of knowing how the world works, undermining it to win today’s political battles is a really awful idea.
More extreme versions of all these come up if you lie/make stuff up in your research in order to support the right causes.
What? Citation very much needed. This is such a far out conspiracy theory that I’d need a much more detailed explanation of what you imagine to be happening to be able to address this. The statement as written is obviously wrong, as a search for ‘gender differences’ on psycnet.apa.org will confirm.
This is not a conspiracy theory at all, and should not be described as such. Haven’t we been through this several times recently?
Not all collective behavior is conspiratorial. If I say “Rich white people avoid inner-city ethnic neighborhoods after dark”, that’s not a nefarious conspiracy of rich white people meeting in smoke-filled rooms and deciding to deny their patronage to nightclubs run by colorful poor people, that’s a few million individual human beings all looking at the same data and coming to the same conclusion – that there’s nothing in it for them worth the risk of getting beaten up over.
Such a claim may be factually incorrect, and I’d like to know how strong the aversion to publishing gender-difference results in psychiatric literature is, but it isn’t a conspiracy theory.
I think a decent number of such studies do get published. But they are published in niche journals instead of Nature/Science, and when they are published, universities don’t issue blaring headlines and tidbits to journalists, like they do for “eating a pear and drinking a cup of coffee every day leads to a 10% increase in sexual stamina” studies.
In gender differences there is a large body of work that is pretty overwhelming; it’s just that all the people who know about that work just kinda keep to themselves, teach an upper-level grad course about it, etc. And the “consensus” in the media is totally divorced from their work, which indeed happens regularly.
Clutzy:
My outsider impression says that this is true of a lot of fields. The public image of the field has a lot to do with the very small subset of results/studies that get picked up by media outlets, and often is quite different from what most researchers actually understand about their field.
I think that is certainly true. I recall two such instances from the times I was working at places that published. When I was working on LVADs, our testing protocol shifted because all of a sudden pig’s blood was trendy and cow’s blood was out of style. That doesn’t mean that there aren’t good reasons for using each one for different purposes in models, but the fact is you couldn’t get cow’s-blood studies published even if it was a simple side-by-side in vitro test of an industry-standard LVAD and a new model.
A second time was when I was on the board of the law review, and a bunch of otherwise high-profile entries kept falling to us at a T2 school. This was at a time when the Supreme Court and the Obama Administration kept having spats, and the fact that these articles fell was almost certainly political. Good for us in the short term, but it’s also a bummer to see good speculative work, ultimately a theory adopted by every major court to consider the issue, not get more publicity up front.
What makes it a conspiracy theory is that it is easily disproven in under a minute.
If for you ‘conspiracy theory’ requires secret meetings in smoke filled rooms somewhere along the way, I’ll gladly concede that what I’m talking about is not what you understand by ‘conspiracy theory’.
Murray A. Straus, researcher in the field of gendered violence, wrote a paper late in his career about “methods that have been used to deny, conceal, and distort the evidence on gender symmetry.”
I think that the methods that he addresses in that paper are regularly used to suppress findings that go against dogma.
The worst other example that I can think of is rape, where the commonly used study methodology excludes the way in which women normally rape men. Then the fact that very few women penetrate men with their penis is used to conclude that rape of men by women almost never happens.
Of course, in both of these cases, it is actually gender symmetry in behavior, rather than asymmetry that violates the dogma about how men and women are socialized.
So the earlier claim is incorrect in that findings of gender differences are not suppressed across the board. Instead, it is typically the studies that either suggest that there is a biological basis to gender differences or studies that are inconsistent with a certain narrative (like that men are very strongly encultured to act in certain negative ways, while women are not).
“Things that can be disproven in under a minute” and “things that are conspiracy theories as the term is commonly used” are two different categories with surprisingly little overlap. Please don’t misappropriate terminology just because you think its emotional valence eliminates the need for you to actually argue your case. Which wouldn’t even be necessary if you really could argue your case conclusively in under a minute.
@Aapje
Invalid or corrupted file [the Straus PDF]. Does it have a title?
Weird, the link works for me. Try this (you don’t need to download, just scroll down and you should be able to read it).
brmic:
I’m specifically answering David’s comment. I’m saying that if scientists are known to routinely suppress findings whose political implications they dislike, then laymen will have a good reason for not trusting the scientific consensus in a lot of areas, because they’ll know there’s a big file drawer full of true but unpublished results that are never seen[1]. I am not claiming this is happening now in psychology, I’m saying it’s a consequence of a widespread commitment among scientists in some field to suppress politically uncomfortable results.
My guess is that there is some of this happening in most fields, for both CW type politics reasons and probably more often for smaller-scale academic/office politics. I suspect it happens more in some fields than others, and that this is one consequence of the extreme ideological imbalance in some fields. And I think we need less of it, not more.
[1] Honestly, the bigger practical problem is people fooling themselves in both directions by misusing statistical tools they don’t understand very well.
I’m sorry, no, you were making the claim, that ‘everyone knows’ this (specific thing) is happening. It’s neither happening nor does everyone ‘know’ this false thing to be true. Verify for yourself and update.
@albatross11
I think that this is part of a larger issue, where displeasing findings have far less chance to be published. Ideological biases are not the only biases that make certain findings displeasing.
A well-known bias is toward ‘significant’ findings, where studies that fail to show evidence for their hypothesis have much more trouble getting published.
Another is that fields often have hypes, where research on a certain topic and/or with a certain finding for a certain topic has a much better chance of being published.
—
Researchers also aren’t immune to incentives, which means that they are often not going to accept that their research doesn’t result in a (more prominent) publication. So they will often frame their findings in a way that is more pleasing to others (sometimes even to the extent that their findings are opposite to their conclusions), do p-hacking or other tricks to get a more desirable ‘finding,’ or simply not do some research in the first place because they anticipate that it will harm their career.
brmic:
I’ve gone back and reread my comment, and I agree it’s unclear, though I think it’s not all that hard to figure out what I’m trying to say.
Anyway, to clarify, my claim is this:
IF it becomes widely known or believed that researchers in some field routinely suppress results whose political implications they dislike, THEN outsiders will rightly start discounting results from that field, because they will correctly see that some kinds of results won’t show up from that field regardless of whether there is evidence of them.
Similarly, IF it becomes widely known or believed that scientists in general routinely suppress results whose political implications they dislike, THEN the general public will correctly have less confidence in scientific consensus.
The question assumes an unrealistically optimistic view of science that renders it the equivalent of ‘surely you would eat a baby to save all of humanity’.
In each case we assume the scientist has correct information which is unlikely to be the case in the real world. In each case we also assume the options are merely to publish or not to publish, which is not true of the real world.
To make this more concrete, each researcher if troubled by the implications of their results could (a) enter into an adversarial collaboration or (b) work out the extent and limits of the effect they found before publication, i.e. provide effect size estimates, circumstances under which the effect exists/ceases etc. or (c) actually take the limitations section of their work seriously and describe exactly and in detail which prerequisite assumptions need to be true and what their falsity would do to the results.
If we get to assume such scientists and such states of theory and such states of experimentation and analysis, then sure, publish. But under real-world circumstances, where the whole edifice is held together by duct tape and motivated by publish-or-perish, I’m leaning towards not publishing. Because if the scientist can’t be sure their statements are correct, they’re in essence supplying weak arguments for bad real-world outcomes.
brmic:
ISTM that your answer breaks down as soon as someone else wants to build on your work.
Consider stereotype threat–the idea that test scores are depressed among groups that stereotypically do badly on them when they know about the stereotype, and that effect is strengthened when they are reminded of the stereotype. Lots of people were pretty invested in that idea. You can imagine some scientists not publishing evidence that showed it was wrong. But as best I can tell, it doesn’t hold up to replication, so it’s probably not a real effect.
What harm was done there? None, unless you think the performance gap in education is something worth addressing. But if you do, surely it should upset you that substantial resources were being expended on a nonsense theory, and the idea that there were maybe scientists who could have killed that nonsense theory earlier but didn’t for fear of empowering the muggle realists should upset you still more.
As a scientist as long as you don’t go on victory laps and play shit up in the media, I’m pretty sure you can rest safely knowing the odds anyone is going to make decisions based upon your paper are very, very low.
I feel like I’m reading accounts from some sort of parallel universe whenever people claim that what scientists write in scientific journals has serious repercussions on political outcomes. The scientific literature can be cherry picked to use as a bludgeon for almost any point of view you can think of. To an excellent approximation, no one outside the field and especially with respect to politics is going to go to all the boring work of reading the literature itself to figure out if you cherry picked all your evidence.
Some scientists getting filled with the righteousness of their cause or seeing an opportunity for more funding then wandering down a dead end way past the reasonable point seems like a more common failure and worse in aggregate.
Probably more damaging problems in aggregate are chasing fads or failing to document work as well as ideal. Although solutions to encourage people to behave a little better are either nonobvious or difficult to get going. I think documentation and replication is improving in some fields over time though. I’m trying to improve over time myself.
What is your view of my first example? My impression is that the paper by Card and Krueger significantly increased the chance for increases in minimum wage laws. Prior to that, proponents had to either bite the bullet and recognize that they were trading off higher wages for some poor people against unemployment for others, or simply ignore the pretty uniform view of the relevant scientific field—Krugman, for example, back when he was an academic economist rather than a public intellectual, said the same things about minimum wage as the rest of us.
But once the paper by Card and Krueger came out, a politician could take the position that this was simply an open question in the field, and he preferred to accept one side over the other.
Obviously, even before Card and Krueger, there were minimum wage laws, but I think they were lower than they would have been absent the negative view of economists.
I suspect that it had little influence in reality. It was just cherry-picked for convenience. Tons of cities have rent control, tons of countries have tariffs, etc. Almost all economists agree those are bad, but we get those anyways.
No idea how accurate this chart is, but the inflation-adjusted minimum wage looks like it had an initial spike and a hump in the middle long before Card and Krueger, while the unadjusted minimum wage looks a lot like an exponential function carved into steps, which is vaguely what I would expect if roughly the same real minimum wage stayed popular enough to keep becoming policy.
What is your view about the effect of Mann’s hockey stick article? Did it increase political pressure to do things related to global warming or would the same amount of those things have happened anyway?
Similar thoughts.
Very little serious action is taken with respect to global warming despite lots of talk. However, corn ethanol is (was?) subsidized. Of course, corn has been subsidized for a while in the U.S.
Maybe not Mann’s paper on its own but the field altogether probably had some effect on the exact distribution of subsidies. But I’m not sure the net difference is super significant if you aren’t one of the groups who managed some regulatory capture.
There hasn’t been a big effect in the U.S. yet–the push for converting corn into alcohol is probably the biggest and worst. But I gather that a lot of European countries are bearing large costs in order to do things justified as holding down CO2 production.
I had a professor in grad school with a colleague who set out to prove that the Russian policy of collective punishment was an ineffective way of counteracting terrorism. He proved the opposite, and decided not to publish. I disagreed strongly with that decision. True knowledge should be treated as good unless proven otherwise, and knowing that an unconscionable method works is preferable to a false belief that it must not work because it’s unconscionable, if only in that it will help in the devising of more conscionable methods.
I’d say publish – unless doing so is likely to result in poor outcomes for the researcher, personally. Not everyone wants to be a martyr for science, after all.
Let’s consider what declining to publish implies. Fundamentally, I can see two (not mutually exclusive) interpretations:
1. You don’t trust your science
In other words: you don’t think your research shows what you claim it shows. If that were the case, you should’ve decided not to publish long before the matter of politics came up.
But wait! There’s more!
If you’re declining to publish a result that you justifiably (by your research) believe is true, out of “consideration for the common good”, you are presuming to be able to predict the effects of publishing your research on politics/society/etc. Now, I’m reasonably confident that such predictions aren’t based on any kind of research, let alone anything as rigorous as something that could reasonably be considered a publishable paper, but instead on the researcher’s personal hunch or commonly-held prejudices.
Declining to publish in such circumstances is much like saying: “I believe this to be true, but I guess that it would be bad if other people found out.”
In other words: you don’t trust your science.
2. You’re not doing science, but politics
Researchers are human, so it’s understandable that they’ll have human concerns. Politics happens to be a pretty powerful concern.
The contention that humans are rationalizing, rather than rational, comes up around these parts often enough. “The science is settled” is a powerful rationalization in this day and age – mostly because “God says so” just doesn’t have the grassroots support it used to. Naturally, “unsettling” settled science is exactly the wrong thing politicians (here: anyone interested in doing politics) want – and exactly the thing scientists should be doing.
If you aren’t rocking a boat that needs rocking (meaning: you aren’t publishing results that show something we believe true may be false) because you’re worried about the political issues, you’re not doing science, but politics.
Faza:
I don’t think your conclusion at the end of 1 is right. Consider the case where I know how to build a world-destroying bomb in your garage with an hour’s work and $100 worth of supplies. If I refrain from publishing that result, that doesn’t suggest a lack of confidence in my result or my field, it suggests a lack of confidence in the sanity and decency of the worst person on Earth who can come up with $100 and a garage.
A way to help you decide is to imagine you’re playing a Newcomb-like game: the way you go is the way many others will go as well, simply because you’re similar enough.
If you publish, you have a world where a lot of dangerous information (at least to this particular level of danger, and likely a bit more) is public.
If you don’t, you have a world where a lot of dangerous information (at least to this particular level of danger, and likely a bit less) is suppressed.
Which world do you want to live in?
Note: neither world will have “make anthrax in your kitchen” recipes on the internet, or at least you can’t influence that. You’re deciding on roughly the level of danger you’re dealing with.
Thanks to everyone for comments.
My view that you should publish is in part based on the idea of the division of labor. David Card is an expert on labor economics, not on lots of other things relevant to whether minimum wages are a good thing. Similarly for the other two cases. Better to contribute accurate information about the part of the problem you are expert on and leave it to others to use that information and other information to decide what to do.
For my examples, I thought that in the first case publishing did net damage, and that in the final case, and probably the second, choosing not to publish did net damage. But that depends on my particular view of those issues, and other reasonable people might see them differently. In all cases, though, publishing at least improves the information other people have to work with, so I think the right general rule is to publish. And for my set of examples, in my view, everyone publishing is better than everyone not publishing.
There might, of course, be exceptions in sufficiently extreme cases–research demonstrating a way in which anyone who wants to can wipe out the human race with fifteen dollars worth of ingredients and half an hour mixing them in his basement.
So, here’s an interesting wrinkle on this question: how does this relate to what you decide to work on?
I don’t like the idea of scientists suppressing results of research because they don’t like the political implications. But I don’t object at all to scientists deciding not to work in areas where they think their work will make the world a worse place. There’s stuff in my field I won’t work on (key escrow and related stuff, say) because I don’t want to help bad things along in the world. I’m not sure this is entirely internally consistent….
Another wrinkle that’s interesting is *other people* trying to enforce suppression of results or not working in certain areas. We’ve discussed some of that before.
It occurs to me that there may be an important difference between working in an area in the sense of figuring out how to do things, essentially engineering, and in the sense of discovering what is true. I was thinking about the second category, but a lot of real world choices involve the first–should you work on developing a better atomic bomb, or biological warfare agent, or … .
Why does anyone buy negative earning bonds?
I get it that cash also yields negative if there is inflation, and that bonds are safer than stocks, but why bother exchanging a liquid negative yielding asset (cash) for a slightly less liquid one?
Personally, if I can’t access the stock market and interest rates become negative, I would start holding cash under the mattress.
https://www.ft.com/content/312f0a8c-0094-11e6-ac98-3c15a1aa2e62
That has a negative return as well, if you price in the chance of theft or loss due to fire.
The negative return can easily be smaller than the negative return of a negative earning bond.
So the answer seems to be that negative rate bonds are bought because insurance companies and pension funds are forced to buy them by dumb rules made back in the days when bonds were a highly liquid, non-money-losing investment.
And central banks don’t care about profit.
Well, not necessarily dumb rules. It’s obvious why we as a society want insurance companies to have very safe portfolios, but it’s not necessarily in the interest of the insurance company’s managers or stockholders to stick to very safe boring investments.
But why don’t they just keep it in cash? I get that they need liquidity, but why get negative interest bonds, unless there are rules that force you to?
You lose value with inflation, but with negative interest bonds you lose inflation+interest.
By cash do you mean literal stacks of pieces of paper? That’s negative yield too, as you have to pay to count, store, secure, and transport the stacks of bills. You probably also have to have a bunch of accounting controls to convince the tax authorities you’re not using your giant pile of cash to launder drug money or something.
I mean a bank account.
And while banks do charge fees for holding cash, brokers also charge fees for holding treasuries.
Right, the bank will charge you, so you’re effectively getting negative interest rates from them. Better to get them directly from the government without adding an extra layer of counterparty risk.
Also, institutional cash balances are not covered by deposit insurance (e.g. in the US the limit is $250K in most cases). So if you keep your cash at a bank, you are at risk if the bank defaults – and very few commercial banks have an AAA rating (most aim for AA or even A+ at best, because it’s not efficient for them to hold enough capital to earn an AAA rating), whereas plenty of corporate bonds do. So a negative-yielding bond may still be better when pricing in counterparty risk.
“If you put your money in a mattress, you have a >1% chance per year of losing the whole stash; an AAA bond at -1%/year interest is a better investment” is neither dumb nor a rule. Neither is “The cost of a vault and security system capable of securing $X against the sort of thieves a stash of $X cash money will attract is typically $0.05X; the -1%/year bond is a better investment if you expect the market to recover in less than five years”.
In some cases these objectively wise behaviors are reinforced by official rules, and some of those rules may be dumb. The underlying behavior is still wise in many cases.
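To make that concrete, here is a quick back-of-the-envelope sketch. The loss probabilities and yield are invented for illustration, not real figures:

```python
# Expected value of a stash under two storage options, using invented numbers:
# a mattress with some annual chance of losing everything vs. an AAA bond at -1%/yr.
def expected_value(years, start, annual_return, annual_loss_prob):
    """Expected value after `years`, assuming a fixed annual return and an
    independent per-year probability of total loss."""
    survival = (1 - annual_loss_prob) ** years
    return start * (1 + annual_return) ** years * survival

stash = 1_000_000
for years in (1, 5, 10):
    mattress = expected_value(years, stash, 0.00, 0.015)  # assume ~1.5%/yr theft/fire risk
    bond = expected_value(years, stash, -0.01, 0.0)       # assume no default risk
    print(f"{years:>2} yr: mattress ${mattress:,.0f} vs bond ${bond:,.0f}")
```

With anything like those numbers the negative-yielding bond comes out ahead, and the gap widens the longer you have to hold.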
The people that’re buying negative interest bonds don’t have a mattress that’s big enough.
That’s a flippant answer, but it’s basically the correct one. At a certain level, having a giant pile of cash lying around becomes a liability. If it’s physical cash, or gold or jewels or something, then it can get stolen or destroyed, and preventing that costs money. If it’s an account in a bank somewhere, then you’re exposed to that bank’s fortunes — if it’s a relatively small deposit, then you’re covered by FDIC or foreign equivalents, but in the US that only goes up to $250,000 (per account, with some awkward edge cases). Either way you’re exposed to inflation. So if you really need a hedge, and you don’t expect to be doing anything with the money for a while, and especially if it’s an awkwardly large amount of money, then it can make sense to park it in instruments with the full faith and credit of the government of wherever behind them, even if they’re predictably going to lose you (a small amount of) money.
OK, the point about the FDIC makes a lot of sense.
But owning bonds also costs money. Unless they have them in paper or something, they still need a brokerage to keep them, and brokerages usually charge money for managing them. I have heard that most brokerages need to keep their clients’ assets separate from their own in case of bankruptcy. Does that mean that assets held in a brokerage account are safe from the broker’s failure (even if you will need to spend some time in litigation)?
I believe so. It’s not something I’ve ever needed to look into in detail, though.
In the worst case, it costs a lot less to buy a safe that can securely store a few dozen bond certificates than a vault that can hold a few million dollars in cash.
Does that mean that assets held in a brokerage account are safe from the broker’s failure
Not a brokerage lawyer, or even a broker, but, in general, brokerages hold all their clients’ money in separate accounts from their own. When I have $2000 in the bank, I have a $2000 claim against the assets of the bank, where everything is all mixed together, and it might take years or decades to wind down all the various positions the bank holds to ultimately clear the books. When I have 2000 shares of GOOG at a brokerage, the brokerage holds those 2000 shares of GOOG, and they aren’t part of any bankruptcy of the brokerage.
US bonds are even easier to manage. Go to TreasuryDirect.gov and USG will manage them for you.
For (most) bonds, the common method is to hold them in “street name” at the brokerage. You ask to buy twenty of XYZcorp’s latest $500 bond issue and send the broker a check for $10,020, some other customer wants to buy thirty, a third customer wants fifty, so the broker pools all your money and buys a hundred bonds in its own name. It then keeps those bonds in its own name, but also keeps a ledger of who bought how many. If you decide to sell or redeem the bonds, the brokerage picks twenty random bonds, sells/redeems those, sends you the money, and updates the ledger accordingly.
But if you like, and for an extra fee, you can have them put your name on twenty specific bonds and even send you twenty physical pieces of paper to put in your safe. Also, it doesn’t matter if your safe is robbed – what really matters is that XYZcorp’s ledger lists either “[your name], twenty bonds of such-and-such issue”, or “[brokerage name], 100 bonds”. If you sheepishly explain that you lost the paper, then for another fee and some annoyance they’ll send you replacements. If the guy who robbed your safe shows up and says “I have these bonds, gimme money”, they’ll respond that you or your broker were supposed to notify them of the transaction and he should sit in that chair while they call you, some lawyers, and maybe a policeman to sort all this out.
So long as XYZcorp’s ledger and (if you’re using the “street name” default) the brokerage’s ledger are intact and accurate, you’ll get either full value on the bonds or a seat ahead of all of XYZcorp’s stockholders when the company’s assets are liquidated in bankruptcy. And those ledgers aren’t physical books, but redundant databases with probably one copy in an office park in Switzerland and another in a former missile silo in Montana.
If XYZcorp is AAA or AA+ rated, this is going to be very safe, and using a reputable brokerage adds very little risk. The transaction costs are small, so using a broker for a simple buy-hold-sell of a known bond will have very low fees.
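If it helps, here is a toy sketch of the “street name” bookkeeping described above. It is purely illustrative; no real broker’s system looks like this:

```python
# Toy model of "street name" holding: the broker keeps one pooled position per
# bond issue in its own name, plus a ledger of which customer owns how many.
class Brokerage:
    def __init__(self):
        self.pooled_positions = {}   # issue -> total bonds held in the broker's name
        self.customer_ledger = {}    # (customer, issue) -> bonds owed to that customer

    def buy(self, customer, issue, qty):
        self.pooled_positions[issue] = self.pooled_positions.get(issue, 0) + qty
        key = (customer, issue)
        self.customer_ledger[key] = self.customer_ledger.get(key, 0) + qty

    def sell(self, customer, issue, qty):
        key = (customer, issue)
        if self.customer_ledger.get(key, 0) < qty:
            raise ValueError("customer does not own that many bonds")
        self.customer_ledger[key] -= qty         # update the customer ledger...
        self.pooled_positions[issue] -= qty      # ...and redeem/sell from the pool

broker = Brokerage()
broker.buy("you", "XYZcorp 2030", 20)
broker.buy("someone else", "XYZcorp 2030", 30)
broker.sell("you", "XYZcorp 2030", 20)   # you get paid; both ledgers are updated
```

The safe full of paper certificates is just a more expensive way of getting the same entry in XYZcorp’s books.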
If I do a search and find my terms in the Google preview of a wordpress article, but then when I open the actual link I find out the owner deleted the blog, and there’s no archive of it either, then where is the preview being stored and is the full article still in existence somewhere that is possible to access?
The preview is stored at Google.
Google used to give access to their archived copy of the article, but the copyright-maximalists, courts, takedowners, and related thugs forced them to stop.
There may still be a copy at archive.org.
Sometimes (used to be always) on the Google search results there will be a menu indicator to the right of the URL, one of whose options will be “cached”. You can use that option—when it is available—to view the page as Google saw it (with an option to view text-only as well if needed).
In my experience this menu seems to be more a “Google deciding to give you this feature today” thing than something available for certain links in particular, but I may be wrong on that front.
Ah, that did it. Thanks.
Note that Google’s caches vanish eventually. If you want to save that blog post permanently, save the cache using archive.fo.
Yep. It’s a useful site.
If you think college admissions are messed up, here’s an article on Manhattan preschool admissions. Obviously you’ve got to get your little bundles of joy into the 92nd Street Y’s nursery school, because that gets them the best shot at attending Dalton, and from there, Harvard. Key quote:
This is, of course, completely bonkers. I’m quite critical of standardized testing and college admissions, but at least with transcripts and SAT/ACT scores (or SHSAT/ISEE for younger kids) there’s something to go on. How exactly are these “elite” preschools deciding which 1-year-olds make the cut? The article mentions there’s a lottery to decide who even can submit an application, which I guess makes sense, and demand outstripping supply means tuition is ridiculously high (because we’re a meritocracy!), but the whole thing strikes me as nonsensical.
Of course it’s really all about status games for the parents.
I thought the evidence suggests preschool doesn’t do anything–your kid’s either smart or not.
EDIT: Reading further, it’s credentialism for toddlers. Good lord…
As the friendly neighborhood g denialist, I’d phrase it as your kid’s either precocious or not. But it’s the same deal – a fancypants preschool isn’t going to make the difference whether or not they get into Hunter, and anyway Hunter is explicitly a “gifted and talented” school. I took an unscientific spot-check of some elite private school websites, and only one requires standardized tests for kindergarten applications.
…and now I’m wondering how they decide who gets into kindergarten at these schools. Is it just pure credentialism then?
How can you even have credentialism at this level?
I agree it’s some sort of status game, but I’m still mystified as to the mechanics of it.
The aristocratic primary school will have a strong preference for children from the aristocratic kindergarten. The aristocratic secondary school will have a strong preference for children from the aristocratic primary school. The local Ivy league college will have a strong preference for children from the aristocratic secondary school.
Aristocratic companies have a strong preference for children from the aristocratic college.
So key is to get into this track and then not to get kicked out of it.
If that really worked, how does that fit with various regressions that find parental income is only a weak predictor of various forms of success? IIRC IQ does better for a lot of them.
I suppose if there was a rarefied strata at the top that operated this way, but everyone else had to compete you could imagine that outcome. But I kind of doubt that works for long? There’s an awful lot of regression to the mean. The Kennedys and the Bushes may be political dynasties. Rockefellers are still very rich (but much less so than their ancestor). But how many American dynasties of any form have lasted more than a generation or two?
Off the top of my head, income isn’t as good a predictor because:
Not all high-income people send their kids to these types of schools. Some are perfectly happy to have theirs go through the public school system, or a cheaper less “elite” private alternative.
Some lower-income people will make sending their kid to such a school a high priority, and devote significant resources to getting their kids in that track. There may also be scholarships/discounts available for some lower-income people, or a private benefactor, etc.
If we take those two points for granted (that there are low-status people attending high-status schools and vice versa) a weak correlation does not seem surprising.
Chetty finds otherwise: a very strong, almost-linear relationship between parental income and child income at age 30.
(but Chetty has privileged access to the data and an agenda, which makes this suspect)
@The Nybbler
"Chetty finds otherwise, ..."
Those charts were fascinating, thank you!
@Nybbler
There’s a hell of a lot of regression to the mean in that. And we can’t see the scatter because the graph is binned and meaned.
Even if I fully trusted him, I think the graph is still compatible with a much different story.
Have these people ever met babies? A one year old may be beginning to display personality traits, but to say they have “interests” is reaaaally stretching it.
Interests: nursing, sleeping, puking, Sartre.
Bad taste. He’ll never get into a good preschool unless he chews on some better philosopher.
“Once a little boy sent me a charming card with a little drawing on it. I loved it. I answer all my children’s letters — sometimes very hastily — but this one I lingered over. I sent him a card and I drew a picture of a Wild Thing on it. I wrote, “Dear Jim: I loved your card.” Then I got a letter back from his mother and she said, “Jim loved your card so much he ate it.” That to me was one of the highest compliments I’ve ever received. He didn’t care that it was an original Maurice Sendak drawing or anything. He saw it, he loved it, he ate it.”
http://www.openculture.com/2015/09/maurice-sendak-sent-beautifully-illustrated-letters-to-fans-so-beautiful-a-kid-ate-one.html
They are filtering the parents, not the kids. It goes something like this.
1. The richest and most powerful get their kids in easily.
2. Those people tend to be demanding.
3. The staff don’t want to have to deal with every parent being that demanding.
4. Create a system of hoops where all the parents of ‘marginal’ kids you accept have to do ridiculous things (and pay ridiculous prices) to get in.
5. Those people are now more compliant, and less likely to risk their spot with complaints.
When my son was eight months old we dropped him off one day at a “by-the-hour” daycare type place (essentially a babysitting company) so we could go take in dinner and a movie. The form had lines for “likes and dislikes” so I put “likes: bacon. Dislikes: racism.” How am I supposed to know? He doesn’t even move yet! But he could grow up to be a bacon-hating racist, and boy won’t I look foolish!
It’s just people trying to replicate an aristocracy while also believing that an aristocracy is wrong. So they have to add inefficiency until they have plausible deniability (not in the least to their own conscience).
I don’t think this is necessarily very complicated: on the one end, tuition buys lots of stuff: facilities, qualified staff, reserve staff to cover sickness and leaves of absence, rotating shifts that extend opening hours, materials. On the other end, the application process screens for (a) parents who are willing to make an effort and (b) toddlers who are more likely to benefit from the available resources. This is certainly a poor diagnostic instrument, but probably better than a lottery.
The consultant is a curiosity, but then again (a) so are lots of ‘life coach’ type occupations and (b) to the extent she keeps track of forms and deadlines it’s presumably just convenient for some people to outsource some of that work.
I’ll steelman this.
Daycare is expensive. The article says that some of these “elite” preschools cost $20K/year, but that doesn’t sound crazy to me; it’s about 1/3 more than a comparable daycare where I live, with a much lower COL, and it’s a bit more than half what it would cost to hire a nanny at minimum wage. (And legal nannies don’t work for minimum wage where I live, to say nothing of NYC). I’m not surprised some of the specific schools named cost more than that, 20K/year for full time childcare in Manhattan sounds like a fantastic deal.
On top of that, it’s competitive. Not in the “we only take the best babies” sense, just in the regular everyday sense that there are a lot of options and everyone wants the best ones. If you go look at 10 daycares, a couple of them will seem sketchy, a couple will be too far from your commute, a couple will have inconvenient hours, and a couple will seem overpriced, leaving you with one or two ideal options, and other people will be following the same calculus, and they only have room for so many kids. So the daycares can be picky – they can raise their prices, they can make the parents jump through hoops to discourage the un-committed, and they can reduce hours. All of these just make the best options (the ones that haven’t yet done those things) more desirable.
Some parents have more money than good daycare options, so they say, how can I spend more to get more? Capitalism to the rescue: daycares open to fill the void. But starting a “nice” daycare is hard. You need to find Manhattan real estate in which to put it that’s close to where parents want the daycare to be. You have to find US citizens, ideally with some credentials in early childhood development, and pay them enough to keep someone from hiring them as a nanny. You have to jump through a bunch of regulatory hoops. And even if the number of daycares doubles, parents will still go look at ten options, pick their best one or two, and try to get in to those.
And to make matters worse, there’s no accurate way to rate a daycare by quality. Does the ideal child-to-teacher ratio need to be under 5? 3? 1? Who knows? Is the Montessori approach better than Waldorf? No idea. So parents have to rely on intuition and gossip and superstition – “Did you hear about Little Feets? They play Vivaldi during naptimes!” – and that only exacerbates the problem by artificially narrowing the field of desirable options.
Conclusion: that NYC has daycares selective enough and expensive enough to prompt an article like this is totally unsurprising, and it does not indicate weird or irrational behavior on the part of Manhattan parents.
At some preschools the price is significantly higher for something that isn’t full-day care.
But what I was really getting at is the notion that these preschools are “feeders” for top private K-12 schools, which in turn are feeders for top universities, so you have to start when they’re babies to have a chance. First of all, is that even true? And if it is, it’s unnerving that something so early in life, before the child has any real consciousness or control over anything, and so dependent on random chance has such a big impact later on.
If it actually doesn’t matter later on, then it’s just status games for the brunch crowd.
Yes. Much like on America’s Next Top Model, losers are required to immediately pack their things and move to the New Jersey prestige districts, where they take their chances with the public schools.
Source: live next to one such prestige district, see dejected NYers riding by in real estate agents’ cars all the time.
That doesn’t necessarily mean it’s true, it means all those people think it’s true… which might make it true.
I think something like this works when you have a private school that also runs a preschool–when the preschool kids are looking to start kindergarten, they’re at the top of the list. For magnet schools/selective schools, they usually run on some combination of test, parental hoop-jumping, and (probably) connections.
As best I can tell, cherry-picking is *the* killer app of educational reforms. On one side, maybe you can improve instructional techniques in some tricky and demanding way that slightly boosts your kids’ learning and retention of the class material, and maybe instead of being at the 50th %ile your kids end up averaging in the 55th %ile.
On the other side, you set up your application process so only parents who really care about their kids’ education manage to get their kids into the system, and then you effectively give the kids an IQ test and only take the kids in the top 5% of the distribution. Now your kids average somewhere around the 95th %ile on the standardized tests, and everyone talks about what a great school you are and everyone wants to get their kids into your school.
Selection is so powerful in terms of determining outcomes that it swamps any other kind of educational reform you can do. It’s like trying to discover the laws of physics in the presence of a bunch of Uri Geller type illusionists who keep bending spoons and making coins disappear.
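A quick simulation, with made-up numbers, of how much selection swamps instruction:

```python
import random

random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]  # hypothetical "ability" scores

def percentile(score, pop):
    return 100 * sum(1 for x in pop if x < score) / len(pop)

# School A: a random cross-section of kids plus an (optimistic) instructional
# boost worth about 2 points.
school_a = [x + 2 for x in random.sample(population, 500)]

# School B: no instructional boost at all, but it only admits from the top 5%.
cutoff = sorted(population)[int(0.95 * len(population))]
school_b = [x for x in population if x >= cutoff][:500]

for name, school in (("random intake + better teaching", school_a),
                     ("selective intake, ordinary teaching", school_b)):
    avg = sum(school) / len(school)
    print(f"{name}: average score sits around the {percentile(avg, population):.0f}th %ile")
```

The selective school lands in the high 90s in percentile terms without teaching anyone anything, while the better-taught random intake moves a few points; the selection effect dwarfs the instructional one.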
Preschools are typically not “full-time” and therefore are not entirely substitutes for a nanny.
People are so funny. “The best daycare”. If your child doesn’t get stabbed or ingest poison, there’s not much more they can do. If you believe the educational quality of preschool matters, you deserve to be suckered out of tens of thousands of dollars.
Spoken like someone who’s never chosen a daycare.
Why do you think they matter? There have been a multitude of studies that can’t discern any permanent increase in test scores comparing those who went vs. those who didn’t. Is there some kind of magical property that finger painting provides that isn’t showing up?
I suspect that the actual requirements on daycare are pretty minimal (clean, enough staff, staff not abusive, safe environment). But I’ve also chosen daycares and preschools as a parent, and it’s an emotionally fraught decision–you *know* you’re leaving your kid helpless in the clutches of some strangers so you can go to work, and you *know* you don’t have enough information to really know whether this is a good choice or not.
OTOH, my guess is that the super-competitive preschool thing is a status game played by rich people with a high dollar/sense ratio, perhaps with some more rational motives involving networking with other high-power parents.
A family member of mine lives in NYC and got his then-preschool-aged child into a prestigious preschool. I believe the process involved an application, then maybe something else, and then an interview with the family member and child together. The way I heard it, the interview was the decisive step.
I don’t know what exactly the school admins were looking for in the interview — it may have been some combination of the child’s personality and the family member’s personality.
Preschool can be excellent networking for the parents. For those purposes, the high cost of the preschool is an advantage as it keeps the riffraff out – if you can’t drop $35K/year on preschool then you’re not worth getting to know anyway.
This isn’t directly about the Marxism, but there was an argument in the guide to where communists are coming from that seemed very strange to me: when it says “You are not intrinsically smarter than a medieval scholar arguing that the great chain of being validates the divine right of kings.” It seems to be saying that what’s correct is so hard to check that what people say reflects more what’s in their interests than anything objective.
This seems completely wrong. We have two powerful methods for objectively testing what’s correct. No matter how strong your interests are, if the data doesn’t show some effect or if it doesn’t exist in every simple enough consistent math model you come up with, it’s really hard to keep on arguing for it.
I’ve been bothered by this kind of epistemological nihilism a lot when talking to people not familiar with subjects whose arguments are grounded in this way, either in data or in math. When you’re arguing about something like literature, it might be OK to say that you can’t be sure of anything, so you should pay a lot of attention to who’s talking and what their interests are, but this should really not apply to most fields. We are a lot smarter than medieval scholars when talking about physics, so why can’t we also be smarter when talking about economics?
Am I being fair here—is economics actually closer to the physics side of things than the literature side? How important actually is this “everything you hear is what supports the powerful” thing to the entire argument?
Having studied both, I’d say economics (outside of 101) is closer to politics than physics. Whenever economists try to make out that economics is a hard subject I immediately assume they are full of it. Economists ultimately answer to politicians, and are usually kept on payrolls to provide justification for an already decided course of action. Supply and demand, great stuff. Try to put some numbers to it and you start making assumptions about the market you’re talking about, and whether people will find substitute goods or not, and whether the item in question is a luxury item or a staple, or what have you. The theory makes sense, but the assumptions are where politics creep in.
In physics you have the luxury that many of the things you are using can be held constant while you change the part you want to measure. You just can’t do this with economics.
An example of this is that many people outside of the finance industry think the Fed is very scientific when setting rates and that they analyse all of this data to come to the “right” decision. They do analyse tons of data, but ultimately they are appointed and practically beholden to those who appointed them. It’s like a government-sanctioned confidence trick where your job is to keep markets from panicking most of the time.
Was bailing out the banks the right decision? That’s a political debate more than a maths debate. The banks would have gone under, people would have lost their life savings, etc., but it would have set an example that bad behaviour is punished instead of bailed out. What matters more to you? Well, that depends on your values. People will say “throw the bankers in jail!” and others will say that if we hadn’t bailed them out the economy would never have recovered. Well, we can’t run the opposite scenario to see, so we don’t know what would have happened; maybe it would mean we have more responsible institutions now, but at what cost?
Hopefully you see where I’m coming from.
I guess economics is never going to be in anything close to as good a state as physics, but surely there have to be things from the subject where we can do way better than a medieval scholar? Maybe I’m cheating by moving the bar too low, but I can think of a few examples (sorry for the 101-ness again, though):
First, there’s this story I keep hearing about how beliefs about minimum wages changed. Powerful people definitely wanted low minimum wages. However, once economists figured out enough statistical techniques to mimic controlled experiments and collect good data about their impact, it turned out that the claimed negative effects weren’t really so strong. Economists then stopped arguing against modest minimum wage increases and these gained a lot more political support.
Second, even without data, math models can do a lot to check the consistency of arguments. Playing around with supply and demand models quickly shows that when you tune some parameters right, increasing the price of a necessity can paradoxically make people buy more of it while taking an enormous hit to welfare. So if you hear an argument like “making rice more expensive isn’t too bad since people can buy other food instead,” you should know that the person making that argument needs to do a little more work.
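To make the rice example concrete, here is a minimal toy model: a subsistence calorie floor plus a preferred, more expensive food, with all numbers invented. Raising the price of the staple increases the quantity bought:

```python
# Toy subsistence model: the consumer must hit a calorie floor on a fixed budget,
# prefers meat, and fills the remaining calories with rice. Invented numbers.
def rice_demand(budget, p_rice, p_meat, cal_rice=100, cal_meat=100, cal_needed=2000):
    # Buy as much meat as possible subject to still being able to afford
    # enough rice to reach the calorie floor.
    for meat in range(int(budget // p_meat), -1, -1):
        rice = (budget - meat * p_meat) / p_rice
        if rice * cal_rice + meat * cal_meat >= cal_needed:
            return rice
    return None  # the budget can't meet the calorie floor at any mix

for p_rice in (1.0, 1.5, 2.0):
    print(f"rice price {p_rice}: rice bought = {rice_demand(60, p_rice, 5):.1f} units")
```

As the staple gets pricier the consumer drops meat and buys more rice while being unambiguously worse off, which is exactly the case the glib “they can just substitute” argument misses.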
Again, these are both econ 101 examples, so maybe anything relevant to the original topic is in a worse state. However, I hope that when professional economists are doing something like arguing against communism (or more generally giving any policy prescriptions), they are taking into account which parts of their field are more or less certain.
I do not believe that is the case. It was true for a long time that economic theory implied that the minimum wage reduced employment for low skilled workers–and it still does. The theory does not tell you how strong the effect is, since that depends on the elasticity of the demand for such labor.
Card and Krueger looked at the effect of an eighty cent increase in the minimum wage in one state, comparing what happened to fast food employment there and in an adjacent state with no increase, and found no effect. That’s evidence against the theoretical prediction, but very weak evidence. It got a lot of attention because there were a lot of people who didn’t want to believe what conventional economic theory implied.
What “statistical techniques to mimic controlled experiments” were you thinking of other than the Card and Krueger piece? Are there studies giving good evidence of the relevant elasticity?
There is a 2006 interview with David Card, which contains the following:
The Card and Krueger study was the one I had in mind (as explained in a class I took, I definitely have not read the paper). I’m not qualified to judge the literature so I also checked IGM (is this the right place to go to?) and found the following that seemed to relate to modest minimum wage increases. The comments on the answers seem to be consistent with opinions being based on things like Card and Krueger and mostly in favor of the policy (question B) even though stereotypically powerful people would be against it. I could be misinterpreting terribly though.
I think everyone agrees that there’s a point at which increasing the minimum wage would make it more difficult for people to get jobs, people just disagree on where that point is.
Like, raising it to $60 an hour would mean employers would have to quickly find ways to drastically reduce their number of employees if they wanted to stay in business (I imagine many would automate the tasks previously done by humans). But raising it to $15 may or may not have a noticeable effect on hiring rates. Or whether it has an effect might depend on a lot of other conditions.
I’ve heard arguments that raising the minimum wage is ultimately good for companies because they become choosier about who they hire, so they hire higher quality, better qualified people, which helps their business. Which might be true. But in a world without a UBI that’s obviously not good for low-skilled workers.
That assumes that the people running the business don’t know their own interest, since without a minimum wage they could have chosen to be choosier and pay a higher wage. Not impossible, but not the way to bet.
Rather like assuming that some arbitrary imposition on you, say the requirement that you wake up at seven every morning, will benefit you. It could, but the odds are against it.
Pet peeve: If you just talk about “Economics” without even making a distinction between microeconomics and macroeconomics, it’s almost impossible to have any real discussion.
My brief stance is…
1. Microeconomics is mostly hard science, and has given us many truly deep and valuable insights.
2. Macroeconomics may be impossible as a science since the only object of study – The Economy – is sentient, aware of macroeconomics, and adapts to change as new findings are published. On top of that it is deeply politicised. Much of the critique of “economics” assumes this is the only economics.
Your second point reminds me of Scott’s post about anti-inductive systems. I wonder if the study of any anti-inductive system (which arguably includes a lot of human interaction) can truly be called science. The scientific method requires inductive reasoning, so it seems like attempting to apply it to anti-inductive systems will inevitably fail.
This is a really interesting insight! It seems it should be possible to study an anti-inductive system, though, as long as you don’t actually interact with it? (Huzzah for unapplied truths!)
What other anti-inductive systems are there? People? What else?
Many (though not all) metagames. It’s usually possible to attack the “best” strategy, which means it will often cease to be the best.
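A cartoon version of that dynamic (pure illustration, nothing more):

```python
# Toy metagame: whatever strategy currently dominates gets countered, so no
# strategy stays "best" for long (rock-paper-scissors style cycling).
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

meta = "rock"  # the strategy everyone currently believes is best
for week in range(1, 7):
    counter = BEATS[meta]
    print(f"week {week}: the field plays {meta}, so {counter} starts winning")
    meta = counter  # the counter becomes the new "best" strategy
```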
You can’t expect economics to tell you how to value your tradeoffs anymore than you can expect physics or engineering to do so. What you can expect from economics, at best, is that it lets you know what your policy tradeoffs are. Economists aren’t going to be able to tell you whether to bail out your banks, but they should be able to inform you somewhat on what tradeoffs you’re accepting if you do/don’t bail out your banks.
It doesn’t matter. Same principle applies to physics also. No relativism or nihilism is needed here, just a simple observation that you are (necessarily) stuck in a particular paradigm.
Economics is concerned with society, and it’s impossible to experimentally derive a different paradigm without irreversibly altering said society. This means we should be careful, yes, but the current paradigm is full of holes and sooner or later a change is going to be necessary. That the current paradigm supports the currently powerful gives us an important clue about the direction this change should take.
This statement seems too strong, to me–it implies that we can know nothing about macroeconomics. I think a more defensible claim is that we should not put too much faith in the pronouncements of macroeconomic models.
I mean, someone confidently saying what the economy will do next year is full of shit (and probably knows it), regardless of their background. But we can make some pretty solid statements about macroeconomics. For example, hyperinflation is really, really bad and we should avoid it at just about all costs–that’s a macroeconomics claim, but I think it’s a very good guide to what policies we should and shouldn’t pursue. Deciding to make us all rich by running the printing presses full tilt and covering the world with $100 bills isn’t going to work out very well.
I’m not sure it is.
“Macroeconomics” sounds as though it is about big things, but the world wheat market is a problem in price theory, aka microeconomics. I think the natural division is not big vs small, it’s disequilibrium economics vs equilibrium economics.
Hyperinflation can be described in an equilibrium framework, along with many of its effects—it’s just a rapid decrease in the market value of money. Some other effects probably require an analysis of disequilibrium—which, unfortunately, we understand much less well.
I reported this comment by mistake.
I feel like you’re taking this argument in the wrong direction. Look at it again:
The argument ISN’T that we don’t have the capacity to discern truths that wouldn’t have been possible for the medieval scholar to comprehend. That’s clearly false, as you mention we’ve got a thousand years of advancements behind us that the scholar wouldn’t have had access to. The argument is instead making two points:
1. Your fundamental capacities and that scholar’s are probably pretty equivalent. Given the same education and access to information, you’d likely arrive at similar conclusions, or at least be operating within the same framework.
2. Relying here on the sentence immediately following the one you mentioned: history shines a harsh light on every time period. If at every point in the past humanity engaged in practices that we today find to be at best ridiculous and at worst horrendous, why would we think our current time period is any different? Just as we now think the scholar’s argument was inane, some core tenets of our current paradigm will likely be revealed to be crazy.
For technical/scientific progress, it’s clear that we’re really seeing progress. I mean, computers and rockets and antibiotics and cars work, we can build them, and nobody in the middle ages could.
For philosophical/religious/moral progress, it’s not clear that we’re really seeing progress or even what progress means. The arc of history bends toward our present moral consensus, regardless of what that consensus is or whether it is in any sense better than any older moral consensus. In some alternate universe where we still have slavery but stopped eating meat for moral reasons, the arc of history bends in that direction, too. We can see that meat eating became less and less acceptable before it was finally banned, whereas there were and are some weirdo activist groups campaigning against us taking up the white man’s burden and benevolently managing the lives of the lesser races in ways that just happens to get our cotton picked cheaply, but of course those silly activists never managed to get anywhere. And in some other timeline, the True Faith of Islam has taken over the whole world, and we can see the arc of history bending toward our current enlightened understandings of the Prophet’s teachings (PBUH) and the proper role of sharia law in society.
May I propose the name: “Anthropic Moral Principle”?
This is either trivially true and meaningless or false, depending on how you set the parameters. There are far more people now, and far more people who can afford to pursue intellectual work, and it’s not about coming up with an idea, writing it down, and showing it around. It’s about whether your idea can stand up against the other ideas that are out there being tried; the sheer difference in volume is a qualitative difference in both the quality of the ideas being generated and the quality of the issues that need to be addressed.
Yeah. Maybe I am focusing too much on the divine right of kings example, but the next sentence is:
“If those ideas favour one set of interests over another, the odds are that they will be powerful interests.” That section occurs after discussions about how “the dominant ideas will disproportionately reflect the interests of the powerful.”
IMO the author didn’t choose a medieval economic theory like mercantilism that has been discredited. Instead it was the political system. Regardless, he is not saying those ideas are correct because the powerful push them; he is saying they are “dominant.”
The idea, IMO, is that the author feels the divine right of kings was a post-hoc rationalization for a world where the Church and feudal landowners were very powerful and wealthy. Important stakeholders pushed those ideas to legitimize a system where they were on top.
As others have pointed out, it’s hard to argue we have moral progress or knowledge. This is a CW-free thread, but if we managed to get someone from one of the Gulf monarchies or Saudi Arabia, states where powerful, influential, and rich sectors of the society have decided a form of monarchy was best, could we really disprove their system? Is there any data point you could cite to show that the House of Saud shouldn’t control, what, a trillion-dollar fortune? And before we laugh at the Saudi citizens as the fargroup, maybe it’s worth mentioning that IIRC the King of Spain wielded a huge amount of power in the late 70s and early 80s.
Now all that seems insane, but the author of “seeing like a communist” would argue that’s because the dominant ideology being pushed by the powerful isn’t monarchism.
It’s true that there are beliefs many modern people have about society that will seem outdated and quaint in the future due to new information, so we should try not to take anything for granted even if it seems really obvious at face value. I don’t disagree with that, but I didn’t find that point very relevant to a critique of capitalism specifically, because that point could be made about literally anything.
I can just as easily imagine an Objectivist critique of altruism (or a neo-reactionary critique of democracy, or a feminist critique of patriarchy, or an MRA critique of feminism, etc. etc.) starting out with, “You are not intrinsically smarter than a medieval scholar arguing that the great chain of being validates the divine right of kings. Don’t think you can’t be duped by ideas that will one day seem laughable.”
And regardless of what they’re attached to, statements of that nature always strike me as pompous and not very useful in a philosophical debate. It gives the impression that the speaker believes that this notion–that human thought processes are flawed and tend to be influenced by time period and circumstances–is something that has never occurred to their intellectual opponents, and that once they realize that they might be wrong about some things, their minds will be totally blown.
Yes, thanks, I’ve already considered the idea that I might be wrong about some things. Believe it or not, that’s not very revolutionary.
I’m also struck by how, in this particular case, the author seems to assume that a critique of capitalism is automatically a defense of communism. And that seems to be the case for a lot of communist arguments in general. There’s a whole lot of talk about why capitalism is bad but very little talk about why communism is good or why it would work better. It’s as if those are the only two systems the person can imagine.
Maybe in the future society will evolve to a post-capitalist form and become something we can’t even imagine. Maybe automation will make work redundant, maybe we’ll all transcend our fleshy human forms and become part of an immortal internet, whatever. Such a future wouldn’t necessarily be “communist,” it would just be something other than capitalist, or the current incarnation of capitalism.
Indeed, the 2019 USA version of representative democracy + capitalism doesn’t even look much like the 1880 USA version of representative democracy + capitalism.
I’d draw another analogy: economics is rather like the art of war.
Of course it’s possible to be better or worse at war, or to gain knowledge about how to wage war. But all knowledge is relative to the material conditions under which war is waged. When those material conditions change, big chunks of the knowledge go obsolete. And we’re hard-pressed to predict in advance what’s going to change and what obsolescences it will cause.
Reading Sun Tzu’s The Art of War really gave me a feeling for this. My experience reading the book was that it freely intermingled “timeless” remarks, suggestions based on durable social conditions which no longer apply (such as advice about how to divide plunder among the troops), and specific suggestions about, for example, the daily cost in silver of fielding an army of a specified number of chariots.
For the modern reader, there’s stuff that still applies, stuff that applied for a long time but has changed (often since the introduction of the modern national army and its theory of discipline, pay, supply, etc.), and stuff that was probably out of date within the decade or century of publication.
But from Sun Tzu’s perspective, all of these were facts! He doesn’t distinguish between them because he couldn’t. How could he have predicted the myriad social, political, and technological changes? How could he have modified a sentence like “do not attack an army uphill” with a clause like “without recourse to heavy artillery or close air support”?
I think economics is like this. The modern discipline of economics describes modern society. It does so well; it could do so better. But specific results in economics have a lot less to say about other societies. General principles may be relevant, but it can be hard to deduce which principles are general and which ones are not.
So when an economist says “here are some truths about how human societies can possibly be organized”, the communist says “your conclusions are based on observations of present capitalist societies. There’s no reason to assume that all or any of them apply to future societies with different material conditions and modes of production.”
Now, putting on my personal hat for a minute, I find the communist rebuttal pretty persuasive. The trick, of course, lies in making predictions and not just antipredictions about future economies, and that’s where Marx I think goes off the rails.
I don’t remember Sun Tzu, but I recently read Clausewitz, and he is very organized. He puts the timeless generalities first and the specifics last. Part of this is that he’s writing about how Napoleon changed war, which is a big reminder that some things aren’t timeless. But I think a lot of the organization comes from analogy with other fields. Advice abstract enough to apply across fields is probably more lasting.
Except they are not. Going back to Adam Smith, economists have been interested in a wide variety of societies, including some in the distant past.
Economics starts with a set of logical arguments applicable to all human societies (and, arguably, other species as well), that suggest certain conclusions. Getting useful conclusions usually depends on some simplifying assumptions. We then tune the theory by trying to test those conclusions against real world evidence and modifying our conclusions, sometimes even revising the underlying theory, accordingly.
The result will be better tuned to the sorts of societies we know a lot about, but tuned to some degree to other societies–there are communes out there to be observed, we have historical and anthropological evidence on societies very different from ours, and there is recent, even contemporary, evidence on communist societies (in the conventional sense of “dictatorship of the proletariat” systems such as the Soviet Union).
Indeed, it seems to me that when we try to imagine what an entirely alien society might look like, economics and game theory are probably the best tools to work out what’s possible.
Does anyone have a good experience around ERP software? The company I work for has implemented it over the past year (in my role I don’t have to deal with it much, but from co-workers who do, I understand that this list of disadvantages from Wikipedia rings true) and when I’ve mentioned the roll-out while shooting the breeze with customers or suppliers, “condolences” has been a common response.
I understand that in theory it should make business processes more legible to the executives, but in practice the main effect seems to be slowing down order processing for things that never would have been an issue under the old, manual (and supposedly less-efficient) way of doing things.
The recent discussion about Inventing the Future crystallized some ideas around this for me. I’m very sceptical that complete automation is possible. Reality is fractally complex. Any map will have cases where it doesn’t fully capture the territory—but automation must always work from a map. Even a super-intelligent AI will have to work from an internal model that necessarily glosses over some external details (unless it expands to fill all of reality so that the map is the territory). In contrast, discrete actors are able to deal with edge cases (related to their normal roles, not generally) because it is possible to have a very accurate local map.
The struggles that seem to be common with ERP systems illustrate that individuals with metis for their local tasks can be more efficient than a centralized, generalized system. I’d like to assign The Fatal Conceit as remedial reading for C-suites across America.
Now imagine trying to manage a national economy or a planetary economy with an ERP/CRM system, and not just for a single business’s processes, but for every business unit everywhere all at once, and not just for business processes, but right down the household and personal level, on every household and on every person, everyone everywhere.
Even if this could work (and it can’t), and even if it could be implemented (and it can’t), and even if it could be imposed (and it can’t), why would anybody think it’s *desirable*?!
What group/person is supposed to be advocating this?
“Soviets with Computers”.
“Good” experiences? No, not really.
If I hear “increase visibility” one more time…(thankfully now I’m at a much smaller company with no desire/apparent need for ERP).
I wonder how much of the pain caused by an ERP system is the moving, rather than the using. Typically, an ERP system is adopted by a substantial company that already has its own ways of doing things. And rethinking how to do everything in a way amenable to computerization is pretty painful. Perhaps it would have been easier if the company had grown up with an ERP and at each stage of its growth, as it faced new challenges, it had preferentially adopted one of the ways of doing things that the ERP system already supported.
That is almost certainly a huge part of it, but another issue is that ERPs are a poor fit for businesses that do not have more or less “cookie cutter” transactions. In other words, never selling the same thing twice (a high degree of customization). This can be managed, but requires a much more flexible ERP (which costs more to implement/maintain and may be more prone to allowing human error at various stages).
Right, my exposure is definitely in the latter category. In the division of the company I’m in all of our projects are one-offs (custom designs being built in different locations).
No, a lot of ERPs require novel procedures to track stuff that no one had really thought about tracking before, which introduces all sorts of new complications. It also puts hard rules on certain types of transactions, which makes correcting issues a lot harder. For instance, as an accountant, I cannot journal materials to accounts; it must be performed via a material transaction, which must be done by someone with the correct user privileges, and if you are the only one who has them and I just asked you to do it for the last 3 months and you ignored me, you must come in on December 28th and perform the correct materials transactions so we can close out our books for the year.
That does kind of sound like what johan_larson was saying to me…ideally that material transaction should have been required to take place before whatever was done with the material actually happened…that is a process flow issue not an ERP issue (and having only one person with privileges to enter certain transactions is probably another issue).
Maybe it should be, but it isn’t.
So my father-in-law installed ERPs for decades at all sorts of firms. The typical firm that does not have ERP operates something like this, using a real example.
1. Company makes turnstiles and actually makes 80% of the turnstiles in the US (industry changed)
2. At the beginning of the week, decide how many turnstiles you want to make.
3. Make turnstiles using raw ingredients.
4. Finish making turnstiles at the end of the week.
5. Record total production and total consumption.
6. Boom, that’s how much a turnstile costs.
This is completely unacceptable for any high-performing business. The only visibility they have at an extremely high level is the amount of stuff they consumed in an entire week for an entire week’s worth of business. They have no ability to schedule, they have little inventory control, if they have a quality defect they cannot trace it to a shift, and they can’t tell why they are losing money or if they even ARE losing money until the end of the week when they close out reports, and that’s only if they are actually doing cycle counts correctly.
That also means that if they are developing new products, they cannot tell you how much the new product costs. Say I make a new turnstile that has 20% more material, is 6 inches shorter, and might actually take 10% less time to make. How much does this cost? You have no idea, because it is commingled with everything else. You might naively assume you can reason it out (you increased your volume by 10% and your costs increased by 15%, so this product must be more expensive than your existing portfolio), but that’s a moronic assumption, because it assumes all else is equal, when line performance can change dramatically and, again, who the fuck knows what’s going on with your cycle counts that dramatically change inventory and require you to restate prior period financials.
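To put the commingling problem in miniature (all figures below are invented, including the $25/hour labor rate): with only weekly totals you get one blended number; with per-order tracking the new product’s cost falls right out.

```python
# One week of production, recorded two ways. Figures are invented for illustration.
LABOR_RATE = 25
orders = [
    {"product": "standard turnstile", "units": 90, "materials": 4500, "labor_hours": 180},
    {"product": "new turnstile",      "units": 10, "materials":  600, "labor_hours":  18},
]

# What the no-ERP shop sees: one commingled weekly total.
total_cost = sum(o["materials"] + o["labor_hours"] * LABOR_RATE for o in orders)
total_units = sum(o["units"] for o in orders)
print(f"blended cost: ${total_cost / total_units:.2f} per turnstile")

# What transaction-level tracking shows: cost by product.
for o in orders:
    unit_cost = (o["materials"] + o["labor_hours"] * LABOR_RATE) / o["units"]
    print(f"{o['product']}: ${unit_cost:.2f} per unit")
```

The blended figure hides the fact that the new product costs 5% more per unit, and a week where the product mix shifts moves the blended number without anything about either product actually changing.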
Unfortunately, a lot of managers don’t have an idea of what’s going on, certainly not enough to really gauge why or how they are losing money, because you can’t just intuit detail to that level. High level executives will have absolutely no idea.
I’m…not sure how that ties in with my comment? I agree that is the problem solved by ERPs if they are used correctly, my comment was about ERPs that are not being used correctly (and stating that this is a usage problem, not an ERP problem).
@acymetric
I mean, ideally that’s the order in which things should happen, but they already didn’t happen that way, and now it’s a trial to fix it; with people enforcing a protocol, they have the flexibility to deviate from it, scrawl a note in the margins of whatever form, and initial it. A computer enforces it with no thought.
In its worst form, the lower level employees will populate the system with bullshit just to make it shut up so they can get on with their actual jobs (either putting it in “correctly” is a trial that eats up a bunch of time, or they just don’t have the information and have to put lies or they can’t move on to the next screen). Dr. Grumpy posts weird things he finds in medical records, and I’ve often wondered if those happen because the electronic records system provides a question that must be answered or it won’t let you save the page, so the doctor quickly just fills those in with the default by rote keyboard strokes and types the real answer into free text fields; the output then reads as conflicting nonsense because it turns the checkbox input into a full explanatory sentence.
Medicine is a great example of something I mentioned a little upthread (that some businesses/industries are too complicated to design an effective ERP that covers all cases without making it either massive in cost/scale or totally unusable, or both).
In the specific example I was responding to (which sounds more or less like simply issuing materials used for a project in the system), that sounds more like someone (or multiple someones) just didn’t do their job when they should have done it and then kept declining to do it after the fact until the last minute despite multiple requests/reminders, which is a usage issue.
There’s lots of “mandatory” in medical software, and the lower down you go the less flexibility anybody will give you. I’m in EMS. For cause of injury I have “Fall inside an occupied spacecraft” but no “fell from the bed of a pickup truck”. Guess which one I’m most likely to use …
Do you assign zero probability to the idea that a superintelligence might be able to read human minds from a distance, and/or observe everything that happens on Earth? In either case, it would have at least as accurate a local map of every human’s economic and work life, such as they may be, as those humans.
I think some futurists speculate that these things are plausible. Also, if it didn’t have such ability to invade human privacy, the efficiency with which a superintelligence would process the data it does have might allow it to form highly accurate and detailed theories about things it doesn’t directly observe.
Of course, plenty of other people are skeptical of how super- a superintelligent AI would be, or of the very notion. I’m just saying that there are people who would totally agree with you about the enduring importance of distributed knowledge in human society, but not if you bring superintelligence into the conversation.
Do you assign zero probability to the idea that a superintelligence might be able to read human minds from a distance, and/or observe everything that happens on Earth?
I do.
The only way that this is even remotely plausible is if we are living in a synthetic simulation, and that intelligence has full read access to the underlying sim state.
I agree with Mark Atwood, and I’d also add that to observe everything in real time the superintelligence would have to be present everywhere. A map can be completely accurate if it’s coextensive with the territory, but there would be a lot else to worry about in that scenario.
I do. Which one? Microsoft Dynamics has gotten pretty good; I have seen projects with zero configuration – just load the data and go – work out well. That happens when opening a new subsidiary somewhere. But we are talking about projects with maybe four users.
Something tells me you must be working at a very large business, because from the small-business perspective there is no such thing as not using an ERP. If you have invoicing, inventory, accounting, and purchasing in one piece of software, that is an ERP – why would you use separate ones? So if you think not using an ERP is even possible, you must be at a very large company.
Which sounds like SAP. Is it? Unfortunately I don’t have much experience with large businesses. To me a large business has always seemed like something of a perversion.
So when you are saying it makes business processes more legible to executives: more legible than what? What is the alternative to ERP? In my (rather extensive) experience with small business ERP it is more about saving work than about legibility.
Suppose the company is a shoe wholesaler selling to retailers, OK? So one day Uncle Joe who owns a shoe store in Podunk phones in and gives an order for 30 pairs of shoes. Someone takes the order and records the important data into a sales order: item number, quantity, price agreed, shipment date, payment date. Then presses a button and sends an order confirmation to the customer. Then presses another button and the boys in the warehouse get a picking list of what to ship. Presses another button and there is a delivery note. Presses another button and it is an invoice. Presses another button and the invoice is automatically booked into the accounts receivable, into the G/L, into whatever tax there is, sales or VAT, the stock level is reduced and the stock value is FIFO-ed out into a COGS account. This is what ERP is, from a small-business perspective. What is the alternative? Doing all this manually? No point.
As for legibility, sure, there are reports. Boy, I have programmed so many of them. But I find them a little pointless in many cases. Like the boss telling me something is wrong – we never have profit margins above 20%, so why does this product show 45%? True, it was an issue; I don’t remember whether it was a bug or user error. But if he knows that so accurately, why does he need the report? 🙂 OK, OK, I know: because little differences matter and because one needs to be sure. But on the whole, managers know what is going on even without them.
So. Suppose the big business is not using an ERP but, say, one program for invoicing and another for inventory – which is stupid, but whatever. It is not necessarily less legible. If, for example, all of them use an MS SQL database, it is trivial to write queries that combine data from multiple databases, even from multiple servers. So there is no such thing as “island software”. In SQL Server Management Studio, features like linked servers, which let you query data from another server, work very well. The issue is often how you link them. For example, if the item numbers or customer numbers are different in the two systems, you have a problem. But even then you do not have a legibility problem. All the “executives” need is a bunch of interns, hopefully paid, who pore over reports from multiple sources and combine them in Excel – or, a better idea, who basically do data entry and fix the item or customer numbers in one system so that they match the other, after which you can write those combined queries.
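To make that concrete, a minimal sketch of such a cross-server query once a linked server is configured – every server, database, table, and column name below is invented:

```python
# Sketch: join invoicing data with stock data that lives on a different SQL Server,
# via a linked server. All names here are made up for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=invoicing-server;DATABASE=Invoicing;Trusted_Connection=yes;"
)

# "InventoryServer" is assumed to be set up as a linked server, so the four-part
# name [server].[database].[schema].[table] can be queried directly.
sql = """
SELECT i.ItemNo, SUM(i.Quantity) AS InvoicedQty, MAX(s.QtyOnHand) AS QtyOnHand
FROM dbo.InvoiceLines AS i
JOIN [InventoryServer].[Inventory].[dbo].[StockLevels] AS s
  ON s.ItemNo = i.ItemNo      -- only works if item numbers match across systems
GROUP BY i.ItemNo
"""

for item_no, invoiced, on_hand in conn.cursor().execute(sql):
    print(item_no, invoiced, on_hand)
```

The join key is exactly the weak point mentioned above: if item or customer numbers differ between the two systems, you need a mapping table before a query like this is any use.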
It isn’t really about fractal complexity. Not unless the business is so obscenely huge that things are buried in a thick layer of bullshit, from performance evaluations to “best practices” and “reengineered processes”, so that the business owner cannot sit down with a programmer who says lookie here, the report you want is this stuff in this table here combined with that table there, okay, is that what you wanted? It only breaks down if the bullshit cannot be cut out.
Most of these things are very simple. Either very simple or not doable. Or not profitable. For example, one day the boss man came up with the idea that products should be barcode-scanned at every step of the process. The programmer was sweating. Luckily for him, it turned out that the various outsourced warehouses and logistics companies would charge so much for scanning (and for unpacking beforehand and packing back up again) that it would eat 10% of the profit, at which point the boss dropped the idea and the programmer felt relieved.
The company I work for is large but not obscenely huge. Using different software for invoicing, inventory, etc might seem like more work but it also can make exceptions easier to deal with. A few examples that come to mind are:
– if a trusted customer has an urgent order, workflow steps can be done in parallel or out of sequence (e.g. ordering supplies to fulfill the order before payment has been processed), but in an ERP the workflow sequence is more strictly enforced
– something that is fabricated on-demand by a third party might not be in inventory; in an ERP it is difficult to prepare shipping labels or invoices without receiving it into inventory whereas with separate programs it would only need to be touched once
– if one non-essential component of something being fabricated is back-ordered, it might be better to ship it as-is and send a service tech to add the missing part when it becomes available but an ERP will complain if you try something like this.
I’m sure there could be technical fixes to each of these examples but my point is that exceptions to normal workflow come up fairly often. A more decentralized approach builds in more flexibility instead of trying to anticipate every exception and make a rule for it in advance.
The first is not necessarily true. It is just that managers want the ERP set up that way, because managers often don’t understand the full complexity of what their people are dealing with. I often get requests from managers that the software should enforce business rules, and I try my hardest to fight them. For example: do not let people purchase more stuff if we have X million euros of outstanding purchases. I explain to them that there will be special exceptions. Which the manager may approve. But how, if he does not even use the software or is not in the office? So I tell them the correct solution is that every day he gets an automatic email on his phone saying the total outstanding purchase order value is X, and if he finds it too high, he – and not the software – makes the decision that there are no more purchases without his approval, and he – and not the software – must enforce it. The software should only inform decisions. In fact I can also send another automated email showing every purchase, so that he can see if any non-approved ones went out, but he and not the software will enforce decisions; the software only gives information. This is my principle, and I usually manage to convince them.
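A minimal sketch of that “inform, don’t enforce” daily email – the total is stubbed out, and every address, server name, and threshold below is invented:

```python
# Sketch of a daily job that reports the total outstanding purchase order value
# to the manager instead of hard-blocking new purchases. Values are illustrative.
import smtplib
from email.message import EmailMessage

def total_outstanding_purchases() -> float:
    # In the real thing: a query against the ERP's open purchase order table.
    return 12_400_000.0

COMFORT_LEVEL_EUR = 10_000_000.0   # the manager's comfort level, not a hard rule

total = total_outstanding_purchases()
msg = EmailMessage()
msg["From"] = "erp@example.com"
msg["To"] = "manager@example.com"
msg["Subject"] = f"Outstanding purchase orders: EUR {total:,.0f}"
body = f"Total outstanding purchase order value is EUR {total:,.0f}.\n"
if total > COMFORT_LEVEL_EUR:
    body += "This is above your usual comfort level; you decide whether to pause new POs."
msg.set_content(body)

with smtplib.SMTP("mail.example.com") as server:
    server.send_message(msg)
```

The software informs; the manager enforces.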
The second one is definitely true. The reason is the enforcement of accounting standards and correctness. An invoice without an outgoing stock movement should be called a “prepayment invoice” or “proforma invoice”, with different rules and all that. But, for example, taking the order confirmation report and writing “invoice” on it, so that you can kind of make a fake invoice, is a few hours’ work at most, billed at a few hundred euros or dollars.
The third is true too, but again it can be worked around by a simple customization: a “fake” delivery note that does not move the stock and therefore does not care that the finished product is not yet in stock. Once the tech has returned, you can close the production order, putting the finished item in stock, and then book the proper delivery note and invoice.
We do not anticipate these exceptions, we just make these customizations on the fly. Thing is, they all follow the same pattern. To do business, you need documents. Invoice, delivery note, purchase order, all that. Yes, these rules are tied to the creation and posting and printing of such documents. But once we make “fake” documents you can basically do anything.
This came up during my very first project. The boss, not the owner, made a strong rule: no goods ever leave the warehouse without sales first issuing a sales order confirmation okaying it. Turns out, the owner has friends. And friends are above rules. They just waltz into the warehouse, say “I want that thing there,” pay cash, shrug, get a “fake” invoice, and the real paperwork gets done later.
And seriously those “fake” documents could just as well be made in Excel. The simplest way to deal with the rigidity of ERP is to not even enter the transaction at all until all the prerequisite transactions are sorted out. The question is really how much you want to break the rules.
It’s a bit of an art to make a software system actually work. Things can go very very very badly if something goes wrong, and many things can go wrong. Starting with the incentive system of the developer/integrator – if you’re paying them by the hour or project, their skin in the game regarding your long term success is very low. Possibly negative – a perfect integration will give them no further work from you.
I’ve developed from scratch a couple of small-medium size ERPs. With experience you can get pretty good results – mostly with a healthy dose of respect for the client’s independence. Like having everything exportable to _and_ importable from Excel – this way a lot of “December issues” can be fixed creatively on the spot, with a note to the developer to make things prettier in the future. Have a permission system that allows the admin to do absolutely anything, and give him the responsibility to assign roles etc. Mostly, have a healthy dose of humility regarding future change – and this includes never trying to fix future problems today.
This is only true if the business needs are static.
If the development is ‘agile,’ where small bits of business value are constantly being delivered and tested/used by the client, then the developer constantly has to produce value, assuming that the client can judge the business value being delivered.
Good point. That’s the best way of doing things. I didn’t have the occasion to observe it with bigger companies though, and I suspect it may be a bit difficult to pull off.
Plus, beginnings are a sensitive time – you still need an MVP to start with, and almost by definition you have to put it into production in one piece.
Half agree. There are a lot of customizations and developments to be done in one big first step just to make basic operations possible. Asking the customers to test it makes the consultant look stupid, because the customer does not really consider it a special extra thing but something obvious – “we have always done it this way” (because someone wrote an Excel macro and then it was forgotten) – especially if legal requirements are involved.
Basically an ERP is like a half ready car. You cannot just put in a steering wheel and ask them to test it. Every car has a steering wheel! Just give me a normal one already! “Yes but we are checking if it fits your height” “Don’t give that bullshit! Every car has an average height steering wheel fitting most people!”
You get accused of not understanding business processes they think are common, or of trying to scam them.
Once you have what they consider a basic functional car, of course you can go agile about pimping the living hell out of it.
It would help, of course, if the vendor shipped the software really finished. But they don’t. Microsoft Dynamics does not even have a standard setup, just a demo setup. They really expect the consulting partners to configure even the basic tax reports, like VAT here. While the customer might grudgingly buy 3 days for that, they do not see it as value delivered. They are just like, what is this shit software that does not do this out of the box when my old MUCH cheaper little local software did? Ultimately the consulting partner has to build up their own standard solution. But there is also the part where a lot of businesses do things their own way and think it is a common industry-standard thing when it is not. In my corner of Europe it is not uncommon to work 20 years at the same company in the same position. Little turnover means little information about what other businesses are doing.
Endorsed.
Hourly billing is an extremely high-trust arrangement. The reason it exists is twofold. First, figuring out what a business needs is itself billable work. So technically, yes, one integrator could charge a fixed price for surveying all the processes and writing a very detailed requirements specification, another could take that as an input and write a spec for a fixed price, and even a third one could do the development strictly to the spec. But that means the customer ends up paying a lot of money for writing papers, and they don’t want that. They want real work, that is, most billable hours spent programming or training. This means the project has to be a bit open-ended. That works in a high-trust country like Britain. It is very bad in a low-trust country like Poland – constant arguments over what was part of the price and what is extra.
The second reason is this. Consider the term “consultant”. It means advisor – e.g. Berater in German. The point is, in some past, probably forgotten era of IT history, only large businesses implemented ERP – businesses that could afford an internal IT and developer team – and the external integrator really just gave them advice; the dirty work was done in-house. But later on SMBs, which have no internal IT, also wanted ERP, and they effectively see the consulting partner as a contractor delivering software. Yet the hourly billing and the outdated term “consultant” remained.
Once the SMB grows enough to hire an internal ERP developer, the external ones really do just advise, and that works perfectly on an hourly basis.
>I’ve developed from scratch a couple of small-medium size ERPs.
How can I politely doubt it without implying you’re lying? OK, I believe it in one case: no stock. Because if you tell me you reinvented FIFO stock valuation on your own, no, I won’t believe that. It is hideously complicated. It took Microsoft Dynamics something like 20 years to iron out the major bugs, because there are a gazillion special cases.
Without stock, ERP becomes easy, as it is basically just entering documents, i.e. forms with a header and lines, saving them into transaction tables in a straightforward way, making the documents printable, adding some checks for wrongly entered data, and summing up the transactions in reports. The issue is, while easy, it is still big. It is still 25 documents, 15 kinds of transactions, 50 reports. Maybe you did not write much of that but generated it, Rails-scaffolding style, or you used some kind of 4GL environment, also known as inversion of control, where you don’t have to write the basic CRUD, only the code for checking data and saving it into transactions. Something akin to Microsoft Access, just better.
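For concreteness, a minimal sketch of that header-and-lines shape – every field name here is invented:

```python
# Sketch of the generic document shape a stock-less ERP keeps repeating: a header
# plus lines, eventually posted into transaction tables. Field names are made up.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DocumentLine:
    item_no: str
    quantity: float
    unit_price: float

@dataclass
class Document:
    doc_type: str        # "SalesOrder", "DeliveryNote", "Invoice", ...
    doc_no: str
    doc_date: date
    customer_no: str
    lines: List[DocumentLine] = field(default_factory=list)

    def total(self) -> float:
        return sum(line.quantity * line.unit_price for line in self.lines)

invoice = Document("Invoice", "INV-000123", date(2019, 3, 25), "CUST-42",
                   [DocumentLine("SHOE-BLK-42", 30, 19.90)])
print(invoice.total())   # posting would write the header and lines to transaction tables
```

Multiply this by 25 document types and 50 reports and it is big, but each piece stays simple – it is the stock valuation that makes things hard.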
Maybe I will believe it if it was in a country where legal requirements are not strict – like no rule that invoice numbers must be strictly monotonically increasing by date, with no gaps.
@nameless1
A few things:
– You may be overestimating the complexity of small-scale ERPs.
– Radu may be describing something that has some ERP-like features but is really more of an MRP.
– It may not be necessary even if the ERP includes stock. You are looking too narrowly at scenarios and needs from your own experience. There are lots of others out there, I promise.
– This is likely true; I think you are overestimating the prevalence of this as a law (rather than simply good practice for better recordkeeping) based on your own experience. It is also unclear to me why sequential numbering of invoices by date issued would be a particular challenge (particularly for smaller-scale software).
> How can I politely doubt it without having to imply you lie? OK I believe it in one case: no stock.
You can call me out, no problem 🙂
I think it depends on what you’re calling an ERP. I’m currently maintaining two live projects: a decade-old piece of software for an FMCG company with about 1000 merchandisers, and my own http://www.couriermanager.eu with a couple dozen clients. The second is more of a melange between a small ERP and a SaaS, but the first, I think, covers 90% of what that type of company wants, including very light stock handling, HR (that one was fun), a couple of mobile apps, etc.
Neither is document-based, btw.
Looking at the feature list of Courier Manager, I think the best name for this kind of software would be “operations” – that is, software focused on running the daily job of the business. Operations isn’t really the best term for it, but there isn’t a better one. It is something you would demo to the COO, not the CFO.
My view of ERP – different from the very old MRP-based view of @acymetric – is that everything grows out of accounting, which you don’t have here. That is, for example, if you have a Debtors account in the G/L, you also must have a detailed Accounts Receivable ledger where auditors can actually see who owes how much money. This AR ledger is best created by booking invoices and payments. And at this point the accounting begins to approach operations. Better not to write invoices by hand, but to click on a delivery note or sales order and create an invoice automatically. And so on. So it can reach very deep into operations, into daily work. But it all comes from having a G/L, having the other ledgers to support it, and then having these ledgers created by the more operational functionality.
The accounting-oriented view might be partly down to my attitude and experience, but to a certain extent it pans out historically. SunSystems has a SunAccount core and a later SunBusiness extension around it that handles things like order processing. Microsoft Dynamics was first called Navision Financials: basically it provided the accounting, others developed the operations, and later the largest add-ons got integrated into it, making it a full ERP – that is, accounting plus operations.
Just to make something clear: I find developing operations functionality very easy, because it is often just literally writing down in code what the customer wants. I find the accounting part super hard. When it takes two hours of discussion to decide whether the very special transnational VAT scenario they want the software to support is even allowed. Or how the idea of returning damaged goods at half value should even affect the stock value – is it even correct to do that under the same item number, or should it be a different one, because it might be wrong for the total value of one item number to be based on both good and damaged stock, and maybe the auditor will not like that. Or maybe the whole idea is completely bogus: the auditor will rightly ask, why exactly half? What evidence shows it is worth half as much damaged as good, and not just a little less, or almost nothing? This is what I find the hard part; this is why I think accounting is an essential part of ERP…
@nameless1
You may be right. You’re definitely right about CourierManager being demoed to the COO. The other software is more well rounded, including a financial-only module that took a bit over a year to get into production. Operations are easy compared to that. HR was also pretty bad, but yeah, financial wins.
>Like having everything exportable to _and_ importable from Excel –
No wonder from-scratch ERP developers love Delphi, which has stuff like that out of the box in the dev environment.
>Have a permission system that allows the admin to do absolutely anything, and give him the responsibility to assign roles etc
Roles suck. In my experience not even the direct manager knows _exactly_ what work an employee does, and then there is the case of covering for someone who is out sick. And even once a role is internally assigned, the developer has to figure out exactly which tables belong to that role, because being an order processor at this company may mean something different than at that one.
No, the only permission system that does not drive people crazy is the negative one, which is sadly usually not supported out of the box: you tell only what certain people are NOT allowed to do. You focus on key data, most important tables, and ignore the gazillion secondary lookup tables. You decide for example that the warehouse dudes have no business in modifying G/L accounts or posting G/L entries, you forbid that, and leave the other 900 tables accessible. And even this is not really necessary because they do not know how to do that anyway. Security through obscurity? More like security through being a lazy dumbass who did not even listen to the training about how to do the things in the system he actually needs to do.
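A minimal sketch of that negative model – user and table names invented – where everything is allowed unless a specific (user, table) pair has been explicitly forbidden:

```python
# Sketch of a "negative" permission check: deny a handful of sensitive actions
# for specific users, allow everything else by default. All names are made up.
DENIED = {
    ("warehouse_joe", "GLAccount"),
    ("warehouse_joe", "GLEntry"),
    ("warehouse_sue", "GLEntry"),
}

def may_modify(user: str, table: str) -> bool:
    return (user, table) not in DENIED

print(may_modify("warehouse_joe", "GLEntry"))      # False: explicitly forbidden
print(may_modify("warehouse_joe", "ItemJournal"))  # True: not listed, so allowed by default
```

The 900 secondary lookup tables never have to be enumerated, which is the whole appeal.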
Your approach to roles based on tables is what makes this such a challenge. Better implementation of roles (done in a client-friendly way) requires a bit more work from the developer but is definitely doable.
Eh, roles for dashboards (i.e. “what do you see without having to dig”) are still pretty handy.
What’s the argument in favor of moving to Silicon Valley to start a new technology startup?
I understand the argument for:
– Moving there as an employee: salaries are higher, even when accounting for cost-of-living differences.
– Not moving an existing company out of there: you’re not going to be able to pay less even if people follow you to the new office, because wages are sticky. And the disruption of moving is not worth it once you’re past a dozen people.
– Starting a new business in SV if you already live there: Convenience, ability to support the business with on-the-side consulting revenue, your professional network is local already.
But I keep getting suggestions that my next startup (whenever it happens) should definitely be established in Silicon Valley, and that this is beneficial to the business (and not just me personally enjoying California weather instead of living in the frozen wasteland that is Canada). I don’t get it and it’s usually just asserted, not explained. This place tends to do good steelmanning, so I suspect I’ll get a better explanation by asking here: What am I not accounting for?
For context, my default would be to start the next business in Waterloo again, simply because of its low cost of living (= much longer runway), good access to co-op program students for hiring (basically junior devs who fire themselves automatically after 4 months, which is great when you’re still in exploratory mode), and the overall talent pool. Toronto has a similar value proposition, but it is more expensive. Vancouver seems worse on both fronts. Silicon Valley has the biggest talent pool, but it seems like that’s only because it has the highest salaries (that is, a Toronto-based company offering similarly high salaries would be drowning in applications from qualified candidates, which is why they universally offer much less). It’d also require me to navigate the visa bureaucracy (most probably an E-2 visa).
I’ve never lived in Toronto, so I’m not sure. But I spoke recently with some Waterloo founders who moved to SF to start their company, and they were all like, “The network here is unparalleled. Sure, you can hire people where ever, but you’ll get more people who are higher quality here who are enthused about working for a start-up, who bring experience with other start-ups, and who understand some of the general constraints of start-ups. You’ll get better advice from better investors, you’ll have better access to an ecosystem of service companies that mostly cater to start-ups, and you’ll be able to network with many, many, many more founders who can give you useful advice.”
Is that true? I dunno. They seemed to think so.
There are a lot of highly tech-competent people in Vancouver and in BC in general who would love to work for a Canada-based company instead of working remotely for a US-based one. Why there hasn’t been a tech boom in Vancouver is confusing to me.
My advice is if you don’t want/need Sand Hill Road (and you don’t) and don’t want/need hottest buzzword compliance trendy tech workers (and you probably don’t), stay out of the SF Bay Area.
If you really, really need a couple of *specific* such tech workers, pay them to relocate to the Midwest while you keep paying them SF rates. Once they get a taste of what their salary can buy there, they may never want to go back to SF again.
I know of a bunch of low-key, you-use-their-tech-every-day-but-never-heard-of-the-company, slow-steady-growth tech companies that are happily based in low-population “flyover regions”, several of them with 100% remote workers. The real estate costs are 10% of the Bay’s, salary costs are half the Bay’s, the employee lifestyle at that salary is positively cushy, you get much more loyal employees, the founders become deca-millionaires, the key employees become millionaires, and there are no Sand Hill Road types crushing down on the founders and selling out the customers to the cesspits of ad-tech for yet another swing at a billion-dollar Event.
My impression is that if you are aiming to start a tech company that will grow to be very large, it really helps to be in Silicon Valley, for four reasons. First, it’s one of the few places where you can hire large numbers (hundreds) of good software engineers quickly. Second, it is one of only a few places where you can hire the very best software engineers, should you need them. Third, it’s where the investors are, and if you want to raise venture capital it really helps to be local and part of the scene so you know who to ask for money, they get to know you by reputation, and they know they can stop by your nascent business without getting on a plane. Fourth, the Bay area has a lot of futuristic oddballs who decided to try to live in the future when it hasn’t arrived yet, which is useful for finding very novel ideas before all that many people have noticed them and tried to monetize them.
If none of these points are essential to what you have in mind, then you are probably better off staying outside Silicon Valley, because it’s easier and a whole lot cheaper.
I’m a software engineer, not an entrepreneur. I’m also Canadian.
I came to Silicon Valley because I was sick of needing to move to another area every time my employer and I parted ways. In many ways, I wished I’d stayed in Canada, but the city I was in had only 2 or 3 big tech employers, and not enough small ones – if any of the 3 had layoffs, every software person in the city was stuck where they were for a year or two, which might be on the bench. Add to this the prospect of a big raise for taking a contract job in New Jersey, and I became nomadic for a decade. (Have job offer, will travel…)
I can’t answer your questions, but I find I *want* you to stay in Canada, and contribute to there being more opportunities back home, so some other person like me won’t wind up here – in spite of the money, the climate, and the community of fellow techies.
That doesn’t mean it’s a good decision for you. But note that if you do the same as everyone else, your results will be the same. Not worse, but not better either. And “go to Silicon Valley” is the standard wisdom.
and not enough small ones
That’s the other thing I’ve noticed about Canada, and I completely lack the knowledge to form any theories as to why that is the case, but… there is a curious lack of medium-small to small-medium tech companies in Canada. A bunch of very very small mayfly shops, and a few larger-ish (and frankly, boring) tech companies. Almost nothing in the middle, and nothing in the very large.
The story I heard is that successful small-to-medium tech companies often receive acquisition offers from bigger players. Apparently Canadian executives are particularly prone to accepting such offers if they are at all reasonable rather than fighting on alone. We’re so very reasonable up here, you see. I sure wish I had some data on this, though.
Michael: My father made him an offer he couldn’t refuse.
Kay: What was that?
Michael: He assured him that he would steer the company faithfully and offered him fair market value.
Kay: Gasps.
Michael: That’s my father’s world Kay, not mine.
I think you’ve hit on a lot of the arguments for/against. One point no one has mentioned in the “for” column would be the Silicon Valley culture where, often, high quality programmers (etc) are willing to accept lower wages at startups and accept stock instead. And in many cases these people are already quite rich because they took that same deal at Google or Twitter, and they just see you as a flier as opposed to needing a steady salary.
The pro arguments are all true, but they’re becoming less so as the cost of living here reaches even more truly absurd levels.
One emerging pattern is the young company with a front office in Silicon Valley, but the back office somewhere else, even overseas. It just doesn’t make sense to pay SV rates for anything you can get elsewhere.
Actually, outsourcing is already starting to slow down and even stop. The reason is that a lot of the grunt work that used to be outsourced is being automated by the public cloud and AI.
For example, you could hire people in India to set up database and storage systems for you. Or you could just use an Amazon Web Services offering to do it. The latter is going to be less risky and more scalable. You could hire a bunch of IT consultants in India, or you could just use Microsoft’s cloud. And so on.
The only SV companies I’ve seen trying to set up back offices “somewhere else” are companies in a death spiral who are trying to cut costs. I’ve never seen it work, though. If you can’t pay for good engineers, you’re better off just selling your IP for whatever you can get and then shutting down.
For a long time, the conclusion for going to college was “it’s always worth it.” That was at the end of a long logic chain.
Eventually people took the conclusion for a starting axiom, and the natural result was that colleges went insane in price.
I don’t know if or when SV will hit that point, but the necessary preconditions are there.
It hasn’t already?
It may have, but people don’t seem to have noticed.
I think it likely has, but knowing for sure is difficult. If the real estate bubble pops, it will be extremely obvious in retrospect, and everyone will say how it was totally obvious to them at the time. Real estate prices could keep going up for a generation, though.
A few years back GE wanted to start up a new office focused on the internet of things, and they chose the San Francisco suburb of Dublin, CA. The reason I was given for that was that there were more software engineers living in that town than anywhere else in the world, and since they needed to hire a ton of engineers, they just went where the supply was. I suspect that is the #1 reason people move their tech companies to the Bay Area.
I’d like to share an anecdote about something that may, I’m told, be more common than generally assumed:
Vitamin B12 is saving my life.
I’m 34. For the past couple of years, my mood has been getting progressively worse. I was dealing with that, managing the symptoms as they cropped up; I even managed to, despite a severe depression in 2017, force myself to switch jobs in the winter of 2017/2018 (winter is my “up” season, not my “down” season, so I knew it had to be that winter or not for another year).
The severe depression scared me for reasons other than mood – my short-term memory got kicked in the teeth by it and essentially died (to the point where I’d forget the nature of a decision I’d made five seconds earlier – I’d recall that I had made a decision, but not what it was), my ability to pay attention shrank to nearly none, and I became painfully sensitive to sound (the slightest squeak and I’d wake up in the middle of the night, unable to continue sleeping).
My mood improved after I switched jobs, but… my memory, attention and sleep problems persisted, only slightly dampened. For a while, I thought it was just me growing old, though I found this so striking an effect that I was basically shocked no one had told me growing old was this bad, this soon.
When my mood felt decently normal, with the blessing and encouragement of my GP, I sought out a neurologist, wanting to tackle the aforementioned problems. The referral I had from my GP said “sleep problems”, because I suppose no one ever took the other issues seriously. Finding a neurologist was a difficult ordeal, in that all the neurologists I called told me they were accepting no more patients. I outsourced the problem to my health insurance. They called me back one morning and told me they’d not found anyone, and that I should go back to my GP and get an urgent referral. I didn’t think my state warranted this (in hindsight, fuck me), so I put it off.
Around Christmas time, I got lucky – one of the neurologist offices I’d called got back to me. You see, after hearing I had sleep problems (and cutting me off and not wanting to hear the rest), they’d put me on the waiting list for the psychologist in their office. They called me on a day off and said “Good news, we have an appointment for you. Bad news, it’s in a few hours. Got time?”. Fortunately, I did have time. (Due to further shenanigans and good fortune, this transformed into an appointment with the neurologist, not the psychologist that I was actually originally on the waiting list for.)
The neurologist listened to me talk, looked down her nose at me, told me to get a psychologist because I clearly had deep-seated issues of some sort, told me she didn’t want to talk about my audio sensitivity because that had no bearing on anything and I was there because of my sleep problems. Toward the end of the session she assured me that I wasn’t showing the first signs of dementia or anything like that, squinted at me in a kind of “I’m having second thoughts” way and told me go to my GP and get a general check-up, including vitamin B12, vitamin D and folic acid.
Without wanting to bore any of you with any more details leading up to this: Two weeks ago, I got my results of the general check-up. All my values were quite excellent, except my complete lack of vitamin D (as in, the sheet with the results puts it within bounds of their measuring error) and less than half the minimum value of vitamin B12.
My GP told me to take supplements, and also to eat more meat. I made a very surprised face – I’m not vegetarian (although I tried a few times, each time for a few months, noticed in hard to grasp ways that it made me feel worse, and stopped). We think I might have vitamin B12 absorption issues, although another check-up in two-ish months will show how well I’m absorbing the supplements.
I didn’t think this would make a difference. I read up on vitamin B12 problems, noticed the grim verdict that I might never be able to recover from the neural damage, wept myself to sleep a few nights, but dutifully took my medicine.
And then spooky shit started happening.
The other day I was pondering a problem at work, brought my hand up to my head to scratch at my scalp, and had to pause because I was so completely baffled at how soft my hair was. Make no mistake, I don’t think for a second anything changed about my hair – but I do think something happened to my sense of touch.
My subjective eyesight improved. What on Earth is “subjective eyesight”, you might ask? Well, I haven’t been able to find any objective difference in my perception whenever I do an actual comparison, but things seem sharper, more focused. I don’t understand this. I wasn’t expecting it, but it’s oddly striking. I assume (but don’t know) that my visual cortex is functioning better now – I still have eyesight issues, still need my glasses, but the stimuli that hit my retina are converted into information more readily. (Late edit: Contrast may have gone up, actually, since a few days ago I was in the bathroom and asked my German boyfriend whether he had changed anything about the lights, since I could have sworn it had gotten brighter – nothing had changed, though.)
A few days ago, while two colleagues were discussing something near me, my ears ‘popped’ without the corresponding physical sensation. I nearly jumped. The whole quality of my hearing very abruptly shifted. I wouldn’t say it got better or worse, but it changed drastically in an instant. I don’t understand this either.
For a few days, my entire body hurt. I didn’t understand that, either, but I decided it might be a good sign – nerves working again, firing at random as they healed? I have no idea. I have no illusion that I know what’s going on.
But the most striking change is that my attention and audio-filtering issues have nearly disappeared. Where previously I couldn’t hold a thought while someone else was talking (on TV, or in a conversation next to me), I am no longer bothered by that. I can read a book while people next to me are chatting – absolutely unthinkable even just two weeks ago. Are my thoughts isolating themselves from each other? Vitamin B12 is important for insulating nerves, preventing cross-talk between them. It doesn’t seem far-fetched, although if you’d told me of these magical effects beforehand, I would have looked at you funny.
Whatever the cause, exactly, I’m seeing my neurologist in two weeks. I’m sure she’ll be pleased her footnote hunch at least appears to have been correct. I feel like I’ve gotten ten years younger; after three years of trying various ways to help myself out of my increasing misery and neurological deterioration, this is like some bizarre miracle cure.
I don’t know how much of the damage is permanent, but I’m still improving, and I hope that I can get at least some of my intelligence from a few years ago back. One’s own intelligence is, like with many things, most sorely missed when it’s disappeared. I’ve been miserable at my own inability to focus, remember or imagine things. I can, in fact, imagine things again – I hadn’t consciously realised I’d lost that until it came back.
I’ve already gained a lot of vitality back. I used to feel like even in times of sadness, there was a joy in me, a kind of baseline happiness – and I can feel it coming back. It’s like my personality is healing from a grave wound. Some of that will also be from the vitamin D supplements that I’m also taking, of course – but so much of it is because my brain is gradually working again.
Anyway, I know I’m more lurker than poster, but I just wanted to share my joy here – and perhaps this account can help someone who is fighting with similar symptoms. (There doesn’t seem to be much risk in taking B12 supplements, if you just want to try it for a week or two.)
Stay awesome, everyone, and thanks for letting me read your comments. 🙂
Wow; I’m so glad your problems had such a simple solution.
I’m fortunate enough not to have any horrible symptoms like you do, but I’d still like to test for vitamin levels. (Among other reasons, I’m getting almost no sunlight.) How do you get that general blood(?) test for vitamins? As in, if I go to my GP and ask for it, will he know what I’m talking about and be able to do it? About how much does it cost?
I live in Germany, and the test was just what you’d consider a ‘general check-up’, plus explicitly requested folic acid, vitamin D and vitamin B12 – which is covered by insurance once you’re 35 (the irony of which has not escaped my notice!) and supposedly costs about 50-ish euros before then. (I haven’t received my bill yet – my GP said something about those being handled quarterly, so I’m anticipating the exact figure sometime next month.)
As for how to get it, asking your GP for it should be enough, yep! If it’s not, your GP should absolutely know enough about it to tell you what else is needed.
Also worth noting: once your general check-up is covered by insurance (i.e. you are 35+), if you want non-standard things tested (it seems as though many vitamins, including B12, are unfortunately non-standard), you may need an explicit lab referral (“Laborüberweisung”), judging by some to-and-fro discussion I was involved in that I left out of my already far too long narrative above. But this isn’t relevant if you’re paying out of your own pocket.
Hope that helps! <3
This is interesting!
I’ve observed that ever since we got started planning for our baby, my wife hasn’t had a severe depressive episode (though still has some “lows” and, occasionally, hypomania). There are, of course, many impossible-to-separate changes that coincided with this:
– As an overall lifestyle change from a little over a year ago, we live in a smaller, tidier, cleaner house, without roommates.
– We have a baby on the way (I noticed differences in her behavior long before we conceived, but the baby-on-the-way thoughts might have been helping since it was confirmed).
– Her doctor recommended that she switch from Sertraline (bn: Zoloft) to Fluoxetine (bn: Prozac).
– She misses/forgets her medication way less frequently. (Her short-term memory in general is just better. It’s possible Fluoxetine has less of a negative effect on her memory than Sertraline did, or maybe one of the other factors influenced her ability to keep to the schedule in some way, but either way it seems to be a virtuous cycle.)
– We have a cat now. The cat is slowly killing me (I’m allergic, I have my own issues with remembering to take medications on a schedule, and I’m its sole caretaker for the foreseeable future due to concerns about toxoplasma), but having it around seems to make her happier.
– She started taking Pre-Natal Vitamins as soon as we started talking about having a baby, a little before we even talked to a doctor about it (so, also before we swapped out the Sertraline for the Fluoxetine), and well over a full month before we even started trying. This anecdotally matches up really well with the beginning of the time interval when her depressive days got a lot less frequent and less severe.
Definitely not conclusive. Because it’s definitely, definitely, not something we actually wanted to run a Very Scientific Controlled Experiment on, even if we could make lots of clones of my wife and put them in isolated chambers, some of which were/weren’t given certain medications or supplements, some of which were/weren’t living with clones of me in messy/clean shared/private living spaces. But it’s an interesting anecdote…. and is reason enough for me to be keeping an eye out for further indications that certain vitamins might be super important in cases like this.
That’s definitely interesting, thanks for sharing!
For what it’s worth, I’m de facto addicted to online freeform roleplay and recently dove back into it aggressively (after talking to the neurologist back in December – her “you have issues” comments made me decide that maybe I should do something about the four-year cold-turkey dry spell :P). That change caused a noticeable uptick in mood, but the nature of that uptick feels quite distinct from the kind of uptick I’ve been experiencing since I started taking the supplements.
I’m wondering if the happiness about the kitty might be in the same category for her as freeform roleplay is for me – it improved my mood, but without the vitamins, I know I would still feel constantly assaulted by reality.
As for cloning experiments, I suppose you’ve got half a clone in the making. 😉
Regardless of what the cause may be, I hope the good moods last and you both have a great time!
You can’t do a proper controlled experiment, but you can have blood tests for lots of things that your wife might be low in, assuming you haven’t already done so for all the plausible candidates.
Fascinating story.
As I recently mentioned, I’ve recently been looking into work by two different people who argue that nutrition is much more of a problem for people like us than we usually assume. One of them, the one I entirely trust (both because I know him and because he is a very prominent scientist in the relevant field), believes that recommended vitamin D levels are far too low–on his advice I am taking 5000 IU. The other claims to be able to not merely slow but reverse age related cognitive decline. I am less sure if he can be trusted, but not sure that he can’t.
Do keep us posted.
Here is the article I mentioned. Only the abstract is available for free–I’m not sure how hard it is to get access to the full article.
Not hard if you reject citizens being shielded from scientific findings.
Hmm, seeing as one of the effects of aging seems to be poorer uptake of vitamins, it at least doesn’t sound entirely far-fetched that fixing the vitamin uptake would show anti-aging effects (i.e. some aging effects may be due to reduced vitamin levels). I’ll check out your previous comments about that at the next opportunity – I hadn’t run across them yet (my comment reading, while enjoyable each time, is unfortunately very sporadic and I miss a lot!).
In either case, thanks very much for the information! 🙂 Curious to see how it pans out.
(P.S. Love your work. Your footnote recommendation of the book Passions Within Reason in one of your books led me to one of the most delightful non-fiction finds I’ve made. Thanks for that – and for your very educational sense of humour. :D)
My symptoms were not nearly as severe, but I was also told to take Vitamin D and Vitamin B-12 to help with exhaustion and memory problems, after a blood test revealed that I was deficient. It was kind of amazing how much they helped, specifically with the exhaustion. Going from “I can barely function after 2 pm” to “I have enough energy to get through both of my jobs” was great.
I had to stop taking the B-12 after six months since it was giving me some really weird and unwanted side effects. Granted, I was taking 4000% of the recommended daily value (which was the only option available at the time due to my specific circumstances). I doubt the side effects would have been so severe with a smaller dose.
The body can store some B-12, so you could try to take the supplements once or twice a week.
If it’s not uncomfortable for you to say (and I absolutely understand if it is), what kind of side-effects did it result in? This is the first I’ve read about B12 ‘overdose’ having a negative effect, so I’m curious what I should maybe be looking out for. 🙂
That’s great news. I’ve read your posts before, although I can’t remember what about.
I take vitamin D supplements since I tested low for that. I had moderate depression at the time that’s mostly cleared up. I wonder how much variation there is in vitamin uptake needs and how many people are effectively deficient.
I mostly just complained occasionally (though I tried to be respectful about it when I did), so you’re not missing much when you don’t remember my comments. 🙂
I’m really happy to hear your depression’s almost bested at this point. Good luck for the last mile of it!
To the person who asked about what information from the meetups survey will be released – good point, thanks for asking!
Data will not be released publicly because it would be too easy to identify individuals and I neglected to include a question about releasing people’s answers, but I am planning to share aggregate statistics and lessons learned publicly. I will also probably reach out to individual meetup organizers if there’s significant data on what people want to see from their groups.
I’ve added the above statement to the beginning of the survey and also included it in my announcement of the survey on LessWrong. Hopefully this addresses your question fully; let me know if not.
Can anyone recommend a book and/or other resource(s) on good scientific research methodology? I am looking at the graduate student / academic level. My Google searches usually turn up things for science projects and whatnot. Thanks.
Are you thinking of statistics methodology, experiment design, or the psychology of it?
For statistics I like Andrew Gelman’s blog. Feynman has some decent advice in his essays about how to do good science and how to think about things. But a lot of it is field-specific.
All of them, but at least something on experiment design would be good.
I know that it is mostly field specific, which is part of the challenge I am having. Mainly, I have realized that when talking about ‘good science,’ I have very little to refer to.
Gelman’s blog is good, but it’s much more stats and philosophy than experiment design (although stats is important to informing experimental design). I’ve mostly worked in fields with more powerful models and much stronger applicable prior knowledge than the fields that Gelman has specialized in though.
I’m having a hard time thinking of something good on general themes in experimental design. Or even on specifics. I learned things mostly by mucking around in graduate school. I think I’m roughly an average experimenter; my strengths are more in absorbing and evaluating the literature and deciding on a model. I did experiments because I hated not having data and I get a bit twitchy at a desk, not because I was particularly skilled at execution, which tends to require a bit more of an engineering/hands-on bent and either a big team or a hell of a lot of endurance.
The biggest thing that strikes me is that the economics of experiments tends to be neglected in philosophical or general discussions but is extremely important in practice.
I read SSC across 4 different browsers. This makes keeping up on the “has read” comments somewhat annoying.
Before I start hacking something together myself, has anyone else written a Tampermonkey script for SSC to keep the read cookie in sync across multiple instances?
While I haven’t written any such script, I do read SSC comments on different browsers. What I do is alter the time and date in the bar that shows up on the upper right to whenever I was last on. It works well enough.
Rationalists, I have $225,000 of my own money to invest, and I’m a conservative (har har) investor.
Is the most rational way to invest it in late March 2019 something like “buy the S&P 500 fund with the lowest management fee”?
Are you buying and holding forever (ie 30+ years?)
Open several online brokerage accounts, especially any with “free X trades with Y thousand dollars deposited”, then buy shares of as many S&P 500 companies as you have free trades for.
Edit: Just making it clear that I am joking.
That’s high-effort compared to buying an index fund. Not just today, but ongoing — one benefit of an index fund is that when individual companies drop out of the index for whatever reason, your holdings automatically update for you.
baconbits9 is right if you are going to ignore or not be affected by market swings in the meantime, and won’t need the money for 10+ years.
If you happen to know that a 2008-style crash would give you ulcers, mixing an S&P index fund with some sort of bond fund is not entirely irrational, nor is buying in gradually over the course of 6-12 months. But even adjusting for the high CAPE right now, for long-term investors your best bet is dumping it all in a low-cost equity index fund, yes.
No I’m not, I was kidding around. Index funds handle this for you, adjusting their holdings as stocks drop out of and climb into the 500. You will miss out on the next Amazon/Apple/Google if you buy individual shares this way, unless you actively manage the basket yourself.
Write a daemon to crawl a definition of the S&P 500, log itself in to your brokerage accounts, and make the appropriate trades automatically. Then absolutely nothing can go wrong.
I won’t necessarily need the money for decades. I own a house outright.
I want to have the presence of mind to be able to shrug at a 2008-style crash.
Prior to 2008 lots of people had ‘conservative’ portfolios that didn’t allow them to shrug off the crash. This was because the 2008 crash was large and unique. People had 30%+ equity in their homes, 401k plans worth more than the remainder of their mortgage, and no financial planner had ever asked them “what happens if you lose your job, your health insurance, and half your 401k while your home value drops in half, all at the same time?”.
If you want to be sure that you can shrug off a 2008-style collapse, then you basically need to start by looking at a worst-case scenario for yourself: what three bad things, all happening together, could make you desperate to sell your investments?
A 50-50 plan lost 23% of its value in the 2007-2009 crash.[1] Losing money isn’t good, but if you can look at yourself and realize you are okay with losing 23%, that’s good.
You can get more conservative, not necessarily by buying more bonds (this does reduce risk somewhat, but it also exposes you to interest-rate risk, and with a fed rate around 2.5% there’s a lot of room for rates to move). You can diversify across a wider part of the economy, across international economies, or add in commodities[2].
Look at baconbits9’s recommendation of worst-case scenario. You (and a spouse?) lose your jobs and health insurance at the same time: how much money do you need to buy essentials for a year? You can make CD ladders that release 1/4 of that money every 3 months, and keep 3 months liquid in a savings account.
[1] This isn’t the best citation, but it’s one I could find: https://awealthofcommonsense.com/2015/04/a-historical-look-at-a-5050-portfolio/
[2] Commodities suck for long-term growth. But they are very good risk-mitigators if you worry about needing to eat cat food. They let you lock in prices for the things you consume.
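A rough sketch of that CD-ladder arithmetic from above – the yearly-essentials figure is invented, and this is just one way to read the suggestion:

```python
# Sketch: one year's essential spending split across four CDs maturing every
# three months, plus roughly three months kept liquid. Numbers are illustrative.
yearly_essentials = 40_000              # whatever a bare-bones year costs you
liquid_cash = yearly_essentials / 4     # ~3 months in a savings account
rung = yearly_essentials / 4            # each CD releases 1/4 of the yearly amount

for months in (3, 6, 9, 12):
    print(f"CD maturing in {months:2d} months: ${rung:,.0f}")
print(f"Kept liquid in savings: ${liquid_cash:,.0f}")
```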
No financial expertise but I’ve been very slowly reading through a newer edition* of ‘The Intelligent Investor.’
Based on that, it seems like the smartest thing to do would be to divide your investment between a stock index and US treasury bonds at a ratio which reflects your risk tolerance. If your stocks make more money, which is likely, buy bonds until you’re back at your original ratio; if your stocks aren’t doing as well, buy more stocks using the money from your bonds. Set aside a small amount of money for speculating on exciting companies, and treat it like money you bring into a casino: you can keep spending it until it’s all gone but once you’ve spent it all, you’re done.
People with actual financial experience should feel free to roast me if this isn’t a good plan.
*The edition is new enough that the notes mention the Dot Com bubble but old enough to predate the Great Recession. That actually made me more confident because it seems like following the advice in the book would have left you in decent shape after 2008 without foreknowledge.
My only problem is that I don’t know how to define my risk tolerance. I like the stock market, but I’m not going to risk money on the next Amazon.
If you’re decades away from spending the money as you indicated in another branch of the thread, a conservative investment mix is something like 70% stock, 30% bond, and an aggressive mix would be <=10% bond.
In the long run, 100% stock is very likely to have the highest expected returns, but only if you have the stomach to ride out a 2008-style crash or a 1970s-style protracted bear market without panic-selling near the bottom. A moderate bond mix can help with this both by diluting the losses and by allowing you the satisfaction of being able to buy more stock when it's "on sale" by rebalancing your portfolio back to your target allocation. For example, if your target mix is 80/20, and you're at your target the day before a 50% drop in the stock market, your total losses for that crash would "only" be 40% and your post-crash allocation would be 67/33. Rebalancing back to your target allocation would allow you to sell the now-excess 13% of your portfolio from bonds and buy stocks at the post-crash prices.
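If it helps, here’s a quick Python sketch that reproduces the 80/20 arithmetic above (purely illustrative; bonds are assumed to stay flat through the crash):

```python
# Quick check of the 80/20 rebalancing arithmetic above (illustrative only).
def crash_and_rebalance(portfolio: float, stock_frac: float, stock_drop: float):
    """Apply a stock-market drop to a stock/bond mix, then compute the
    bond-to-stock trade needed to restore the original allocation."""
    stocks = portfolio * stock_frac * (1 - stock_drop)
    bonds = portfolio * (1 - stock_frac)        # bonds assumed flat
    total = stocks + bonds
    target_stocks = total * stock_frac
    return {
        "total_loss_pct": 100 * (1 - total / portfolio),
        "post_crash_mix": (100 * stocks / total, 100 * bonds / total),
        "bonds_to_sell_pct_of_portfolio": 100 * (target_stocks - stocks) / total,
    }

print(crash_and_rebalance(100_000, stock_frac=0.80, stock_drop=0.50))
# -> ~40% total loss, ~67/33 post-crash mix, ~13% of the portfolio moved from bonds to stocks
```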
I’d suggest looking at a total bond market fund (e.g. Vanguard’s VBMFX) or even an investment-grade corporate bond fund. It’s a pretty big yield spread (taking Vanguard’s intermediate term funds as an example, current yields are 3.8% for corporate, 3.1% for total bond market, and 2.3% for treasuries), more than enough to compensate for the historical 0.2% default rate for investment-grade corporate bonds.
The yield spread is probably driven largely by US financial regulations: banks, insurance companies, and other parts of the financial industry face reserve requirements or similar regulations, which generally place a large premium on AAA-rated bonds (mostly treasury and agency bonds) relative to A-rated or AA-rated bonds (most investment-grade corporate bonds) in terms of how much of the market value of the bonds can be counted towards your reserve requirements.
Standard advice is, first pay off any outstanding high-interest debt you have. Then, if you’re expecting a major purchase (car, down payment on a house) in the next couple of years, hang onto enough cash to cover that. Otherwise, divide between well-diversified stocks (standard advice is to use low-fee index funds for this), bonds (US Treasuries are traditional, but there are several good bond indices out there now), and other assets according to your age and risk tolerance. Older or more risk-averse => more bonds in the mix, but you shouldn’t be approaching 50% until you’re close to retirement. Stocks and at least some bonds is almost always better than stocks alone in the long term, but the exact optimal mix is going to vary: I use roughly 80/20 as a youngish mid-career adult. Sink it into tax-advantaged accounts (HSAs, IRAs [you probably want Roth if you already have a big pile of cash lying around, and you definitely do if you have a traditional 401K], etc.) first if you can.
Bear in mind that the traditional advice re: stocks and bonds assumes that bonds are generally going to go up when stocks go down and vice versa. This has not always been true in recent years, but it’s going to be hard to find a store of value that isn’t at least somewhat exposed to systemic risk. Real estate’s a common asset class to play with in this context, but I’d stay away from it (aside from a primary residence) unless you already have several hundred thousand dollars invested. And in this crowd, I should probably note that crypto is not an uncorrelated asset.
I own my house and car outright. My outside-the-box investment idea was to use ally.com to buy safe-ish dividend stocks like AT&T, utilities, residential and health care REITs, etc., and reinvest the dividends whenever I’m employed, taking them as FU cash to pay my bills if I’d rather not be. Otherwise it seemed rational to put it all in Vanguard 500 or some alternative that’s skimmed less, or an 80/20 split between the 500 fund and bonds.
Owning real estate other than your primary residence in Portland would be a terrible idea.
Well, it wouldn’t have to be in Portland. I have a coworker here in $LARGE_WEST_COAST_CITY who owns a couple of houses in Utah. Still don’t recommend it though.
A REIT isn’t a bad choice if you want to dip your toes into real estate, but almost any personally managed set of funds is going to be riskier than just putting your money into VFINX or VTSMX, and even that I wouldn’t describe as conservative. By all means sock some away for personal investing if you feel like it, but don’t make it your mainstay. On the other hand, putting it all into bonds is way over-conservative, especially in this market. With $200K to play with, you can diversify without getting hit with higher fees for having too low a balance, so there’s no good reason not to.
I’d stay away from the Retirement 20whatever funds that most brokers offer. The fees are high enough to matter, and the only real value-add over VTSMX + bonds is that they’ll rebalance as you get older, which you can easily do yourself if you’re halfway conscientious about your money.
In my opinion, most people who own a home are already over-invested in real estate compared to the rest of their portfolio.
In my case, since I have a mortgage, I’m probably not over-invested in real estate, but it is the only portion of my portfolio that’s effectively leveraged.
What’s a good rate of return these days? I have access to a savings account that gives 4% APR for a certain amount, and after some cursory research it seems I should be trying to get the maximum I can in there before looking elsewhere (other than the company-matched 401k, obviously).
4% APR is outstanding for a savings account, and it’s probably better than anything you’re going to get on bonds in the near future, so I’d treat it as funging against that part of your portfolio. Stocks have historically averaged around 7% but are much more volatile.
> buy the S&P 500 fund with the lowest management fee
Would be the standard answer found on e.g. the personalfinance or investing subreddits. I personally have my (small, <$100k) nest egg entirely in VTSAX; I am at the beginning of my career and am risk tolerant. However, nobody should ever call 100% stocks conservative; perhaps a 50-50 mix of total-bond-market/total-stock-market funds would fit the bill. See the Bogleheads wiki for a good introduction.
On the subject, I have seen it suggested that this folk wisdom of “time diversification” is fallacious; AFAIK economists don’t actually have anything like a consensus on whether it is real or not. Also, there is a parable among professional investors that when your cab driver has a stock tip, it is time to sell. I wonder if a sort of index-fund bubble might not be happening (basically, broad asset overvaluation).
10 years of the best bull market in history may be causing people to forget themselves. The idea that you can just hold stocks for 30 years and it will “average out” to the historic mean could be a catastrophically wrong belief. Risk is real.
It all depends on your goals. What are your goals?
Have enough money to pay all my bills without touching the principal when I don’t work… like everyone else, right? As I replied to Nornagest, I have a house and car with no mortgage or car payment, and want to put my liquid assets away, either on the standard “for over 60” plan or for dividends.
Well, $225,000 will become about $1.7 million (in real dollars; it will be more in nominal terms due to inflation) if you match the stock market for thirty years. At that point, even drawing on the interest aggressively, you’re going to get about $70,000 a year. You could, of course, supplement that by guessing that you (at 80) are not going to live another thirty years and taking out 1/30th of the principal or something like that. That’s what your bog-standard retiree does and is pretty safe. Basically, buy index funds, and shift to bonds and such as you get older.
You can add about another million, and get to roughly $110,000 a year, by socking away another ten thousand dollars a year for thirty years.
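For anyone who wants to check that arithmetic, here’s a rough Python sketch assuming a ~7% real annual return (the usual long-run figure for stocks, not a prediction) and a 4% draw:

```python
# Rough reproduction of the arithmetic above, assuming ~7% real annual return
# (an assumed long-run figure for stocks after inflation) and a 4% draw.

REAL_RETURN = 0.07
YEARS = 30

def future_value(lump_sum: float, annual_contribution: float = 0.0) -> float:
    """Grow a lump sum plus yearly contributions for YEARS at REAL_RETURN."""
    balance = lump_sum
    for _ in range(YEARS):
        balance = balance * (1 + REAL_RETURN) + annual_contribution
    return balance

base = future_value(225_000)                 # ~1.7 million real dollars
with_savings = future_value(225_000, 10_000) # roughly another million on top
print(f"${base:,.0f} -> draws ~${0.04 * base:,.0f}/yr at 4%")
print(f"${with_savings:,.0f} -> draws ~${0.04 * with_savings:,.0f}/yr at 4%")
```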
The traditional rule of thumb is that the long-term safe withdrawal rate is 4% of your initial balance (as of when you start withdrawing), adjusted for inflation. Historically, that’s had something like a 99% success rate over a 30-year window (success defined as the balance never hitting zero during the window) and a >90% success rate at capital preservation (your balance at the end of 30 years >= your inflation-adjusted starting balance).
If you want to be ultra-conservative, use a 3% or 3.5% withdrawal rate instead.
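In case the bookkeeping behind that success criterion isn’t obvious, here’s a minimal sketch; the flat 5% return and 2% inflation series are made up for illustration, not historical data:

```python
# A minimal sketch of how the "4% rule" success criterion is usually checked:
# withdraw 4% of the starting balance, adjust that dollar amount for inflation
# each year, and see whether the balance ever hits zero over 30 years.
# The return/inflation series here is invented for illustration only.

def survives(returns, inflations, start_balance=1_000_000, rate=0.04, years=30):
    withdrawal = start_balance * rate
    balance = start_balance
    for year in range(years):
        balance -= withdrawal                      # withdraw at start of year
        if balance <= 0:
            return False
        balance *= 1 + returns[year]               # portfolio return for the year
        withdrawal *= 1 + inflations[year]         # keep withdrawal inflation-adjusted
    return True

# Hypothetical flat 5% nominal returns and 2% inflation:
print(survives([0.05] * 30, [0.02] * 30))  # True under these assumptions
```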
Since nobody’s mentioned it, one classic rule of thumb is “your age in bonds”. That is, your age is the percentage of your portfolio that should be invested in bonds, with the remainder in stock indices. This results in a gradual shift to safer assets as the day you’ll need them draws nearer.
Note: I am not an investment professional, the below is not advice.
How purely is this money a crash-hedge? If you could trade this money for a genie-wish to protect against black swan events, would you take it? (That is to say, do you have enough expected income without this money to cover expenses?)
The way I see it, you have two kinds of concern to cover; the base, prosaic concern of not having enough money in the usual case, and the crash-fear. Money spent in the S&P 500 fund will grow absent black swan events. You can set aside a portion of this money and invest it in bond funds, and you’ll be protected against 2008 come again (but not necessarily all other catastrophes).
If you have a 100% certainty that you’ll have Enough in the non-black-swan case, the rational thing to do is to put this money in bonds and money markets and other super-secure investments, and to set aside a portion of it for specific things like ultra-portable wealth in case of the vantablack-swan events. If not, then invest in proportion to those two concerns.
If you want it to be done rationally, then it’s just a math problem; put in the expected rates of return for the money in a stock fund vs. a bond fund vs. a slightly more complicated under-the-mattress deal, and look at the failure to get the stock returns as the price you pay for security.
Vanguard has several target date retirement funds (others do, too, but Vanguard tends to have low fees). They automatically shift the allocation of your money toward more conservative mixtures as you get closer to the target date.
Eg, the Target 2060 fund has this mixture:
– Vanguard Total Stock Market Index Fund Investor Shares: 54.1%
– Vanguard Total International Stock Index Fund Investor Shares: 35.9%
– Vanguard Total Bond Market II Index Fund Investor Shares: 7.1%
– Vanguard Total International Bond Index Fund: 2.9%
While the Target 2030 fund has this mixture:
– Vanguard Total Stock Market Index Fund Investor Shares: 42.1%
– Vanguard Total International Stock Index Fund Investor Shares: 27.7%
– Vanguard Total Bond Market II Index Fund Investor Shares: 21.4%
– Vanguard Total International Bond Index Fund: 8.8%
This depends on a lot of factors. Some things to consider.
Do you have children? If so how to do you want to handle sending them to college?
When would you need the money?
What other assets/liabilities do you have?
What level of income would you like in retirement?
How old are you? When do you want to retire?
What is your current income?
Are you married? Does your wife work?
When you say conservative, do you mean “cash funds only” conservative, or “venture capital funds are fine, I just don’t want to lose money” conservative?
Just to name a few questions. You’d probably want to be more diversified than just the S&P 500 though.
I’m going to go against the consensus here. I think the really optimal thing to do with the equity portion of your portfolio is likely to buy a good smart-beta (also called active beta) fund or ETF now that they’ve gotten so cheap. The evidence that exposure to momentum, value, and other factors can help a portfolio outperform is of comparable quality to the evidence in favor of the equity premium itself.
Unfortunately this takes a bit more sophistication than the standard “buy an index fund” approach since I think it’s harder to recognize quality. I haven’t done the research myself (my income is moderately correlated with smart beta performance, I don’t need more exposure to it), but Goldman Sachs has a decently well-regarded and very low-fee ETF, and the firm AQR has an excellent reputation. Vanguard has some too but I know nothing about those other than that I generally trust Vanguard.
All that said, buying an index fund is a close second best. And probably the most important thing is to invest sooner rather than later; if you take two months to pick a slightly better investment, the loss of value from not being in the market for two months is likely to dominate the performance improvement.
Well, obviously, pay off any high-interest debt you have. I assume you’ve already done that.
Consider taking out a mortgage on an additional house and using that one as your primary residence. You can then rent out the previous one.
This gives you two things that are very hard to get otherwise: free money from the government, and leverage. The free money is the tax credit you get for paying mortgage interest each year. The leverage is the massive amount of money you can borrow from the bank for a mortgage. The rate is quite reasonable too (that is also subsidized by the government).
Without leverage, you’re stuck just increasing your money a few percent each year at best. That keeps you in the middle income bracket, where you’re too wealthy to get helped much by the government, but not rich enough to really have financial independence. It’s better to make a riskier bet, because if you lose, you’ll get bailed out by the safety net, and if you win, you get to keep the winnings. Once you get enough money you can hire people who know the cheat codes for the tax system – in the middle class, you get screwed.
There is no tax credit for mortgage interest; it is a tax deduction. You get back $YOUR_MARGINAL_RATE*(mortgage interest paid). If you can reliably make more money than (1-$YMR)*(mortgage interest paid) by investing the borrowed amount, it makes sense to do it, but otherwise not.
The safety net requires that you lose everything. And it isn’t guaranteed anyway, though IIRC LMC isn’t a single man, so the odds are a little better than for many of us.
Re the mortgage interest deduction, even (marginal rate)*(interest paid) might overstate the real value of the deduction. The federal standard deduction for a single person is now $12,000. If your other itemized deductions total $10,000, then your real mortgage interest deduction is (marginal rate)*((interest paid) – $2,000).
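A worked example with made-up numbers, in case the algebra isn’t obvious (the 24% bracket and $8,000 of interest are just placeholders):

```python
# Illustration of the point above: the mortgage interest deduction only helps
# to the extent your itemized total exceeds the standard deduction.

def deduction_benefit(mortgage_interest, other_itemized, standard_deduction, marginal_rate):
    """Tax saved by itemizing versus taking the standard deduction."""
    itemized_total = mortgage_interest + other_itemized
    extra_deduction = max(0.0, itemized_total - standard_deduction)
    return marginal_rate * extra_deduction

# $10,000 of other itemized deductions, $12,000 standard deduction, 24% bracket:
print(deduction_benefit(8_000, 10_000, 12_000, 0.24))  # 0.24 * (8,000 - 2,000) = 1,440
```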
PS: As my aunt used to say, people get rich by getting interest, not by paying interest. So, borrowing money to buy real estate makes little sense, unless you can rent the property out at a profit. Even buying property in the hope of selling at a profit in the future might not work too well. Even in the midst of today’s boom, the real price of residential housing has merely doubled since 1974: https://fred.stlouisfed.org/series/QUSR628BIS. That is an increase of about 1.5% over inflation, which is not a great return. You could do as well with inflation-protected bonds or with lots of other investments.
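Back-of-envelope check of that 1.5% figure, assuming a roughly 45-year span from 1974 to now:

```python
# A real price that doubles over ~45 years implies roughly 1.5% annual growth over inflation.
doubling_years = 45                       # assumed span, 1974 -> ~2019
real_cagr = 2 ** (1 / doubling_years) - 1
print(f"{real_cagr:.2%}")                 # about 1.55% per year over inflation
```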
Not for this OP’s goals, but you ignore one advantage of real estate investing — you usually borrow much of the capital. Generally you can’t (or at least shouldn’t) borrow to get starting capital.
Certainly home ownership can be a bad investment, but it’s a rare investment where your down payment of, say, $100,000 can be earning that 1.5% over inflation on capital of $2,000,000.
Maybe I’m missing something, but aren’t you forgetting about the cost of borrowing? A handy online calculator tells me that if I take out a $1.9 million mortgage at 3.92% for 30 years (not that I will get that low a rate for such a big loan), I end up paying $3.2 million over the course of the loan. Plus, I am paying 20K per year in property taxes for 30 years – that is another $600,000. Property values would have to increase by a lot more than 1.5% over inflation to make up for that.
So, it is only a good investment if I can rent it for a decent price.
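For the curious, the $3.2 million figure falls out of the standard amortization formula; here’s a quick sketch (the rate and term are taken from the comment above, everything else is standard):

```python
# Rough check of the mortgage arithmetic above using the standard amortization formula.
def total_paid(principal: float, annual_rate: float, years: int) -> float:
    """Total of all monthly payments on a fixed-rate mortgage."""
    r = annual_rate / 12
    n = years * 12
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * n

loan = total_paid(1_900_000, 0.0392, 30)        # ~$3.2 million paid over 30 years
taxes = 20_000 * 30                             # ~$600,000 of property tax
print(f"${loan:,.0f} in mortgage payments plus ${taxes:,.0f} in property taxes")
```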
That comment corresponds very poorly to my understanding of the “cultural Marxism” claims. I don’t actually think it’s a very useful descriptor, but it’s more reasonable than it’s getting credit for.
Marxism tends to collapse a lot of complex economic relationships into a simple clash between two major groups determined by wealth (apologies for the oversimplification, comrades). Currently we’re seeing elements of the Left collapse a lot of complex social relationships into a clash between two major groups determined by individual identity/culture. Both are, to some extent, seeking to accomplish similar goals, like eradicating traditional/”family” values, and both justify short-term extreme action in pursuit of an eventual vision of equality. There’s a great deal of fretting over who gets classified as the powerful vs. the oppressed classes. I’m sure you could perform a similar exercise drawing parallels between most theories, but Marxism seems to fit particularly well.
I’m not sure it’s very charitable, or even correct, to describe it as a vast conspiracy theory (as the comment does). It might just be difficult to point at a successful march through the institutions in a way that doesn’t sound like a conspiracy.
There needs to be a word for people acting in their own self-interest in a way that looks like conspiracy but involves no actual coordination, e.g. various civil engineering societies releasing reports about how many more civil engineering projects we should be starting. The people involved are sincere, and no one is coordinating them, but the results are identical to what you’d get if a great secret cabal were dedicated to milking money out of the rest of the country on behalf of the engineers for purely selfish reasons.
Convergent motives?
Kenji Kamiyama, a Japanese anime director, did in fact coin a term for this: Stand Alone Complex. The complex refers to a collective behaviour towards a particular common goal, and it stands alone because it arises dynamically, without any coordination or organizing force behind it.
In the eponymous anime, the salient example of a stand alone complex involves around half a dozen people deciding to conduct a simultaneous assassination attempt on the Japanese Prime Minister at the same event, each while claiming to be the same famous criminal. The level of apparent coordination is such that the police initially suspect large-scale brain hacking (it’s a cyberpunk setting), and are baffled as they slowly come to the conclusion that not only is every single one of them acting of their own free will, but they did so individually and with no awareness of the others.
This is of course all rather far-fetched; it’s a sci-fi anime, after all. However, the more mundane case of multiple organizations with similar interests acting without coordination towards the same goal is itself an example of a stand alone complex.
“Tacit collusion” is very close to what you are talking about, though I think it implies more self-awareness on the part of the colluders.
This is the no culture war thread.
In which Scott linked two very CW-inspiring comments/essays. Though the hidden OTs seem largely automatic right now, and though I know he often recommends “comment of the week” in the non-hidden OTs, if he wants to keep CW discussion limited to the hidden OTs, he should probably recommend CW-related links in those, or else link threads, assuming those are CW-safe.
Yes, but I think it was very unsporting of Scott to link to a culture war topic in the no-culture war thread so we can’t discuss it. I think Simon_Jester’s comment is completely wrong. The first response from Ambi underneath it is correct.
I’ve got some armchair thoughts on “Cultural Marxism”.
The uniting theme that I see running through the various ideologies given this label is the belief that entitlement is defined by need. If you need something badly enough, you are entitled to get it, and if those who could give it to you won’t, or will only do so conditionally, they are oppressing you. For example, if you are a woman who needs a man’s resources or favour, and the man will only provide them if you put out, you are oppressed. Or you are trying to get into a better country than your home, but the natives don’t want you unless you assimilate. Or you need money to pay for your healthcare.
Of course such an attitude can’t exist in a free-for-all form. There needs to be a hierarchy of whose well-being is more important. For example, you can’t simultaneously claim that women are oppressed by men who will only share resources with exclusive partners and that men are oppressed by women who take their resources but won’t put out. Women’s needs tend to have priority, so men are expected to do them favours and satisfy their needs as part of “basic decency”. LGBT folks or ethnic minorities need “representation” to feel better about themselves, so you owe them roles, but they don’t owe white heterosexual males a slot in their culture. That kind of thing. It started as a social-democracy sort of deal, with material entitlements, but grew into the cultural and emotional sphere.
This runs opposite to the more “capitalist” mindset, where you have no entitlement at all and must exchange what you have for what others have, and to the more “conqueror” mindset, where you simply victimize others, their victimization alone being proof of the superiority that entitles you to things. This mindset also lacks a true shared front, devolving into a squabble over one’s place in the hierarchy of needs.
I wouldn’t call it Marxism though, because OG Marxism advocated the destruction of (what it perceived as) parasites, while Social Democracy was a reaction to it, with the parasites saying “You don’t have to destroy us, you too can become parasites”.
The other two examples you gave are fine enough, but this one seems very distasteful. You seem to present a scenario where men are interacting with women mainly as prostitutes, nothing more, and then blame the *women* for being dissatisfied with this arrangement. Sure, women shouldn’t feel entitled to someone else’s resources and favor without providing anything in return, but maybe the situation would be better resolved by the men accepting something other than sex in exchange? Would you not feel, if not “oppressed”, incredibly used and disrespected if the people you interacted with regularly demanded that you let them fuck you (in a way that is not guaranteed to be pleasant) in exchange? Not a great example of “entitlement”.
Very few people would request that I fuck them in lieu of some other form of exchange. I think you’re looking at this from the wrong side — offering sex for value is an option available to attractive young women, but hardly anyone else.
This assumes that there is no retaliation if someone asks you for sex and you decline, which is… not always the case.
Sure, but is it more likely than retaliation if someone demands money from me and doesn’t get it?
Eh. We’re discussing too broad a range of scenarios for the question to be meaningful, probably, as with virtually all discussions of entire genders.
Well, in online games, the common response to “I’m a girl 🙂 Can I have some free gold? :-)” is usually “tits or GTFO”. The idea being, if you are asking for free stuff just for being female, then you need to a) actually prove it, and b) provide something in exchange. Furthermore, since you do not appear to have any marketable skills and are actively invoking your sex/gender instead, your best option is to provide something that is uniquely marketable with regard to your biology.
@stillnotking as well
If we are literally talking about “I’m a girl, can I have some free gold?”, then sure, that’s entitled. And you can probably generalize this further to “hot young women use their looks to get free drinks, etc”.
But I don’t think I’ve ever seen *anyone* suggest that women are oppressed because they asked for a free drink and didn’t get it. The complaint seems more likely to be the reverse — “A man gave me a free drink unprompted and now demands that I pay him back with sex”.
The situation that ARabbiAndAFrog described, where the woman *needs* a man’s resources but the man is only willing to accept sex in exchange, sounds more to me like a landlord demanding a woman sleep with him if she doesn’t want to get evicted than a chick at the bar batting her eyelashes for free drinks.
It was mostly a description of marriage, as dysphemistic as possible.
Even in marriage, I’d hope your wife has more to offer you than just sex. For a cynical and transactional view of marriage that sounds much less sexual-assault-y, you could at least have the man requesting childcare and housekeeping in exchange for paying bills.
Well, sex is what sets your wife apart from a maid or a nanny, doesn’t it?
Although originally, besides marriage, I was describing a broad spectrum of conceivable relations where a man gives his resources to his sexual partners and withholds them from women who refuse to be his sexual partners. Which feminists (adherents of the mentality I outline) tend to condemn, whether it’s traditional marriage, trying to woo someone with favours and breaking up with them upon realizing you’ve been friendzoned, or prostitution (it was weird how many feminists I saw opposing prostitution, but after formulating this description it makes much more sense – they dislike any norm under which a woman is not supposed to accept payment if she isn’t willing to put out).
In the modern understanding of the word, possibly that is it. Marriage may also involve a permanent combining of financial interests (which makes certain financial arrangements, like jointly purchasing a house, much more reasonable) and the likelihood of sharing moral and legal responsibility for children–no one comes after the nanny for things the children do after she clocks out, contra the parents.
I think we are getting off course with discussion of this specific wording and how rapey it is. What I was getting at is that women tend to have extra needs and are sometimes worse at earning their keep, particularly during and after pregnancy, and many of them have to rely on a man, or on men, to plug a hole in their ability to support themselves during that period. Traditionally this is supposed to be covered in marriage, but marriage is denounced as oppressive to women because it forces obligations on them in exchange for receiving the resources they need. Empowering women, enabling them to divorce their husbands or to live a promiscuous life in general, would for most require a system of extracting resources from men, either through child support or welfare taxes.
This forcible redistribution makes women more free, while unwillingness to pay for it is vilified; trying to set any kind of conditions, like, say, being willing to support only one’s exclusive partner and one’s own children, is denounced.
Giving resources to sexual partner(s) and withholding them from non-sexual partners isn’t quite what you described, since this wording implies that the man is initiating the offer of resources. If you take a girl on a fancy date and then she decides she’s not interested, that sucks but it’s the kind of risk that’s inevitable when looking for a partner. If you feel like you’re putting a bunch of time, effort, and resources into a relationship that isn’t being reciprocated, then it’s fair to be upset, but if the only reciprocation you’re interested in is sex, then you want a prostitute, not a girlfriend. If you’re happy with the rest of the relationship but just wish she’d put out a little more, then it’s not entitled of her to say no, it’s just a relationship incompatibility and you can either look for ways to increase her desire as a couple, or decide to call it off.
On the other hand, if the situation is closer to “a woman is willing to obtain some resource / service the usual way, by paying for it, but the man refuses to accept anything other than sex in exchange”, then yeah, that’s gross and exploitative if it happens once, and oppressive if it happens regularly.
(I actually only just noticed ARAAF’s reply at 3:26pm)
I think “marriage is inherently oppressive to women” is a much more niche and radical position than any of the others that you listed, so much so that I didn’t even recognize that as the position being criticized. (I got that it was probably implying marriage, but the most charitable reading of your wording imo was still “wives are basically long-term prostitutes”.)
Actually the position being criticized is “Women deserve their ex’s money even if they no longer service them in any way” alongside more generic welfare consumption.
This is an example of the “entitlement comes from need; trying to leverage this need is oppressive” mindset I am trying to describe – your ex-wife or baby-momma or even some random woman needs your money, so the state will extract it from you on her behalf, and if you complain about not getting anything out of it, you are an entitled and evil oppressor.
In the same vein, “Women have certain power over male behaviour because they control which behaviour is rewarded with sex and companionship and which is punished by loneliness and celibacy” is absolutely, trivially obvious to anyone of the “capitalist” mindset, because of course they do, how could they not, but for feminists it seems to be pretty upsetting, because in their mindset it paints them as the real oppressors.
Please move the CW discussion to a CW-allowed thread.
@mdet
I don’t see any claim that this is all there is to the relationship. It seems to me to be a completely normal complaint that a certain aspect of a relationship is unfair. For example, if a friend doesn’t have a car and always depends on you to drive places and to pay for the gas to get there, this is a valid thing to be upset about and want to see redressed. Such a redress can be fair without being equal (like the other person paying for gas half of the time and then also doing something else to make up for you having to chauffeur).
Such a complaint doesn’t mean that your relationship is solely about driving places. It merely means that there is an issue of (un)fairness in this part of the relationship that you want addressed.
Attacking such a complaint as treating people merely as X seems to usually be a covert attack on the validity of the needs of the other person. The problem I have with such attacks is that they diminish the needs of others based on a false accusation (that this one important aspect is all one cares about) & that they never seem to be applied fairly.
For example, you can similarly (unfairly) attack the common female complaint that men should do more in the household by accusing women of ‘interacting with men mainly as housekeepers, nothing more, and then blame the *men* for being dissatisfied with this arrangement.’
Yet I’ve never seen the people who attack some men for supposedly seeing women as prostitutes also attack some women for supposedly seeing men as housekeepers.
Note that this example actually seems quite valid:
– women desire cleaner houses and men desire more sex, so in both cases there is a mismatch in desire
– many relationships have a quid-pro-quo where men do more outdoor/large chores and/or put more effort in earning income, while women do more housework
– many people feel like the other person is obliged to adapt to their own standards of cleanliness, rather than recognizing that one standard is not automatically better than the other, just like many people feel that the other person is obliged to adapt to their sex drive.
Sex is one of the strongest desires for many people and a certain level may be required for the well-being of a person. Imagine that a man would say: I don’t want to spend a lot of time with you, but I’ll give you lots of money instead. I think that many women would not consider that acceptable below a certain level of time spent together, as they need that contact and it is not really replaceable with something else.
Even to the extent that a woman could provide other things, the value of sex may be so high that those alternatives would have to be very impressive and costly. Now, obviously some women are highly offended by how highly men tend to value sex compared to other aspects of a relationship (like companionship), but many men are also offended at how much value women tend to place on their earning ability, their stoicism, etc.
As far as I can see, there are only two options here. Either we respect that men and women have different needs & thus that some unequal-but-fair quid-pro-quo’s are needed to provide a similar level of satisfaction of needs, or we call the needs of one gender ‘normal’ and then vilify the other for having different desires.
The latter seems quite popular now.
Not if they made similarly burdensome sacrifices for me in return. Note that civilization, government, work and relationships all require burdensome sacrifices.
If you merely want others to make sacrifices for your benefit (or even refuse to recognize that they do), but refuse to make sacrifices for their benefit, then you are not fighting against oppression, but for it.
ARabbiAndAFrog, this is the “no culture war” open thread.
Question for SSC generally: I was aware that this was a No-CW thread when I replied, but thought that “Don’t make statements / use wording that implies that women should generally be treated as prostitutes” had enough consensus behind it and was an important enough norm to police that I could push back against it without necessarily contributing to a mindkilling culture war. Did I overstep with my initial reply? With my subsequent replies? Are there certain norms that are still worth replying to enforce, even in a No-CW thread? (Say, if someone starts advocating murdering political opponents, it’s not necessarily succumbing to mindkill to say “Hey don’t murder people”)
At first I was going to say you engaged in a way that would foreseeably lead to culture warring. But then I reread your posts, and they do sound like you weren’t expecting culture war.
So I guess I’d say you mispredicted what kind of interaction you were setting up. Honest mistake. (Also a chance to refine your predictions going forward).
I wouldn’t say “mistake”; I think the conversation went about as I expected / hoped — I think we settled that what ARAAF intended to get across was not the meaning that I read in their words, and they ended up restating themselves in a way that acknowledged and avoided my objections. But I guess that was luck, since if they had doubled down on the meaning I had read, then my replies would’ve been throwing fuel on a CW fire. So point still taken to avoid this type of reply in the future.
Replying to CW topics is discussing CW. If we were to adopt a rule that pushing against CW claims is fine so long as there is broad consensus behind you, that is tantamount to declaring that CW restrictions only apply to unpopular opinions. Moreover, if someone were to find your reading of ARabbiAndAFrog’s post to be inaccurate and uncharitable, they would be inclined to raise some argument about it, and argument about CW subjects violates the no-CW rule.
In short, the correct response to CW in the non-CW thread is not, “Surely everyone agrees with my objection to this.” The correct response is, “Please no CW in the no-CW thread.”
I couldn’t have put it better. All my +1 on this one.
I think you’re picking up the negative vs. positive rights distinction. This is a real and important thing.
I don’t think it’s a good definition of the belief cluster Cultural Marxism is trying to gesture at, though. A lot of the people you’d call Cultural Marxists put a lot of effort into arguing that victim groups are victims of negative rights violations.
If I had to pin it down, I’d say it’s more a matter of skepticism of any line of reasoning whose conclusion benefits those currently in power. c.f. the “Method and social epistemology” section of the Marxplainer Scott linked. Social justice types and Marxists have different ideas of who exactly “those currently in power” are, but I think that section is something both groups would agree on.
If a country like the UK declared all rented accommodations to legally be signed over to their occupants, would this cause a recession, or could you get away with it?
I think aspects of capitalism, particularly those to do with land and housing, need to be reset now and then to pull them closer to a distributist ideal of property, but how do you do so without destroying the economy and sending everyone spiralling into poverty? Is there any way to enact this plan gradually enough that it doesn’t tank the housing market and then send the banks under, or do we need to sort out how dependent the banking system and national GDP are on housing prices to begin with first? Then the question is how we might do that.
I frequently have this same thought about New York. And I always wonder what the relative differences would be if:
All buildings become co-ops owned by a corporation that is itself run jointly by the inhabitants of each apartment.
vs.
All apartments become independent condos and the shared amenities of the building are dealt with in some other way, perhaps as bundled assets for management companies purchased by resident/owners.
How would those who became co-owners of a split-up brownstone compare to those who now co-own a hi-rise?
I find just imagining the resulting maelstrom fascinating: the mad scramble as people try to reconsolidate their holdings, the general economic chaos. I don’t have an answer for you; I’m just pleasantly surprised someone’s had the exact same thought.
I can’t edit the main post, but I just realized this might be too culture war although I was asking an economics question. In my defense the OT comment of the week was about communism.
You’d bankrupt millions of people. And how do you decide ownership shares? If I own a 2-bedroom apartment and am renting out the guest bedroom, how much of my apartment would my tenant be entitled to? Does it make a difference if they’ve been renting for a day or for 10 years?
IMO, you wouldn’t need to force that the accommodations were to be sold to the renter, just make the practice of renting itself illegal. Give everyone a fairly long time to get ready for this change, but fully outlaw the practice of paying money on a monthly basis in exchange for letting someone occupy a space. Then make a second law that says existing mortgages on houses previously being used as rentals would cap at a certain % of the final sale price of the property that would scale by how much of the loan was already paid off.
Speaking as a landlord, here is what I can guess would happen:
1. The only reason I can afford to own property is that the rent I’m paid, by and large, covers the mortgage I have on the house. If I didn’t have someone in the place paying me, I couldn’t afford to keep the property. Therefore I would be motivated to sell.
2. Yes, every other landlord in America would be in my same boat, so likely housing prices would plummet. While there would also be a substantial demand increase (all the renters now need to buy), I’m pretty certain that prices would level out well below where they are now, at least in areas previously saturated with rental properties.
3. The second aspect of the law would ensure I made at least some of my money from the house back, so I’d be less well-off, but financially fine. I guess the bank is taking it in the shorts here, but that could be offset by the fact that the people who used to pay me rent are now paying the bank a mortgage.
While there would be certain classes of people who are wiped out by this change (I’m thinking large-scale real estate investors), I also don’t really care about them. Overall I’m surprisingly in favor of this new law. Hand me a petition and I’d sign it.
This would be a huge difference, and lead to much worse outcomes for the country.
For example, take my situation six years ago when I was moving cross-country and didn’t know how long I’d be staying in the Seattle area. As it was, I signed a lease and decided if I didn’t like things – whether the individual apartment or the area – I could reevaluate after several months. If renting were illegal, I’d need to buy a house, which would be a much larger financial investment, require much more time to research in advance, and put much more money at risk if I decided to leave the Seattle area a year later.
Or, take someone I know who’s about to be on a short-term work assignment for a few months. As it is, she’s renting an apartment month-to-month. If renting were illegal… what would you recommend? Should she invest tens (or hundreds) of thousands of dollars in buying a house or condo when she knows perfectly well she’ll only be living in it for a couple months?
Forward Synthesis advocates a one-time reset and then allows people to lease out homes going forwards. I’m not sure how good that scheme is on balance, but it’s got advantages. Your scheme, in contrast, would act as a continued barrier against people who have reasons to look for temporary accommodations.
This. I’ve always rented, because until recently, I wasn’t sure I’d be in the area long enough for buying to make any sense. My last apartment had water in it five times in the 10 months I lived there. If it had been my property, I’d have had to fix the problem when it occurred, and it might have made it hard to sell. Instead, I just didn’t renew my lease and moved to a house that was better-located and had more space.
I’m sure that there would be workarounds to any “renting is forbidden” law. Maybe I “buy” it, but with a complicated contract that gives me the right to sell it back to the company that previously owned it after a specific time, and they also offer to loan me the money at a specific rate, which is slightly higher than what rent used to be, because of all the new overhead.