This is the twice-weekly hidden open thread. As the off-weekend thread, this is culture-war-free, so please try to avoid overly controversial topics. You can also talk at the SSC subreddit or the SSC Discord server.
Dr. Laura Baur is a psychiatrist with interests in literature review, reproductive psychiatry, and relational psychotherapy; see her website for more. Note that due to conflict of interest she doesn't treat people in the NYC rationalist social scene.
80,000 Hours researches different problems and professions to help you figure out how to do as much good as possible. Their free career guide shows you how to choose a career that's fulfilling and maximises your contribution to solving the world's most pressing problems.
Seattle Anxiety Specialists are a therapy practice helping people overcome anxiety and related mental health issues (e.g. GAD, OCD, PTSD) through evidence-based interventions and self-exploration. Check out their free anti-anxiety guide here.
Altruisto is a browser extension that directs a portion of the money you pay when shopping online to effective charities, at no extra cost to you. Just install the extension and when you buy something, people in poverty will get medicines, bed nets, or financial aid.
Metaculus is a platform for generating crowd-sourced predictions about the future, especially science and technology. If you're interested in testing yourself and contributing to their project, check out their questions page.
The COVID-19 Forecasting Project at the University of Oxford is making advanced pandemic simulations of 150+ countries available to the public, and also offers pro-bono forecasting services to decision-makers.
Norwegian founders with an international team on a mission to make the equivalent of a Norwegian social safety net globally available as a membership. Currently offering travel medical insurance for nomads, and global health insurance for remote teams.
Jane Street is a quantitative trading firm with a focus on technology and collaborative problem solving. We're always hiring talented programmers, traders, and researchers and have internships and fulltime positions in New York, London, and Hong Kong. No background in finance required.
B4X is a free and open source developer tool that allows users to write apps for Android, iOS, and more.
Beeminder's an evidence-based willpower augmentation tool that collects quantifiable data about your life, then helps you organize it into commitment mechanisms so you can keep resolutions. They've also got a blog about what they're doing here.
The Effective Altruism newsletter provides monthly updates on the highest-impact ways to do good and help others.
Giving What We Can is a charitable movement promoting giving some of your money to the developing world or other worthy causes. If you're interested in this, consider taking their Pledge as a formal and public declaration of intent.
MealSquares is a "nutritionally complete" food that contains a balanced diet's worth of nutrients in a few tasty, easily measurable units. Think Soylent, except zero preparation, made with natural ingredients, and looks/tastes a lot like an ordinary scone.
Support Slate Star Codex on Patreon. I have a day job and SSC gets free hosting, so don't feel pressured to contribute. But extra cash helps pay for contest prizes, meetup expenses, and me spending extra time blogging instead of working.
AISafety.com hosts a Skype reading group Wednesdays at 19:45 UTC, reading new and old articles on different aspects of AI Safety. We start with a presentation of a summary of the article, and then discuss in a friendly atmosphere.
Substack is a blogging site that helps writers earn money and readers discover articles they'll like.
Why the section of bridge collapsed. There were some features which seemed like a good idea at the time, and a lack of maintenance.
I forgot to comment after seeing this originally, but thanks for posting this. I hadn’t seen it, and passed it around the office, as there’s a few of us who were interested in this collapse.
It’s interesting that both this bridge and the FIU bridge are concrete structures with a lack of redundancy. Steel bridges had some catastrophic failures due to lack of redundancy (the Silver Bridge being the most notable), which drove some changes in the design philosophy of steel bridges.
The biggest one is the designation of a “Fracture Critical Member”, which is a structural member with three properties: 1) It is in tension, or has tension in it (bending in a beam) 2) Fracture of the member will result in collapse of the bridge and 3) The member is steel.
The last one is an artifact of most spectacular failures being steel bridges, because of fracture due to fatigue. Prior to fatigue research in the 60s and 70s, it wasn’t recognized that some common ways of detailing steel bridges were exceptionally prone to fatigue, so a lot of bridges had to be rehabilitated or replaced. Concrete never went through this, I think in part because it’s naturally more difficult to do non-redundant designs in concrete, and partly because all of the real boundary-pushing from the 30s to the 60s was in steel, because it has a higher capacity-to-weight ratio. However, I’m wondering if that will change, and you’ll start to see a “fracture critical” designation come up for concrete structures.
Anyone else here read Backwoodsman magazine?
Tabletop RPG rationalists, do you use miniatures and a table map when playing face-to-face?
I’ve played tabletop RPGs a couple times (a one-shot and the first part of a longer campaign where I dropped out halfway through), and we used miniatures for battle but nothing else. From all appearances, that was my GM’s normal habit.
As a sidenote, when I was a kid, my sister and I played games with our stuffed animals that could almost be called RPGs. We didn’t have any combat rules, but we played out social interactions and settled fights between the animals with “what sounds like it’d be a nicer story” or “who can tell it more convincingly.” By the time I was twelve or so, the stuffed animals themselves weren’t always making an appearance – though they still usually were until we grew out of it.
Depends on the system. With most stuff, I don’t bother, but recent versions of D&D are sensitive enough about spatial relationships in combat that I find they play better with at least tokens and a map. Miniatures are optional, though, and arguably counterproductive (they never look quite right): my characters have been chess pieces, coins, bottlecaps, AA batteries, and in one case an empty bottle of the Thai version of Red Bull.
Historically, no. But after some system pretty much required it, we kept doing it for combat and pretty much anything else where positioning was relevant, because it reduced the amount of misunderstanding and clarification substantially.
I enjoy painting and creating miniatures and terrain so yes, but I also find that having various bits and pieces on the table encourages my players to get creative. They take better advantage of terrain heights, bottle-necks, and involve the furniture more in their fights.
The physicality of the pieces over a wet-erase maps is nice because players can knock over, move, and use the actual objects as reminders of things to investigate. That and I do not have to draw everything and if the players go in a surprising direction I can whip together rooms and terrain using existing pieces rather than having to draw it out (erasing what I currently had drawn out).
That said, I still mix it up with maps the players have to draw or purely theater of the mind encounters, sometimes incorporating a gimmick, but always to keep the players from getting complacent. I make up a lot of stuff as I go, and like to build an aura of mystery so they can never be certain what I pre-planned and what I made up on the spot.
For example, one of my dungeons, Hero’s Tomb, where the players
Where the players what? Does it just trail off or say AAAAAAUGH here?
Whoops, I changed around the order of things and deleted stuff but kept the link to show what I meant.
The players had to go through a series of puzzles, tricks, traps, and fights that allowed them to backtrack, modify the terrain, and try to turn the furniture to their advantage.
Back when I DM’d 3.X and Pathfinder I would always use grids and tokens (usually spare change) when I ran combats. It had mixed results: certain climactic fights needed that level of tactical precision, but an ordinary brawl or a fight against a mob* would drag when people started counting squares.
Now for my upcoming 5e game I’m going to use a hybrid approach. Theater of the Mind and a hand-drawn map of the terrain for any normal fight, up to a Deadly CR, only breaking out the battle map for “boss” enemies with Lair and/or Legendary actions. Hopefully that will allow me to keep the pace quick during normal combats while letting the big showcase fights stand out.
*The mob template was one of my favorite innovations of 3.5e. Modelling a horde of orcs as a swarm of medium creatures makes intuitive sense and allows for low level enemies to stay relevant a little longer than normal. Bounded accuracy and 5e’s mob rules make this obsolete but it was a cool idea nevertheless.
As Nornagest says, it will depend heavily on the system. I have multiple “layers” of physical map usage, based on how heavily the game depends on precise positioning.
At the high end, you have D&D. Every edition benefits greatly from the use of miniatures, and it is all but required in 3 or later (including Pathfinder). However, D&D is not the only game that requires or greatly benefits from a grid. Pokemon Tabletop United uses one, for example.
Next there are games where relative position matters, but exact distances and such are typically not needed. For these games, I’ll sketch out the area and place miniatures or markers about where people are. This is most common with zone systems (such as Fate or Exalted), but also finds good use with cover-based combat systems where ranges are longer than the effective map size (such as Dark Heresy or Shadowrun). Basically, if the system benefits from being able to tell whereish people are at a glance, use a zone map.
Finally are games where a map just isn’t needed. World of Darkness tends to fit this for me, as do most combat light games. In a system where combat is abstracted or even absent entirely, a map is barely useful and a grid entirely unnecessary. To give an extreme example, there is absolutely no point in trying to use miniatures and a map when playing Golden Sky Stories.
Yeah, this is the correct answer.
D&D/Pathfinder/retroclones all have wargame-inspired movement increments and attack types, so IME miniatures always get pulled out because any GM narrative wouldn’t be complete enough for players to feel they had “fair” information to make a tactical decision.
If you have long combat ranges, I suppose I’d pull out a dry-erase grid and draw abstract zones and cloud-squiggles of cover. And if there’s no combat or it’s something like WoD, you should be standing and walking as you role play your character’s behavior rather than staying seated at a table.
I’m not a rationalist, but no.
I’ll fill in the last quadrant with “not a rationalist, but yes”
Oh and as for me, I own over 1,000 plastic miniatures to pull out for D&D. This would be insane if you used RPG minis and you’d be better off using tokens instead, so the trick is to source 1/72 figures, which can be done for 4-25 cents. Eagle Games sells sets of ~50 Egyptian, Greek or Norse figures from their Age of Mythology board game for $2/set starting Black Friday, Twilight Creations produces a variety of zombie animals (and too-modern human zombies) and Deep Ones by the hundred and an inexpensive Cthulhu! board game with 75 cultists & 25 Byakhees, Alliance fantasy wargaming figures can be found on eBay for $8 for boxes of 40+, and if you don’t mind cheap-looking dragons… 😛
For people who — unlike me — didn’t barely eke by in high school physics by copying test answers, here’s something I’ve wondered while somewhat nervously re-racking weights after squats:
If a full-sized 45lb barbell is supported horizontally by catches in a standard power rack, how much weight would have to be stacked on only one side of it to make it lift up and slide off or tip over one of the catches?
Assume all plates are slid onto the barbell as far as they’ll go, and that lighter/smaller plates are always slid on after heavier/bigger ones. Assume also that plates magically do not slide off by themselves once the bar starts tipping upward.
I can answer this based on experience: three plates on one side and no plates on the other will very likely cause the bar to tip as you described (squat racks vary in geometry slightly). With four plates it’s a guarantee. Two plates can cause it to tip if you allow one of the plates to fully rest on the bar near the end rather than by the load sleeve collar, say by starting to pull it off and then getting distracted.
If you’re squatting heavy, you absolutely should not put more than two plates on each side at a time. One plate per side at a time is a perfectly reasonable safety precaution, although not necessary.
Good to know. It sounds like I’m not strong enough to squat or press the kind of weight that would cause my bar to tip while re-racking one side at a time.
And answering based on some theory: the point at which it would begin to tip would be the point where the weight on one side of the rack is more than the weight of everything on the other side of that rack. So a 45lb bar should start to tip when there’s >45lbs on one side, right? Given other little forces I’m probably overlooking I’d bet that you’d have to go a bit over 45lbs before you really noticed it — I’d guess that with ~55lbs on only one side, you’d see definite tipping but not falling off.
Not exactly. It tips when the center of gravity is to the outside of the rack. This means you have to calculate moments, which is not that hard as there’s basically only one dimension to worry about, but it does require you know where each weight is and where the rack’s point of contact with the bar is.
For each weight on the heavy side, multiply the distance from the point of contact with the rack times the weight and total these values — call this OH.
For each weight on the light side, multiply the distance from the point of contact with the rack _on the heavy side_ times the weight and total these values — call this OL.
Now take the distance between the point of contact with the rack and the center of the bar, multiply that by the weight of the bar, and add that to OL. If this total is less than OH, your bar is tipping.
The rack support has positive width; you can use the outside of the rack support as the relevant point of contact (because that is what it will tip around).
In other words, what we are dealing with is a lever. As in “give me a lever long enough, and a place to set it, and I can move the world.”
The contact point with the rack on the heavy side is the fulcrum. The reason that the bar doesn’t tip immediately when you rack a weight is that the length of the lever on the light side (which could be roughly estimated as half the length of the bar from the fulcrum to the end, if you assumed the weight was evenly distributed) is many times longer than the distance from the collar to the weight plate on the heavy side.
The Nybbler’s answer contains all the necessary physics. To give you an exact answer we’d need to know things like the length of the bar, how close the plates go to the rack support, even how wide the plates are.
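To make the moment method above concrete, here's a quick back-of-the-envelope script. Every dimension in it (bar center to rack support, where the plates start, plate thickness) is a guess at plausible gym geometry rather than a measurement, so the exact tipping counts are illustrative only:

```python
# Tipping check for a racked barbell, taking moments about the
# heavy-side rack support (the fulcrum). All dimensions below are
# guesses at typical gym geometry, not measurements.
BAR_WEIGHT = 45.0     # lb
FULCRUM = 22.0        # in, bar center to outside edge of the support
SLEEVE_START = 25.75  # in, bar center to where the plates begin
PLATE_THICK = 1.5     # in, thickness of a 45 lb plate

def plate_centers(n):
    # Center of each plate, measured from the bar's center, assuming
    # plates are slid all the way in against the collar.
    return [SLEEVE_START + (i + 0.5) * PLATE_THICK for i in range(n)]

def tips(heavy_plates, light_plates=0, plate_weight=45.0):
    """True if the loaded bar tips over the heavy-side support."""
    # Tipping moment: each heavy-side plate's weight times its
    # distance past the fulcrum.
    heavy = sum(plate_weight * (x - FULCRUM) for x in plate_centers(heavy_plates))
    # Holding-down moment: the bar itself (weight acting at its center)
    # plus any light-side plates, all on the far side of the fulcrum.
    light = BAR_WEIGHT * FULCRUM
    light += sum(plate_weight * (x + FULCRUM) for x in plate_centers(light_plates))
    return heavy > light

print(tips(3))     # False with these guessed numbers (real racks may tip sooner)
print(tips(4))     # True -- four plates tip it
print(tips(4, 1))  # False -- one plate on the light end holds it down again
```

Note how a single plate on the light end adds a large holding moment, since it sits a long way from the fulcrum, which matches the point that plates on the light end count for a lot.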
My hunch is that the intuition of a non-physicist gym-goer will be worth more than a back-of-the-envelope calculation by a physicist.
One thing I will say, though, is that any plates on the light end count for a lot. You might find, as sfoil says, that three plates can tip a bar that has nothing on the other end, but just putting one plate on the light end might allow you to put five or six on the heavy end.
Apparently pistachios have very high levels of naturally-occurring melatonin. I’m going to try this next time I’m dealing with jet lag.
Late Bronze Age effortpost: Milawata, and Greek culture in Hittite cuneiform.
Now that I’ve covered all the cities of mainland Greece, let’s look at Miletus on the Aegean coast of Asia Minor. Called Millawanda and later Milawata in Hittite tablets, the latter name also appears in Linear B tablets from the archives of Pylos and Thebes. Archaeologists have found “Minoan” artifacts at the site of archaic/classical Miletus (strengthening the identification) that they date as early as 1900 BC, a century after Crete’s First Palace Period began and well before Cretan presence in Greece. When Mycenaeans took over Knossos, Millawanda passed into their control.
The first Hittite references to Millawanda occur circa 1320 BC, when it supported a rebellion against King Mursilis II after his second campaign season. King Uhha-Ziti of Arzawa called the new Great King “a child” for demanding the extradition of defeated princes who fled to him and formed an anti-Hittite alliance with Seha River Land and Ahhiya (i.e. Achaea, which had a pan-“Hellenic” sense in Homer rather than the north coast of the Peloponnese). Uhha-Ziti was defeated and Millawanda has an LH IIIA destruction layer followed by Hittite-plan fortifications.
The city is then mentioned in the “Tawagalawa letter”. Unfortunately this was a multi-tablet letter of which only the third has been found, but the author is generally believed to be Hattusilis III, Mursilis II’s youngest child, and from it we know that Millawanda was ruled from Ahhiya, had a governor named Atpa and his/the city’s territory included a place called Atriya (Atre-land – note that Atreus’s family was said to be from Asia). The titular Tawagalawa, brother of the King of Ahhiya, has been suggested by numerous scholars as a Hittite rendering of “Eteokles”, as it would have been in the Bronze Age before Greek dropped most w-sounds (e.g. wanax -> anax).
The real purpose of the letter, though, is another Hittite extradition request. It seems that an adventurer named Piyama-Radu had made himself king of Wilusa (Ilium) but had lost the throne in a conflict with the Hittites. He was now an exile in Ahhiya and the Great King of Hatti wanted him, promising “my brother” of Ahhiya safe conduct. Amazingly, he says further that “these days we have an agreement on Wilusa, over which we went to war.”
Finally, the name of the city shifts in the “Miliwata letter”, now closely matching the Linear B spelling. This letter demands that the recipient client ruler resolve a dispute over hostages, turn over fugitives from Hittite justice, and turn over a pretender from Wilusa to a Hittite envoy so that the Hittites can reinstall him as king there. The letter reminds the recipient that his father had turned against the Hittite king, who then installed the recipient as king in his father’s place. It also alludes to Piyama-Radu as a troublemaker of the past.
Not surprisingly, Miletus then has a destruction layer associated with the Sea Peoples.
I still have a post to make on Crete to wrap up Greek cities before the Bronze Age collapse, but first there’s more cool stuff from Hittite cuneiform! So next time: Ilium!
I just want to say I really enjoy these posts.
It’s taught me quite a bit, and I’m someone who studied the Bronze Age Aegean civs, geeks out at the Ashmolean any time I’m on the right continent, and has an archaeologist cousin who worked on sub-Minoan/Mycenaean influence in the early Apulian culture of the Pre-Geometric period.
Highly looking forward to your description of the collapse, especially your views on the System Collapse theory that’s in vogue right now.
I’ve never read Tainter, who apparently came up with Systems Collapse theory, but I’ve read Cline (1177 B.C.: The Year Civilization Collapsed), and his version seems like a sound scientific hypothesis. It sounds about right: civilizations don’t necessarily respond to climate change (economically-significant climate change being a known factor at least in the area pertinent to the Sea of Galilee) by collapsing. However, drought would be very helpful as a causal factor for the scale of the Sea Peoples attacks, and the “palace economies” lacking flexibility would contribute to their success at destroying civilization.
So it’s a hypothesis I vaguely support, but/therefore I’d like to see attempts to falsify it.
How secure are the connections between these Hittite names and their putative Greek counterparts (Ahhiyawa = Achaea, Tawagalawa = Etewoklewes, Millawanda = Miletos, etc.)? My impression from skimming a few articles is that the Hittite sources don’t clearly refer to specific cultures or geographic regions unless you pipe in material from Greek tradition, but of course that doesn’t mean it’s right to do so. The sound correspondences seem a little contrived to me, and apparently to some scholars (but then again if there are no more certain examples of Greek loanwords in Hittite you get to make up whatever correspondence rules you like), and given the scope of Greek mythology/history it seems inevitable that there will be some soundalikes. Some phonetic shakiness is of course OK if the myths match up in unexpected detail with the historical sources, so are there any surprising parallels?
I also want to second Michael Handy — these posts have been the highlight of the last few OTs.
I imagine they’re roughly as secure as Champollion’s deciphering of Egyptian hieroglyphics. He had some very lucky help, to be fair; I don’t know what helped in the Hittite case. But he then made some very educated guesses equating glyphs with phonetics that turned out to be predictive.
(I listened to a very well-made explanation of his process… in a walking tour expansion that came with Assassin’s Creed: Origins, of all things. One of the major reasons I play that franchise.)
I’m glad you asked! Just because something is the consensus of PhDs in a subject doesn’t mean it has anything like the epistemic status of, say, the consensus of biologists. A minority of scholars dispute nearly all the correspondences, even Hisarlik = Ilium, and that’s OK.
The one really secure Greek name we have in Hittite cuneiform is Alaksandu(s) of Wilusa, who signed a treaty with Muwatalli(s) II, eldest son/immediate successor of the Mursilis II I’ve mentioned. This is too close to Alaks-andros (“protector of male humans”) to be coincidence, when we have it attested in its native language, feminine form (Alexandra) at Pylos.
This, incidentally, helps us place Wilusa as a coastal city, because the Anatolian coast is where we find potsherds suggesting trade with the people who wrote in Greek. Once I’ve covered collapse-era Crete, I should jump over to the Hittites and talk about how only a couple of place-names are really secure.
If the differences between the Hittite names and the purported Greek counterparts follow a consistent pattern, that goes a long way towards making them less contrived. For example, substituting phonemes that don’t have a 1:1 correspondence in the two languages (e.g. 北京 -> Beijing or Peking, depending on who you ask), or munging word-endings to something more “natural” in the destination language (e.g. Londinium -> London or Moscva -> Moscow).
I don’t know if this sort of study has been done for Hittite-to-Mycenaean, or even if there are enough data points to do a meaningful study.
A point that 1177 BC made in favor of purported correspondences between place names in a non-Greek source (in this case, Egyptian) and Greek places was the “Aegean List” found on one of Amenhotep III’s monuments: if the vaguely Greek-sounding names are matched to Greek city-states of the era by a similar method to the Hittite correspondences, then the order of place names in the list makes a fairly sensible itinerary around the Aegean coast. And this is corroborated by Egyptian-style dedication tablets in Greek temples constructed around the time of the monument, in some of the named cities, crediting Amenhotep for supporting the temples’ construction. (Epistemic status: this is from memory from something I read weeks ago, so I might be getting some of the details wrong).
Hopefully this doesn’t broach CW territory, but let me know if it does – I recall a post that surveyed whether someone would prefer a short “happy” life over a long “suffering” life. So curious on thoughts on the following hypothetical:
You’ve been taken aside by government officials after chancing upon secret research into the elixir of life. As such, you cannot return to your normal life. You are given two choices:
A) Live the rest of your life in a rural community of about 1,700 people with roughly 1910s conditions – you have electric lighting, but no AC, obviously no internet, and you have to hand wash/dry your clothing. This means no Amazon two day delivery, social media, or blog reading! In addition, these people are not that familiar with your modern ways and cannot commiserate with you on your loss. You will be given a manual labor job you are not immediately used to that makes you sore daily, but you must work in order to secure your housing and availability of varied food. If you do not work, you will be put in 2.5m by 0.5m by 0.5m pod and allowed to exit once a day for bland porridge, water, and a set of vitamins (your pod has a refuse tube). The summers here are hot and humid and you have but fans to rely on and mosquito nets to keep the buggers out. The winters cause your diet to be far more bland and preserved foods come into play.
Your personal safety is assured, however, and you are given a supplement that ensures you will live to at least 90.
B) You are given unlimited personal license. The US government will tacitly just print whatever money you decide to spend, and so long as you are not physically harming anyone else, will look the other way for any crimes you may commit (e.g., steal candy from babies, drive a tractor through the Hamptons, pee off the Empire State Building, make your own TV show/movie where you belittle or judge other people). You can fully indulge yourself in modern pleasures, travel anywhere you’d like in the world, and sample all modern luxuries you once thought out of your reach.
But you also agree that at the end of 18 months, you will be knocked out never to wake again, as your body will be given over to the elixir of life research. It is implicit that during the whole period you are being observed, as divulging your discovery to anyone would result in the immediate expediting of the knock out process.
Which do you choose?
I would definitely choose A. Option A is better than the lives almost all humans have ever lived, and omits just the last century of technological progress. But it also omits a lot of the horrors and tragedies of the last century. Grueling work is in itself satisfying (“the sleep of a laborer is sweet”), and while initially it would take adjusting to the townsfolk, by the end of 18 months, I think I’d be part of a community rather than isolated and alone. I’m much more in the joy-in-suffering camp than the pleasure-life-is-the-best-life camp.
Please don’t take this personally, but I feel compelled to ask. Exactly how much grueling work have you done in your life?
Haha, that’s probably a bad way to put it. You’re right to call me out.
I was thinking of growing up shoveling a three-car driveway when neighbors had snowblowers in Michigan, or mowing the lawn, or studying when everyone else I know is out playing, but I suppose you can hardly call that “grueling”.
Or being made to practice two hours of piano when friends are asking me to play street hockey – just the general notion of having to do something you don’t like at the time but much later on finding it satisfying to have experienced it.
It doesn’t really become grueling until you feel compelled to do it beyond the point of real discomfort. Shoveling snow hard for 2-3 hours can clearly be tough, but it is nothing like shoveling snow hard for 50 hours a week, 50 weeks a year.
Right – so personally I am speculating but in the hypothetical the gruel stands – either you do grueling work or you eat gruel.
Yeah, this is sort of what I was getting at.
I mean I agree that there are times when you work hard and then you feel good and satisfied for having done so. However, I suspect that people who work 50 hours a week toiling in physical labor in the hot sun or brutal cold do not get that same feeling.
I think that “feeling of satisfaction” is mainly based on the fact that physical exertion is so generally unrequired in modern life that there’s something of a cute novelty to it. But if it wasn’t novel… Trying to avoid CW here, but “the sound sleep of laborers” sounds like some sort of Marxist thing, widely theorized by authors who have never had to actually do much work themselves…
I have a small (not large by any means, but nonzero) amount of cred here, having worked manual labor jobs and also being a homeowner who doesn’t hire out much of any kind of work…
I don’t think the “sound sleep of laborers” is unheard of. Maybe 90% of the time, as an actual gruel-laborer you go to bed and grit your teeth and wish you had a nice cushy job in an office where apparently people make money just sitting around, but 10% of the time it’s nice to look down at your muscular body and dirty fingernails and feel like you know what an honest day’s work feels like.
Probably depends on how much you like other aspects of the job, too. And I’m sure it’s a bit different if you’re working (e.g.) someone else’s land vs. your own.
Well, I think the “sound sleep of laborers” is literally true, but not really in the way it’s being portrayed here.
Laborers sleep soundly because they are fucking tired, not because they are morally content and at peace with their harmonious existence or whatever.
My dad would have loved an extra hour a night to read or listen to music or do something otherwise fun and pleasureable. He went to bed at 10 because he was too tired from doing physical labor all day, and knew he had to start again at 6 AM the next morning.
He occasionally took me to work with him – not to teach me the value of “hard work” but rather to teach me the value of a good education (which is primarily that it enables you to avoid doing hard work)
I cut trails for a couple of seasons and have worked a few modestly physically demanding jobs (compared to, say, coal mining) like hand-kneading bread and washing dishes at high volume restaurants. Observations:
The first days of cutting trails are the worst days of any job I have ever had (ignoring things like getting poison ivy or breaking a bone in my foot at work). Whole body aches, sleeping in tents, blisters on hands and feet, and getting up to do it again the next day. Day 3 tends to be pretty awful: you are just building up points of discomfort, covered in mosquito bites and scratches, sweating and working in the heat (long, thick pants and boots) or soaking from the rain. It gets better pretty quickly though, and can be thoroughly enjoyable (for 18-22 year old me, not present me, I bet) as you toughen up, get in shape, and get used to sleeping on the ground.
Kneading bread eventually just got worse. Partly because the better you are at it the more loaves you do, and partly because the discomfort becomes very specific. Feet and forearms are basically perpetually sore, or you get twinges in your wrist and elbow (and by twinges I mean stabs of pain for 3-4 hours a day, 5 days a week) without any real way to alleviate them or rest for long enough to make them go away. Summer is less work but far hotter work, showing up at midnight or 3 am and it’s 90+ degrees before you crank on the ovens.
I would take whole body work over the very specific work in general, but it is much harder up front to get used to.
He occasionally took me to work with him – not to teach me the value of “hard work” but rather to teach me the value of a good education (which is primarily that it enables you to avoid doing hard work)
That’s what Caplan with his “college is signalling” and Graeber with “bullshit work” are missing or don’t get; working class parents who don’t want their kid digging ditches or working as a waitress like they did. They don’t care if you’re chirping away about “but college is simply an expensive way to show you’re diligent” or “90% of office jobs are bullshit make-work”, they want their kids to have (in the words from a Victoria Wood sketch) “clean indoor work with no heavy lifting” and a college degree still gets you that.
@Deiseach: Too bad there’s no way to distribute economically-useful heavy lifting so white collar workers each get at least 3 hours a week instead of fighting to get us to use gyms so we don’t die of cardiovascular disease.
That’s what Caplan with his “college is signalling” and Graeber with “bullshit work” are missing or don’t get; working class parents who don’t want their kid digging ditches or working as a waitress like they did. They don’t care if you’re chirping away about “but college is simply an expensive way to show you’re diligent” or “90% of office jobs are bullshit make-work”, they want their kids to have (in the words from a Victoria Wood sketch) “clean indoor work with no heavy lifting” and a college degree still gets you that.
I haven’t read Caplan and Graeber’s essays on this, so I don’t know if they address it, but my counter to this nowadays is that a college degree doesn’t always get you that. Even when I was earning a degree in the 1990s, it mattered what you majored in. It’s not enough for your blue collar parents to urge you to get any degree at all. It has to be a major that will have high demand by the time you get the degree. The only ones I can think of that are fairly safe bets in the US are law, business, health care, and software development, and those all still require a great deal of work, even after you get the degree.
You might get paid a lot more for roughly the same hours, so that much is still probably a good idea, but this is again assuming you were selective.
At the other end of that spectrum is where you got any degree you could, and now you’re doing somewhat less strenuous work than your parents, for somewhat less pay, noticeably less job security, and ruinous debt on top of that.
Heh. My dad once absolutely lost his shit when my mom paid for a gym membership.
“You want more exercise? Do me a favor and mow the lawn for me next weekend! There are plenty of opportunities to burn calories right here!”
How do you get one?
@Le Maistre Chat,
In my experience “economically-useful heavy lifting” is just as likely to make you weaker (from your injuries) as stronger.
But some help would be appreciated!
Provided you make it to the end, of course.
Mowing the lawn is genuinely relaxing, but that's because it is mindless and relatively easy. I spent a lot of time this summer digging out, resetting, and re-covering a drainage pipe with an incorrect pitch, on 90+ degree, 90+ humidity days. And, man, it sucked. That's probably one I should've outsourced given the cost: a coworker told me she paid only $250 for similar work. $250! I spent days digging, setting, and resetting that crap, not to mention the material cost.
The post you want is here.
The real lesson of “Option A” is that I enjoy luxuries, but draw long-term satisfaction from relationships and meaningful progress.
You could get a similar result from something like, “Would you take a job as Park Ranger, manning an off-the-grid fire watch station, for a happy family and True Love?” It would be sad that my True Love couldn’t find their happiness somewhere with wifi and hot showers. But whatever.
If you change the situation so that the suffering in A interfered with satisfying long-term relationships (eg. the community is embittered by the work) then my decision would change radically.
So part of it is that you do not know at the time of your decision whether you’d find true love, happy family, etc. All you know is the description above, your impression is the community is rather used to the hard work and certainly not embittered by it, but at the same time you have no clue how well you’d fit in with the people.
The “suffering” comes in the form of having things you’ve taken for granted so far taken away, and being made to do a job you’re not used to that makes your body physically sore daily, but by no means does this mean the whole community is reduced to slave-like conditions. You’re simply not used to it from the outset.
C) Fight the POWER! Plan a method to expose the research and conditions there. If it somehow involves toppling the corrupt regime, all the better.
What? Do you feel that the conditions are ethical or that the nature of the research would at all excuse it? Or that the results would be used in a manner you’ll be comfortable with?
While I applaud your choice, I don’t necessarily endorse your reasons or aims. 😉
Can I defer my choice until I’m 70-80 and then pick option B?
Sure, you’ll just have to wait in the gruel pod in the meantime!
Sounds pretty clear for A), as long as “makes you sore daily” isn’t an understatement of the hardship.
Well, I spend almost the entirety of my waking hours in front of a computer, whether for work or leisure. So transitioning to physical work and a pre-info-era life would be quite an adjustment, for me at least.
I assume the community in A) is actively luddite to some extent. Because otherwise I’d be playing Pac Man and working on refrigeration by year’s end.
You had me with A up until the part about bland food during the winter. A’s also a gamble because if I’m fairly healthy in my old age and then just die suddenly that’s fine, but if I’m sickly I want that state-of-the-art healthcare, and not having it is going to SUUUUUCK.
If I wasn’t basically killed after 18 months in B, I’d do that and just keep living my normal life, maybe using the free money to see family more often or helping out people I’m close to if they’re struggling. But essentially having 18 months to live is a nonstarter.
“sleep of a laborer is sweet” sounds so much nicer than “dead tired and passed out exhausted”.
Since my wife and I lived for a while out of an RV (though I kept paying rent to keep a rent-controlled apartment in Oakland) while I worked as an apprentice plumber mostly doing manual labor in “Silicon Valley”, with no AC, etc., I’m pretty sure I’d choose option “A”.
One thing to keep in mind though is that sometimes sleep doesn’t come easy, even when you’re exhausted, because you’re in too much pain.
“sleep of a laborer is sweet”
I worked construction from age 15-22. Summers were hot enough that I lost about 10 lbs of water weight every day (replenished in the evenings). By replenished, I mean, I would wake up around 2am every night due to thirst, walk to the sink and drink about 8 glasses of water, then go back to bed.
During the week in the heat of the summer I basically did not urinate – every once in a while a little in the morning. I could have used a nutrition coach to tell me to drink water to the point of discomfort in my waking hours and I would not have had that problem. I think the only guys there who urinated on the job were the alcoholics, who drank to discomfort without any advice from a nutritionist. When the temperatures were below the 90s, the problem mostly went away entirely.
Anyway, most sleep was deep, if not good. I had bad dreams about being thirsty sometimes. I wasn’t really in much pain except when I was injured, which wasn’t too often. Glad I was able to stop before my body aged.
For the option you give to live to a ripe old age, it would matter to me whether I could stop working. Either from retirement or suicide, I would want to quit some time between age 60 and 70. I definitely don’t want to be 85 trying to work a manual labor job that causes daily muscle soreness.
Could anyone recommend a good privacy-focused VPN? I suspect my ISP is throttling certain kinds of streaming traffic and VPNs seem to be one way of getting around this. Ideally, the VPN would be compatible with both Debian Linux and Windows, but Debian is the more important of the two.
I currently use Private Internet Access, and it seems to be pretty decent… but some sites have anti-VPN setups to block users of VPNs. Notably, Netflix does. Though it seems they aren’t keeping their block list up to date anymore.
I am using PureVPN and it works great, and it's quite secure. It also works well with both Linux and Windows. It's easy to set up on Linux; I had it running in a few minutes, and hopefully it will work the same on other Linux distributions. The plus point of this VPN for me is that it has a great range of VPN servers, which makes it easier to access things around the globe.
Genus Rattus did not exist in the Mediterranean until after Alexander the Great’s death, Harvard journal claims.
Apparently rats first appeared in Egypt under the Ptolemies and began to replace native rodents in the diet of Sardinian owls by the second century BC. European Rattus rattus has 38 chromosomes and speciated from an ancestor with 42 in southwest India, where the Ptolemaic and Roman Egyptians bought most of their pepper.
This raises the question of what earlier disease-vector rodents like the ones who served Apollo in the Iliad were (Greek mys means any ground rodent, and Renaissance Classicists treated the word “rat” as barbaric).
Could be just about anything. I remember news in the late 90s or early 2000s about chipmunk-borne plague in one of the national parks (which really deserves to be made into a low-budget animated film and traumatize a generation of toddlers).
The linked paper just mentions “native rodents”, unfortunately.
“Remember little ones, all those small woodland creatures are actually plague vectors!”
“sobs Even bunnies?”
“We’ll look into that.”
To follow up on dndnrsn (pronounced “din-dinner-son”)’s effortpost on Isaiah:
If you read through Pascal’s Pensees, in a few places he mentions off-hand the numerous fulfilled prophecies as a demonstration of Scripture’s divine inspiration. But he never actually gets into specifics. I have vague memories of other works in the time period doing some similar, implying to me that there was a fairly standard and well-known list floating around. Does anybody know what that list might have been?
To all those people who recommended people play King of Dragon Pass last thread:
Thank you, but also screw you for hemorrhaging my free time so much. A game this immersive centering on a theme I enjoy this much is an unhealthily strong bit of catnip to me, and I’ve been having some very, very real trouble putting it down.
So, it’d be a bad time to point out that there’s a sequel (prequel technically) then?
I don’t have access to iOS, so I’ll live for now.
I missed last thread, but King of Dragon Pass is awesome. My favorite thing about it is that to do well, you have to think like a Bronze Age pastoral warlord; taking what would be the winning approach in most games and choosing all the options that sound the nicest from on top of your 21st-century secular culture with running water and refrigerators and relatively little pervasive violence will get your ancestors pissed off and your cattle stolen in short order.
Also, it’s the only non-dinosaur-themed game I can think of where you can raise triceratops.
It can’t be overstated what an opportunity it is to see Bronze Age morality portrayed sympathetically in interactive media.
Plus yes, Triceratops!
Case in point: I remember at one point an event where some wolves were hunted down and killed after they killed a farmer, and at the wolves’ den two wild children are discovered. There is an option to adopt them and an option to kill them. The modern Christianized Westerner answer is “adopt them, everyone deserves a chance.” The Bronze Age chieftain answer is “Them kids look like trouble, and they ain’t our blood.”
Fher rabhtu, gubfr xvqf or jrerjbyirf, lb.
I’m not sure I buy the whole “bronze age morality” thing. When I played KODP I just did whatever I would decide in that situation and things always seemed to turn out okay. Maybe that says more about me than the game, but “slavery bad, stealing only okay from people who attacked first, defend innocent unless we can’t afford to, other than that don’t fuck with anyone” seemed to work every time.
We have the word “gentleman” for a man of a certain refinement. And we have “lady” for the female counterpart. Do these words have antonyms, for men and women who are distinctly not gentlemen or ladies, without suggesting anything more than that? The words I come up with are all rather more specific: scoundrel, ruffian, thug, hick, bitch, ho, slut.
For a man, churl, from the same Germanic root as “Carl”.
Boor would be my best guess. It comes from the same root word as the German word for farmer, Bauer, and refers to a general classlessness. Usually applied to men IME but a woman can still be boorish.
‘Villain’ has a similar backstory, though it’s since shifted.
Goodman and goodwife are archaic polite terms of address for the master and mistress of a commoner household. Though they could also be used the same way we use mister and missus today.
And the leader of the Jacquerie peasant rebellion was called John Goodman.
IIRC even among commoners (in the usual sense not the strict British legal sense which includes landed gentry and knights) there were gradations. John Shakespeare, William’s father, became entitled to be known as Master Shakespeare rather than Goodman Shakespeare when he was elected to a municipal office in Stratford.
Maybe “lout” or “boor” for “gentleman”? I also considered “knave” and “rake”, but they’re a bit on the specific side.
The feminine of “rake” is “ho”, which is just too perfect.
I’m still waiting for a music clip featuring a rapping Santa Claus with his 3 ho’s.
So “garden implement” as a generic euphemism for low class people of any gender?
“Garden implement” would cover promiscuous low class people regardless of gender, yes.
“Slut” originally meant a low-status female laborer, kinda like “churl” and “boor” above. The sexual connotations only later grew into the word’s primary meaning. Similarly, “wench”.
Words for low-status men seem to pick up connotations of laziness and violence (a “villain” was once a guy that lived in the village, as opposed to the lord’s residence). Words for low-status women seem to pick up connotations of loose sexuality. Both seem to pick up connotations of untrustworthiness.
So Eye of Argon‘s usage wasn’t as far off as is commonly believed?
Never read Eye of Argon, but it’s pretty common for bad writers to use words like that in ways that’d be correct (or, at least, incorrect in subtle ways) in period English but come off as jarring because the rest of the language they’re using sounds too modern.
Even good writers, like George R. R. Martin, fall into the issue sometimes. It’s really hard to nail down a tone that’s old-timey but still flows well to a modern eye.
Eye of Argon is a novella written in 1970 and self-published in a fan journal. It’s notorious for being hilariously poorly written.
The actual underlying story isn’t all that bad, especially considering that it’s a 16-year-old’s first serious attempt at fiction writing. The badness stems mostly from its having been published as a very sloppy first draft full of typos, misspellings, malapropisms, and grammatical errors.
“Slut” is used several times in the story as a generic insult, mostly directed at the Conan-inspired barbarian warrior protagonist. And just about everywhere it’s used, “churl” would have been a pretty good fit for what from context appears to actually have been meant. It’s possible that the author meant it in its historical meaning, but given the malapropisms and other usage errors in the rest of the story, I think it’s more likely that the author had heard the word and knew it was an insult, but didn’t really know what it meant.
I thought ‘slut’ originally meant ‘lazy woman’; specifically lazy housekeeper; this from old Michael Gilbert mystery stories.
Wiktionary thinks its original meaning was “dirty or untidy woman”, so maybe I didn’t have the whole story. It does have the sense I was thinking of, though, attested from 1664:
(…are we still doing “phrasing”?)
Maybe context will help.
What would a bartender in a low-end bar call a male customer he doesn’t much like?
What would a construction supervisor call kinda dodgy unskilled workers?
Something along the lines of “asshole”. Though likely not to his face unless he’s throwing him out.
Probably an ethnic slur.
“Asshole” for guys sounds about right.
For women it begins with a C and rhymes with what you call an undersized piglet.
Interesting fact about that C word: in Scotland, it’s used colloquially to mean “person”.
“Does any c*nt have a phone charger I could borrow?”
“Oh him? He’s a daft c*nt, right enough.” (This is not an insult.)
“Last night I heard some c*nt on the radio talking about the evolution of slang in middle English. It was interesting.”
I’m vaguely aware that in the USA (and to a lesser extent the rest of the Anglosphere) it’s a Very Rude Word, but in Scotland most folk won’t bat an eyelid.
Yeah, I think c*nt is probably the worst word you can use in the US that isn’t an ethnic slur. Out of curiosity, what’s the Scottish equivalent? As in, the most offensive thing you could call someone who bumped you on the sidewalk?
That’s an interesting question. I actually think we don’t have an equivalent (unless you count Gobbobobble’s suggestion…).
Ethnic slurs are the worst, probably followed by the homophobic f-word. If you’re a catholic or a protestant then there are some pretty nasty names you can call each other, but I think (like the ethnic/homophobic insults) these are mostly upsetting because they crystallise an upsetting conflict rather than the words themselves. Certainly I wouldn’t be bothered if somebody called me a fenian.
But to me, the idea of having a context-free super-insult is very strange. Why should a word inspire great feelings of animosity without referring to existing conflict?
(Also, to avoid misleading you, it is still rude if you say it with rude intent. Like, if somebody bumped into me on the pavement and I called them a c*nt, they wouldn’t think I was calling them a “person” in the sense of my examples in my previous comment; I would definitely be insulting them. And you’d never say it in front of your granny. But yeah, it definitely doesn’t have the oomph that it does in America.)
Lots of responses already, but let’s not forget ‘pleb’!
And while I’m at it: I’m also looking for data on iodine, aah, prevalence in different European countries. So basically to which degree iodine is/was naturally part of the nutrition in pre-iodisation times in different European countries. Here I’m more handicapped by my inability to express this correctly in English. “Iodine deficiency” only gets me recent medical data, usually assessments of iodisation programs. Something like number of goiters in 1900 would probably do. I googled for “iodine soil levels” and “iodine groundwater levels” but to no avail.
Just as an anecdote: the Spanish have a derogatory word for French people, “gabacho”. There are different theories about the etymology of the word (speaks badly, mountain region), but one of them is that it comes from the word “gava”, which meant goiter.
This may point you at some useful resources.
I found it by searching on [iodine deficiency history Europe].
I came across that as well. But now that you call my attention to it, I realise that the references are probably a good starting point. So thank you.
I was looking for standardised test results for kids in the different French regions. Something like PISA, where these regional results exist for some countries (Germany, Spain, Saudi Arabia…) but don’t seem to exist for France, or something like PIRLS. Or something unique to France (Germany has IQB with very detailed data). Bac percentages or marks might also do, but given that the scoring isn’t standardised, that’s probably bad data.
I suspect that this data doesn’t exist for France, but I’m handicapped in my ability to google for French sources and I was hoping somebody with more knowledge of the French education system and of the French language could help me out.
Says that fracking isn’t profitable, but low interest rates enable? encourage? people to do it out of optimism.
Something being unprofitable is either the result of low revenues, or high costs. I’ll advance a potential theory for both.
1. Revenues – I’m not an energy expert, but I have worked closely alongside several. As far as I can tell, they’re always incredibly bullish about oil prices. Their long term projections involve oil prices going higher and higher and higher. So when a fracking project starts, they aren’t thinking they’re going to be selling it at the current price, but at a (much higher) future price. If the higher prices don’t materialize, their project will be unprofitable, but in a way that has nothing to do with fracking specifically. The major error was their inability to forecast prices.
2. Costs – This is still a pretty new technology. I think a lot of companies are betting hard on what’s known in the consulting world as the experience curve. That is to say, the more often and the longer you do something, the better and more efficient you get at it. Basically, you don’t get to have cheap, efficient fracking without investing a lot of time and money in expensive, inefficient fracking beforehand. The somewhat unprofitable projects launched today are, in a sense, an “investment” in learning more about fracking, and how to do it better, for the purposes of driving down costs in the future.
ETA: These can also build on each other. The more people who rush into unprofitable fracking projects hoping to get going down the e-curve, the more additional production occurs, increasing supply, thereby lowering prices, making everyone’s projects even more unprofitable. And as the article mentions, the Saudis attempted to fight back by increasing their own production to lower prices even further still.
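The experience-curve bet in (2) is often modeled with Wright's law: unit cost falls by a fixed fraction each time cumulative output doubles. A minimal sketch, with an invented progress ratio and invented well costs (not real drilling data):

```python
import math

def wright_cost(first_unit_cost, cumulative_units, progress_ratio=0.8):
    """Wright's-law experience curve: unit cost falls to
    `progress_ratio` of its previous value every time cumulative
    production doubles (0.8 means a 20% saving per doubling)."""
    exponent = math.log2(progress_ratio)  # negative for ratios < 1
    return first_unit_cost * cumulative_units ** exponent

# A hypothetical $10M first well under a 0.8 progress ratio:
# the 2nd, 4th, and 8th wells cost $8M, $6.4M, and $5.12M.
costs = [wright_cost(10_000_000, n) for n in (1, 2, 4, 8)]
```

On this view the early unprofitable wells are the tuition for moving down the curve; whether real fracking costs follow anything like a 0.8 ratio is exactly what the bet is about.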
This is a mistake that many people make when analyzing emerging markets* and acting as if they are fully developed markets. Markets rely on price discovery, you have an idea, implement it and see how it works out. Eventually you find out if your sales are enough to overcome costs. When you have a new industry (or a new technology in an old industry such as fracking or say electric cars) there are going to be lots of entrants looking to dominate and many of them will fail, you might even suppose that in a free market most will fail. There is a large gulf between individual companies in a market failing and the market itself failing, and that is the distinction the author misses**. Take this quote here
Besides what I believe is a misleading wording choice (the link actually states that out of the top 20 companies that mostly rely on fracking), apply this to electric cars. How many car makers are claiming positive cash flow for their electric car division? Is it roughly zero out of the top 20***? How many need long-term positive cash flow for the industry to be viable? I would say roughly 2 are needed in the long run. To use her tech-bubble analogy: yes, the crashing of the bubble reshaped the industry, but it didn’t kill it, and we ended up with the FAANGs of the market dominating.
As a secondary point every fracking site that shuts down for lack of profit should push oil prices a little bit higher which will help make the remaining ones profitable, there should be an equilibrium in there somewhere (though it might not be reachable).
* I am sympathetic to the argument that artificially low interest rates will cause unsustainable booms in some industries, but this article seems a little pointed towards “fracking is going to fail” given the facts that it lays out.
** The author does deserve some credit for presenting contrary evidence at the least.
*** There are some inherent differences here because the fracking industry is more likely to be dependent on local conditions, so you could have a few areas where production is profitable but without high enough volume to impact the market significantly. If this did happen, though, they would likely end up as a corner of the market, keeping R&D alive in the industry and steadily bringing down costs for other areas.
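The shut-in equilibrium from the secondary point (each well that closes removes supply and nudges prices up until the marginal surviving well breaks even) can be sketched with a toy supply curve; every break-even figure here is invented:

```python
def clearing_price(breakevens, demand):
    """Toy price discovery: each producer supplies one unit iff the
    price covers its break-even cost, so the market clears at the
    break-even of the marginal producer needed to meet demand."""
    supply_curve = sorted(breakevens)
    if demand > len(supply_curve):
        raise ValueError("demand exceeds total capacity")
    return supply_curve[demand - 1]

# Five wells with $/bbl break-evens; at a demand of 3 units the two
# highest-cost wells shut in and the price settles at the marginal
# well's $55 break-even.
price = clearing_price([40, 70, 55, 45, 90], demand=3)
```

As noted above, the equilibrium may not be reachable in practice, e.g. if the surviving producers lack the volume to move the market.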
For the record, I am very sympathetic to this argument, applied in general. It’s basically the underlying core premise of Austrian Business Cycle Theory.
I am as well, and am as close to an Austrian in thinking as you can probably be on this count, though I prefer not to self-affiliate with any school of thought, and am finalizing my short-term investing thesis along these lines.
This might veer into culture-war, but what are your opinions of the standard counter-cyclical fiscal and monetary policies?
EDIT: I ask because “misallocation of (some) capital as a result of low interest rates” seems reasonable enough, but it’s a big leap from there to the anti-demand management stance of the Austrians you would find in an internet comment section.
Short, incomplete, maybe even technically inaccurate answer for the sake of brevity.
Demand is a function of production, so supporting demand means supporting the (presumed) misallocation of capital and preventing its reordering to better uses. In general I think the internet Austrians get this point right. They are wrong in two other major complaints.
1. There is nothing inherently wrong about counter-cyclical policies, the caveat being that they have to be set up (and funded) ahead of the crash. UE insurance is fine in theory; increasing benefits post hoc (and not actually investing the receipts) in the name of stimulus is not.
2. The fact that interest rates were “artificially low” in the past does not imply that the proper interest rate now is “high” or “higher”, this is a very common mistake.
In regard to counter-cyclical policy in general.
1. The Keynesians are wrong on all counts, imo. Even if you grant that government spending could plausibly counter a shortfall in aggregate demand, the theory only credits positive effects for periods when the economy is suffering a shortfall, and any debt and leftover spending is a drag on the economy after that.
After the last 4 recessions in the US, publicly held Federal debt as a % of GDP has risen for several years after the official end of the recession, and after the most recent one it is still rising 8-9 years later despite all kinds of favorable conditions. This is like a car company bragging about having the best acceleration and then never proving that their brakes are up to the task. Repayment of the borrowed funds (or at least a return to prior debt-to-GDP levels through growth) is a crucial part of the Keynesian prescription, so I would not support their methodologies, even if they appeared correct, until they demonstrated that they could execute the whole-recession plan and not just the expansionary part of it.
2. The monetarists have a similar, but less explicit, issue. They expect rates to return to appropriate levels when growth returns, but (as far as the Fed is concerned) that does not appear to have happened. If you take the Federal Funds rate and graph it, you get a wave of increasing amplitude from the 50s through the early 80s and then one of decreasing amplitude from then until now, hitting zero. It would be a rather large coincidence if that is what market rates were doing on their own in that period and monetary policy was just along for the ride.
My interpretation of such a graph (and some other evidence) is generally that once the Fed has made a mistake in setting interest rates they will have to either eat that mistake in the form of a recessionary period (not necessarily a technical recession) without attempts to alleviate it monetarily or face decreasing effectiveness of its policies to combat either inflation or UE (or both).
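The repayment point in (1) can be made concrete with the standard debt-dynamics identity d_{t+1} = d_t(1+r)/(1+g) + pd, where d is debt/GDP, r the interest rate, g nominal growth, and pd the primary deficit as a share of GDP. The figures below are illustrative, not actual US data:

```python
def debt_ratio_path(d0, r, g, primary_deficit, years):
    """Iterate the textbook identity d_{t+1} = d_t*(1+r)/(1+g) + pd.
    With growth above the interest rate and a balanced primary
    budget, the debt/GDP ratio drifts down on its own."""
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) + primary_deficit)
    return path

# Illustrative only: 75% debt/GDP, 2% rates, 4% growth, no primary
# deficit -- the ratio falls, i.e. the "brakes" phase of the
# prescription. A persistent primary deficit can swamp that drift.
falling = debt_ratio_path(d0=0.75, r=0.02, g=0.04, primary_deficit=0.0, years=10)
```

The complaint above amounts to saying the pd term has been run too high, for too long, for the ratio to come back down whatever r and g have done.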
What is the mistaken conclusion? You don’t say. The article doesn’t really reach a conclusion, and maybe it should be condemned for vagueness, but not for false conclusions. As best I can see you interpret the article as saying that the economically rational level of fracking is zero. But it just doesn’t say that. It says that the boom is a bubble which will burst and “energy independence” and its “muscular” consequences will end.
People give fracking credit for lowering the price of oil. If this is only partly true, and part of the low cost is due to unsustainable investment in fracking, that is good to know. Certainly, it was wrong to credit it with lowering the price to $50, since it’s now back up to $70.
Or is the mistake to look at profitability too soon?
That seems odd to me. Do fracking companies sell themselves as technology development companies? Then why are they funded by debt rather than equity? Are they raising money for practice wells? If so, that’s news to me. The cited Economist article says that they only forecast modest improvements. Also, it says that fracking companies are priced based on acreage, which sounds appropriate for short-term investments in a mature industry. If what matters is long-term development of technology, then acreage shouldn’t be so important, even if it’s hard to predict who will be FAANG. (In the tech bubble people counted eyeballs based on theory of first-mover advantage. It was a wrong theory, but at least it was a long-term theory that justified the lack of profits.)
There’s a difference between the median investment making money and the mean investment making money. The Economist article seems to make a good argument against the latter, except that the price of oil is up a lot since then. The citation to WSJ that they still aren’t profitable is misleading, because the article explains that much of it is driven by hedging. More of them are profitable at today’s $70, but it’s not clear whether that’s the long-term price, so long-term profitability. We could interpret the investments in fracking as a bet on future oil price. But why don’t people directly bet on oil futures? If A bets on oil prices going up and B is the market maker, it makes sense for B to invest in fracking as a hedge. Why didn’t fracking companies fund themselves with 10 year futures back in 2010?
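On the futures question: a producer who sells (shorts) futures at price F receives F per barrel no matter where spot ends up, because the futures P&L offsets the physical sale. A toy illustration with made-up prices:

```python
def hedged_revenue(spot_at_delivery, futures_price, barrels):
    """Short-hedge payoff: sell the oil at spot, settle the short
    futures at (futures_price - spot); the spot terms cancel and
    revenue per barrel is locked at the futures price."""
    physical_sale = spot_at_delivery * barrels
    futures_pnl = (futures_price - spot_at_delivery) * barrels
    return physical_sale + futures_pnl

# 1,000 bbl hedged at $70 yields $70,000 whether spot ends at $50
# or $90 -- which is also why hedges struck in earlier years muddy
# the reported profitability numbers mentioned above.
low = hedged_revenue(50, 70, 1000)
high = hedged_revenue(90, 70, 1000)
```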
I believe there are at least two mistaken conclusions.
1. That industries highly dependent upon innovative and emergent technologies should be judged by the same profitability standards as established and mature industries
2. That artificially low interest rates leading to malinvestments (a bubble) and corrections (a bust) will uniquely affect fracking – while not similarly affecting other emergent/low profitability industries
I didn’t state that she had a mistaken conclusion, I stated that using an analysis appropriate for developed markets and applying it to an emerging market is a mistake. A quote of hers I used
A quote I did not use, but could have.
The point is not that the fracking boom is fine, or in deep trouble or anything else, it is that the presentation of certain facts or claims are done so in a way that doesn’t allow for a meaningful conclusion.
Emerging markets are not emerging markets because they are technology companies, they are emerging markets because they do not have mature price points. Those prices have to undergo discovery, the process of bringing goods to market and selling them on the market. Take a hypothetical question, what is the demand for a $35,000 Tesla? The answer depends on what Tesla has to cut from its higher models to make a profit on a $35,000 car. The answer for fracking is probably going to be along the lines of it depends on local geology, interest rates, equipment costs etc.
For something like fracking you don’t know what your profit point is until after you start pumping, and honestly you probably don’t know it until several years of pumping because production declines are likely to be variable.
How do you know that this market isn’t mature? Why is the next well different from the last well? Sure, every well is a lottery, but a hundred wells are predictable. If interest rates matter, then no market is mature. The next wells should be better because geology and fracking have advanced, but the fracking companies don’t claim that they are rapidly advancing.
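The intuition that a hundred wells are predictable even when any single well is a lottery is just the law of large numbers. A quick simulation sketch makes the point; the per-well profit model here is entirely made up for illustration, not real fracking economics:

```python
import random
import statistics

random.seed(0)

def well_profit():
    # Hypothetical per-well economics: lognormal revenue minus a fixed cost.
    # These numbers are illustrative assumptions, not industry data.
    return random.lognormvariate(0, 1.0) - 1.5

def portfolio_mean(n_wells):
    # Average profit per well across a portfolio of n_wells.
    return sum(well_profit() for _ in range(n_wells)) / n_wells

singles = [portfolio_mean(1) for _ in range(2000)]
portfolios = [portfolio_mean(100) for _ in range(2000)]

print(round(statistics.stdev(singles), 3))     # wide spread: one well is a lottery
print(round(statistics.stdev(portfolios), 3))  # roughly 10x narrower spread
```

The spread of the portfolio average shrinks like 1/sqrt(n), so a 100-well portfolio is about ten times more predictable than one well. Of course, this assumes wells are independent draws from a stable distribution, which is exactly what’s in dispute if geology, technology, and interest rates are still shifting.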
Yes, pricing a Tesla is not just technology, but also unknown demand. The demand for oil is volatile, the future unknown. Does that mean that no oilfield is mature? The existence of futures makes this market more mature than most.
I don’t, which is why my objection is to the phrasing of the arguments. Statements like
are bad in part because they are tacitly assuming that the markets are mature. When you are discussing a relatively recent industry (by the author’s description it began in earnest 10 years ago) you should not default to comparing it to pricing mechanisms for industries that have been around for much longer (or much shorter even).
When do the next 100 wells become predictable? Definitely not before the first 100 wells have been drilled, almost definitely not after only 100 wells have been drilled. You expect convergence over time of course, but the article indicates that there are still significant surprises, with paragraphs like
The market for carbon based fuels is very mature. Nor are these particular carbon based fuels novel in any way other than how they are extracted. So, I don’t think it makes sense to analyze this as an immature market. Only a somewhat less than fully mature extraction technology.
This isn’t electric cars.
That is literally half of the equation. If you are breaking it down like that there is how much it costs to extract them and the market price for them.
The market for cars is robust: we know people want cars, and we can extrapolate reasonably well from hybrid cars how much reducing gas consumption is worth to consumers. There are some details to work out (but there are still fracking details to work out, such as dealing with claims of more environmental damage), but there is a reasonable consensus on what price point Tesla needs to hit to be a major car seller. The “only” question is, can they hit that price point?
There is also the question of whether the overall electric car market (and the overall commercial infrastructure in which it operates) can meet the demands of the consumer.
Electric cars have advantages and disadvantages compared to carbon fueled cars. It actually still remains to be seen whether the disadvantages in range, refueling time and refueling availability can be sufficiently mitigated or addressed, or out-weighed in the mind of the consumer by the other advantages.
Whereas the only question for fracking is “can it be profitable” and “can it avoid being regulated out of (profitable) existence”. Once fracked NG or oil has made it to the end consumer (commercial or individual), it’s indistinguishable from other sources of the same product.
I’m with bacon on this one.
I mean you’re technically right that fracked oil is (as far as I know) completely indistinguishable from conventional oil (I suspect at a highly scientific level this isn’t actually true, but by the time the end product reaches a consumer I believe it is). Whereas, electric cars are slightly less interchangeable.
That said, the overall purpose of an electric car is the same as the overall purpose of a gasoline-engine powered car. There are slight differences in terms of the method and timing of fueling, but that’s all.
Not to mention that your baseline claim seems to be that because oil is a “mature” market that makes it somewhat easy to analyze. Except that fracking itself dramatically transformed the market by making accessible huge reserves of oil previously thought unrecoverable. Which had a dramatic impact on the overall supply, production, and price of oil. Which is exactly part of the problem frackers are facing.
To be pedantic, the question is can Tesla or another manufacturer make the car inexpensive enough to overcome these issues, or can they resolve these issues without adding too much expense. There is a whole lot bundled up in “can they get it to market at a decent price point”.
But the front end is the more complex end for fracking, with the equation for each individual well (or at the very least each field) being unique. This is where the price discovery lies. Tesla has a bit more overlap with questions on the demand end, as car markets are more consumer-preference driven, but the same general concept is there: can we do X at price Y, where X is well defined but the question is in hitting that price Y.
Also, the final paragraph
That isn’t technically a conclusion, but it rides pretty hard up against predicting a bust, letting the author take credit if one happens without actually saying it is going to bust.
Pretty much every American knows the same “birthday song” and the same “alphabet song”. Is this true of the rest of the English-speaking world? (Aside from that whole “zed” thing, of course.)
Well, we have it in Australia, and likely the rest of the Commonwealth. “Ah ! Vous dirai-je, maman” (the alphabet song / Twinkle Twinkle Little Star) is used (and began) as a children’s song in a whole bunch of countries.
Happy birthday is more recent, but it’s certainly in all anglosphere countries I’ve been to.
England uses both. Old children’s books have people sing “For He’s a Jolly Good Fellow” instead of “Happy Birthday”, but I don’t think it’s still sung very much. I don’t know when it was replaced- I suspect around the 1970s?
In terms of other children’s songs, while it doesn’t really exist in English, Il était un petit navire exists in a lot of languages- I learned it in Greek, and have since found out it was originally French.
Here FHAJGF is sung after Happy Birthday, followed by a hip-hip-hooray. For adults, a drinking song is often added that goes:
He’s an Aussie, He’s true blue
He’s a pisspot through and through
He’s a bastard so they say
He shoulda gone to heaven but he went the other way
He went down! Down! Down! (repeat until the person finishes their beer/cider/etc)
France uses the same alphabet song too, except I think they repeat a few letters to make it fit with the music better.
I’m realising now that I can’t remember for the life of me how exactly the alphabet song I learned in school (in the UK) went. How do you get around the problem that there’s not enough letters?
In American English, we sing “now I know my ABCs / Next time won’t you sing with me” after “y and z”, if that’s what you mean.
Yeah, so that’s one solution in American English, so you have lines ending with “vee”, “zee”, “cee” and “me”. I’ve also heard an American one that goes “now I know my ABC / 26 letters from A to Z”.
In British English it’s called “zed”, so if we did what you suggest then the second-last rhyme wouldn’t rhyme. I have a vague recollection of us rhyming “zed” with “alphabet”, but you either miss out some of the tune or you repeat letters as far as I can tell.
This is actually really bothering me. From google, and asking one other person, it seems that we do exactly the same thing as you, but we say “zed” rather than “zee” BUT THIS DOESN’T RHYME! What are we playing at? Somebody needs to sort this shit out!
EDIT: research tells me that the original is from the USA, so perhaps it makes sense that we just lazily modified it.
“Now I know my alphabet, 26 letters from A to zed” works for the end of the song, but I’m not sure what you can do about the verse before that. Rename “vee” to “ved”?
I’ve heard, but have not tried to verify because I suspect it isn’t true, that an earlier version of the song had after z “and per se and” which after some mangling is how & came to be called ampersand.
What does that mean in context? Is there a reason I should know as to why this one phrase is elevated to letter status?
Ampersand really does have that etymology, but it first appeared in 1837 (or 1830?), just two years after the song, so it is not due to the song.
And alphabets really were written with a terminal ampersand, at least sometimes. But I suspect that the usage “and per se and,” like the usage “a per se a” was more for dictation than for reciting the alphabet, where you don’t need to distinguish the word A from the letter A. But things about the discrepancy between spoken and written language don’t leave good records.
I don’t know whether the original alphabet song had a terminal ampersand. I’m surprised I can’t find a reproduction of the original printing. Here is a different alphabet song with an ampersand, into the 20th century. (Wikisource claims to have the original, but I don’t find it very convincing.)
Canada definitely uses it, for English. I don’t remember ever singing it in Swedish, which makes some sense, as the Swedish alphabet has three extra letters at the end.
In Hungary, in the Hungarian language, the Happy Birthday song is often sung with the same tune and translated lyrics close to the original. There are also two popular birthday songs with unrelated tunes that are not folk songs and are more often played from a recording than sung: “Boldog születésnapot”, performed by singer and actor Halász Judit, from the eponymous 1986 album, and “Ma van a szülinapom!”, performed by the Alma együttes, from the 2007 album “Maxikukac”. (The latter, by the way, explicitly mentions a 6th birthday once.)
I don’t think I’ve heard any alphabet song before the English one came in with American culture and Sesame Street in the 1980s, and I haven’t heard any song for the Hungarian alphabet, but maybe I’m just too old to meet those. (I’m not too old to celebrate birthdays, but I know the alphabet well and don’t need a song.)
In the absence of non-human predators, Sri Lankan elephants lack the matriarchal hierarchy often claimed for all elephants.
Between this and the discovery that you can’t separate a herd of females and calves from the so-called loner fathers without boy calves becoming dysfunctional, it goes to show how little we knew about these intelligent mammals.
Anyone else watching the new Jack Ryan series on Amazon Video? I’m up to episode six, and liking it quite a bit. It’s impressive that they took the time to give the bad guys some solid motives, rather than just random ignorant hatred. I do wish they had toned Jack down a little for at least the first season. He is rather obviously holding the hero token.
No, but I did watch The Hunt for Red October this weekend and quite enjoyed it. I didn’t even realize that it was related until I finished. 🙂
Currently on Episode 4, and it seems to be pretty faithful to the general spirit of the books. I like it.
On the other hand, whoever did the trivia captions was amusingly ignorant of military life. One suggested that because Jack was in the Marines, but hadn’t killed anyone before, he “may have served in a Military Occupation Specialty capacity while he was a Marine”. (For those who don’t get it, an MOS is your job code. The sentence is incoherent, and I’m pretty sure they meant he wasn’t a combat officer.)
Naval Gazing finishes looking at the Sino-Japanese war this week.
What’s the longest bridge someone could build, given today’s engineering technology? The current record holder is in China, and basically goes over a river delta, and has footings every so often. Longest span between footings is another interesting question, but my interest is in, say, crossing between Florida and Cuba. Or tying the entire East or West Indies together. Or even a highway between Hawaii and Alaska or CONUS, or both! For instance, is there a limit on how deep we can put footings? What other engineering bottlenecks exist? (Ignore political and financial bottlenecks for this question.)
A quick search suggests that engineers think a few hundred meters depth could be done, though it doesn’t sound like any bridge has done it yet. But for your Hawaii to CONUS or Hawaii to Alaska bridges, you would be talking about a few thousand meters. Couldn’t find anybody even talking about that.
I believe total bridge length is basically unlimited, provided you can put the footings in. It appears the deepest bridge foundation is currently a bridge called the Taizhou, at 57m, but there are oil platforms sitting on the seabed at 300+m, so that should be doable. Once you’re off the continental shelf, I think you can probably forget about it.
The Straits of Florida are 1800m deep and 150km wide; I think that’s not going to happen with current technology. You could imagine a floating bridge, but I think hurricanes rule that out pretty conclusively.
For most of the bridges you list, the problem isn’t primarily the length; it’s the depth. Once you get too deep or go across too difficult undersea terrain, it’s prohibitively difficult to put in moorings, so the simplest way would probably be to construct floating bridges on pontoons. I live just across the city from the longest floating bridge in the world, at 7,710 feet; and a county away from the longest saltwater floating bridge at 6,521 feet – so there’s a ways to go.
What about hyperloop tubes going long distances, floating underwater but near the surface to protect from hurricanes?
>What’s the longest bridge someone could build, given today’s engineering technology?
I am going to take your question literally and say at least 5,000 miles. I suggest we make a bridge from Grand Rapids, Minnesota to New York City. We build down the Mississippi River, through New Orleans, over to Florida, down and around to Miami, then up the East Coast to arrive at New York. I figure it’s 2.3k miles down to New Orleans and at least 3k miles to get to New York. Would a super-long river-then-coastal bridge be useful? Absolutely not. Would it be doable with today’s engineering technology? I cannot see a reason why not.
Here’s something I wondered on the BS jobs thread, but I only got to post late so very few probably saw it.
A lot of bureaucratic nonsense is supposedly due to being hyper-cautious about potential tort liability. In the 1970s New Zealand enacted a comprehensive tort reform unique in the Anglosphere: lawsuits for accidental injury were abolished entirely and instead there is universal no-fault coverage. The government-owned Accident Compensation Corporation pays all tort claims. Its funding mechanisms are roughly equivalent to the no-fault insurance systems in America: there’s a payroll tax instead of worker’s comp, a levy on automobiles instead of mandatory auto insurance, etc.
So if we have any Kiwis around, I’d like to know how this works. If businesses are freed from the eternal worry about getting sued over every minor oversight, do you get less of the bullshit in the previous posts, or does it persist for its own reasons? I’m also curious about experiences with the ACC and the public’s general opinion of it.
As an American who is involved in law, I am skeptical that tort reform would help. What you really need is discovery reform and statutory deadlines. Most obviously frivolous claims can be dismissed quickly and easily. It is the fact that normal cases (like a car crash) can get extended into 3+ year affairs that really drives up fees, and it’s fees that people are scared of. If an employee gets fired and they file suit for gender discrimination, the problem isn’t only the payout, it’s that even if the employer prevails he loses $100k.
Kiwi here. We don’t have workplace lawsuits, but ACC has built an impressively bloated bureaucracy of its own. It operates in exactly the way you would expect a compulsory monopoly to operate. For example, I have to pay ACC levies as a self-employed person, even though I have been overseas for long enough that I am not, in fact, eligible to make a claim. I wasn’t thrilled about literally being forced to buy useless insurance, so I entered into a long and baffling correspondence with ACC which could only be described as Kafkaesque. Eventually I just gave up, because it wasn’t worth the stress. Tens of thousands of people who are battling to get their claims resolved do not have this luxury.
As far as on-the-job bullshit goes, I don’t know if it makes a whole lot of difference? We had a recent reform of health and safety law which put more of the onus on natural individuals instead of companies, and massively beefed up the penalties (up to $600,000 fines, up to five years’ jail). So employers still have a pretty strong incentive to cover their asses, including all the farcical box-checking stuff.
…flawed as it may be, my guess is that universal no-fault coverage is still the much better option. Most New Zealanders don’t even bother with health insurance, because of the combined benefits of ACC and the public health system. It probably creates a bit more moral hazard, but seems like the more humane/pragmatic way to do things.
NZ here involved in a small aviation tourism business. ACC is great because we don’t worry at all about being sued, it just isn’t a topic that comes up at all. We comply with the aviation rules because they mostly make sense and we can be prosecuted if we don’t comply, though mostly we would just get audit findings from CAA as long as there is no reckless intent.
The new Health and Safety rules coming into effect have everyone in aviation here in NZ scared. We’ve been told horror stories, and there is a lot of cover-your-ass paperwork going around. One regulator from CAA told us that to comply with the H+S laws we would need to do an in-depth audit of every company we subcontract to. So for example, if we sell a product that includes a flight to a remote location, then boarding a helicopter to do a guided mountain bike tour, we would have to audit the helicopter company and the mountain bike tour company. I asked why we couldn’t trust CAA’s audit of the heli company to be sufficient, and was told that CAA’s audit wasn’t good enough, we had to do our own, and that anyone who sold a trip to customers would be responsible for the whole trip. Completely impractical, and this was just one guy’s opinion, and no one is actually going to start doing that. But there’s a push for it.
A few years ago there was a helicopter crash resulting in at least one fatality. It was pilot error, and the pilot had a history of failed check-rides. Upper management had pushed the operations manager to keep the guy on for various reasons, until he crashed. There were no lawsuits and the company was eventually fined (less than 100k) for not filling out the load sheet properly, which was stated as not a contributing factor in the crash.
As far as moral hazard, I think that’s one of the best things about ACC. It encourages the general population to enjoy the outdoors and adventure sports, which is a great thing culturally I reckon. It facilitates a high trust society by removing the threat of being sued by strangers all the time. The value of a high-trust society and the subsidizing of adventure is pretty high and is an underused argument for those sorts of state-run insurances.
That being said, ACC is expensive as hell and I’m not an expert enough to comment on the costs side. Like most government spending, it is easier to see the benefits than the costs.
I think you have it backwards. You *have* a high trust society, *therefore* your ACC and similar social infrastructure is relatively smooth and positive.
That is certainly possible in some ways, but the way I mean is like this:
A) It is near impossible to sue anyone over an injury or death
therefore B) you can trust people not to sue you if you do them a favor or do something slightly outside the box for them and something goes wrong
ACC facilitates A. They could certainly open people up to more liability over things even with ACC in place, and they are trying to do that a bit with the new Health and Safety Act.
How do you trust e.g. your employer to not electrocute you by leaving all the worn and loose wiring in the workplace unrepaired, because repairs cost him money and, per ACC, electrocuting you costs someone else money?
It strikes me as unlikely that the NZ government actually hires enough safety inspectors to catch everyone who adopts such a deliberately mercenary attitude re workplace safety. It seems much more likely that they don’t need to hire that many safety inspectors because people aren’t cutting corners to begin with. That is what a high-trust society means. The lack of lawsuits is a symptom, not a cause.
Hi John Schilling,
NZers are known for cutting corners and they take pride in it to a degree. See “number 8 wire” and “she’ll be right, mate.” I don’t know the workplace accident statistics but there is certainly a culture like that here.
The employer can be prosecuted for breaking the law even if they can’t be sued by individuals. They’d only be caught if there was an audit or an accident.
I hear a lot of stories about how the security of voting machines is weak, and how even casual hackers can tamper with results. Given that and the fact that many groups would be interested in doing this, where is the negative evidence suggesting that the vote in the USA has not been hacked already? If necessary, differentiate between different levels.
The best answer I can come up with myself is that we don’t have many results that seem highly implausible. The Trump election, in particular, seems to be what happens in a world where the vote is still functional.
I’d say the chances that election machines have been tampered with, for as long as they have existed, are high.
As for plausible: that’s how tampering works. The goal is to tip marginal seats. So if the USA has a tampering problem from one or both parties, the low-hanging fruit would be to look at areas where specific voting machine companies/models have achieved statistically unusual results in marginal seats over decades.
Not to be That Gal, but elections have a greater than average chance of verging onto CW discourse. I’d suggest people tread carefully here.
where is the negative evidence suggesting that the vote in the USA has not been hacked already?
Is it not true that it is difficult to prove a negative?
…leaving that aside, and speaking more to solving for the underlying question – to me, there are several parts – first, what does a ‘true’ vote consist of, secondly, what fraction of elections represent something other than the ‘true’ vote, and third, what fraction of ‘non-true’ election results are due to deliberate action meant to change the outcome of the election?
For me, a discussion of ‘vote hacking’ that limited itself to long distance computer manipulation (‘hacking’*) could be interesting in and of itself, but would be dreadfully limited in terms of the actual scope of the problem. (By the same token, I reject relying on paper ballots as “essential” to democracy – they are a *useful* tool of balloting, – and may be the best option for many years to come – but paper ballots are neither necessary nor sufficient for a free and fair election.)
* hacking is not my field. Please correct with a better phrase/definition if you have one.
The same place as the negative evidence proving that you aren’t a paid Russian agent trying to cast doubt on the legitimacy of US elections. Fortunately for you, most of us here are wise enough not to use “evidence” in such a negative fashion.
That said, the stories about how the security of voting machines is weak, etc, all involve direct on-site access to each individual voting machine being hacked. The probability of a Russian agent getting caught hacking an election machine may be small, but it isn’t zero. By the time a Grand Conspiracy has hacked enough election machines to predictably sway an election, without leaving obvious spikes in the results, it is almost certain that several of their agents will have been caught red-handed.
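The “several of their agents will have been caught” claim follows from simple independence arithmetic: even a small per-machine chance of getting caught compounds quickly across the number of machines a conspiracy must touch. A minimal sketch, where the 1% per-machine catch probability is a made-up illustrative number, not an estimate:

```python
def p_at_least_one_caught(p_catch, n_machines):
    # Probability that at least one tampering attempt is detected,
    # assuming each of n_machines attempts is caught independently
    # with probability p_catch.
    return 1 - (1 - p_catch) ** n_machines

# A hypothetical 1% chance of being caught per machine, across the
# hundreds of machines needed to predictably sway an election:
print(round(p_at_least_one_caught(0.01, 500), 3))  # ~0.993
```

The exact inputs are guesses, but the shape of the curve is the point: detection probability approaches certainty exponentially fast in the number of machines, which is why a Grand Conspiracy of hand-hacked machines is hard to keep secret.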
The decentralization of the US electoral process makes it extremely difficult – impossible as far as the sorts of stories you have been seeing in the news are concerned – for Russian agents (or anyone else) to rig an election from a remote or centralized location. But there’s lots of things they can do to cast doubts on the results, including but not limited to online trolling, hacking voter registration databases, and data dumps to Wikileaks. And I’ll be wanting positive evidence before asserting that anyone has gone beyond that level.
It would not be impossible to rig the presidential election at least; given how it hinges on some key states, you don’t need to hack every voting machine, just a few (still a fuckton) in those states.
I don’t think it happened in the last election, and would need some positive evidence, but I would not put this beyond the capabilities of a state actor with resources.
The problem with this logic is that it presupposes one knows which states the election hinges on.
While this seems easy enough, I don’t recall a lot of pundits talking much about Wisconsin or Michigan last time around…
I mean 538 definitely did. Plus supposedly the Clinton campaign actually knew they were weak there but refused to do anything.
I mean, WI and MI were mentioned, but remember it’s not just them. If Clinton had won Florida, which she lost 47.8% to 49%, then she would have needed any one of Michigan (47.3 to 47.5), North Carolina (46.2 to 49.8), Pennsylvania (47.5 to 48.2), or Wisconsin (46.5 to 47.2) to win. That is 5 close states right there – and I don’t think there was any strong way to know ex ante (particularly sufficiently early to set up the hacking op) which would be the ones to target. Yes, there are some states that are clearly unlikely, but usually around 7-10 possible swing states. That makes it really hard to swing a Presidential election this way.
It’s possible we should be more concerned about the swinging of local/congressional elections – but that is different.
It’s certainly not good that the machines are so vulnerable, but the Presidential election is very likely unaffected.
I don’t understand your first paragraph. The point about having to multiply the success ratio for all machines you want to hack seems solid, or at least sounds convincing to me.
I mean, certainly in any ‘important’ setting, it would be unwise to make the claim without providing any positive evidence. But in terms of pure epistemology, like if I actually had to put a probability on this, I don’t think it’s obvious that the default assumption is that nothing happened.
A related question is whether we would even know it if someone had been caught. If it’s not a Russian agent but just some guy hacking one machine, then even if someone found out about it, would it be news or just be buried? That probably depends mostly on how many people are involved in the process.
Anyway, I certainly wasn’t making a hidden claim that hacking happened, I have no idea.
People who genuinely have no idea, generally ask whether there is evidence for or against a proposition. Asking only about the negative evidence implies that one feels they already have all the positive evidence they need and that the proposition should be considered true unless negative evidence is provided.
If that truly wasn’t your intention, then your phrasing was most unfortunate. Something to consider going forward.
Ah. Well, the reason I phrased it this way (it was intentional) is that I think the mainstream position on this topic is that of course everything is fine, whereas I was genuinely uncertain either way, hence I think the burden of proof is on anyone who holds the mainstream view. That’s why I basically asked, “where’s the evidence to be confident that nothing happened?”
Fwiw, I’d put higher odds on ‘no large-scale attack has occurred’ but probably also on ‘some “hacking” has occurred’ after reading responses.
I’m a software developer technically/barely in the computer security field, and have been looking at this for about a decade.
A few broad principles:
* The more types of access you have to something, the easier it is to make it do what you want. So being able to get complete physical access to something makes things easier than trying to access something which is in an isolated room on the other side of the planet.
* The more complex something is, the more likely there are flaws in it.
* There’s a difference between security, reliability, and auditability.
In a lot of cases, the “hacking” was done with uncontrolled physical access to the devices. As you might imagine, if you went to your local polling place, started disassembling the voting machines and attaching multiple laptops, somebody is going to notice and call the police. Not all of the voting machines are connected to the Internet, so that makes things more challenging.
Also, part of the challenge is in making any unauthorized access do anything useful. Most bugs in software will simply cause the system to crash. That’s annoying and possibly casts doubt on the election, but doesn’t change the results. Being able to either choose the results or turn them into garbage is harder and less likely.
You don’t need to connect multiple laptops. All it takes is a cheap microcontroller plugged in via the same USB port that was used to set it up in the first place. And nobody can see you, because it’s illegal for anybody to see you while you vote.
Edit: This is, of course, moot in the case of machines like WinVote, where you can get complete read/write access to the vote counts via wifi.
The US voting system is huge, decentralized, and balkanized. Hacking voting machine is probably trivially easy, but hacking enough of them to affect an election without getting caught would require a sizable conspiracy, the involvement of hundreds, maybe thousands, of local election workers, a conspiracy at least as large as old fashioned ballot box stuffing required.
Voting Machine Security is not about Voting Machines
No shady nation state worth their salt is going to go around every single polling place and hack every single machine by hand. What these analyses of voting machines really tell us is that they were built by organisations with only a superficial regard for security. It’s not that the holes are there, it’s that they’re really dumb.
The machines often use outdated software with known security issues, such as Windows 2000 and WEP. They use default passwords that can be found via google. They use physical locks that can be compromised using a ballpoint pen. They have exposed USB ports where you can plug in a keyboard and hit Ctrl-Alt-Del. Their tamper evident seals can often be circumvented.
We only know this because researchers can get their hands on voting machines via the resale market. But the companies that make voting machines also make (and sometimes operate) the rest of the voting infrastructure. We have no legal way of inspecting the vote tabulation machines, nor the way in which software is loaded on to the voting machines, nor the protocols by which all these things communicate. Compromising any of these systems will allow you to do far more damage than you can with the voting machines themselves.
We cannot seriously suppose that the voting machines are full of vulnerabilities, while the centralised backend infrastructure is completely secure.
Regarding evidence of vote hacking, the firms who develop the e-voting infrastructure have very little incentive to implement effective systems to detect tampering. Any state-level attacker worth their salt ought to be able to cover their tracks.
That is to say, we have no way of knowing whether or not the elections have been hacked.
Tl:dr: Voting machines are the tip of the iceberg. Upon close inspection, everything above the waterline is full of shit. No prizes for guessing what it might look like below.
Thanks (also to other responders), that was genuinely insightful.
Rigging legitimate or semi-legitimate elections is tricky, trickier than you might think. When an illegitimate election is rigged, it is often rigged in an incredibly obvious way, where the winner gets 99% of all votes and up to 105% of the voting population showed up to the polls. I used to think it was ridiculous and dumb to rig elections this obviously: why not go for the 60/40 strong win and get to claim legitimacy to the rest of the world? But the point of the rigging in these situations is not the win, it is the demonstration of power. If you can stand up in front of the entire country, tell a bald-faced lie that everyone knows is a lie, and have no one challenge it, then you have demonstrated an extreme amount of strength and made truth-telling dangerous. (As an example, imagine a person giving a live press conference, with every one of their exes as the live audience, claiming to be the world’s greatest lover who had given every partner multiple orgasms multiple times. What kinds of actions would have to precede such a stunt for zero of those exes to contradict the speaker publicly?)
The more legitimate an election is, the more the difficulty-to-value ratio of rigging shifts. Rigging a blowout is either unnecessary (rigging it the way it would have gone anyway) or very risky (a wildly surprising result is going to open up all kinds of questions). The closer the election, the easier it is to rig without forcing an inquiry (with some exceptions: elections close enough to automatically trigger a recount could be more dangerous than those just outside the bounds), but also the lower the value of the rigging, since you might have won anyway.
Then you have a secondary issue which is also high impact: you don’t know if your particular rigging is going to be effective. Sure, after an election it is obvious which locations were high leverage, but before it there are usually multiple potential high-leverage sites, plus the possibility that the election won’t be close enough to have high-leverage sites at all. To effectively rig such an election you need the opportunity to rig many sites beforehand and then make the appropriate alterations as results come in, which increases the risk in two ways.
Finally, rigging generally brings the most benefit to the winning candidate, which means the circumstances where an election will be rigged without some level of consent from that candidate will be fairly rare.
I am crunched for time, but I remember reading an analysis someone did of this: here is the basic methodology, which I found convincing.
Hacking a single voting machine is fairly do-able. Hacking an entire city’s voting machines is much harder, but imaginable. But it is unlikely that you could hack all the voting machines across many jurisdictions since there are many different types in use, some create auditable print records, and so on.
So to assess voting machine security, look at patterns of votes by area demographics. If (e.g.) African-Americans vote in higher numbers and more favorably toward a particular candidate (e.g. Obama in 2008), then the shift in voting, adjusted for demographics, should be similar across jurisdictions. If there were voting machine tampering, you would expect to see a break in that pattern: the votes wouldn’t shift predictably with jurisdictional demographics.
I believe someone looked at either Michigan or Wisconsin with this methodology, and concluded that there were not the kinds of anomalies one would expect hacking to produce.
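A minimal sketch of that kind of check, with entirely made-up jurisdiction numbers (the real analyses use far richer demographic controls): fit the demographic-share-to-swing trend across jurisdictions and flag any jurisdiction whose swing falls well off it.

```python
# Entirely hypothetical jurisdiction-level data:
# (share of a key demographic, observed vote swing toward a candidate).
data = [
    (0.10, 0.021), (0.20, 0.039), (0.30, 0.062),
    (0.40, 0.079), (0.50, 0.095), (0.60, 0.080),  # last one breaks the trend
]

def fit_line(points):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b

def flag_outliers(points, threshold=0.015):
    """Return points whose swing sits further than `threshold` off the trend."""
    a, b = fit_line(points)
    return [(x, y) for x, y in points if abs(y - (a + b * x)) > threshold]

print(flag_outliers(data))  # only the jurisdiction that broke the pattern
```

An unflagged result doesn’t prove clean machines, of course; it just means any tampering would have had to mimic the demographic pattern.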
Not directly related to the question of rigging voting machines, but Russian election results are known for mathematical anomalies: not only the kind where 99.9% of breathing Chechens vote for Putin’s party, but also the kind that arise when lazy tabulators who intend to fudge the vote counts into more “believable” numbers favor round numbers too often. One could conduct similar analyses for US elections.
However, such effects might be more difficult to observe if we assume that two or more parties are in the business of trying to rig the elections in the presence of an adversary they know of (namely, each other). They will avoid the obvious mistakes, like producing an unbelievable number of round figures. Furthermore, the seats that could plausibly go either way are the prime candidates for rigging for both parties, and one would assume all involved parties have the same statistical tools at their disposal to identify them. (Imagine that one box of postal ballots for party A goes missing in one precinct while, at the same time, dead people are recorded casting votes for the same party elsewhere.)
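The round-number tell can be checked mechanically. A toy sketch (vote totals invented for illustration): tally the last digits of precinct totals and compare them against the roughly uniform distribution that genuine, large-ish counts should follow.

```python
from collections import Counter

# Invented precinct totals: a lazy fabricator's numbers, all ending in 0 or 5.
totals = [1440, 1275, 1990, 2310, 1185, 1500, 1725, 2055, 1860, 1635]

def last_digit_chi2(counts):
    """Chi-squared statistic of last digits against a uniform distribution."""
    digits = Counter(n % 10 for n in counts)
    expected = len(counts) / 10
    return sum((digits.get(d, 0) - expected) ** 2 / expected for d in range(10))

# With 9 degrees of freedom, anything far above ~17 (p ~ 0.05) is suspicious.
print(last_digit_chi2(totals))  # → 40.0
```

With real data you’d want thousands of precincts, not ten, before the statistic means anything.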
Electronic voting infrastructure in the US (and probably everywhere else that uses it) isn’t particularly hard to corrupt; there are demonstrations every year at Defcon and Black Hat. Depending on how a particular local infrastructure is organized the weakest point might or might not be the actual voting machines; I’ve also seen exploits targeting e.g. precinct-level tabulation. But that’s not really the same thing as saying that elections are easy to hack: to swing a national election you need to corrupt infrastructure at scale, and you need to not get caught doing it. That’s a much harder problem, especially when the nation you’re looking at doesn’t have a national standard for voting infrastructure.
(Source: am software engineer working in the security space. Haven’t worked on this problem directly, but coworkers have, and it’s frequently discussed.)
Your mission, should you choose to accept it, is to cause the evacuation of a city of at least 1 million people without directly injuring anyone. How will you do this?
Advertise free beer in the next city over.
I guess it depends on what you mean by “injured,” but I’d spread around some medium-level radioactive material: at a level where no one is likely to get immediately sick from radiation poisoning, but they can’t let everyone stay there.
Have you tried anthracite?
(Okay, not a city of 1 million, but still, a pretty impressive near-total depopulation)
Why bother with that crap when you can just hose the place down with methyl mercaptan? My grandfather told me a story of how they spilled a whole dewar of the stuff that was being used to odorize utility natural gas, and the entire town just left because of the smell.
Maybe a few water-bombers full of putrescine. Hose down the city with the smell of rot.
The Evil League of Evil won’t turn down our application this time!
After your previous failed applications, the League requires actual murder this time. So unless the smell of rot comes from actual rotting corpses, no go.
My idea is simply to offer people money: $1000 to any resident of Edmonton, AB who is willing to be anywhere else next Thanksgiving. Coordinate with major hotel chains to handle the logistics. $1000 a head is real money for a family; I bet any number of tourist destinations would offer package deals. Still, at a billion dollars, this one is pricey.
I was considering something similar with a poor Muslim city and the Hajj, in which case the logistics are largely pre-handled and lots of people would probably do it just for the price of the ticket.
Saudi Arabia offers limited and hotly-in-demand tickets for the Hajj, so you’d need to bribe them to get the tickets to everyone in this city as well.
I don’t think this would work. The cost for a family to leave the city for a substantial time, at an arbitrary time, will be more than $1000 a head. Imagine the wages lost, people who have family hospitalized, people who contractually cannot leave barring some major emergency, people who just don’t need $1000 a head… you’d be left with a ton of people who’d rather stay in the city, far more than the 99% cutoff allows.
I’d personally go with a fake industrial accident. Select a city with a huge factory in it, claim it is throwing asbestos into the air, get some experts to declare on TV that everyone must go. Call corrupt any state officer who refuses to go along with the story.
Depending on the city, you could get a huge circle inside it evacuated in a very short time. This would also solve the hospital problem: they’d have to get people out. Enough for you to film key scenes for a zombie movie, or whatever it is you’re doing with an empty city for a day or two.
I think the hospital issue means you’d get some people injured, though. That may be avoided by careful selection of the place.
Hospitals are a good point. Some people just can’t safely be moved.
I think that means we have to make it possible to stay if it’s a matter of life and death. We can’t, under the rules, make the city a lethal environment. But we can make it an unpleasant or scary place. If the only people who stay are the ICU patients and a few nurses to care for them, that’s still well over the 99% bar. We can leave 10,000 people in the city and still succeed.
I think you’re onto something here. We convince the inhabitants that the entire city is haunted! I’ll need a few hundred volunteers and some bed sheets with eye-holes cut out.
A co-worker tells me that when Chevron had a release in Richmond, his relatives called each other to come toward the release so that they’d have detectable levels on them, in the hope of a payout: “Yeah Doc, I’ve got the petrochemical carcinogens real hard.”
What in the actual fuck.
“I’ll need a few hundred volunteers and some bed sheets with eye-holes cut out.”
This may be mis-interpreted and result in an *influx* of protestors, journalists, etc.
Assuming by ‘evacuation’ you mean ‘complete removal of all humans from a metro area to a distance of over 26 miles (over the horizon) in a time of less than one month’ – I would reject the mission, as it cannot be done.
*cue entrance by Ethan Hunt*
(At least, it has not been shown to be capable of being done, in the USA, in my lifetime.)
To expand – the year-round populations of the FL Keys and the NC Outer Banks are about 75k and 60k, respectively, and even assuming the population doubles in the summer, that doesn’t come close to a million; and neither has been successfully evacuated in the last fifty years.
Centralia still has seven holdouts (or did, 5 years ago) and the state gave up on them.
Wikipedia gives about 50 metro areas of 1 million residents – some of them are likely easier than others – but all of them have severe transport bottlenecks. (Most of them being coastal towns with built-in movement restrictions on at least a third of the circumference.)
Having said all that – the best way to effect that evacuation would be to practice it, and intersperse the ‘exercises’ with ‘minor’ real-world incidents that get everyone’s attention. But all in all, I think that many planners – and that’s what you’re asking for here, municipal emergency planners – underestimate the resistance of the population to responding uniformly to *anything*. A situation that convinces Marge and Henry to pack up the kids and get out of town isn’t going to make Great Aunt Sally stir out of her house, and something significant enough to have Uncle Joe come *get* Sally is also going to bring Cousin Pete and Joe Jr in from the countryside to gawk.
So we could play around with weather threats and firestorms and nukes and riots as potential tools, but until one has a handle on exactly what people are going to do in response, it’s not actually engaging with the mission, imo.
Yes, some people won’t go. But I would be satisfied with 99% of the population gone for a day.
Step 1: Steal a scheduled rail shipment of especially toxic chemicals that’s headed through Dallas. I don’t need to keep it, just divert it to a yard or something.
Step 2: Substitute a shipment of rail cars carrying some heavy oil (or something else appropriately flammable and smoky when burned). Derail and overturn this in the worst possible place, in or upwind of Dallas; the train crew is in on this, so they’ve gotten off.
Step 3: Set overturned shipment afire.
Pick Houston instead. After Harvey last year, evacuation would be fresh in the minds of the residents, and hopefully plans have been recently reviewed and updated by officials and emergency responders. A ship would be easier to steal and use to stage an incident in the harbor. Or something goes wrong at an oil refinery.
Possibly a better idea: Jacksonville, FL. It only just meets the 1M requirement (as opposed to ~6M in Houston/Dallas), has much less of a congestion problem, and has evacuation plans in place because of hurricane risk.
Hack some kind of disaster early-warning system and fabricate a false alarm.
Candidates include: a NOAA tsunami warning. The seismic data would be hard to fabricate; maybe fake footage of a meteor in the middle of the Pacific.
Volcanoes. Causing an explosion to make fake smoke is an option.
Dam: fabricate data to simulate imminent structural failure upstream of a city. (or build a dam just downstream of a city. Does it count if the evacuation is planned?)
Does it even have to be a false alarm? There must be someone you can point to who caused the evacuation around Mt. St. Helens.
So an alternate option would be to become a senior vulcanologist in somewhere like Naples, then wait.
Tsunami warning is actually easy, as long as you are happy with not being able to time it to your liking.
Just wait for a decent earthquake somewhere across the sea from the city you want to evacuate, then bribe the select people who are supposed to sound the evacuation alarm. It will be credible for a little while.
You’d probably not get a million evacuated, and it won’t last more than a few hours, though. And people will fucking notice that their towns weren’t submerged. Most coastal cities will only evacuate, well, the actual coasts. For example, the Japan evacuations in 2011 seem not to have reached as many as a million people in any single city… I think.
Indonesia or the Philippines should be able to get a cold million in a coastal urban area with ease.
Another idea is to mess with the water supply. The water for NYC passes through three big aqueducts. It shouldn’t be all that difficult to wreck them badly enough to require a couple of weeks of repairs. I would expect most people to clear out until the tap water flows again.
Am I crazy? I wouldn’t necessarily expect that. I’d expect enough people to come in from out of town with bottled water that plenty of people would just stay and make do. Am I underestimating the role of tap water in my life?
Take a look at your water bill. How many gallons do you use on a weekly basis? Now imagine having to lay in all of that as storage.
In the US (at least everywhere I’ve lived), the water for your sink, toilet, and shower all comes in on the same main pipe. How comfortable are you washing clothing by hand or not being able to use indoor toilets? How comfortable are you forgoing bathing for that time period? How comfortable are you with the folks around you forgoing bathing for that time period? Having been through more than one water shortage or water shut-off, I can tell you that baby-wipe bathing only does so much.
Grey-water recycling can mitigate some of this, such as the toilet situation, but without a steady supply of fresh water, it’s going to be unpleasant for a lot of people.
Based on an exchange I had on r/books about the Goodreads reading challenge, I was wondering: what is the typical reading speed of an SSC reader?
According to the internet, for adult humans (words per minute):
250 for an eighth grader
300 for an average adult
450 for a college student
675 for a college professor
1500 for a speed reader
For me I have some sort of fiction/non-fiction split:
1000-1200 for STEMish non-fiction
1500 for history
1800-2000 for recreational reading of fiction
Well, I’ve never measured it. Words per page vary too much, even within a set of books from the same publisher.
According to the Staples reading test, I read 396 words per minute, which is 58% faster than average, but over 1000 sounds insane. Sounds like an android. If it were just informational books I could understand it, but if you read fiction that way, the succession of images in your head would be sped up past how fast those events are supposed to be occurring realistically, or so I would imagine.
Hmm, Staples puts me at only 1500 even though it’s fiction. Weird. I have been up late, though; I could just be tired. Possibly it’s because the text is so short compared to the other tests: I only read for like 8 seconds, compared to some tests that have a couple of minutes of text.
I have a friend who reads about that quickly. Or at least, she did when she was 14 — she read Memoirs of a Geisha in about 3 hours. (My mom didn’t believe her, so she asked her several reading comprehension questions, which she apparently answered satisfactorily.) If this site is correct, that’s about 1000 wpm.
I have no idea how she did it or what she was imagining as she read, but I would not guess that the subjective experience of time when one is imagining the events of a book has much to do with reading speed.
Perhaps fast fiction readers experience slowed down imagination time.
I read fiction at a pretty fast rate. Often I read as a mental release/reset/escape. I read in part to be overwhelmed with the story. I don’t see mental images, I follow the words on the page and I experience the story as I read. Sometimes, you spend a page to cover a few seconds of a fight, sometimes you skip months. I experience imagination time at the speed I read.
The speed at which you’re reading doesn’t affect the pacing of the book, at least not for me. It’s like a dream; you can feel like you spent minutes or hours or years in the dream, but in reality you were dreaming for however long you were in REM.
(woop woop unintentional arrogance warning woop woop)
When I was a kid and I read books a lot, once I got in to the flow of the book I didn’t see the pages or “hear” words; I’d just kinda experience the scenes. That went away as I got older, probably because computers happened and I stopped reading long-form fiction.
It’s like, imagine if you were having memories implanted in you. The rate at which data is being written won’t affect the subjective experience of time in the memory – similarly, when I (used to) read fiction, the narrative just streamed into my mind; I perceived time in the story exactly as the author intended me to.
Actually, I guess this is one of those “universal human experiences” thing; is reading not an automatic process for some folks here? I remember my father reading pre-school assignments over my shoulder and saying “you can’t possibly be reading that fast” and I thought “you mean you don’t just… look at the page and know what it says?”
Don’t, like, typical-mind-fallacy me, dude!
Not everyone translates words into images in their head, and even those that do don’t all translate every scene and every portion of a described scene into the image.
Related: has anyone here ever tried spritzing, or used it practically?
2000 seems a bit crazy? That’s over 200 pages per hour for a typical book. You’d finish The Hobbit in 47 minutes at that pace. Surely you’re missing out on a lot of stuff…
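For anyone checking the arithmetic, a throwaway sketch (both constants are rough estimates: ~95k words is a commonly cited figure for The Hobbit, and ~300 words fits a typical paperback page):

```python
def reading_minutes(word_count, wpm):
    """Minutes needed to finish a text at a given words-per-minute rate."""
    return word_count / wpm

HOBBIT_WORDS = 95_000    # rough word count, commonly cited
WORDS_PER_PAGE = 300     # typical paperback page

print(reading_minutes(HOBBIT_WORDS, 2000))   # → 47.5 minutes at 2000 wpm
print(2000 * 60 / WORDS_PER_PAGE)            # → 400.0 pages per hour
```

So at 300 words per page the claim is even conservative; denser 500-word pages still give ~240 pages an hour.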
Per Wikipedia: “The World Championship Speed Reading Competition stresses reading comprehension as critical. The top contestants typically read around 1,000 to 2,000 words per minute with approximately 50% comprehension or above. The six-time world champion Anne Jones is recorded at 4,200 wpm with previous exposure to the material and 67% comprehension. The recorded number of words the eye can see in a single fixation is three words.”
50% comprehension seems very low to me, combined with the possibility that when you read as fast as you can, facts are held in your memory only in the short term. But then again, I can’t rightfully judge: I only read several hundred words per minute, so I don’t have the time to look up sample passages/questions from the WCSRC.
The current world record is another guy who did 4750, I think. As far as comprehension goes: considering the way some people can read the same book 20 times and insist they come across new stuff each time, and a couple of studies I saw linked somewhere, 60% comprehension for an average reader is pretty typical, depending on how dense the information is and how specific the test questions are. 67% comprehension at 14 times the average reading speed is still something like 15-16 times more time-efficient than the average person.
It really depends on the book. I’ll bet I could read Pale Fire twenty times and come across new stuff every time (though I’m only on three or four right now), but I don’t think I could read Harry Potter and the Chamber of Secrets twenty times and find new stuff every time. Some books are just denser and twistier than others.
So many people read Harry Potter books dozens of times. It’s so insane to me.
For a certain type of breezy-reading prose popular in certain types of fiction, 3 pages a minute is definitely doable.
The key thing about this kind of prose is about imitating the pacing of visual media. Authors are often trying to find the words to convey the TV show/film/comic vision in their brains. Dialogue, especially, is imitating the rhythms from their favorite real-time media, rather than something that would require Shakespearean training to make sound less stiff. But description/exposition is also heavily rooted in perspective, so only what the character would notice in that amount of time would be put in the text, because it’s the author doing a text equivalent of an establishing/reaction shot.
The antithesis of “talking/exposition/description is a free action,” basically.
But the emphasis of “reading the events takes about the same time as if the action were happening in real time” means that, like films, the full narrative can be completed in 1-3 hours.
The Hobbit is not written with that kind of prose, and so cannot be read at the same pace.
As a child, I came across an ad for a program purporting to teach speed reading, and discovered my reading speed was already in the range they were claiming to teach. But I don’t remember the specific numbers, and I’m fairly sure my reading speed is way down from its peak – I just don’t spend enough time reading, and haven’t over the past decades.
Like others, I have different speeds for light fiction and serious academic works, but in my case a lot of non-fiction intended for a general audience winds up being read at closer to fiction speeds. My peak reading speed in French (second language) never got as high as my reading speed in English, but probably approached the ‘normal’ range for non-speed-readers reading their native language.
And in answer to the person concerned about the time taken to visualize images – I don’t. I’ve never thought in pictures. While thinking in images is reported to be the most common mode, it’s a plurality, not even a majority.
One other comment – in graduate school, an attempt was made to teach us to read a small portion of a book – selected to give a general picture – and call that “reading the book”. That technique may be mis-equated with speed reading, but should properly be called organized skimming or similar. But this may be a cause of some implausible results – or not, since there seems to be huge variance in reading speeds without it.
A lot of the world reading championship stuff refers to comprehension levels from 50-70%, because apparently at the average person’s “natural” reading speed they have 60% comprehension or less. I’ve been considering trying to speed read purely for word count, without trying too hard to actually comprehend what I read in the moment. I’m curious how fast I could read with only 60% on the follow-up tests as a target.
Those seem really, really fast (or not corrected for comprehension). I did a speed-reading class in High School, and I think typical is somewhere between 150 and 250. I was in the 330s when I started, and got up to ~700 by the time I was done, but it wasn’t pleasant. This might be unadjusted (we had a 10-question quiz on our material, and the actual speed was multiplied by the score on that), but that almost suggests that the people higher on your list are skipping more material.
Comprehension around 70% is considered perfectly acceptable, at or above the level achieved by an average person reading 300 wpm. According to Forbes, purely in terms of speed, the average is 300 wpm for a normal person, 450 for college students, and around 675 for college professors. I don’t think the article specified a comprehension level.
The test tells me one thing, but it’s not the same as “real” reading. Based on experience, ~100 pages an hour is the standard for me with fiction.
That is around my comfortable reading speed too. I remember in college I could push it up above that when I had a deadline coming, but that was what I comfortably read.
360 wpm. When I was in middle school I could read at 1200 wpm but I found it unsatisfying and unproductive. At 1200 wpm I couldn’t immerse myself in the world of a fictional book and I couldn’t understand anything but the most superficial facts in a nonfiction book. I like to read nonfiction I can ponder.
237 wpm according to the linked test. Always did read slowly, albeit with very good comprehension.
I haven’t been able to get the test to work on my phone, but I’m inclined to believe that I read far slower than the rates most other commenters have reported.
I’m at something like 380, which I’m disappointed in. I recall being able to read extremely quickly as a child, sometimes getting through 3 or 4 YA-type novels over the course of a day. These days, when reading fiction at least, I like to take my time and cement the imagery in my mind, but I can’t help feeling that that is just an excuse and I’ve simply gotten rusty from lack of practice.
These days I have to rely on my typing speed for bragging rights.