AISafety.com hosts a Skype reading group Wednesdays at 19:45 UTC, reading new and old articles on different aspects of AI Safety. We start with a presentation summarising the article, and then discuss in a friendly atmosphere.
MealSquares is a "nutritionally complete" food that contains a balanced diet's worth of nutrients in a few tasty, easily measurable units. Think Soylent, except zero preparation, made with natural ingredients, and looks/tastes a lot like an ordinary scone.
Seattle Anxiety Specialists are a therapy practice helping people overcome anxiety and related mental health issues (e.g. GAD, OCD, PTSD) through evidence-based interventions and self-exploration. Check out their free anti-anxiety guide here.
B4X is a free and open source developer tool that allows users to write apps for Android, iOS, and more.
80,000 Hours researches different problems and professions to help you figure out how to do as much good as possible. Their free career guide shows you how to choose a career that's fulfilling and maximises your contribution to solving the world's most pressing problems.
Giving What We Can is a charitable movement that promotes giving some of your money to the developing world or other worthy causes. If you're interested in this, consider taking their Pledge as a formal and public declaration of intent.
The COVID-19 Forecasting Project at the University of Oxford is making advanced pandemic simulations of 150+ countries available to the public, and also offers pro bono forecasting services to decision-makers.
Substack is a blogging site that helps writers earn money and readers discover articles they'll like.
Altruisto is a browser extension that directs a portion of the money you spend shopping online to effective charities (at no extra cost to you). Just install the extension, and when you buy something, people in poverty will get medicines, bed nets, or financial aid.
Beeminder's an evidence-based willpower augmentation tool that collects quantifiable data about your life, then helps you organize it into commitment mechanisms so you can keep resolutions. They've also got a blog about what they're doing here.
Dr. Laura Baur is a psychiatrist with interests in literature review, reproductive psychiatry, and relational psychotherapy; see her website for more. Note that due to conflict of interest she doesn't treat people in the NYC rationalist social scene.
Support Slate Star Codex on Patreon. I have a day job and SSC gets free hosting, so don't feel pressured to contribute. But extra cash helps pay for contest prizes, meetup expenses, and me spending extra time blogging instead of working.
Jane Street is a quantitative trading firm with a focus on technology and collaborative problem solving. We're always hiring talented programmers, traders, and researchers and have internships and full-time positions in New York, London, and Hong Kong. No background in finance required.
Metaculus is a platform for generating crowd-sourced predictions about the future, especially science and technology. If you're interested in testing yourself and contributing to their project, check out their questions page.
They need to make a version for Less Wrong. No more meta posts, half-baked solutions to Pascal’s Wager, orphaned links to SMBC comics, or armchair psychologists.
No more faraway Meetup announcements, commentless Seq Rerun posts, or inane LW Women Submissions either. Sounds lovely.
This would be amazing.
LW women submissions are inane? Bullshit. It's mostly the back-and-forth in the comments that's inane, or maybe the intended conclusions.
You really don't need an app. Facebook has all these settings baked in already if you know how to use them. If you click the arrow next to someone's post you'll see an option to hide it, and that pulldown lets you pick which kinds of posts you see from that person, or whether they show up in your feed at all.
Wait a minute, did you read the comment thread? The story is a lot more balanced than Facebook simply being evil. In particular, the app used to be called Facebook Purity, which Facebook (understandably in my opinion) didn't like.
You know what I'd like? Machine classification of Facebook content along axes that I care about. I quite enjoy some of the pictures my friends share. There are just specific families of Facebook posts that annoy the ever-living piss out of me.
Specifically, I would like to teach a machine to recognise:
– Poorly constructed infographics
– Schmaltzy / twee / power-of-positivity content (This is a remarkably coherent cluster in my head, and probably in most other people’s heads, but I find it very hard to define specifically)
– Any photo with a politician's face in it
I think knocking all three of those out would reduce my blood pressure drastically.
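Something along those lines could plausibly be prototyped with off-the-shelf tools. Here's a minimal sketch of just the schmaltzy / twee axis treated as an ordinary text classifier (the infographic and politician-face axes would need an image model instead); the example posts, labels, and threshold below are entirely made up for illustration:

```python
# Toy sketch: classify posts along one user-defined axis
# ("schmaltzy / twee" = 1, everything else = 0) with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labelled training examples.
posts = [
    "Share if you believe in the power of positivity!",
    "Like this photo and miracles will come your way today",
    "Meetup this Thursday at the usual pub, 7pm",
    "New paper on variational inference, thoughts?",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a simple logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# Score an unseen post; hide it if the schmaltz probability is high.
new_post = "Tag a friend who needs this inspirational quote today!"
prob_schmaltz = clf.predict_proba([new_post])[0][1]
print(f"schmaltz score: {prob_schmaltz:.2f}")
if prob_schmaltz > 0.5:
    print("would hide this post")
```

In practice you'd want far more labelled examples and a per-user threshold, but the basic shape of "classify along an axis I care about, then hide" is the same.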
I like the politics stuff, myself… I’ve “liked” Huffington Post, for example.
What I would love is something to wipe off all the bloody “Hey, why don’t you friend these people?” suggestions and the “Why don’t you try looking for more friends?” notices.
I’ve friended all those I want to friend, and I don’t need sixty different strangers gawping out at me, thanks all the same.
Though speaking of which – shameless plug for my cousin’s venture into screen-writing: if you’re on Facebook and you see a thing called “Lord of Tears”, give it a like, please?