
Anna Salamon

The webpage below is out of date: I am now co-launching the Center for Applied Rationality, figuring out how to apply probability theory and the scientific study of human error patterns to make humans better at daily life. Updated webpage coming soon.

---

How likely is humanity to be around in fifty years? In two hundred years?

How can we increase the odds?

As a research fellow at the Singularity Institute for Artificial Intelligence, I've been taking my best swing at these questions, together with some of the best thinkers I've ever had a chance to talk to and work with. You can see a basic summary of our answers (a video of a talk I gave at the Singularity Summit, summarizing many people's work); you can also read a short written version.

My Work

Conference Presentations and Invited Talks

  1. Shaping the Intelligence Explosion (from the 2009 Singularity Summit)
  2. How much it matters to know what matters: a back of the envelope explanation (also from the 2009 Singularity Summit)
  3. Changing the frame of AI futurism: from story lines to heavy-tailed, high-dimensional probability functions (European Conference on Computing and Philosophy, 2009)
  4. Long-term AI forecasting: Building methodologies that work (invited talk at a Santa Fe Institute conference on forecasting, 2009)
  5. How intelligible is intelligence? Implications for AI forecasting (European Conference on Computing and Philosophy, 2010). You can also read Anders Sandberg's description of the session.
  6. Economic implications of software minds (I was third author; most of the work was done by Steven Kaas, Peter Salamon, and Steve Rayhawk)
  7. Survival in the margins of the Singularity? Why the future may not have accidents (presentation for the UK H+ Summit). You can also watch the joint panel discussion I did with Anders and Amnon Eden.

Probabilistic forecasting web app

I was part of a team that built The Uncertain Future, a web application that asks you questions and then, after adding and multiplying for you, lets you know when AI is likely to arrive "according to you". It is currently in beta.
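For the curious, here is a minimal sketch of the kind of "adding and multiplying" such a tool automates: draw Monte Carlo samples from the distributions a user specifies, combine them arithmetically, and read off quantiles. The variables and numbers below are made-up placeholders for illustration; they are not The Uncertain Future's actual questions or model, whose inputs are richer.

    import random

    # Hypothetical illustration only: the inputs below stand in for a user's
    # answers, encoded as simple uniform distributions.

    def sample_arrival_year(rng):
        hardware_doublings_needed = rng.uniform(10, 25)  # compute doublings still required
        years_per_doubling = rng.uniform(1.5, 3.0)       # pace of hardware progress
        software_lag_years = rng.uniform(0, 30)          # extra years for software insight
        return 2012 + hardware_doublings_needed * years_per_doubling + software_lag_years

    def arrival_quantiles(n_samples=100000, seed=0):
        rng = random.Random(seed)
        years = sorted(sample_arrival_year(rng) for _ in range(n_samples))
        q = lambda p: years[int(p * (n_samples - 1))]
        return q(0.1), q(0.5), q(0.9)  # 80% interval and median, "according to you"

    low, median, high = arrival_quantiles()
    print("median: %.0f, 80%% interval: [%.0f, %.0f]" % (median, low, high))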

Blog posts

I sometimes post to the community rationality blog Less Wrong, which is worth checking out in its own right. Selected posts:

  1. Cached Selves (co-written with Steve Rayhawk)
  2. Selective processes bring tag-alongs (co-written with Steve Rayhawk)
  3. The ethic of hand-washing and community epistemic practice (co-written with Steve Rayhawk)
  4. Share likelihood ratios, not posterior beliefs (co-written with Steve Rayhawk)
  5. Humans are not automatically strategic
  6. Compartmentalization in epistemic and instrumental rationality
  7. Making your explicit reasoning trustworthy
  8. Goals for which Less Wrong does and doesn't help
  9. Were atoms real?
  10. Branches of rationality
  11. Make your training useful

I also attempted a sequence on Newcomb's Problem and decision theory. It is less coherent than my other writing, largely because the concepts are less settled ("if we knew what we were doing, we wouldn't call it research"), and because I ended up dropping the sequence in the bustle around the Singularity Summit.

  1. An outline of the sequence I was planning
  2. Confusion about Newcomb is confusion about counterfactuals
  3. Decision theory: Why we need to reduce "could", "would", "should"
  4. Decision theory: Why Pearl helps reduce "could" and "would", but still leaves us with at least three alternatives

Interested?

If you’re looking for interesting research tackling “big picture” questions, or if you want a high-impact way to help many folks, existential risk reduction work is something to consider.  

In particular, you might wish to:

  1. Email me.  Being in conversation is the fastest way to get up to speed, and the best way to ensure that your plans make sense in light of others’ knowledge.  New contacts are always welcome, and a two-line “hi” email is fine; you can reach me at anna at rationality dot org.
  2. Read background material, and, with that as a starting point, try to form your own picture of existential risks.  What poses the biggest risks?  Where are they concentrated?  What are the most promising avenues for intervention today?  As you read, put your thoughts out there for others’ critiques and responses, either online (Less Wrong discussion is a good place for this) or by starting an in-person study/research group.
  3. Connect with others who have similar aims, online or in person; strategy grows best in conversation with others.  If you’d like to find others in your city, send me an email; there are folks with these interests in many major cities.
  4. Consider what sort of involvement you might like, long term.  Direct research, organizational roles, and donation (while you earn money doing whatever you’re best at, or whatever you enjoy most) are all worth considering.  All three happen best in community; for example, if you’re interested in long-term donation (whether you’re on the job market or still a student), the Existential Risk Reduction Career Network can be a helpful source of career boosts.

Berkeley

Passing through the SF Bay area? Share some interests and aims? Come say "hi". Whether you’re new or experienced, and whether you agree with us or have something to teach us, I’d love to meet you.  I live in Berkeley with my husband Carl Shulman, good friends and collaborators, and many whiteboards. You'll probably find us an interesting bunch to visit.

Or, drop me an email from afar: anna at rationality dot org.

You may also wish to check out our visiting fellows program.