Learning new things
Science is hard work. You’re trying to make sense of the
universe we live in, and you hope to learn new things in the process. These
days, we gain knowledge mainly incrementally, with thousands of people making
thousands of relatively small steps to figure out what’s going on. These small
steps can confirm what we know, or make us change our minds. With every
headline saying “Einstein was right”, or “Water found on Mars”, we are gaining
more confidence that general relativity describes the effects of gravity well,
and that water actually flowed on Mars at some point. Alternatively, every
headline ending with “… than previously thought” or “… baffles scientists” makes
us reconsider what we thought we knew.
Suppose you’re on an expedition to find unicorns, since
you’ve seen a lot of them on the internet, and you think all these people might
be onto something. After days of trekking, you finally spot a most magnificent
creature that looks remarkably like a unicorn. When we learn something new, our
new worldview (unicorns could really exist!) might depend on how much
confidence we had in our old worldview (unicorns may exist, or they may not),
how much the new piece of information is in line with our old worldview (what
are the chances of seeing an actual unicorn, when all you’ve ever seen are
horses), and how much confidence we have in this new piece of information (I
think I definitely see a horn. And rainbows. I think.). These four terms look
pretty similar to the four terms in Bayes’ theorem*, which is a relationship that
pops up just about everywhere in science. For example, the much-used Carl
Sagan quote “extraordinary claims require
extraordinary evidence” can be seen as Bayes in disguise: the further
away a new idea is from the old ideas you have, the more reliable the new piece
of evidence needs to be for you to be convinced. Bayes can also be recognised
in public debates. There is a tremendous amount of information that supports
the idea that humans are causing the Earth to heat up, but if you have an awful
lot of confidence in your worldview that humans cannot change the climate,
you’re not going to change your mind. Similarly, the information in a single
tweet is rather limited, making you rely fairly heavily on your own preconceptions when interpreting its intention.
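To make this concrete, here is the unicorn example as a back-of-the-envelope Bayes calculation in Python. All the numbers are invented purely for illustration:

```python
# Bayes' theorem:  P(worldview | evidence) =
#     P(evidence | worldview) * P(worldview) / P(evidence)
# Toy numbers for the unicorn expedition (entirely made up).

p_unicorns = 0.01                # prior: confidence in the old worldview
p_sighting_if_unicorns = 0.5     # likelihood: chance of this sighting if unicorns exist
p_sighting_if_not = 0.001        # chance of the same sighting anyway (a decorated horse?)

# Total probability of the evidence ("I definitely see a horn. And rainbows."):
p_sighting = (p_sighting_if_unicorns * p_unicorns
              + p_sighting_if_not * (1 - p_unicorns))

# Posterior: the new worldview after the sighting.
p_unicorns_after = p_sighting_if_unicorns * p_unicorns / p_sighting
print(f"P(unicorns | sighting) = {p_unicorns_after:.2f}")  # ~0.83
```

Even with a sceptical prior of 1%, a sighting that is 500 times more likely in a world with unicorns pushes the posterior above 80%: extraordinary evidence doing its job.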
Modellers vs. observers
There are several ways of learning new things about the
universe. In theory, you apply the scientific method, where you make a
hypothesis (a potential worldview) and try to confirm or disprove this
hypothesis using new pieces of information. The real world is messier and scientists
often do not apply this method very strictly. Sometimes they get an insight
without having a hypothesis, they stumble into something interesting in a
dataset, or they just want to play with this shiny new tool they have lying
around.
To capture what you know about something you’re interested
in, you can make a model. These models can be a bunch of mathematical formulas on paper, they can be
complicated computer models, or they can be scale models that you can play
around with in your hands. These models can then be used to generate the
hypotheses that can be tested with new pieces of information. You mainly get
new pieces of information about the universe by measuring things about it.
Making the models and doing the observations are often done
by different people: “modellers” or “theorists” on the one hand, and
“observers” on the other hand. (I’m now brutally neglecting people who build
instruments. Also, people who measure things in labs are often both builders
and observers, and sometimes theorists as well.) In my limited experience
of working in different fields, the gap between modellers and observers can be
particularly large in astronomy. Observers like to look through their
telescopes and make their data usable, but then sometimes only offer a very
limited interpretation of the data. Or they pass their pretty data to a modeller
friend, who then needs to make sense of the observations. On the other side,
modellers can make beautiful models without thinking too much about how the
model might be tested by observations.
When models and observations meet
It is always great when model predictions turn out right,
but perhaps more interesting things happen when model predictions do not match
observation. An observer might then say to the modeller: “My beautiful data
shows your model is wrong!” A modeller might reply: “Not at all, your data is showing
something completely unphysical, and hence must be wrong!”, and a lifelong feud can be born. Whether either side
is convinced by the arguments of the other again depends on the quality of the
data, the confidence in the model, and the departure of the data from the model
predictions.
When the data points to faults in the model, the model can
improve. Models become more reliable when they are tested by more and by better
data, as well as by new insights, new lab data, or new mathematics. The model will
evolve and expand as you know more and more about the thing you’re interested
in. You can make the model as complex as you want (did you include magnetic
fields?), but with every layer of complexity, you also raise the uncertainty
about whether all parts of the model are doing the right thing. In fact, you can model until
the spherical cows come home, but if the model is not tested with sufficiently
accurate observations, there is still a good chance your model will not be a
very good representation of the universe. That doesn’t mean the model isn’t
useful, but that does mean one has to keep an open mind towards other
possible models.
Something in between
Besides just comparing your model predictions to the data,
there are more things you can do to gain insight into your object of study. This
can be especially useful if your model, and the thing you’re studying, is
extremely complex. Take the atmosphere of a planet. An atmosphere is
continuously changed by many physical processes, all linked together.
Temperatures, winds, concentrations of gases, and clouds all greatly influence
each other in complex feedbacks. These feedbacks make the weather in the
Netherlands hard to predict, even if you have good observations of what the
atmosphere looked like in the past. Missing feedbacks can also make your model
wildly inaccurate, even though you think you know all the processes well. For
instance, the seemingly boring and irrelevant polar stratospheric clouds turned
out to play a massive role in breaking down ozone in the Earth’s atmosphere.
One thing you can do is take a much simpler, but much more
reliable, model and ask: what does my model input need to be to match the
observations? If we know exactly what the atmospheres of the Earth and other
planets look like, we can very accurately compute what a measurement of the
atmosphere from a satellite would be. If you do the reverse, in something called a retrieval or inverse model, you will learn what the
atmosphere needs to look like to match the observations. However, in theory, there
are infinitely many ‘correct’ atmospheres that match the observations. So, in practice,
people limit the range of solutions by doing things like demanding that temperatures
vary smoothly with altitude, keeping gas concentrations fairly close to some
initial guess, or assuming that specific gas concentrations are constant with altitude.
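As a rough sketch of how such a retrieval works, here is a toy version in Python, assuming a linear forward model and a smoothness demand on the temperature profile. The matrix K, the profile, and all numbers are invented; real forward models are nonlinear and far richer:

```python
import numpy as np

rng = np.random.default_rng(42)
n_levels, n_channels = 50, 20    # more unknowns than measurements: ill-posed

# Invented 'true' temperature profile (in K) and toy weighting functions K.
x_true = 240 + 30 * np.exp(-np.linspace(0, 5, n_levels))
K = rng.random((n_channels, n_levels)) / n_levels
y = K @ x_true + rng.normal(0, 0.1, n_channels)   # noisy satellite measurement

# Demand that temperature varies smoothly with altitude by penalising the
# second difference of the profile (Tikhonov-style regularisation).
D = np.diff(np.eye(n_levels), n=2, axis=0)
alpha = 1e-3                                      # strength of the smoothness demand

# Solve  min ||K x - y||^2 + alpha ||D x||^2  as one augmented least squares.
A = np.vstack([K, np.sqrt(alpha) * D])
b = np.concatenate([y, np.zeros(D.shape[0])])
x_retrieved, *_ = np.linalg.lstsq(A, b, rcond=None)

print("max profile error (K):", np.abs(x_retrieved - x_true).max())
```

Without the smoothness term, the 20 measurements leave the 50-level profile wildly underdetermined; the regularisation is exactly the ‘limiting the range of solutions’ described above.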
If you have very good observations, retrievals reveal the
state of the atmosphere better than a more complex model can. This is great if
this is all you wanted to know, like when you want to monitor air pollution.
Retrievals can also show where complex models need some more work to match
reality. What retrievals cannot do is go deeply into the physics of why the
atmosphere looks the way it does, or make predictions that can test hypotheses.
You still need the more complex physical models to do that, but the retrievals
can give you a snapshot of the thing you’re studying, and a qualitative sense
of the ongoing processes.
Another thing you can do is take your complex model, and
move its output towards the observations by brute force. In this way, you’re
introducing an unphysical ‘hand of God’, but at least your complex model will
drift only slightly from what is actually happening. This is called data assimilation and is used in things
like weather prediction and reconstructions of the past climate. The great
thing here is that you still have access to the detailed physical processes in
your complex model, as well as the model’s predictive powers.
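A minimal sketch of the simplest flavour of data assimilation, often called nudging or Newtonian relaxation, might look like the following. The two-variable ‘model’ and all numbers are invented stand-ins for real physics and real observations:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, tau = 0.01, 2000, 0.5   # tau sets how hard the 'hand of God' pulls

def model_tendency(x):
    """Toy 'physics': a weakly damped oscillator standing in for a complex model."""
    return np.array([x[1], -x[0] - 0.05 * x[1]])

# A 'truth' run produces sparse, noisy observations of the first variable only.
x_truth, obs = np.array([1.0, 0.0]), {}
for step in range(n_steps):
    x_truth = x_truth + dt * model_tendency(x_truth)
    if step % 100 == 0:
        obs[step] = x_truth[0] + rng.normal(0, 0.05)

# The assimilating model starts from the wrong state, but is relaxed towards
# the latest observation while still running its own physics in between.
x, last_obs = np.array([0.0, 0.5]), None
for step in range(n_steps):
    last_obs = obs.get(step, last_obs)
    nudge = np.zeros(2)
    if last_obs is not None:
        nudge[0] = (last_obs - x[0]) / tau   # pull the observed variable to the data
    x = x + dt * (model_tendency(x) + nudge)

print("truth:", x_truth, "assimilated:", x)
```

Operational schemes (variational assimilation, ensemble Kalman filters) are far more sophisticated, but the trade-off is the same: the model is dragged towards the data while its detailed physics stays available.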
Retrievals for strange planets
Several retrieval and data assimilation techniques were
first developed for the Earth’s atmosphere, where satellite data is abundant
and of good quality. Retrievals have also been very useful in studying solar
system planet atmospheres. Here, not only is the complexity a problem, but we
often don’t know all the processes that are active on these planets. Every time
a new spacecraft was sent to a planet, the improved data was generally not exactly
compatible with the models that were available at the time. In my own work with
retrievals of Titan’s atmosphere, we have found clouds of unknown composition,
clouds in places where they shouldn’t be, weird temperature behaviour, and
unexpected gas concentrations. And this does not even include the surface,
about which we had very little idea before the year 2000, but which now has people
studying its geology in quite some detail. Retrievals have been a great
intermediate step for seeing what is there in the measurements without having
to run complex models. This also makes the task of figuring out where the
complex machinery can be improved a lot easier.
In the last decade or so, retrievals have also started to be
used for planets around other stars, or exoplanets. Measuring the atmosphere of
an exoplanet is extremely hard, since the planet is very far away and you have
to disentangle the planet’s light from the much brighter starlight. But there has
been tremendous progress in getting some information from exoplanet
atmospheres.
The use of retrievals for exoplanets has been met with some
enthusiasm, but also with a lot of reservation. I think part of this has to do
with the background of the scientists. A lot of scientists in the exoplanet
field have an astronomy background and astronomers are generally not familiar
with the word retrieval. They are generally familiar with curve fitting though,
which is not that different from retrievals, except that in most curve fitting
the model for the curve tends to be simpler. In astronomy, the data is also
often of worse quality than for the Earth or solar system planets, because the objects are so far away. I also suspect that a lot of the things that astronomers are
interested in, like stars, are in a way simpler and more governed by
fundamental processes than planets, making it easier for a completely
physics-based model to do a good job.
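To show the family resemblance, here is an ordinary curve fit that can be read as a miniature retrieval: the fitted function is the forward model, and the initial guess plays the role of prior information. The Gaussian absorption line and every number below are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption_line(wavelength, depth, centre, width):
    """Forward model: a flat continuum of 1 with a single Gaussian absorption dip."""
    return 1.0 - depth * np.exp(-0.5 * ((wavelength - centre) / width) ** 2)

rng = np.random.default_rng(1)
wl = np.linspace(1.0, 2.0, 200)                   # wavelength grid (microns)
data = absorption_line(wl, 0.02, 1.4, 0.05) + rng.normal(0, 0.003, wl.size)

# p0 is the initial guess, i.e. the curve-fitting analogue of prior information.
params, cov = curve_fit(absorption_line, wl, data, p0=[0.01, 1.5, 0.1])
print("retrieved (depth, centre, width):", params)
print("1-sigma uncertainties:", np.sqrt(np.diag(cov)))
```

A full exoplanet retrieval swaps the Gaussian for a radiative transfer model with many more parameters, which is a large part of why the results become so much harder to interpret.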
From my point of view, some of the theorists initially
seemed to view retrievals as a competing way of understanding exoplanet
atmospheres, as if the observers were saying: “Hah, we don’t need your complex
models anymore, we have retrievals now!”. Perhaps some observers were even
thinking this. I don’t think it helped that some initial retrieval studies put
perhaps too much faith in the observations, and ignored certain important
variables, such as clouds, which led them to conclusions that were
later disputed or disproven.
The future of exoplanet retrievals
Some extra thinking is often still required when doing
retrievals, especially when the results are unexpected. For the Earth,
retrievals are actually very much biased towards expected results, since we
know from weather forecasts roughly what to expect. For planets, and especially
exoplanets, we don’t have good prior knowledge, but retrievals can be set up
such that they are not very dependent on prior information. Not knowing what to
expect, together with data of poor quality, can give retrieval results that do
not teach us much, since they leave pretty much all options open. This has been
the case for much of the history of exoplanet observations. In such a situation, the theorists’
models give the best guess of what’s going on. That does not mean that we
actually know with confidence what is going on. The confidence in the
knowledge from these models probably depends on whether you’re an observer or a theorist. In any
case, we probably need better observations to find out more.
Fortunately, better observations are coming soon! The James
Webb Space Telescope will be launched within a few years, a statement that
has been true for the last few decades. When it is actually launched,
it should give observations of exoplanets that are better than the ones we have now. In
the 2020s, extremely large telescopes, such as the European Extremely Large Telescope (yes, really), will be built on the ground, and
dedicated exoplanet spectroscopy satellites will be launched that will give
great new measurements of exoplanet atmospheres. Retrievals should be able to
help in figuring out what many of these exoplanets are like. I also suspect
they will show many surprising things that were not predicted by complex models,
especially for relatively small planets. In fact, I would be deeply
disappointed if these hundreds of planets could be well described by everything
we know right now. My bet is that the universe is much more creative than we are.
*Some inspiration for the Bayes' theorem part comes from Sean
Carroll's excellent book "The Big Picture".