Limits to growth
the 30-year update
Donella Meadows, Jorgen Randers, Dennis Meadows
Unpublished comments August 2008
Funny how you acquire a distorted idea of things. The original Limits to growth came out in 1972, and I had certainly heard of it before reading the update, but I had somehow got the impression it wasn't terribly reputable. It was produced by something called the "Club of Rome". Come on - that can't be serious, can it? Sounds like a bunch of rich playboys naively riding the 1970s wave of eco-panic.
That shows how wrong one can be - still, I was only in my early teens at the time. The Club of Rome was some sort of informal group of distinguished folk, but that is not particularly relevant because they merely commissioned the report. The work was actually done by a team at MIT, which is about as scientifically blue-chip as you can get. Furthermore, the work behind the report did not consist of qualitative beard-stroking, but of computer modelling. In essence they built a picture of the world in numbers and ran it through a computer to examine various 'what-ifs'.
The problem with this process is obvious: the world is big, our knowledge is imperfect, and any model contains all sorts of guesstimates and assumptions. On top of imperfect pictures of the size of reserves and the speed of processes, there is the fact that everything interacts with everything else - or, to use the official terms, it behaves as a system and you have to use systems thinking. As far as I can tell, Philip Stott, our favourite homegrown climate contrarian, relies on the argument that models of intricate and incompletely understood processes cannot give a reliable warning because the whole shebang is just too complicated. (And that's just climate models - the Limits to growth team includes people in their model.)
I really don't buy the anti-modelling line - and Stott's basic stance amounts to smiling indulgently and saying "oh, it'll be all right; people are pretty smart and nature's pretty tough, you know". Complicated as things are, there is enough data to build some sort of meaningful quantitative picture. Sure, there is massive uncertainty, but we do have an idea of the range of that uncertainty. This book explains the principles of computer modelling in a style reminiscent of the best Open University textbooks (very high praise, that), but starts with the flat statement that such models do not and cannot predict the future. What a model can do is sketch out the span of possibilities. The modellers run the program hundreds of times, changing the assumptions each time, so the results say: if so-and-so turns out to be true, then such-and-such is likely to follow.
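The many-runs idea can be sketched in a few lines of code. This is emphatically not the World3 model the MIT team used - just a toy resource-and-population loop I've invented for illustration, with made-up parameter ranges - but it shows how running the same program hundreds of times with different guesses for the uncertain inputs maps out a span of outcomes rather than a single prediction.

```python
import random

def run_scenario(reserve, extraction_rate, growth, years=100):
    """Toy model: population grows while the resource holds out,
    then declines once extraction outstrips what remains."""
    population = 1.0
    for _ in range(years):
        demand = population * extraction_rate
        take = min(demand, reserve)
        reserve -= take
        if take < demand:
            population *= 0.9           # shortage: population declines
        else:
            population *= 1 + growth    # demand met: population grows
    return population

# Run the toy model many times, sampling the uncertain inputs each time,
# to see the range of possible endings rather than one forecast.
random.seed(1)
outcomes = []
for _ in range(500):
    reserve = random.uniform(50, 200)     # hypothetical initial stock
    rate = random.uniform(0.5, 2.0)       # per-capita extraction rate
    growth = random.uniform(0.01, 0.04)   # annual growth when supplied
    outcomes.append(run_scenario(reserve, rate, growth))

print(f"final population across 500 runs: "
      f"min {min(outcomes):.2f}, max {max(outcomes):.2f}")
```

The point is not any individual number but the spread: with plausible but different assumptions, the same machinery produces everything from comfortable growth to a crash.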
So what's the conclusion? Are we running out of resources? Is doom around the corner? Well, not exactly, but things do not look terribly good, and most of the scenarios modelled end with a disastrous population crash when the unsustainable chickens finally come home to roost. (That'll be some time in the present century, since you ask.) Behaviour does respond to warning signs, but we operate on too short a loop and daren't take long-term decisions. Still, the situation is by no means lost, and there are a few combinations of circumstances in which the human tribe comes out with a decent and sustainable living standard and a relatively undegraded environment. Plus we have the shining example of CFCs and the ozone layer to show us that successful action can be taken.
This book ultimately left me feeling optimistic. It is nice to read a book about sustainability that understands technology can't solve everything, yet does not use the words "greed" or "paradigm". For me at least, the cool and level tone the authors use is more inspiring to action than any amount of preaching.