Posted by Danny Tarlow

Whether you believe it or not, you have probably heard in some way or another that our current habits of energy use are not sustainable. That is, if we remove fossil fuels from the picture, we are a long way from being able to provide the energy that we as a society use (and take for granted) on a daily basis.
For those of us interested in the numbers, a great starting point is David MacKay's book (free online), Sustainable Energy - Without the Hot Air. He goes to great length to present an unbiased, rigorously derived analysis of the energy used, available, and potentially available for the UK in the future under different scenarios. Everything is presented in common units, so you start to get an intuition about, say, the relative energy savings of unplugging your phone charger when you're not using it versus carpooling to work.
In one of his thousand-word summaries, he lays out the problem plainly:
Even if we imagine strong efficiency measures and smart technology-switches that halved our energy consumption [from 125 kWh per day per person to 60 kWh per day] (which would be lower than the per-capita consumption of any developed country today), we should not kid ourselves about the challenge of supplying 60 kWh per day without fossil fuels. Among the low-carbon energy supply options, the three with the biggest potential are wind power, nuclear power, and concentrating solar power in other people's deserts. And here is the scale that is required if (for simplicity) we wanted to get one third from each of these sources: we would have to build wind farms with an area equal to the area of Wales; we would have to build 50 Sizewells of nuclear power; and we would need solar power stations in deserts covering an area twice the size of Greater London.

Finding new sources of energy is half of the picture, and MacKay does a good job covering it. The other half of the picture comes from finding ways to use less energy. One of the most important places to look here is energy consumed by buildings, which -- by most measures I've heard -- accounts for around 40% of all energy consumption in the developed world.
There are other good reasons for making buildings more energy efficient; namely, if you operate a number of large buildings (think Walmart, McDonalds, Disney), it can save you quite a bit of money, especially if energy costs rise. It is also good for a company's public image.
As with the energy generation side, there are plenty of suggestions for how to reduce energy use in buildings: adding glazing or shading devices to windows, increasing insulation in the walls, using more efficient light bulbs, using natural ventilation for cooling, reusing heat produced by other sources to heat the building, storing energy in ice, water, or other media, and so on.
Unfortunately, every building is unique in its design, location, surrounding climate, construction imperfections, and operational patterns, so there is no one-size-fits-all solution that gives the most energy-savings bang for the buck. For example, adding extra insulation to a building that has its doors or windows left open all day (say, a store that wants to attract customers inside) is going to have very little effect.
In light of this, how do we decide what energy-saving measures to implement?
One approach that has been gaining popularity in recent years is called LEED, which stands for Leadership in Energy and Environmental Design.
LEED is a checklist system; implementing a specific measure gives you a set number of points. A building designer can pick and choose any subset of measures, and if the sum of points is above a certain level, the building is called "LEED Certified":
In LEED 2009 there are 100 possible base points plus an additional 6 points for Innovation in Design and 4 points for Regional Priority. (from Wikipedia)

The points are spread across several categories:

- Sustainable Sites (26 Possible Points)
- Water Efficiency (10 Possible Points)
- Energy and Atmosphere (35 Possible Points)
- Materials and Resources (14 Possible Points)
- Indoor Environmental Quality (15 Possible Points)
- Innovation in Design (6 Possible Points)
- Regional Priority (4 Possible Points)

Buildings can qualify for four levels of certification:

- Certified - 40-49 points
- Silver - 50-59 points
- Gold - 60-79 points
- Platinum - 80 points and above
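As a rough illustration of how the checklist works, here is a minimal sketch (in Python) that sums the points a designer claims and maps the total onto the certification thresholds listed above. The example design and its per-category point counts are hypothetical.

```python
# Minimal sketch of LEED 2009 scoring: sum the chosen credit points and
# map the total to a certification level (thresholds from the list above).

LEVELS = [
    (80, "Platinum"),
    (60, "Gold"),
    (50, "Silver"),
    (40, "Certified"),
]

def certification_level(points):
    """Return the LEED 2009 certification level for a point total, or None."""
    for threshold, level in LEVELS:
        if points >= threshold:
            return level
    return None  # below 40 points: not certified

# A hypothetical design: points claimed in some of the categories above
design = {
    "Sustainable Sites": 18,
    "Water Efficiency": 6,
    "Energy and Atmosphere": 20,
    "Indoor Environmental Quality": 9,
    "Innovation in Design": 2,
}
total = sum(design.values())
print(total, certification_level(total))  # prints: 55 Silver
```

The simplicity is exactly the appeal: any subset of measures that clears the threshold earns the label, regardless of how those measures interact in the actual building.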
The New York Times recently wrote an article on a study highlighting the shortcomings of LEED:
But in its own study last year of 121 new buildings certified through 2006, the Green Building Council found that more than half — 53 percent — did not qualify for the Energy Star label and 15 percent scored below 30 in that program, meaning they used more energy per square foot than at least 70 percent of comparable buildings in the existing national stock.

We begin to see the problem. Building designers like LEED because it is simple: check off the items you want, and you get the certification. Unfortunately, LEED doesn't reliably predict the actual energy savings that you will see in a building.
A second approach, which is actually used as a subroutine in some of the LEED checkboxes, is to use building simulation tools to predict how much energy a building will use given its design, location, and purpose. The leading community that develops building simulation tools is the International Building Performance Simulation Association (IBPSA, pronounced "ih-bip-suh"), which organizes several conferences and publishes a journal, and whose members are active in developing most of the popular building simulation tools, such as EnergyPlus and ESP-r.
The tools operate at an impressive level of detail and simulate a great deal of physics, but it is still surprisingly difficult to predict how changing a building will affect how much energy it consumes. The greatest difficulty is in modeling human occupancy and operational patterns, but there are also significant difficulties in dealing with weather, construction imperfections, and the model simplifications made to speed up computation. Furthermore, and perhaps most insidiously, the person doing the energy modeling is often unsure about the inputs to give a tool, since it can be a serious challenge to track down exact material and equipment properties. Worse, these tools do not really deal with uncertainty: the output is a point estimate of energy usage, with no attempt to quantify how certain the model is, and errors of 50-100% are not uncommon.
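To make the uncertainty point concrete, here is a toy sketch (emphatically not a real building model) of how uncertain inputs could be propagated into an output range via Monte Carlo sampling. The heating formula and every number in it are invented for illustration.

```python
import random

# Toy illustration: propagate input uncertainty through a simple
# heating-energy estimate by Monte Carlo sampling. We pretend
#   annual_heating_kwh = U * wall_area * degree_hours / 1000
# with an uncertain U-value (construction quality unknown) and
# uncertain degree-hours (weather). All parameters are made up.

def annual_heating_kwh(u_value, wall_area_m2, degree_hours):
    return u_value * wall_area_m2 * degree_hours / 1000.0

def monte_carlo(n=10000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.gauss(0.5, 0.1)       # W/(m^2 K); uncertain insulation
        dh = rng.gauss(70000, 10000)  # K*h/year; uncertain weather
        samples.append(annual_heating_kwh(u, 400.0, dh))
    samples.sort()
    return {
        "p10": samples[int(0.1 * n)],
        "median": samples[n // 2],
        "p90": samples[int(0.9 * n)],
    }

stats = monte_carlo()
# Instead of one number, report a range, e.g. the 10th-90th percentile.
```

The point is not the particular formula; it is that a tool which sampled its uncertain inputs could report "somewhere between X and Y kWh" rather than a single number that may be off by 50-100%.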
Now before I go too far criticizing these building simulation tools, I think it's important to repeat that making good predictions for any building we can imagine is a very hard problem. Each building really is a little world in itself, and no two are the same. The number of options for materials, equipment, and configurations is huge, and peculiarities of any piece of the system can have a large impact on the outcome. The people who work on these tools are aware of their shortcomings, and many very smart people are working on ways to improve them.
I was recently at a panel discussion on Validation at Building Simulation 2009, though, and I think I fundamentally disagree with most of what I heard there. The big question at play was how to go forward validating building simulation tools. Current practice relies heavily on validating tools against themselves, arguing that the first step in checking a tool is to see that it gets the physics correct and doesn't diverge from the existing tools. There is some validation on real buildings, but it is quite limited.
At the beginning of the discussion, I was encouraged. Each of the panelists was very critical of current validation methods, and they acknowledged that the errors made in predictions are one of the major reasons these tools haven't seen more widespread adoption outside the community. The way forward as they seemed to see it, though, was to focus more on the physics and thermodynamics. They argued that the best thing to do would be to build a synthetic test building, outfitted with a huge array of sensors that could measure every internal component in a model; simulation tools could then be validated based on whether their internal state matched the measured state of the test building.
Though this would undoubtedly produce interesting data, it would be expensive, and I don't think it's the most important type of data to collect. If it were possible to know all of the inputs and occupancy patterns of a building, the tools would already do a pretty good job.
Instead, I think there needs to be more of a focus on the generative model -- what the characteristics of the building stock are, and how those characteristics are related. For example, a tool should know that if a building is a grocery store in Arizona, it's probably running the air conditioning very hard in the summer, or that if a building was built in the 1950s in Chicago, it probably uses material X and is more likely than average to have leaky window installations. Maybe people in New York are more likely to leave their heaters on overnight even when buildings aren't occupied.
In order to learn these types of things, we need a different type of data. Rather than complete, extremely detailed data about one controlled building, we need a huge amount of possibly incomplete data about real buildings from all over the world with all sorts of designs, constructions, and operational patterns. The problem is not that we don't understand the core physics. The problem is that we don't know how to take data that we've collected and analysis that we've done on one building and apply it to other buildings.
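One toy way to picture this kind of generative knowledge: a prior over unspecified inputs, conditioned on what we do know about a building. Everything here -- the context keys, the parameter names, and the numbers -- is invented for illustration.

```python
import random

# Toy sketch of a "generative" prior over unknown building inputs:
# instead of requiring the modeler to pin down every input exactly,
# condition plausible defaults on what we do know (type, location, era).
# In reality these distributions would be learned from data on many
# buildings; here they are hard-coded, made-up (mean, std) pairs.

PRIORS = {
    ("grocery", "Arizona"): {"cooling_setpoint_c": (21.0, 1.0)},
    ("office", "Chicago", "1950s"): {"infiltration_ach": (1.2, 0.3)},
}

def sample_input(context, parameter, rng=random):
    """Draw a plausible value for an unspecified input, given context."""
    mean, std = PRIORS[context][parameter]
    return rng.gauss(mean, std)

# A modeler who only knows "1950s office in Chicago" still gets a
# plausible -- and explicitly uncertain -- infiltration rate:
ach = sample_input(("office", "Chicago", "1950s"), "infiltration_ach")
```

The interesting (and hard) part is the learning: filling in those distributions from incomplete data about many real buildings, rather than hand-specifying them.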
I raised my hand and asked during the panel session whether anybody had considered creating a central repository to hold data from all of the case studies presented at these conferences, so that we could evaluate simulation tools as they are actually used. My argument was (a) if the tools came up with good estimates of energy use across tens of thousands of real buildings, then validation would be pretty much done, and we could be fairly confident that they would come up with good estimates for the buildings we're interested in modeling; and (b) even if the estimates were way off, at least a tool would be able to report that it has no confidence in its answer -- it could know that it has large errors on certain types of buildings in certain locations (e.g., using cross validation).
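A crude sketch of point (b): given predicted and measured energy use for many real buildings, a tool could estimate its own error per building category and flag the categories where it should not be trusted (full cross validation would go further and hold data out, but the idea is the same). All the data below is made up.

```python
from collections import defaultdict

# Estimate a simulation tool's error per building category from a
# repository of (category, predicted, measured) records, then use those
# per-category errors to report confidence in new predictions.

def error_by_group(records):
    """records: iterable of (group, predicted_kwh, measured_kwh).
    Returns mean absolute relative error per group."""
    errs = defaultdict(list)
    for group, pred, meas in records:
        errs[group].append(abs(pred - meas) / meas)
    return {g: sum(v) / len(v) for g, v in errs.items()}

def confidence(group, group_errors, threshold=0.25):
    err = group_errors.get(group)
    if err is None:
        return "unknown: no data for this building type"
    return "low confidence" if err > threshold else "ok"

# Fabricated example data: the tool does fine on one category and
# badly on another.
records = [
    ("office/Chicago", 900, 1000),
    ("office/Chicago", 1100, 1000),
    ("grocery/Arizona", 500, 1000),
    ("grocery/Arizona", 1600, 1000),
]
errors = error_by_group(records)
# errors["office/Chicago"] is 0.1; errors["grocery/Arizona"] is 0.55,
# so predictions for Arizona grocery stores get flagged as low confidence.
```

A tool that knew this about itself could answer honestly: "for this kind of building, my estimate is probably off by 50% or more."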
I was surprised to be met with quite a bit of resistance. The one-liner that I got in return was, "maybe it would be right, but it would probably be right for all the wrong reasons." So my question is this: would you rather be right for all the wrong reasons or wrong for all the right reasons?
To be continued...