Monday, August 31, 2009

Why is it so hard to reduce energy usage in buildings?

Posted by Danny Tarlow
Whether you believe it or not, you have probably heard in some way or another that our current habits of energy use are not sustainable. That is, if we remove fossil fuels from the picture, we are a long way from being able to provide the energy that we as a society use (and take for granted) on a daily basis.

For those of us interested in the numbers, a great starting point is David MacKay's book (free online), Sustainable Energy - Without the Hot Air. He goes to great lengths to present an unbiased, rigorously derived analysis of the energy used, available, and potentially available for the UK in the future under different scenarios. Everything is presented in common units, so you start to get an intuition about, say, the relative energy savings of unplugging your phone charger when you're not using it versus carpooling to work.

In one of his thousand word summaries, he lays out the problem plainly:
Even if we imagine strong efficiency measures and smart technology-switches that halved our energy consumption [from 125 kWh per day per person to 60 kWh per day] (which would be lower than the per-capita consumption of any developed country today), we should not kid ourselves about the challenge of supplying 60 kWh per day without fossil fuels. Among the low-carbon energy supply options, the three with the biggest potential are wind power, nuclear power, and concentrating solar power in other people's deserts. And here is the scale that is required if (for simplicity) we wanted to get one third from each of these sources: we would have to build wind farms with an area equal to the area of Wales; we would have to build 50 Sizewells of nuclear power; and we would need solar power stations in deserts covering an area twice the size of Greater London.
Finding new sources of energy is half of the picture, and MacKay does a good job covering it. The other half comes from finding ways to use less energy. One of the most important places to look is energy consumed by buildings, which -- by most measures I've heard -- accounts for around 40% of all energy consumption in the developed world.

There are other good reasons for making buildings more energy efficient: if you operate a number of large buildings (think Walmart, McDonald's, Disney), it can save you quite a bit of money, especially if energy costs rise. It is also good for a company's public image.

As with the energy generation side, there are plenty of suggestions for how to reduce energy use in buildings: adding glazing or shading devices to windows, increasing insulation in the walls, using more efficient light bulbs, using natural ventilation for cooling, reusing heat produced by other sources to heat the building, storing energy in ice, water, or other media, and so on.

Unfortunately, every building is unique due to its design, location, surrounding climate, construction imperfections, and operational patterns, so there is no one-size-fits-all solution that gives the most energy-savings bang for the buck. For example, adding extra insulation to a building that has its doors or windows left open all day (say, a store that wants to attract customers inside) is going to have very little effect.

In light of this, how do we decide what energy-saving measures to implement?

One approach that has been gaining popularity in recent years is called LEED, which stands for Leadership in Energy and Environmental Design.

LEED is a checklist system; implementing a specific measure gives you a set number of points. A building designer can pick and choose any subset of measures, and if the sum of points is above a certain level, the building is called "LEED Certified":
In LEED 2009 there are 100 possible base points plus an additional 6 points for Innovation in Design and 4 points for Regional Priority.
  • Sustainable Sites (26 Possible Points)
  • Water Efficiency (10 Possible Points)
  • Energy and Atmosphere (35 Possible Points)
  • Materials and Resources (14 Possible Points)
  • Indoor Environmental Quality (15 Possible Points)
  • Innovation in Design (6 Possible Points)
  • Regional Priority (4 Possible Points)
Buildings can qualify for four levels of certification:
  • Certified - 40-49 points
  • Silver - 50-59 points
  • Gold - 60-79 points
  • Platinum - 80 points and above
(from Wikipedia)

The New York Times recently wrote an article on a study highlighting the shortcomings of LEED:
But in its own study last year of 121 new buildings certified through 2006, the Green Building Council found that more than half — 53 percent — did not qualify for the Energy Star label and 15 percent scored below 30 in that program, meaning they used more energy per square foot than at least 70 percent of comparable buildings in the existing national stock.
We begin to see the problem. Building designers like LEED because it is simple: check off the items you want, and you get the certification. Unfortunately, LEED doesn't reliably predict the actual energy savings that you will see in a building.

A second approach, which is actually used as a subroutine in some of the LEED checkboxes, is to use building simulation tools to predict how much energy a building will use given its design, location, and purpose. The leading community developing these tools is the International Building Performance Simulation Association (IBPSA, pronounced "ih-bip-suh"), which organizes several conferences and publishes a journal; its members are active in developing most of the popular building simulation tools, such as EnergyPlus and ESP-r.

The tools operate at an impressive level of detail and simulate a great deal of physics, but it is still surprisingly difficult to predict how changing a building will affect how much energy it consumes. The greatest difficulty is in modeling human occupancy and operational patterns, but there are also significant difficulties in dealing with weather, construction imperfections, and the model simplifications made to speed up computation. Furthermore, and perhaps most insidiously, the person doing the energy modeling is often unsure what inputs to give a tool, since it can be a serious challenge to track down exact material and equipment properties. Worse, these tools do not really deal with uncertainty: the output is a point estimate of energy usage, with no attempt to quantify how certain the model is, and errors of 50-100% are not that uncommon.

Now before I go too far criticizing these building simulation tools, I think it's important to repeat that making good predictions for any building we can imagine is a very hard problem. Each building really is a little world in itself, and no two are the same. The number of options for materials, equipment, and configurations is huge, and peculiarities of any piece of the system can have a large impact on the outcome. The people who work on these tools are aware of their shortcomings, and many very smart people are working on ways to improve them.

I was recently at a panel discussion on validation at Building Simulation 2009, and I think I fundamentally disagree with most of what I heard there. The big question at play was how to go forward with validating building simulation tools. Current practice relies heavily on validating tools against each other, the argument being that the first step in checking a tool is to see that it gets the physics right and doesn't diverge from the existing tools. There is some validation on real buildings, but it is quite limited.

At the beginning of the discussion, I was encouraged. Each of the panelists was very critical of current validation methods, and they acknowledged that prediction errors are one of the major reasons these tools haven't seen wider adoption outside the community. The way forward as they seemed to see it, though, was to focus more on the physics and thermodynamics. They argued that the best thing to do would be to build a synthetic test building, outfitted with a huge array of sensors measuring every internal component that appears in a model; simulation tools could then be validated by checking whether their internal state matches the measured state of the test building.

Though this would undoubtedly produce interesting data, it would be expensive, and I don't think it's the most important type of data to collect. If it were possible to know all of the inputs and occupancy patterns of a building, the tools would already do a pretty good job.

Instead, I think there needs to be more focus on the generative model -- what the characteristics of the building stock are, and how those characteristics are related. For example, a tool should know that if a building is a grocery store in Arizona, it is probably running its air conditioning hard in the summer, or that if a building was built in the 1950s in Chicago, it probably uses material X and is more likely than average to have leaky window installations. Maybe people in New York are more likely to leave their heaters on overnight even when buildings aren't occupied.
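
To make that concrete, here is a toy sketch of the kind of learned prior I have in mind (all names and numbers here are invented for illustration; nothing like this exists in the current tools, as far as I know):
import random

# Hypothetical learned priors: P(heavy summer AC use | building type, region).
# In a real system these would be estimated from a large repository of
# building data, not typed in by hand.
AC_PRIORS = {
    ("grocery", "arizona"): 0.95,
    ("office", "chicago"): 0.40,
}

def prior_heavy_ac_use(building_type, region, default=0.5):
    # Fall back to an uninformative prior when we have no data.
    return AC_PRIORS.get((building_type, region), default)

def sample_heavy_ac_use(building_type, region):
    # Draw one plausible operational scenario to feed a simulation run.
    return random.random() < prior_heavy_ac_use(building_type, region)

print prior_heavy_ac_use("grocery", "arizona")   # 0.95
print prior_heavy_ac_use("school", "toronto")    # 0.5 -- no data yet
print sample_heavy_ac_use("grocery", "arizona")  # usually True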

In order to learn these types of things, we need a different type of data. Rather than complete, extremely detailed data about one controlled building, we need a huge amount of possibly incomplete data about real buildings from all over the world with all sorts of designs, constructions, and operational patterns. The problem is not that we don't understand the core physics. The problem is that we don't know how to take data that we've collected and analysis that we've done on one building and apply it to other buildings.

I raised my hand during the panel session and asked whether anybody had considered creating a central repository to hold data from all of the case studies presented at these conferences, so that we could evaluate simulation tools as they are actually used. My argument was (a) if the tools came up with good estimates of energy use across tens of thousands of real buildings, then validation would be pretty much done, and we could be fairly confident that they would produce good estimates for the buildings we're interested in modeling; and (b) even if the estimates were way off, at least a tool could report that it has no confidence in its answer -- it could know that it has large errors on certain types of buildings in certain locations (e.g., using cross validation).
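
To sketch what I mean by (b) -- the records and numbers below are made up, just to illustrate the mechanics -- a repository of case studies would let a tool profile its own historical error by building type and location, and flag the groups where it should report low confidence:
# Hypothetical case study records:
# (building type, location, predicted annual kWh, measured annual kWh).
case_studies = [
    ("grocery", "arizona", 1.2e6, 2.1e6),
    ("grocery", "arizona", 1.1e6, 1.9e6),
    ("office", "chicago", 8.0e5, 8.4e5),
    ("office", "chicago", 7.5e5, 7.1e5),
]

# Group held-out prediction errors by (type, location).
errors_by_group = {}
for building_type, location, predicted, measured in case_studies:
    relative_error = abs(predicted - measured) / measured
    errors_by_group.setdefault((building_type, location), []).append(relative_error)

for group, errors in sorted(errors_by_group.items()):
    mean_error = sum(errors) / len(errors)
    confidence = "LOW confidence" if mean_error > 0.25 else "ok"
    print "%s in %s: mean relative error %.0f%% (%s)" % (
        group[0], group[1], 100 * mean_error, confidence)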

I was surprised to be met with quite a bit of resistance. The one liner that I got in return was, "maybe it would be right, but it would probably be right for all the wrong reasons." So my question is this: would you rather be right for all the wrong reasons or wrong for all the right reasons?

To be continued...

Thursday, August 20, 2009

Python for Scientific Computing Conference

Posted by Danny Tarlow
The SciPy 2009 conference is happening right now in Pasadena, California. I am unfortunately in the wrong region of California to be attending, but the contents of the conference -- and particularly the tutorials -- look great.

The ones I would be most excited to attend are the sympy and pyCUDA tutorials.

I've long been a big fan of sympy for hairy symbolic math computations. There have been some frustrations using it, but it has grown into quite a usable library, and I've found the support via the mailing list to be top notch.
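
As a tiny, contrived taste of the kind of thing I mean (just a sketch, not from any real project):
from sympy import Symbol, exp, diff, simplify

x = Symbol('x')
y = Symbol('y')
f = (x + y) * exp(-x**2 - y**2)

# Let sympy do the product/chain rule bookkeeping for the gradient.
print simplify(diff(f, x))
print simplify(diff(f, y))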

I'd never heard of pyCUDA before today, but it promises to be a clean, friendly, Python interface to GPU programming. It is based on the Nvidia CUDA API, which looks to have been used to significantly speed up some cool and relevant applications, like sliding window-based object detection.
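
I haven't tried it myself yet, but the hello-world from the pyCUDA documentation looks roughly like this: compile a small CUDA kernel from a Python string, then call it on numpy arrays:
import numpy
import pycuda.autoinit          # sets up a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
    const int i = threadIdx.x;
    dest[i] = a[i] * b[i];
}
""")
multiply_them = mod.get_function("multiply_them")

a = numpy.random.randn(400).astype(numpy.float32)
b = numpy.random.randn(400).astype(numpy.float32)
dest = numpy.zeros_like(a)

# drv.In/drv.Out handle the host-device copies for us.
multiply_them(drv.Out(dest), drv.In(a), drv.In(b),
              block=(400, 1, 1), grid=(1, 1))
print numpy.allclose(dest, a * b)  # True if the GPU agrees with numpy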

The best part? Twitter is buzzing that the tutorial videos are available online:
http://www.archive.org/search.php?query=scipy

City data: Toronto and San Francisco

Posted by Danny Tarlow
My year of "sabbatical" is quickly coming to an end, so I'm packing up my San Francisco apartment and getting ready to head back to Toronto.

The upsides of moving back to Toronto are
(a) I'll have more time to focus on research, and
(b) I need to take a few more classes.

I'm obviously excited about (a).

I'm looking forward to (b), mostly because the (only) class I've decided on at this point is Greg Wilson's software consulting project course. It is supposed to be a cool course every year, but this year it sounds particularly interesting: each project will somehow make use of data that the city of Toronto is getting ready to release. I'm teaming up with another one of my AI group buddies, so together we make up the graduate student contingent.

We're now brainstorming interesting projects -- hopefully somehow incorporating cool machine learning. Of course, the difficulty so far is that we don't yet know exactly what data is going to be available to us. Interestingly, though, my other city -- the city of San Francisco -- just released its own variant of publicly available city data:
http://www.datasf.org/

Lately I've found myself arguing quite a bit that we should be collecting and releasing more data. Now that these cities seem to be doing it, hopefully we can turn around and find something interesting to do with it. If you have great ideas, I'd love to hear them.

Thursday, August 13, 2009

Mutable heaps in C++ (for updates to priority_queue priorities)

Posted by Danny Tarlow
I'm usually very happy with the STL C++ data structures and libraries, so I was a bit surprised to find that the built-in heap doesn't support updating a value. This comes up, for example, in iterative algorithms where in each iteration we make a change to a small number of variables at the front of a queue. The changes may affect some other non-front variable values as well, but we want to reuse most of the sorting work that we've done in previous iterations. In this case, we keep the same heap from iteration to iteration and just call update() on the changed values when necessary at a cost of O(log N) for binary heaps.

After not finding what I needed in the core STL, I turned to the Boost C++ libraries and found the mutable_queue. The class sits far out on the generic-programming end of the design spectrum, but I won't comment on that.

Anyway, to define a mutable_queue, there are four template arguments that need to be filled in:
  template <class IndexedType, 
            class RandomAccessContainer = std::vector<IndexedType>, 
            class Comp = std::less<typename RandomAccessContainer::value_type>,
            class ID = identity_property_map >
  class mutable_queue { ...
Conveniently, though, when we are working with basic types, we can just give it one template argument, and the defaults will work for the other template parts. This worked great for me when I tried it out with a simple mutable_queue<double>, and it was pretty easy to get it running and working correctly.

What I need, however, is a mutable_queue of Node * objects. So it seemed fairly straightforward to fill in the template arguments:
typedef Node * entry_t;
typedef vector<entry_t> storage_t;
typedef CompareNodeStar comp_t;
typedef identity_property_map prop_map_t;
typedef mutable_queue<entry_t, storage_t,
    comp_t, prop_map_t> queue_t;

queue_t *q = new queue_t(N, comp_t(), prop_map_t());
Which says give me a queue of Node *'s, use a vector of Node *'s as the underlying storage, compare elements using the CompareNodeStar comparison function that I defined, and use the default property_map*.

Unfortunately, this gives me errors, as do several other attempts at filling in property_map types. *By the way, here's what a property map is:
The property map interface consists of a set of concepts (see definition of "concept" in [1] and [2]) that define a syntax for mapping key objects to corresponding value objects. Since the property map operations are global functions (actually they don't have to be global, but they are always called unqualified and may be found via argument dependent lookup), it is possible to overload the map functions such that nearly arbitrary property map types and key types can be used. The interface for property maps consists of three functions: get(), put(), and operator[].
At this point, I'd already sunk a couple of hours into the Boost library, and I was getting a bit frustrated -- why should I have to learn about the internal details of property_maps when all I want to do is apply the data structure in a fairly natural way? Yes, maybe I'm being lazy, and granted, I don't use C++ as much as I used to, but it seems to me that this should be easier than it is. (Maybe somebody can come along and point out the correct way of doing this -- I put in a reasonable effort to figure it out, but by no means am I confident that I'm even on the right track.)

The other source of my frustration is probably that heaps are not that complicated. To update a heap, all you really have to do is figure out whether the element is ok where it is, needs to be pushed up, or needs to be pushed down. So I changed tactics and found a less generic (but still pretty generic) heap update implementation, called HEAPPlus:
http://www.cse.ust.hk/~lihw/code/heapplus/

For my purposes, I like this implementation quite a bit better. It uses the same style of interface as STL heaps, where you have a vector as the core storage structure, then heaps are just a set of algorithms that maintain a proper ordering of elements in the vector. All that heapplus.h does is define update_heap(start_iterator, end_iterator, edit_position_iterator, new_value).

This worked (almost) on the first try for me for doubles, Nodes, and Node *'s, but it's still not exactly what I want: I want to be able to change the value of an element in the heap separately from the call that updates the heap. This comes up in the iterative algorithm example, where the algorithm maintains pointers to the Node elements, so it can change their priorities directly. Granted, you could have the algorithm change the priority and then pass a pointer to the changed node as the new_value, but that seems redundant to me. It's more natural for me to think of leaving the last argument off: update_heap_pos(start_iterator, end_iterator, edit_position_iterator).

So that's what I did. I took the HEAPPlus implementation and defined a couple new functions to support update_heap_pos, then I wrote some simple examples to show the usage. Here's the test file, test.cpp:
// test.cpp
// Used to test heapplus.h
// lihw
// 2006.12.18
// ChangeLog:
//  2009.08.10  - Added tests for Node and Node * heaps, and added Comparator
//                argument to print_heap, so that pop_heap works as expected.
//                (Danny Tarlow: dtarlow@cs.toronto.edu)
//  2006.12.18  - Initially created (lihw)

#include <iostream>
#include <ostream>
#include <functional>
#include <algorithm>
#include <vector>
#include "heapplus.h"

#include <ctime>
#include <cstdlib>

using namespace std;


template<typename Container>
inline
void PRINT_ELEMENTS(const Container& container, const char* title= 0) 
{
    if (title)
        std::cout << title << std::endl;

    typename Container::const_iterator iter, beg = container.begin(),
             end = container.end();
    for (iter = beg; iter != end; ++iter) 
        std::cout << *iter << " ";
    std::cout << std::endl;
}

template<typename Container, typename Compare>
void print_heap(const Container& container, Compare comp, const char* title = 0)
{
    if (title)
        std::cout << title << std::endl;

    std::vector<typename Container::value_type> temp(container.begin(),
            container.end());
    while (temp.size()) {
        std::cout << temp[0] << " ";
        std::pop_heap(temp.begin(), temp.end(), comp);
        temp.pop_back();
    }
    std::cout << std::endl;
}


/**
 *  A very simple node class just to test the update_heap_pos functions
 * and to show an example usage.  Each node lies on an x-y grid, and it
 * has a priority associated with it, which we assume will be computed
 * and modified by some procedure outside the scope of these functions.
 *
 * This example shows how to maintain a queue of grid points, ordered by
 * priority, where updates take at worst O(lg N) time.
 **/

class Node {

  friend ostream &operator<<(ostream &os, Node &n);
  friend bool operator<(Node a, Node b);
  friend bool operator==(Node &a, Node &b);
  friend ostream &operator<<(ostream &os, Node *n);

public:

  Node() : x_(0), y_(0), priority_(0.0) { }

  Node(int x, int y) : x_(x), y_(y), priority_(0.0) { }

  double Priority() { return priority_; }

  void SetPriority(double priority) {
    priority_ = priority;
  }

protected:
  double priority_;
  int x_;
  int y_;
};

ostream &operator<<(ostream &os, Node &n) {
  os << "(" << n.x_ << "," << n.y_ << "):" << n.Priority();
  return os;
}
bool operator<(Node a, Node b) {
  return (a.Priority() < b.Priority());
}
bool operator==(Node &a, Node &b) {
  return (a.x_ == b.x_ && a.y_ == b.y_);
}
ostream &operator<<(ostream &os, Node *n) {
  os << "(" << n->x_ << "," << n->y_ << "):" << n->Priority();
  return os;
}
struct CompareNodeStar {
  bool operator()(Node *a, Node *b) {
    return (a->Priority() < b->Priority());
  }
};


int main(void) {

    int side = 4;
    int N = side * side;

    vector<double> doubleheap(0);
    vector<Node> nodeheap(0);
    vector<Node *> nodestarheap(0);

    for (int x = 0; x < side; x++) {
      for (int y = 0; y < side; y++) {
        Node node = Node(x, y);
        Node *nodestar = new Node(x, y);
        double priority = (x - side / 2.0) * (y - side / 4.0);

        node.SetPriority(priority);
        nodestar->SetPriority(priority);

        doubleheap.push_back(priority);
        nodeheap.push_back(node);
        nodestarheap.push_back(nodestar);
      }
    }

    cout << "heap<double>" << endl;
    //PRINT_ELEMENTS(doubleheap, "Initial heap");
    print_heap(doubleheap, less<double>());


    make_heap(doubleheap.begin(), doubleheap.end());
    //PRINT_ELEMENTS(doubleheap, "Made heap");
    print_heap(doubleheap, less<double>());

    doubleheap[4] = 100;
    //PRINT_ELEMENTS(doubleheap, "edited");
    print_heap(doubleheap, less<double>());

    vector<double>::iterator itdouble = doubleheap.begin() + 4;
    update_heap_pos(doubleheap.begin(), doubleheap.end(), itdouble);
    //PRINT_ELEMENTS(doubleheap, "Updated and sorted");
    print_heap(doubleheap, less<double>());



    cout << endl;
    cout << "heap<Node>" << endl;
    print_heap(nodeheap, less<Node>());

    make_heap(nodeheap.begin(), nodeheap.end());
    print_heap(nodeheap, less<Node>());

    nodeheap[4].SetPriority(100);
    print_heap(nodeheap, less<Node>());

    vector<Node>::iterator it = nodeheap.begin() + 4;
    update_heap_pos(nodeheap.begin(), nodeheap.end(), it);
    print_heap(nodeheap, less<Node>());



    cout << endl;
    cout << "heap<Node *>" << endl;
    //PRINT_ELEMENTS(nodestarheap, "Initial heap");
    print_heap(nodestarheap, CompareNodeStar());

    make_heap(nodestarheap.begin(), nodestarheap.end(), CompareNodeStar());
    //PRINT_ELEMENTS(nodestarheap, "Made heap");
    print_heap(nodestarheap, CompareNodeStar());

    nodestarheap[4]->SetPriority(100);
    //PRINT_ELEMENTS(nodestarheap, "edited");
    print_heap(nodestarheap, CompareNodeStar());

    vector<Node *>::iterator itstar = nodestarheap.begin() + 4;
    update_heap_pos(nodestarheap.begin(), nodestarheap.end(), itstar, CompareNodeStar());
    //PRINT_ELEMENTS(nodestarheap, "Updated and sorted");
    print_heap(nodestarheap, CompareNodeStar());


    return 0;
}

And here is the modified heapplus.h file:
// Heap Plus implementation 1.01alpha
// lihw 
// 2006.12.18
// ChangeLog:
//  2009.08.10  - Added update_heap_pos functions so that we can adjust
//                heap values outside of update_heap functions -- e.g., if
//                we have external pointers into the heap entries -- then
//                call update_heap on the position only, regardless of the value.
//                (Danny Tarlow: dtarlow@cs.toronto.edu)
//  2006.12.18  - Initially created (lihw)

#ifndef HEAPPLUS_H_
#define HEAPPLUS_H_

#include <iterator>

namespace std {

template<typename _RandomAccessIterator, typename _Distance, typename _Tp, typename _Compare>
inline void
__up_heap(_RandomAccessIterator __first, _RandomAccessIterator __end, _RandomAccessIterator __pos,
        _Distance, _Tp __value,  _Compare __comp)
{
    _Distance __parent = (__pos - __first - 1) /  2;
    _Distance __index = __pos - __first;

    while (__index > 0 && __comp(*(__first + __parent), __value)) {
        *(__first + __index) = *(__first + __parent);
        __index = __parent;
        __parent = (__parent - 1) / 2;
    }
       
    if (__pos != (__first + __index))   
        *(__first + __index) = __value;
}

template<typename _RandomAccessIterator, typename _Distance, typename _Tp, typename _Compare>
inline void
__down_heap(_RandomAccessIterator __first, _RandomAccessIterator __last, _RandomAccessIterator __pos,
        _Distance, _Tp __value, _Compare __comp)
{
    _Distance __len = __last - __first;
    _Distance __index = __pos - __first;
    _Distance __left = __index * 2 + 1;
    _Distance __right = __index * 2 + 2;
    _Distance __largest;

    while (__index < __len) {
        // Pick the larger child, taking care not to dereference past the
        // end of the heap when the right child does not exist.
        if (__right < __len && __comp(*(__first + __left), *(__first + __right))) {
            __largest = __right;
        } else {
            __largest = __left;
        }
        if (__largest < __len && __comp(__value, *(__first + __largest))) {
            *(__first + __index) = *(__first + __largest);
            __index = __largest;
            __left = __index * 2 + 1;
            __right = __index * 2 + 2;
        } else
            break;
    }

    if (__pos != (__first + __index))   
        *(__first + __index) = __value;
}

template<typename _RandomAccessIterator, typename _Distance, typename _Tp, 
    typename _Compare>
inline void
__update_heap(_RandomAccessIterator __first, _RandomAccessIterator __last,
        _RandomAccessIterator __pos, _Distance, _Tp __value, _Compare __comp) 
{
    *(__pos) = __value;
    
    _Distance __index = (__pos - __first);
    _Distance __parent = (__index - 1) / 2;

    if (__index > 0 && __comp(*(__first + __parent), __value))
        __up_heap(__first, __last, __pos, _Distance(), __value, __comp);
    else
        __down_heap(__first, __last, __pos, _Distance(), __value, __comp);
}

template<typename _RandomAccessIterator, typename _Distance, typename _Compare>
inline void
__update_heap(_RandomAccessIterator __first, _RandomAccessIterator __last,
        _RandomAccessIterator __pos, _Distance, _Compare __comp) 
{
    _Distance __index = (__pos - __first);
    _Distance __parent = (__index - 1) / 2;

    if (__index > 0 && __comp(*(__first + __parent), *(__pos)))
      __up_heap(__first, __last, __pos, _Distance(), *(__pos), __comp);
    else
      __down_heap(__first, __last, __pos, _Distance(), *(__pos), __comp);
}

template<typename _RandomAccessIterator, typename _Tp>
inline void
update_heap(_RandomAccessIterator __first, _RandomAccessIterator __last,
        _RandomAccessIterator __pos, _Tp __value) 
{
      typedef typename iterator_traits<_RandomAccessIterator>::value_type _ValueType;
      typedef typename iterator_traits<_RandomAccessIterator>::difference_type _DistanceType;

      __update_heap(__first, __last, __pos, _DistanceType(), __value, less<_ValueType>());
}

template<typename _RandomAccessIterator, typename _Tp, typename _Compare>
inline void
update_heap(_RandomAccessIterator __first, _RandomAccessIterator __last,
        _RandomAccessIterator __pos, _Tp __value, _Compare __comp) 
{
      __update_heap(__first, __last, __pos, __value, __comp);
}


template<typename _RandomAccessIterator>
inline void
update_heap_pos(_RandomAccessIterator __first, _RandomAccessIterator __last,
  _RandomAccessIterator __pos) {
      typedef typename iterator_traits<_RandomAccessIterator>::value_type _ValueType;
      typedef typename iterator_traits<_RandomAccessIterator>::difference_type _DistanceType;

      __update_heap(__first, __last, __pos, _DistanceType(), less<_ValueType>());
}


template<typename _RandomAccessIterator, typename _Compare>
inline void
update_heap_pos(_RandomAccessIterator __first, _RandomAccessIterator __last,
  _RandomAccessIterator __pos, _Compare __comp) {
      typedef typename iterator_traits<_RandomAccessIterator>::value_type _ValueType;
      typedef typename iterator_traits<_RandomAccessIterator>::difference_type _DistanceType;

      __update_heap(__first, __last, __pos, _DistanceType(), __comp);
}



}; // namespace std

#endif // !HEAPPLUS_H_

And finally, here is the output from compiling and running the test case:
heap<double>
2 2 -0 1 1 0 0 0 0 -0 -1 -0 -1 -2 -2 -4 
2 2 1 1 0 0 -0 0 0 -0 -0 -1 -1 -2 -2 -4 
2 2 100 1 0 0 -0 0 0 -0 -0 -1 -1 -2 -2 -4 
100 2 2 1 0 0 -0 0 0 -0 -0 -1 -1 -2 -2 -4 

heap<Node>
(0,0):2 (3,3):2 (0,1):-0 (1,0):1 (3,2):1 (2,2):0 (3,1):0 (2,1):0 (2,3):0 (2,0):-0 (3,0):-1 (1,1):-0 (1,2):-1 (0,2):-2 (1,3):-2 (0,3):-4 
(3,3):2 (0,0):2 (3,2):1 (1,0):1 (3,1):0 (2,3):0 (1,1):-0 (2,2):0 (2,1):0 (2,0):-0 (0,1):-0 (1,2):-1 (3,0):-1 (0,2):-2 (1,3):-2 (0,3):-4 
(3,3):2 (0,0):2 (1,0):100 (3,2):1 (3,1):0 (2,3):0 (1,1):-0 (2,2):0 (2,1):0 (2,0):-0 (0,1):-0 (3,0):-1 (1,2):-1 (1,3):-2 (0,2):-2 (0,3):-4 
(1,0):100 (3,3):2 (0,0):2 (3,2):1 (3,1):0 (2,3):0 (1,1):-0 (2,2):0 (2,1):0 (2,0):-0 (0,1):-0 (3,0):-1 (1,2):-1 (1,3):-2 (0,2):-2 (0,3):-4 

heap<Node *>
(0,0):2 (3,3):2 (0,1):-0 (1,0):1 (3,2):1 (2,2):0 (3,1):0 (2,1):0 (2,3):0 (2,0):-0 (3,0):-1 (1,1):-0 (1,2):-1 (0,2):-2 (1,3):-2 (0,3):-4 
(3,3):2 (0,0):2 (3,2):1 (1,0):1 (3,1):0 (2,3):0 (1,1):-0 (2,2):0 (2,1):0 (2,0):-0 (0,1):-0 (1,2):-1 (3,0):-1 (0,2):-2 (1,3):-2 (0,3):-4 
(3,3):2 (0,0):2 (1,0):100 (3,2):1 (3,1):0 (2,3):0 (1,1):-0 (2,2):0 (2,1):0 (2,0):-0 (0,1):-0 (3,0):-1 (1,2):-1 (1,3):-2 (0,2):-2 (0,3):-4 
(1,0):100 (3,3):2 (0,0):2 (3,2):1 (3,1):0 (2,3):0 (1,1):-0 (2,2):0 (2,1):0 (2,0):-0 (0,1):-0 (3,0):-1 (1,2):-1 (1,3):-2 (0,2):-2 (0,3):-4 


Update: Yanbin Lu has taken this code and updated it further. See the software section from his homepage for the details.

Tuesday, August 4, 2009

Dead simple parallelization in Python

Posted by Danny Tarlow
I'm at the stage in one research project where I've done all of the brainstorming, derivations, and prototyping, and most indicators suggest that we might have a good idea. So the next step is to figure out how it works on a lot of real-world data. For me, this usually means running the program on lots of different inputs and/or with many combinations of parameters (e.g., for cross validation). Further, right now I'm working a lot with images and doing computations at the individual pixel level -- which can get pretty expensive pretty quickly if you're not careful.

I started this phase (as I usually do, actually) by re-writing all of my code in C++. I like to do this first to speed things up, but it is also useful as a debugging and sanity-check stage, because I have to go through everything I've done in great detail, and I get to think about whether it's still the right idea.

Anyhow, when the task just involves lots of runs of an executable with different inputs, there's no real need for fancy parallelization. Instead, I use a dead simple version: a script spawns off N threads, waits for them all to finish, then spawns off N more. It keeps doing this until it has exhausted the list of commands.

Now I know there are plenty of other ways to do this, but for my purposes, the following Python code works great for me.
import os
from threading import Thread

NUM_THREADS = 7

class ThreadedCommand(Thread):

    def __init__(self, shell_call):
        Thread.__init__(self)
        self.shell_call = shell_call

    def run(self):
        # Use the command stored on this instance, not a global.
        os.system(self.shell_call)


commands = []
for parameter_1 in [5, 25, 100]:
    for parameter_2 in [5, 25, 100]:
        for parameter_3 in [.1, .25, .35]:
            command = "./command do_stuff --parameter_1 %s --parameter_2 %s--parameter_3 %s" % (parameter_1, parameter_2, parameter_3)
            commands.append(command)

while len(commands) > 0:
    running_threads = []
    for i in range(NUM_THREADS):
        try:
            command = commands.pop(0)
        except IndexError:
            break  # no commands left for this batch
        new_thread = ThreadedCommand(command)
        running_threads.append(new_thread)
        new_thread.start()

    for thread in running_threads:
        thread.join()
        print "Joined %s" % thread.shell_call

The quest for artificial intelligence

Posted by Danny Tarlow
Nils Nilsson is a Professor Emeritus in Computer Science at Stanford. Anybody who has taken a course on algorithms knows about A* search -- Nils co-invented it. He was also instrumental in developing one of the earliest mobile robots, SHAKEY, back in 1966. It's pretty amazing to look at the old pictures and read about the project.
(Image from http://www.ai.sri.com/shakey/)


If you're looking for somebody who lived through and helped shape modern robotics, machine learning, and artificial intelligence, you'd be hard-pressed to find a better exemplar than Nils.

He's also an extremely friendly and interesting guy. I know because I spent the summer after my third year of undergrad as one of his two research assistants. We were looking at how to do unsupervised learning in an incremental way, so that we could use what we'd already learned to build up progressively richer and more meaningful representations of the world, but that's a story for another day.

While I was working with him -- which was about four years ago now -- Nils told me about a book project he was working on: trying to round up information and write a history of artificial intelligence and robotics. I learned today that he's finished the book, which is set to be published later this year. The best part is that, in the meantime, the book is available for free online:

THE QUEST FOR ARTIFICIAL INTELLIGENCE: A HISTORY OF IDEAS AND ACHIEVEMENTS
Nils Nilsson
http://ai.stanford.edu/~nilsson/QAI/qai-webpage.html
I haven't sunk my teeth into it yet, but I have skimmed through most of the chapters. It looks to be a great read -- Nils has always seemed conscious of the big picture, and I think it shows in this book. Chapter 1 starts by chronicling dreams of artificial intelligence in ancient literature, which is a great way to open. The book is also very comprehensive, weighing in at 700 pages. I'll likely be buying a hard copy when it hits the presses.