APPROXIMATE DYNAMIC PROGRAMMING WARREN POWELL SOLUTION MANUAL




Dynamic Programming Models and Algorithms for the Mutual Fund Cash Balance Problem. CASTLE Labs works to advance the development of modern analytics for solving a wide range of applications that involve decisions under uncertainty. We consider the problem of adapting approximate dynamic programming techniques to the inverted pendulum task. This is a particularly challenging task, as we work with a relatively uninformative reinforcement signal and have no a priori information about our system. Success in this task requires an effective solution to …

Perspectives of approximate dynamic programming (SpringerLink)

Recursive economics (Wikipedia). Warren B. Powell, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research.

[MUSIC] I'm going to illustrate how to use approximate dynamic programming and reinforcement learning to solve high-dimensional problems. This is something that arose in the context of truckload trucking; think of this as Uber or Lyft for truckload freight, where a truck moves an entire load of freight from A to B, from one city to the next. Approximate Dynamic Programming - I: Modeling. By Warren B. Powell. Abstract: The first step in solving a stochastic optimization problem is providing a mathematical model. How the problem is modeled can impact the solution strategy. In this chapter, we provide a flexible modeling framework that uses a classic control-theoretic framework, avoiding devices such as one-step transition matrices.
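
As a sketch, the elements of such a control-theoretic model are usually written as follows (standard notation from Powell's framework; my paraphrase, not a quotation from the chapter):

```latex
% State S_t, decision x_t, exogenous information W_{t+1}.
% The transition function plays the role of the one-step transition matrix:
S_{t+1} = S^M\!\left(S_t,\, x_t,\, W_{t+1}\right),
% and the modeling goal is to find a policy X^{\pi}_t that solves
\max_{\pi} \; \mathbb{E}\!\left[\sum_{t=0}^{T} C_t\!\left(S_t,\, X^{\pi}_t(S_t)\right)\right].
```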

Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines—Markov decision processes, mathematical programming, simulation, and statistics—to demonstrate how to successfully approach, model, and solve a … 11/05/2016 · Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd Edition (Wiley Series in Probability and Statistics), Warren B. Powell, Amazon.com. Praise for the First Edition: "Finally, a book devoted to dynamic programming and written using the language of operations research (OR)!"

Introduction to ADP. Notes: » When approximating value functions, we are basically drawing on the entire field of statistics. » Choosing an approximation is primarily an art. Cite this reference as: Warren B. Powell, Reinforcement Learning and Stochastic Optimization: A Unified Framework, Department of Operations Research and Financial Engineering, Princeton University, 2019. A very short presentation illustrating the jungle of stochastic optimization (updated April 12, 2019). The last slide shows the …
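
To make the "drawing on statistics" point concrete, here is a minimal sketch of fitting a linear value function approximation by least squares; the basis functions and sampled data are invented for illustration:

```python
import numpy as np

def basis(s):
    """Map a scalar state to a small feature vector (1, s, s^2)."""
    return np.array([1.0, s, s**2])

rng = np.random.default_rng(0)
states = rng.uniform(0.0, 10.0, size=200)                          # sampled states
values = 5.0 * states - 0.3 * states**2 + rng.normal(0, 1, 200)    # noisy value observations

Phi = np.stack([basis(s) for s in states])                         # design matrix
theta, *_ = np.linalg.lstsq(Phi, values, rcond=None)               # least-squares fit

print("fitted coefficients:", theta.round(3))
print("approx V(4.0) =", round(basis(4.0) @ theta, 3))
```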

Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. Exploiting monotonicity (monotonicity in dynamic programming, cont'd): » Health • Dosage of a diabetes drug increases with blood sugar. • Dosage of statins (for reducing cholesterol) increases as the …
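
The recursive decomposition is easy to see in code: in this minimal sketch (grid and costs invented for illustration), the cost-to-go of a cell is its own cost plus the cheaper of the two sub-problems that follow it.

```python
from functools import lru_cache

# Minimum-cost path from the top-left to the bottom-right of a grid,
# moving only down or right. Each cell is a sub-problem solved once.
COST = [
    [1, 3, 1],
    [1, 5, 1],
    [4, 2, 1],
]
ROWS, COLS = len(COST), len(COST[0])

@lru_cache(maxsize=None)
def cost_to_go(r, c):
    """Minimum total cost from cell (r, c) to the bottom-right corner."""
    if r == ROWS - 1 and c == COLS - 1:
        return COST[r][c]
    best = float("inf")
    if r + 1 < ROWS:
        best = min(best, cost_to_go(r + 1, c))
    if c + 1 < COLS:
        best = min(best, cost_to_go(r, c + 1))
    return COST[r][c] + best

print(cost_to_go(0, 0))  # -> 7, via the 1-3-1-1-1 path along the top and right
```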

• M. Petrik and S. Zilberstein. Constraint relaxation in approximate linear programs. In Proceedings of the Twenty-Sixth International Conference on Machine Learning, pages 809-816, Montreal, Canada, 2009. • W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd Edition. John Wiley & Sons, Hoboken, New Jersey.

He is a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts and was the co-chair for the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. He currently serves as an associate editor of Neural Computation.

MIE1615 Markov Decision Processes


Introduction to Approximate Dynamic Programming. Approximate Dynamic Programming: A Melting Pot of Methods. By Warren B. Powell. Abstract: CASTLE Laboratory specializes in the solution of large-scale stochastic optimization problems, with considerable experience in freight transportation. This work led to the development of methods to integrate mathematical programming and simulation within the framework of approximate dynamic programming. A complete and accessible introduction to the real-world applications of approximate dynamic programming: with the growing levels of sophistication in modern-day operations, it is vital for practitioners to understand how to approach, model, and solve complex industrial problems. Approximate Dynamic Programming is a result of the author's decades of experience working in large …

Approximate Dynamic Programming: Solving the Curses of Dimensionality


Warren Powell: Approximate Dynamic Programming for Fleet Management. Approximate Dynamic Programming - II: Algorithms. Warren B. Powell, December 8, 2009. Abstract: Approximate dynamic programming is a powerful class of algorithmic strategies for solving stochastic optimization problems where optimal decisions can be characterized using Bellman's optimality equation, but where the characteristics of the problem make solving Bellman's equation computationally intractable. Approximate Dynamic Programming: Solving the Curses of Dimensionality (Wiley Series in Probability and Statistics Book 931) - Kindle edition by Warren B. Powell. Download it once and read it on your Kindle device, PC, phones or tablets. Use features like bookmarks, note taking and highlighting while reading.
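
For reference, Bellman's optimality equation in its standard discounted infinite-horizon form (the textbook statement, not a quotation from the abstract):

```latex
V(s) = \max_{a \in \mathcal{A}(s)} \left\{ C(s,a)
      + \gamma \sum_{s' \in \mathcal{S}} \mathbb{P}\left(s' \mid s, a\right) V(s') \right\}
```

ADP replaces the value function on the right-hand side with a statistical approximation precisely because, in high-dimensional problems, neither the sum over successor states nor the loop over all states s can be carried out exactly.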



Approximate Dynamic Programming for High-Dimensional Resource Allocation Problems. Warren Powell, Department of Operations Research and Financial Engineering, Princeton University. Wednesday, May 2, 2007, 4:30 - 5:30 PM, Terman Engineering Center, Room 453. Abstract: Stochastic resource allocation problems produce dynamic programs with state, information and action variables with thousands or … Reinforcement learning, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics, and genetic algorithms. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming.


Video created by University of Alberta, Alberta Machine Intelligence Institute for the course "Fundamentals of Reinforcement Learning". This week, you will learn how to compute value functions and optimal policies, assuming you have the MDP. "Approximate Dynamic Programming", Warren Powell, Wiley, 2007. "Neuro-Dynamic Programming", Dimitri Bertsekas and John Tsitsiklis, Athena Scientific, 1996. Instructor: Chi-Guhn Lee, MC322, 946-7867, [email protected]. Office Hours: By appointment. Course Description: This is a course to introduce the students to theories and applications of Markov decision processes. Emphasis will be on the …
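
When you do have the MDP, value iteration computes both the value function and a greedy optimal policy. A self-contained sketch on a tiny MDP of my own invention (transition and reward numbers are purely illustrative):

```python
import numpy as np

# P[a, s, s'] are transition probabilities, R[a, s] expected rewards.
P = np.array([
    [[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
    [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.5, 0.0, 0.5]],
])
R = np.array([
    [0.0, 0.0, 1.0],
    [0.1, 0.1, 0.1],
])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * (P @ V)      # Q[a, s] = R[a, s] + gamma * sum_s' P[a,s,s'] V[s']
    V_new = Q.max(axis=0)        # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)        # greedy policy with respect to the fixed point
print("V* ~", V.round(3), "greedy policy:", policy)
```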


The project required bringing together years of research in approximate dynamic programming, merging math programming with machine learning, to solve dynamic programs with extremely high-dimensional state variables. The result was a model that closely calibrated against real-world operations and produced accurate estimates of the marginal value … [Figure (recovered from slide residue): a decision tree comparing "Do not use weather report" with "Use weather report / Forecast: sunny (.6)". Without the report: Rain .8, -$2000; Clouds .2, $1000; Sun .0, $5000. With the report: -$200 under Rain, Clouds, and Sun alike.]
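
A quick expected-value check of those recovered numbers (treating the .8/.2/.0 figures as outcome probabilities, which is my reading of the residue):

```python
# Expected value of each decision in the tree fragment above.
outcomes = {"rain": 0.8, "clouds": 0.2, "sun": 0.0}
payoff_no_report = {"rain": -2000, "clouds": 1000, "sun": 5000}
payoff_report = {"rain": -200, "clouds": -200, "sun": -200}   # flat -$200

ev_no = sum(p * payoff_no_report[w] for w, p in outcomes.items())
ev_yes = sum(p * payoff_report[w] for w, p in outcomes.items())
print(ev_no, ev_yes)  # -1400.0 vs -200.0: using the report wins here
```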



An Approximate Dynamic Programming Algorithm for …


Castle Labs – ComputAtional STochastic optimization and LEarning

An adaptive-learning framework for semi-cooperative multi-agent …

Approximate Dynamic Programming by Practical Examples. Warren B. Powell. Abstract: We propose a dynamic hedging strategy for jet fuel which strikes a balance between hedging against jumps in the price of jet fuel and placing bets that the price will …

The recursive paradigm originated in control theory with the invention of dynamic programming by the American mathematician Richard E. Bellman in the 1950s. Bellman described possible applications of the method in a variety of fields, including economics, in the introduction to his 1957 book. Stuart Dreyfus, David Blackwell, and Ronald A. Howard all made major contributions to the approach.

Outline: numerical dynamic programming • approximate dynamic programming • dynamic programming • SDP in discrete time, continuous state • the Bellman equation • the three curses of dimensionality. DYNAMIC PROGRAMMING: Dynamic programming is arguably the most powerful optimization strategy available. In principle, it can be used to deal with discrete and continuous …


Two solution approaches: a simulation-based approach (de Farias and Van Roy, 2003; also Powell, 2007) and a mathematical programming-based approach (Adelman, 2003; Adelman, 2007). Approximate Policy Iteration: the policy evaluation algorithm can be difficult when the problem is "large". Idea: carry out policy iteration approximately. (Dan Zhang, Spring 2012, Approximate Dynamic Programming.)
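
One minimal reading of "approximately" in the evaluation step: estimate the value of the fixed policy by averaging sampled discounted returns rather than solving the linear system exactly. The toy chain, policy, and rewards below are my own assumptions:

```python
import random

STATES = [0, 1, 2]
GAMMA = 0.9

def step(s):
    """One transition of a fixed policy on a toy chain: (reward, next state)."""
    if s == 2:
        return 1.0, 0                                   # collect reward, restart
    return 0.0, (s + 1 if random.random() < 0.7 else s) # advance with prob 0.7

def sampled_return(s, horizon=200):
    """One truncated sample of the discounted return from state s."""
    g, discount = 0.0, 1.0
    for _ in range(horizon):
        r, s = step(s)
        g += discount * r
        discount *= GAMMA
    return g

random.seed(1)
estimates = {s: sum(sampled_return(s) for _ in range(2000)) / 2000 for s in STATES}
print(estimates)   # Monte Carlo estimate of V^pi at each state
```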

We formulate the problem as a dynamic program, but this encounters the classic curse of dimensionality. To overcome this problem, we propose a provably convergent approximate dynamic programming algorithm. We also adapt the algorithm to an online environment, requiring no knowledge of the probability distributions for rates of return … This discussion puts approximate dynamic programming in the context of a variety of other algorithmic strategies by using the modeling framework to describe … Stochastic optimization problems pose unique challenges in how they are represented mathematically. These problems arise in a number of different communities, often in the context of …
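
The distribution-free, online flavor can be illustrated with a bare Robbins-Monro stochastic-approximation step, the building block such algorithms use in place of an exact expectation (toy numbers, my own sketch):

```python
import random

random.seed(0)
v_bar = 0.0
for n in range(1, 10001):
    v_hat = random.gauss(5.0, 2.0)       # observed sample (distribution unknown to the algorithm)
    alpha = 1.0 / n                      # declining stepsize
    v_bar = (1 - alpha) * v_bar + alpha * v_hat   # smooth the estimate toward the sample

print(round(v_bar, 3))  # converges toward the true mean, 5.0
```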


Dynamic Programming Solutions for Lean Burn Engine Aftertreatment. Jun-Mo Kang, Ilya Kolmanovsky, and J. W. Grizzle. Abstract: The competition to deliver fuel-efficient and environmentally friendly vehicles is driving the automotive industry to consider ever more complex powertrain systems. Adequate performance of these new highly interactive systems can no longer be …

CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): This paper proposes a distributed solution approach to a certain class of dynamic resource allocation problems and develops a dynamic programming-based multi-agent decision making, learning and communication mechanism. In the class of dynamic resource allocation problems we consider, a set of reusable resources of …

The term "dynamic programming" (DP) was coined by Richard Bellman in 1950 to denote the recursive process of backward induction for finding optimal policies (or decision rules) for a wide class of dynamic, sequential decision-making problems under uncertainty. Bellman claimed he invented the term to hide … © 2007 Hugo P. Simão. Slide 1: Approximate Dynamic Programming for a Spare Parts Problem: The Challenge of Rare Events. INFORMS Seattle, November 2007.
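
In symbols, backward induction sweeps once from the horizon back to the present; a standard finite-horizon statement (with the terminal value taken as zero for simplicity) is:

```latex
V_T(s) = 0, \qquad
V_t(s) = \max_{a \in \mathcal{A}(s)} \left\{ C_t(s,a)
        + \sum_{s'} p_t\!\left(s' \mid s, a\right) V_{t+1}(s') \right\},
\qquad t = T-1, \dots, 0.
```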


Merging AI and OR to solve high-dimensional stochastic optimization problems using approximate dynamic programming. Warren Buckler Powell, Operations Research & Financial Engineering. What You Should Know About Approximate Dynamic Programming. Warren B. Powell, Department of Operations Research and Financial Engineering, Princeton University, Princeton, New Jersey 08544. Received 17 December 2008; accepted 17 December 2008. DOI 10.1002/nav.20347. Published online 24 February 2009 in Wiley InterScience (www.interscience.wiley.com).

In this research, we develop a general mathematical model for distributed, semi-cooperative planning and suggest a solution strategy which involves decomposing the system into subproblems, each of which is specified at a certain period in time and controlled by an agent. The agents communicate marginal values of resources to each other, possibly with distortion. We design experiments to …

Aggregation-Based Learning in the Inverted Pendulum Problem


Has dynamic programming improved decision making?



Approximate Dynamic Programming II: Algorithms. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction or value iteration), may no longer be effective in finding a solution within a reasonable time frame, and thus we are forced to consider other approaches, such as approximate dynamic programming (ADP).
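
A minimal sketch of the forward, simulation-based alternative: rather than looping over every state, simulate trajectories and smooth a lookup-table estimate of the value around the post-decision state. The storage-and-selling toy problem, its numbers, and the stepsize rule are all my own assumptions; a serious implementation also needs an exploration strategy.

```python
import random

random.seed(7)
T, MAX_INV, PRICE = 20, 10, 3.0
# V_bar[t][y]: estimated value of holding post-decision inventory y at time t.
V_bar = [[0.0] * (MAX_INV + 1) for _ in range(T + 1)]

for n in range(1, 20001):
    alpha = 5.0 / (5.0 + n)                  # declining stepsize for smoothing
    inv, prev_post = 5, None
    for t in range(T):
        # Decision: units to sell now (deterministic contribution), trading
        # immediate revenue against the downstream value estimate.
        best_sell, v_hat = 0, float("-inf")
        for sell in range(inv + 1):
            val = PRICE * sell + V_bar[t][inv - sell]
            if val > v_hat:
                best_sell, v_hat = sell, val
        # Use the sampled value v_hat to update the estimate at the PREVIOUS
        # post-decision state: the step that avoids computing an expectation.
        if prev_post is not None:
            old = V_bar[t - 1][prev_post]
            V_bar[t - 1][prev_post] = (1 - alpha) * old + alpha * v_hat
        prev_post = inv - best_sell
        inflow = random.randint(0, 3)        # exogenous replenishment W_{t+1}
        inv = min(MAX_INV, prev_post + inflow)

print(round(V_bar[0][0], 2))  # learned value of holding 0 units after the t=0 decision
```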



Powell and Topaloglu: Approximate Dynamic Programming. INFORMS, New Orleans, 2005, © 2005 INFORMS. This chapter presents a modeling framework for large-scale resource allocation problems, along with a fairly flexible algorithmic framework that can be used to obtain good solutions for them. Our modeling framework is motivated by transportation …
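
To give a flavor of how a math-programming subproblem slots into such an algorithmic framework, here is a sketch of a single-period resource-allocation LP whose objective adds estimated downstream values to the immediate contributions. The data, dimensions, and the use of scipy.optimize.linprog are my own illustration, not the chapter's formulation:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([[10.0, 6.0], [4.0, 8.0]])     # contribution of resource i on task j
v_hat = np.array([1.5, 3.0])                # estimated marginal downstream value per task
supply = np.array([5.0, 3.0])               # units available of each resource type
demand_cap = np.array([4.0, 6.0])           # max units each task can absorb

# Decision x[i][j] >= 0, flattened row-major; linprog minimizes, so negate.
obj = -(c + v_hat[None, :]).flatten()
A_ub, b_ub = [], []
for i in range(2):                          # resource availability constraints
    row = np.zeros(4); row[2 * i: 2 * i + 2] = 1.0
    A_ub.append(row); b_ub.append(supply[i])
for j in range(2):                          # task capacity constraints
    row = np.zeros(4); row[j::2] = 1.0
    A_ub.append(row); b_ub.append(demand_cap[j])

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print(res.x.reshape(2, 2), -res.fun)        # optimal allocation and total value
```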

Approximate Dynamic Programming for Large-Scale Resource Allocation Problems. Huseyin Topaloglu, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853, USA, [email protected]. Warren B. Powell, Department of Operations Research and …




Welcome to PENSA at Princeton University. PENSA is the home of the SAP Initiative for Energy Systems Research at Princeton University. Our goal is to bring advanced analytical thinking to the development of new energy technologies, the rigorous study of energy policy, and the efficient management of …


Warren Powell, Princeton. Abstract: Reinforcement learning has attracted considerable attention with its successes in mastering advanced games such as chess and Go. This attention has ignored major successes such as landing SpaceX rockets using the tools of optimal control, or optimizing large fleets of trucks and trains using tools from operations research and approximate dynamic programming.





