S = simulate(M,D,Range,...)
[S,Flag,AddF,Discrep] = simulate(M,D,Range,...)
M [ model ] - Solved model object.
D [ struct | cell ] - Input database or datapack from which the initial conditions and the shocks within the simulation range will be read.
Range [ numeric ] - Simulation range.
S [ struct | cell ] - Database with simulation results.
Flag [ cell | empty ] - Cell array with exit flags for non-linearised simulations.
AddF [ cell | empty ] - Cell array of tseries with final add-factors added to first-order approximate equations to make non-linear equations hold.
Discrep [ cell | empty ] - Cell array of tseries with final discrepancies between LHS and RHS in equations marked for non-linear simulations by a double-equal sign.
'anticipate=' [ true | false ] - If true, real future shocks are anticipated and imaginary ones unanticipated; vice versa if false.
'contributions=' [ true | false ] - Decompose the simulated paths into contributions of individual shocks.
'dbOverlay=' [ true | false | struct ] - Use the function dboverlay to combine the simulated output data with the input database (or a user-supplied database); data both preceding and following the simulation range are appended.
'deviation=' [ true | false ] - Treat input and output data as deviations from the balanced-growth path.
'dTrends=' [ @auto | true | false ] - Add deterministic trends to measurement variables.
'ignoreShocks=' [ true | false ] - Read only initial conditions from the input data, and ignore any shocks within the simulation range.
'nonlinearize=' [ numeric | 0 ] - Number of periods (from the beginning of the simulation range) in which selected equations will be simulated to hold in their original non-linear forms.
'plan=' [ plan ] - Specify a simulation plan to swap the endogeneity and exogeneity of some variables and shocks temporarily, and/or to simulate some of the non-linear equations accurately.
'progress=' [ true | false ] - Display a progress bar in the command window.
'sparseShocks=' [ true | *false* ] - Store anticipated shocks (including endogenized anticipated shocks) in a sparse array.
'solver=' [ 'plain' | @fsolve | @lsqnonlin ] - Solution algorithm; see Description.
'addSstate=' [ true | false ] - Add steady-state levels to simulated paths before evaluating non-linear equations; this option is used only if 'deviation=' true.
'display=' [ true | false | numeric | Inf ] - Report iterations on the screen; if 'display=' N, report every N iterations; if 'display=' Inf, report only the final iteration.
'error=' [ true | false ] - Throw an error whenever a non-linear simulation fails to converge; if false, only a warning is displayed.
'lambda=' [ numeric | 1 ] - Step size (between 0 and 1) for add-factors added to non-linearised equations in every iteration.
'reduceLambda=' [ numeric | 0.5 ] - Reduction factor (between 0 and 1) by which lambda will be multiplied if the non-linear simulation starts to diverge.
'upperBound=' [ numeric | 1.5 ] - Multiple of the all-iteration minimum achieved that triggers a reversion to that iteration and a reduction in lambda.
'maxIter=' [ numeric | 100 ] - Maximum number of iterations.
'tolerance=' [ numeric | 1e-5 ] - Convergence tolerance.
'optimSet=' [ cell | struct ] - Optimization Toolbox options.
Time series in the output database, S, are defined on the simulation range, Range, plus include all necessary initial conditions, i.e. lags of variables that occur in the model code. You can use the option 'dbOverlay=' to combine the output database with the input database (i.e. to include a longer history of data in the simulated series).
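As an illustrative sketch of a typical call (the model object m, the input database d, and the quarterly range are hypothetical):

```matlab
% Simulate over 2000Q1-2010Q4; 'dbOverlay=' true splices the simulated
% series back onto the input database so the pre-simulation history is kept.
range = qq(2000,1) : qq(2010,4);   % qq() builds quarterly date codes in IRIS
s = simulate(m, d, range, 'dbOverlay=', true);
```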
By default, both the input database, D, and the output database, S, are in full levels, and the simulated paths for measurement variables include the effect of deterministic trends, including possibly exogenous variables. This default behavior can be changed through the options 'deviation=' and 'dTrends='.
The default value for 'deviation=' is false. If set to true, the input database is expected to contain data in the form of deviations from steady-state levels or paths. For ordinary variables (i.e. variables whose log status is false), this is $x_t - \bar x_t$, meaning that 0 indicates the variable is at its steady state and e.g. 2 indicates the variable exceeds its steady state by 2. For log variables (i.e. variables whose log status is true), it is $x_t / \bar x_t$, meaning that 1 indicates the variable is at its steady state and e.g. 1.05 indicates the variable is 5 per cent above its steady state.
The default value for 'dTrends=' is @auto. This means that its behavior depends on the option 'deviation='. If 'deviation=' false, deterministic trends are added to measurement variables, unless you manually override this behavior by setting 'dTrends=' false. On the other hand, if 'deviation=' true, deterministic trends are not added to measurement variables, unless you manually override this behavior by setting 'dTrends=' true.
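For instance, a deviation-form simulation might be set up as follows; the shock name eps is purely illustrative:

```matlab
% Build a zero-deviation input database, add one shock, and simulate
% in deviations from steady state (no deterministic trends added).
d = zerodb(m, range);               % all variables at zero deviation
d.eps(range(1)) = 0.01;             % hypothetical shock 'eps' in period 1
s = simulate(m, d, range, 'deviation=', true);
```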
Use the option 'contributions=' true to request the contributions of shocks to the simulated path of each variable; this option cannot be used in models with multiple alternative parameterisations or with multiple input data sets.
The output database, S, contains Ne+2 columns for each variable, where Ne is the number of shocks in the model:
columns 1...Ne are the contributions of the Ne individual shocks to the respective variable;
column Ne+1 is the contribution of the initial condition, the constant, and deterministic trends, including possibly exogenous variables;
column Ne+2 is the contribution of non-linearities in non-linear simulations (it is always zero otherwise).
The contributions are additive for ordinary variables (i.e. variables whose log status is false), and multiplicative for log variables (i.e. variables whose log status is true). In other words, if S is the output database from a simulation with 'contributions=' true, X is an ordinary variable, and Z is a log variable, then
sum(S.X,2)
(i.e. the sum of all Ne+2 contributions in each period, with summation running along the 2nd dimension) reproduces the final simulated path for the variable X, whereas
prod(S.Z,2)
(i.e. the product of all Ne+2 contributions) reproduces the final simulated path for the variable Z.
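These identities can be checked directly on the output database; X and Z are hypothetical variable names:

```matlab
s = simulate(m, d, range, 'contributions=', true);
% Ordinary variable: the Ne+2 contribution columns sum to the total path.
xTotal = sum(s.X, 2);
% Log variable: the contribution columns multiply to the total path.
zTotal = prod(s.Z, 2);
```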
If you simulate a model with N parameterisations and the input database contains K data sets (i.e. each variable is a time series with K columns), then the following happens:
The model will be simulated a total of P = max(N,K) times. This means that each variable in the output database will have P columns.
The 1st parameterisation will be simulated using the 1st data set, the 2nd parameterisation using the 2nd data set, and so on, until you reach either the last parameterisation or the last data set, i.e. min(N,K). From that point on, the last parameterisation or the last data set will simply be repeated (re-used) in the remaining simulations.
Put formally, the I-th column in the output database, where I = 1, ..., P, is a simulation of the min(I,N)-th model parameterisation using the min(I,K)-th input data set.
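The column-matching rule above can be sketched in plain MATLAB, assuming for illustration 3 parameterisations and 2 data sets:

```matlab
N = 3;  K = 2;                      % parameterisations, input data sets
P = max(N, K);                      % number of simulations/output columns
for I = 1 : P
    fprintf('output column %d: parameterisation %d, data set %d\n', ...
        I, min(I, N), min(I, K));
end
```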
In nonlinear simulations, the solver tries to find add-factors to nonlinear equations (i.e. equations with =#
instead of the equal sign in the model file) in the first-order solution such that the original nonlinear equations hold for simulated trajectories (with expectations replaced with actual leads).
Two numerical approaches are available, controlled by the option 'solver=':
'plain' - a fast but less robust method;
@fsolve, @lsqnonlin - standard Optimization Toolbox routines, slower but likely to converge for a wider variety of simulations.
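A sketch of a non-linear simulation call combining several of the options above (the period count, tolerance, and reporting interval are arbitrary choices, not defaults):

```matlab
% Enforce the non-linear equations in their exact form for the first
% 8 simulation periods, reporting every 10th iteration of the solver.
s = simulate(m, d, range, 'nonlinearize=', 8, 'solver=', 'plain', ...
    'maxIter=', 200, 'tolerance=', 1e-7, 'display=', 10);
```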