DDA C++ code generator

C++ code generation is a major feature of the PyDDA code. The generated code is built from a string template and has the following features:

  • Standalone code: no further dependencies beyond the standard libc, libm and the STL.

  • Lightweight object orientation: classes (structures) hold the different variables (basically an AoS instead of an SoA approach), and little C++ templating is used.

  • Organization into a few functions, which makes it possible to edit the generated C++ code manually afterwards without going mad.

  • CSV or binary output, or no output at all. Output always goes to stdout; informational messages always go to stderr.

  • Debugging facilities built right into the code, such as setting NaNs and aborting on floating-point exceptions.

  • Runtime arguments via the command line (argv): parsing and passing.


C++17 is required for building the C++ code, because we use variadic templates.

So far, the C++ runtime arguments support:

  • Simulation steering: selecting the number of integration iterations and the frequency of dumping the solution.

  • Query-based plotting: selecting which variables are output at runtime.

  • Further flags and numeric arguments, as well as a useful --help message.

  • Initial data and time step sizes can also be chosen at run time.

  • Introspection capabilities: for instance, one can ask the binary which evolution quantities are built in.

Basically, the equation structure is the only thing left hardcoded at C++ code generation time.

dda.cpp_exporter.to_cpp(state, number_precision=inf, constexpr_consts=True)[source]

Given a state, returns standalone C++ code as a string.

This code can be written to a file, compiled with a recent C++ compiler and then solves the differential equation system when executed.

The algorithm is basically:

  1. Linearize the state (this step can raise an exception)

  2. Determine all the C++ template fields

  3. Return the filled-out template

We plan to add logging for non-fatal information about the C++ code quality (see TODOs in the code).

The argument number_precision currently has no effect.

dda.cpp_exporter.compile(code, c_filename='generated.cc', compiler='g++', compiler_output='./a.out', options='--std=c++17 -Wall')[source]

Small helper function to compile C++ code from Python.

Writes the string code to c_filename and runs the compiler on it afterwards. Raises an error if compilation fails.

dda.cpp_exporter.runproc(command, decode=False)[source]

Helper to run an external command and slurp its output into a binary array.



dda.cpp_exporter.run(command='./a.out', binary=False, arguments={}, fields_to_export=[])[source]

Small helper function to execute code generated by this module.

Runs command on the command line, with the given dict arguments passed in --foo=bar fashion and fields_to_export appended as a sequential argument list. If fields_to_export is not given, command --list_all_variables is run to query all default fields.

Pipes stdout to a string, which is returned. Stderr is passed through. The function returns once the binary has finished, or raises in case of error.

If you set binary=True, raw data instead of CSV is passed between the spawned command and this Python program. This decreases the runtime significantly if you write a lot of data (since the CSV generation and parsing overhead is gone).

Example usage:

>>> from dda import *
>>> state = State()
>>> state["x"] = Symbol("int", Symbol("neg", state["x"]), 0.2, 1)
>>> state
State({'x': int(neg(x), 0.2, 1)})
>>> cpp_code = to_cpp(state)
>>> print(cpp_code)  
// This code was generated by PyDDA.
#include <cmath> /* don't forget -lm for linking */
#include <cfenv> /* for feraisexcept and friends */
#include <limits> /* for signaling NAN */
#include <vector>
>>> compile(cpp_code, compiler_output="foo.exe")
>>> res = run("./foo.exe", arguments={'max_iterations':10}, fields_to_export=['x']) 
Running: ./foo.exe --max_iterations=10 x
>>> print(res) 
dda.cpp_exporter.numpy_read(stdout, binary=False, return_ndarray=True, return_recarray=False, fields_to_export=[])[source]

Postprocessing to fill the gap between the C++ output and a suitable numpy array. In order to do so, this function has to know whether your output was binary or text. Furthermore, you need to tell it which fields you had; you can use list_all_variables() for that.

This option only makes real sense if you set return_ndarray=True (the default). Note that if you do not pass the fields_to_export option but set binary=True, the returned array is currently one-dimensional (a warning will be printed). If you would like even more structured data to be returned, turn on return_recarray=True. It will return a numpy.recarray, the same data type you get when reading CSV data with column headers. return_recarray=True implies return_ndarray=True.

class dda.cpp_exporter.Solver(dda_state_or_code, *runtime_fields_to_export, constexpr_consts=True, **runtime_arguments)[source]

Syntactic sugar for a more concise OOP feeling. Instead of calling export(to="C"), compile() and run(), you can just write Solver(state, **runtime_arguments). This object will even clean up after running.

run(*runtime_fields_to_export, binary=False, cleanup=True, **runtime_arguments)[source]

Chaining and syntactic sugar for delayed argument setting/overwriting.


Return run results as a np.ndarray (i.e. like a table without headers, typically 2D data)


Return run results as a np.recarray (i.e. like CSV table with named headers)