Circuit simulation

Lucipy ships two approaches to simulate a circuit configuration, both of which solve the circuit as a first order differential equation system using idealized math elements.

  • The method Routing.to_sympy() provides an export to a system of equations written symbolically, translating the analog circuit to its idealized math counterpart. This system can then be solved analytically or numerically.

  • The class Simulation does something similar but without making use of Sympy, instead directly working on the UCI matrix. It computes the right hand side function by loop unrolling the LUCIDAC multipliers.

Both approaches are currently limited to the LUCIDAC standard configuration of Math block elements. This section concentrates on the approach provided by Simulation. Further approaches are discussed in the section Alternative simulation approaches.

Frequent issues when using the Simulator

Misunderstanding Scipy’s solve_ivp

Note that scipy’s solve_ivp might lead to surprising results if you have never used it before. To avoid misinterpreting the results, keep a few rules in mind:

First of all, the ODE solver can fail. Typical examples are ill-configured circuits/equations where infinite values and NaNs enter the simulation domain. Therefore, before looking into other errors, first check the status field in the result:

res = Simulation(circuit).solve_ivp(t_final)
assert res.status == 0, "ODE solver failed"
assert res.t[-1] == t_final

Second, solve_ivp determines the time step sizes adaptively. Normally this is very clever and thus produces only a small number of support points within the domain. Therefore, you are urged to use the res.t field to see the times at which the solution vector res.y is defined. In particular, a frequent mistake is a matplotlib command such as

plot(res.y[0])

which will do linear interpolation between the few support points. In problems where solve_ivp could optimize a lot, this will yield very misleading results. Instead, plot at least with

plot(res.t, res.y[0], "o-")

but ideally, you want to use the dense_output interpolation property of the solve_ivp code. Consider this example:

from lucipy import Circuit, Simulation
import matplotlib.pyplot as plt
import numpy as np

c = Circuit()
i0, i1, i2 = c.ints(3)
cnst = c.const()
c.connect(cnst, i0, weight=-1)
c.connect(i0, i1, weight=-2)
c.connect(i1, i2, weight=-3)

res = Simulation(c).solve_ivp(1, dense_output=True)

p0 = plt.plot(res.t, res.y[0], "o--", label="$t$")
p1 = plt.plot(res.t, res.y[1], "o--", label="$t^2$")
p2 = plt.plot(res.t, res.y[2], "o--", label="$t^3$")

t = np.linspace(0, 1)  # a "denser" array of times
densey = res.sol(t) # interpolated solution on denser time

plt.plot(t, densey[0], color=p0[0].get_color())
plt.plot(t, densey[1], color=p1[0].get_color())
plt.plot(t, densey[2], color=p2[0].get_color())

plt.title("Computing polynoms with LUCIDAC")
plt.xlabel("Simulation time")
plt.ylabel("Analog units")
plt.legend()

[Figure: plot produced by the script above (simulation-1.png)]

This little demonstration computes a few powers of the linear curve \(f(t)=t\) by chaining a few integrators. In this extreme example, each of the three signals shows the naively interpolated solution (dashed line) on the few support points (dots), which is completely different from the actual solution (solid lines).

Getting access to further system properties

The simulator always keeps all 8 integrators as the system state. That means the solution vector y in the object returned by solve_ivp is always of shape (8, num_solution_points).

It can be tempting to index this array by the computing elements defined before, as in

circuit = Circuit()
i = circuit.int()
m = circuit.mul()

# actual circuit skipped in this example

res = Simulation(circuit).solve_ivp(some_final_time)
evolution_for_i = res.y[i.id] # this works
evolution_for_m = res.y[m.id] # this does NOT work!

Note how i.id resolves to an index which “by coincidence” lies within the range [0,8) and is therefore valid both for addressing within the MIntBlock and for indexing the solution vector.

However, m.id also resolves to such an index, which is meaningless when applied to the MIntBlock! Since the unused integrators in the simulation have zero input, their output is always zero (0.0), and one might thus wrongly conclude that the multiplier yields zero output, which is a misinterpretation!

The correct way to get access to the multipliers is the Mul_out() method, used as follows:

circuit = Circuit()
i = circuit.int()
m = circuit.mul()

# actual circuit skipped in this example

sim = Simulation(circuit)
res = sim.solve_ivp(some_final_time)
evolution_for_i = res.y[i.id] # this works
evolution_for_m = [ sim.Mul_out(ryt)[m.id] for ryt in res.y.T ] # this works

Similar mapping methods exist to obtain further computer state derived from the integrator state, such as the following (a usage sketch is given after the list):

  • adc_values(): Obtain the output of the eight ADCs.

  • acl_out_values(): Obtain the output at the front panel (ACL_OUT).

  • mblocks_output(): Obtain the output of both MBlocks, i.e. the integrator block and the multiplier block, in a single array.
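
To obtain such derived quantities over a whole simulation run, every solution point can be mapped through these methods. A minimal sketch, assuming the circuit configuration defines ADC channels (otherwise adc_values() raises an exception):

sim = Simulation(circuit)
res = sim.solve_ivp(some_final_time)

# each column of res.y is one system state, i.e. the 8 integrator outputs
adc_evolution    = [ sim.adc_values(state)     for state in res.y.T ]
acl_evolution    = [ sim.acl_out_values(state) for state in res.y.T ]
mblock_evolution = [ sim.mblocks_output(state) for state in res.y.T ]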

Note that if you decide to use the Emulation API, you will only ever get access to the ADC values, the same way as on a real LUCIDAC.

Guiding principle of this simulator

This Simulator adopts the following column order convention:

M0 = Integrator Block (8 Integrators)
M1 = Multiplier Block (4 Multipliers, 4 identity elements)

The basic idea is to relate the standard first order differential equation \(\dot{\vec x} = \vec f(\vec x)\) with the LUCIDAC system matrix \(M = UCI\) that relates Math-block inputs with Math-block outputs. We cut the circuit at the analog integrators and identify \(\dot x^{out} = f(x^{in})\). Diagrammatically,

+---> dot x -->[  MInt ]--> x ---+
|                                |
|                                |
+---------...-[ UCI Matrix ]<----+

This feedback network is linearized as in

ic = state^OUT -> U C I -> state^IN -> M -> state^OUT -> ...

In sloppy words this means that \(f := M~x^{in}\). However, it is not as simple as that, because the LUCIDAC also contains non-integrating compute elements, in particular the multipliers. We therefore split the matrix \(M \in \mathbb{R}^{16\times 16}\) into four smaller ones \(A, B, C, D \in \mathbb{R}^{8 \times 8}\) as in

\[\begin{split}\begin{pmatrix} I^{in} \\ M^{in} \end{pmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{pmatrix} I^{out} \\ M^{out} \end{pmatrix}\end{split}\]

Note how this notation maps the implicit summing of the UCI matrix onto the summing property of a matrix multiplication.
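
In code, this splitting is straightforward. A minimal numpy sketch, assuming to_dense_matrix() returns the \(16 \times 16\) system matrix in the column order stated above (integrators first, multipliers second):

import numpy as np

M = np.array(circuit.to_dense_matrix())   # 16x16 UCI system matrix
A, B = M[:8, :8], M[:8, 8:]               # rows feeding the integrator inputs
C, D = M[8:, :8], M[8:, 8:]               # rows feeding the multiplier inputs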

We can now easily separate the actual state vector variables \(\vec x := I\) from the derived variables \(M\). This is done by loop unrolling, which means computing the effect of the Mul block while evaluating \(f\).

First, let us write out the vectors

\[\begin{split}\begin{aligned} M^{in} &= \left( M^{in}_{0a}, M^{in}_{0b}, M^{in}_{1a}, M^{in}_{1b}, \dots, M^{in}_{3b} \right) \\ M^{out} &= \left( M^{out}_0, M^{out}_1, M^{out}_2, M^{out}_3, M^{in}_{0a}, M^{in}_{0b}, M^{in}_{1a}, M^{in}_{1b} \right) \\ I^{in} &= \left( I^{in}_0, \dots, I^{in}_7 \right) \\ I^{out} &= \left( I^{out}_0, \dots, I^{out}_7 \right) \end{aligned}\end{split}\]

This is a definition for REV0 where the superfluous Math block outputs are used for identity elements.

The algorithm is as follows: The set of equations is written out as

\[\begin{split}\begin{aligned} I^{in}_i &= A_{ij} ~ I^{out}_j + B_{ij} ~ M^{out}_{j} \quad&&\text{(eq I)} \\ M^{in}_i &= C_{ij} ~ I^{out}_j + D_{ij} ~ M^{out}_{j} \quad&&\text{(eq II)} \end{aligned}\end{split}\]

One then computes (eq II), i.e. \(M^{in} = g(I^{out})\), with an initial guess \(M^{out}=0\) and iteratively reinserts the solution for \(M^{out}\). (eq I) then boils down to \(I^{in} = f(I^{out})\), which is exactly the right hand side we are looking for.
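
Schematically, this iteration could look as follows. This is an illustrative sketch, not the actual Simulation implementation; mul() implements the M1 block exactly as defined by the vectors above (four products followed by four identity elements):

import numpy as np

def mul(Min):
    # M1 block: 4 multipliers, then 4 identity elements passing through Min[0:4]
    products = [ Min[2*i] * Min[2*i + 1] for i in range(4) ]
    return np.array(products + list(Min[0:4]))

def unrolled_rhs(A, B, C, D, Iout, iterations=4):
    Mout = np.zeros(8)                 # initial guess
    for _ in range(iterations):
        Min  = C @ Iout + D @ Mout     # (eq II)
        Mout = mul(Min)                # reinsert the multiplier outputs
    return A @ Iout + B @ Mout         # (eq I): the right hand side f(I^out)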

Alternative simulation approaches

There are certainly many other ways one could simulate the LUCIDAC. A few approaches are presented here with their advantages and disadvantages.

Graph/Netlist simulation

The preferred way of electronics simulation is to set up the actual netlist graph. Given that the UCI matrix is basically the adjacency matrix of this graph, this is not too hard. One can then linearize this graph, turning it into a forest of compute trees, each leading to the computation of a single state variable. Such a linearization can happen at compile time or at evaluation/run time. The disadvantage of this approach is that the actual matrix structure of the LUCIDAC is largely lost, in particular the implicit summing structure.

The tensorial simulation approach

Loop unrolling at compile time results in some tensorial structure (Einstein sum convention applies)

\[\begin{split}\begin{aligned} I^{in}_i = &\phantom{+} D_{ij} I^{out}_j \\ &+ E_{ijk} I^{out}_j I^{out}_k \\ &+ F_{ijkl} I^{out}_j I^{out}_k I^{out}_l \\ &+ G_{ijklm} I^{out}_j I^{out}_k I^{out}_l I^{out}_m \\ &+ H_{ijklmn} I^{out}_j I^{out}_k I^{out}_l I^{out}_m I^{out}_n \end{aligned}\end{split}\]

Computing E, F, G from A, B, C is definitely possible and would allow for a “fully closed” (yet sparse matrix) description of the LUCIDAC. In such a formulation

  • \(D\) collects all linear terms

  • \(E\) collects all quadratic terms (think of \(\dot x = xy\)), i.e. computations that require one multiplier.

  • \(F\) collects all terms requiring two multipliers. Think of \(\dot x = m_2\) and \(m_2 = x m_1\) and \(m_1 = x x\).

  • \(G\) collects all terms requiring three multipliers

  • \(H\) collects all terms requiring all four multipliers of the system.

Although \(H\) is a tensor of rank 6, there are only a handful of possible realizations of this tensor, given the four multipliers of the system. That means: a lot of indices for actually performing very little work. Beyond a theoretical study, such a “compilation” step has little advantage. For sure, in the numpy model, setting up large sparse matrices before doing a “hot” computation might save some time. However, the systems for LUCIDAC are nevertheless so small that no digital computer will have a hard time simulating them, even in vanilla Python.
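
For concreteness, evaluating the first two terms of such a formulation in numpy could look like this (purely illustrative; the coupling tensors are kept dense and zero-filled here):

import numpy as np

D = np.zeros((8, 8))        # linear couplings
E = np.zeros((8, 8, 8))     # quadratic couplings (one multiplier involved)
I_out = np.zeros(8)

I_in = np.einsum('ij,j->i', D, I_out) + np.einsum('ijk,j,k->i', E, I_out, I_out)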

Including imperfection

A first option to model the non-idealities of analog computing hardware is to introduce transfer functions which model what the computing elements are doing. We have a Matlab/Simulink simulator available which uses this kind of modeling. More realistic models are available with SPICE but require describing both the reconfigurable hardware and its configuration within SPICE. We will provide software to generate these files soon.

API Reference

class lucipy.simulator.Simulation(circuit, realtime=False)[source]

A simulator for the LUCIDAC. Please refer to the documentation for a theoretical and practical introduction.

The Simulator has an API which is suitable for getting a LUCIDAC circuit solved in Python as easily as possible. The usage can boil down to something like integrator_data = Simulation(circuit).solve_ivp(t_final). In particular, the Simulator allows you to inspect the LUCIDAC and its internal states as much as you want.

If you want to use an API which is similar to the LUCIDAC Hybrid Controller, use the Emulation instead. It uses the Simulation under the hood but does not expose it; instead it just returns the requested ADC sampling values when used in a way like e = Emulation(); e.set_circuit(circuit.generate()); data = e.start_run(...). This way, you can steer for instance the sampling times. In contrast, if not requested explicitly with suitable ADC settings, it won’t return the full integrator state and also won’t provide any support for ACL_IN/OUT or further device inspection, given the limited JSON_API of the LUCIDAC. See there for a list of limitations.

Important properties and limitations:

  • Currently only understands mblocks M0 = Int and M1 = Mul in REV1-Hardware fashion

  • Unrolls Mul blocks at evaluation time, which is slow

  • Note that the system state is held purely in the integrators. The simulation always evolves all 8 integrators and returns all 8 integrators. It is up to the user to use the provided mapping functions to derive the necessary output.

  • Note that by default k0 is implemented in a way that 1 time unit = 10_000 and k0 slow results are thus divided by 100 in time. Use realtime to measure simulation time in seconds instead.

  • Does not implement anything calibration-related, since it assumes ideal computing elements anyway

Parameters
  • circuit – A circuits.Circuit object. We basically only need it in order to make use of the to_dense_matrix() call.

  • realtime – If set, the simulation time unit will be 1 sec. If not, k0=10_000 equals time unit 1. That means, in this case, time is measured in multiples of 10us. Such a time unit can be more natural for applications. You can set the time factor later by overwriting the int_factor property.

Note, here is a tip to display the big matrices in one line:

>>> import numpy as np
>>> np.set_printoptions(edgeitems=30, linewidth=1000, suppress=True)
Mul_out(Iout, t=0)[source]

Determine Min from Iout, the ‘loop unrolling’ way.

Parameters
  • Iout – Output of MathInt-Block. This is a list with 8 floats. This is also the current system state.

  • t – Simulation time, as in rhs(). Is only needed to be passed to the acl_in callback.

Returns

Mout, the output of the MathMul-Block. Numpy array of shape (8,)

nonzero()[source]

Returns the number of nonzero entries in each 2x2 block matrix. This makes it easy to count the different types of connections in a circuit (like INT->INT, MUL->INT, INT->MUL, MUL->MUL).
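
For instance, an illustrative call (the exact layout of the returned counts follows the 2x2 block structure described above):

sim = Simulation(circuit)
print(sim.nonzero())   # nonzero entries per block, e.g. how many INT->INT vs. MUL->INT routes exist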

rhs(t, state, clip=False)[source]

Evaluates the Right Hand Side (rhs) as in d/dt state=rhs(t,state)
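
For example, the right hand side can be probed directly without running a full simulation (a sketch; the state vector holds the 8 integrator outputs):

import numpy as np

sim = Simulation(circuit)
state = np.zeros(8)           # the 8 integrator outputs
print(sim.rhs(0.0, state))    # time derivative of the state at t=0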

mblocks_output(Iout, Mout=None)[source]

Returns the outputs of both Math blocks as one continuous array of length 16 (indices 0 to 15).

Current limitation (as in the overall Simulation): M0 and M1 positions are hardcoded.

Parameters
  • Iout – The system state, i.e. the integrators as in the rhs()

  • Mout – The output of the MMul-block. If not given, it is derived from the system state.

adc_values(state, adc_channels=None)[source]

Return the adc values for a given rhs state and requested ADC channels.

Parameters
  • state – The system state, i.e. the integrators as in the rhs()

  • adc_channels – The ADC matrix configuration, i.e. the crosslanes which shall be mapped onto the ADC channels. If none is given, the one from the configuration is used. If the configuration has no ADC channels given, an exception is raised.

acl_out_values(state)[source]

Returns the ACL out values (i.e. the front panel outputs of LUCIDAC) for a given system state. The function always returns the full 8 ACL lanes as a numpy array of shape (8,).

Note that ACL_OUT is always connected in LUCIDAC and acts independently of ACL_IN. That means you can probe ACL_OUT without having to replace this signal with the equivalent ACL_IN circuit.

set_acl_in(callback=None)[source]

Feed in external signals into the simulator by feeding via the Frontpanel.

This function expects callback to be a function with signature up_to_eight_acl_in_values = callback(simulation_instance, t, state), i.e. a signature similar to rhs(). This way, you have full control over whether you restrict yourself to what a real LUCIDAC ACL_IN/OUT can do by only accessing acl_out_values(), or whether you do something a real LUCIDAC cannot do, exploiting the overall inner states.

If you want to remove the ACL_IN callback function, call the method with argument None.

Warning

This is merely a stub, the ACL Input is NOT YET fully implemented.
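
Nevertheless, a sketch of the expected callback shape (the sine signal is just an illustration):

import numpy as np

def external_input(sim, t, state):
    # feed a sine onto the first ACL_IN lane, leave the other lanes at zero
    return [ np.sin(t), 0, 0, 0, 0, 0, 0, 0 ]

sim = Simulation(circuit)
sim.set_acl_in(external_input)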

solve_ivp(t_final, clip=False, ics=None, ics_sign=-1, **kwargs_for_solve_ivp)[source]

Solves the initial value problem defined by the LUCIDAC Circuit.

Good-to-know options for solve_ivp:

Parameters
  • t_final – Final time to run simulation to. Start time is always 0. Units depend on realtime=True/False in constructor.

  • ics – Initial Conditions to start with. If none given, the MIntBlock configuration is used. If given, a list with 0 <= size <= 8 has to be provided.

  • clip – Whether to carry out bounded-in-bounded-out value clipping as a real analog computer would do

  • dense_output – value True allows for interpolating on res.sol(linspace(...))

  • method – value LSODA is good for stiff problems

  • t_eval – In order to get a solution at equidistant times, you can for instance pass this option np.linspace(0, t_final, num=500)

  • ics_sign – The overall sign for the integrator initial conditions. Since the real LUCIDAC (REV1) has negating integrators, as classical analog integrators do, and the numerical simulation reproduces this sign, a value of -1 is correct here. Better not touch it in order to remain compatible with the hardware.

Note

Clipping can have a weird effect in (very) linear problems, because the underlying ODE solver will be smart, evaluate the RHS only seldom, and thus clipping does not happen fast enough, producing artefacts in the result. Therefore use it with care or make sure to use very small time steps.

Quick usage example:

>>> from lucipy import Circuit, Simulation
>>> import matplotlib.pyplot as plt
>>> e = Circuit()
>>> ramp  = e.int(ic = -1)  # makes an Integrator
>>> const = e.const()       # makes a  Constant giver
>>> e.connect(const, ramp, weight = 0.1)
Route(uin=14, lane=16, coeff=0.1, iout=0)
>>> result = Simulation(e).solve_ivp(500)
>>> ramp_result = result.y[0] # unpack the first integrator output
>>> plt.plot(result.t, ramp_result) # plot against solution times     

If you are interested in the output which you can actually probe on LUCIDAC, i.e. the ADCs or the front panel outputs (ACLs), you can map the resulting state vector (at any solution time) through adc_values() and acl_out_values().