General Workflow

Set up the environment

AutoEIS relies on the EquivalentCircuits.jl package to perform the EIS analysis. Since this package is written in Julia rather than Python, it needs to be installed separately. AutoEIS ships with a julia_helpers module that installs and manages the Julia dependencies with minimal user interaction. For convenience, installing Julia and the required packages happens automatically the first time you import autoeis. If you already have Julia installed (and discoverable on the system PATH), it will be detected and used; otherwise, it will be installed automatically.
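As a quick sanity check (not part of AutoEIS, just the Python standard library), you can verify whether a Julia binary is already discoverable on your PATH:

import shutil

# If this prints a path, AutoEIS will reuse that Julia installation;
# otherwise, Julia gets installed automatically on the first import of autoeis.
print(shutil.which("julia") or "No Julia binary found on PATH")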

Note

If this is the first time you’re importing AutoEIS, executing the next cell will take a while and produce a lot of log output. Re-run the cell to clear the logs.

[1]:
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
import numpyro
import pandas as pd
import seaborn as sns
from IPython.display import display

import autoeis as ae

ae.visualization.set_plot_style()

# Set this to True if you're running the notebook locally
interactive = True

Load EIS data

Once the environment is set up, we can load the EIS data. You can use pyimpspec (https://vyrjana.github.io/pyimpspec/guide_data.html) to load EIS data from a variety of popular formats. Ultimately, AutoEIS requires two arrays: Z and freq. Z is the complex impedance array and freq is the frequency array. Both arrays must be 1D and have the same length. The impedance must be in Ohms and the frequency in Hz.

For convenience, we provide a function load_test_dataset() in autoeis.io to load a test dataset. The function returns a tuple of freq and Z.

[2]:
freq, Z = ae.io.load_test_dataset()

Note

If your EIS data is stored as text, you can load it using numpy.loadtxt. See NumPy’s documentation for more details.
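For instance, assuming a hypothetical file impedance.txt with three columns (frequency in Hz, and the real and imaginary parts of the impedance in Ohms), a minimal sketch would be:

import numpy as np

# "impedance.txt" is a hypothetical three-column file: freq [Hz], Re(Z) [Ohm], Im(Z) [Ohm]
freq, Zreal, Zimag = np.loadtxt("impedance.txt", unpack=True)
Z = Zreal + 1j * Zimag  # complex impedance array expected by AutoEIS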

Now let’s plot the data using AutoEIS’s built-in plotting function. It takes the frequency and impedance arrays as inputs and plots the impedance spectrum as both a Nyquist plot and a Bode plot. All plotting functions in AutoEIS can either be called directly or be given an Axes object to control where the plot is drawn.

[3]:
fig, ax = ae.visualization.plot_impedance_combo(freq, Z)

# Alternatively, you can manually create a subplot and pass it to the plotting function
# Make sure to create two columns for the two plots (Nyquist and Bode)
# fig, ax = plt.subplots(ncols=2)
# ae.visualization.plot_impedance_combo(freq, Z, ax=ax)
[Figure: Nyquist and Bode plots of the measured impedance data]

Preprocess impedance data

Before performing the EIS analysis, we need to preprocess the impedance data, mainly to remove outliers. AutoEIS provides a function for this. As part of the preprocessing, impedance measurements with a positive imaginary part are removed, and the remaining data are filtered using linear KK (Kramers-Kronig) validation. The function returns the filtered frequency and impedance arrays (and, with return_aux=True, auxiliary data such as the KK residuals).

[4]:
freq, Z, aux = ae.core.preprocess_impedance_data(freq, Z, tol_linKK=5e-2, return_aux=True)
res_real, res_imag = aux.res.real, aux.res.imag

fig, ax = ae.visualization.plot_linKK_residuals(freq, res_real, res_imag)
[14:43:27] WARNING  More than 10% of the data was filtered out.
[Figure: Linear KK residuals (real and imaginary parts) versus frequency]

Generate candidate equivalent circuits

In this stage, AutoEIS generates a list of candidate equivalent circuits using a customized genetic algorithm (via the EquivalentCircuits.jl package). The function takes the filtered frequency and impedance arrays as inputs and returns a list of candidate circuits. A few optional arguments control the number of candidate circuits and the allowed circuit elements. By default, 10 candidate circuits are generated, and the allowed elements are resistors, capacitors, constant phase elements, and inductors. The function runs in parallel by default; you can turn this off by setting parallel=False.

Note

Since running the genetic algorithm can be time-consuming, we will use a pre-generated list of candidate circuits in this demo to get you started quickly. If you want to generate the candidate circuits yourself, set use_pregenerated_circuits=False in the cell below.

[5]:
use_pregenerated_circuits = True

if use_pregenerated_circuits:
    circuits_unfiltered = ae.io.load_test_circuits()
else:
    kwargs = {
        "iters": 24,
        "complexity": 12,
        "population_size": 100,
        "generations": 30,
        "tol": 1e-2,
        "parallel": True
    }
    circuits_unfiltered = ae.core.generate_equivalent_circuits(freq, Z, **kwargs)
    # Since generating circuits is expensive, let's save the results to a CSV file
    circuits_unfiltered.to_csv("circuits_unfiltered.csv", index=False)
    # To load from file, uncomment the next 2 lines (line 2 is to convert str -> Python objects)
    # circuits_unfiltered = pd.read_csv("circuits_unfiltered.csv")
    # circuits_unfiltered["Parameters"] = circuits_unfiltered["Parameters"].apply(eval)

circuits_unfiltered
[5]:
circuitstring Parameters
0 [P1-R2,R3] {'P1w': 1.986535009696659e-06, 'P1n': 0.937307...
1 [P1,R2]-R3 {'P1w': 1.986654419964975e-06, 'P1n': 0.937307...
2 P1-[L2,R3]-[P4,R5] {'P1w': 0.007186637598734201, 'P1n': 5.5757649...
3 [R1,L2]-[P3,R4] {'R1': 575681380.7164713, 'L2': 2.552295283120...
4 [R1,[R2,R3]-P4] {'R1': 3340406.185539958, 'R2': 891728744.7719...
... ... ...
113 [P1-[L2,P3],[[R4-C5,L6-P7],R8]] {'P1w': 2.2534535379254496e-06, 'P1n': 0.87445...
114 R1-[P2,[R3,P4]] {'R1': 142.06322312672248, 'P2w': 1.7374203222...
115 [R1,P2]-R3 {'R1': 4629850.951700082, 'P2w': 1.98665442152...
116 [P1,R2]-R3 {'P1w': 1.986654419725023e-06, 'P1n': 0.937307...
117 R1-[P2,R3] {'R1': 139.14713051260458, 'P2w': 1.9866544182...

118 rows × 2 columns

Filter candidate equivalent circuits

Note that all the circuits generated by the GEP process probably fit the data well, but they may not be physically meaningful. Therefore, we need to filter them to find the most plausible ones. AutoEIS uses “statistical plausibility” as a proxy for “physical plausibility”, and provides a function that filters the candidate circuits based on a set of heuristics (read our paper for the exact steps and the supporting rationale).

[6]:
circuits = ae.core.filter_implausible_circuits(circuits_unfiltered)
# Let's save the filtered circuits to a CSV file as well
circuits.to_csv("circuits_filtered.csv", index=False)
# To load from file, uncomment the next 2 lines (line 2 is to convert str -> Python objects)
# circuits = pd.read_csv("circuits_filtered.csv")
# circuits["Parameters"] = circuits["Parameters"].apply(eval)
circuits
[6]:
circuitstring Parameters
0 [P1,R2]-R3 {'P1w': 1.986654419964975e-06, 'P1n': 0.937307...
1 R1-P2-[P3,R4] {'R1': 130.21245496329337, 'P2w': 2.7265188132...
2 P1-R2-[P3,R4]-[L5,R6] {'P1w': 3.060047763952672e-05, 'P1n': 0.865667...
3 R1-[R2-L3,P4] {'R1': 139.14713070316256, 'R2': 4629850.95146...
4 R1-[R2-[R3,P4],[L5,R6]-P7] {'R1': 141.23906880866465, 'R2': 2757765.10158...
5 P1-[P2,R3]-L4-R5 {'P1w': 3.060047791227673e-05, 'P1n': 0.865667...
6 [P1,P2]-[P3,R4]-R5 {'P1w': 1.4706292096441107e-06, 'P1n': 0.34376...
7 R1-[R2,P3-R4] {'R1': 35.07220572671824, 'R2': 4629955.027337...
8 P1-R2-[P3,R4]-R5 {'P1w': 2.7265188153219155e-06, 'P1n': 0.86814...
9 L1-[P2,R3]-[R4,L5]-P6-R7 {'L1': 7.336357952082608e-20, 'P2w': 2.0355768...
10 [P1,P2-R3]-R4 {'P1w': 1.8172983928967454e-06, 'P1n': 0.95491...
11 R1-[P2-R3,[L4,R5]-P6] {'R1': 141.24023286281513, 'P2w': 2.2755039323...
12 R1-[R2,P3-L4] {'R1': 139.14713051318796, 'R2': 4629850.95049...
13 [P1,P2-[R3,L4]]-R5 {'P1w': 9.503614832162238, 'P1n': 1.0, 'P2w': ...
14 R1-[P2,[R3,P4]] {'R1': 142.06322312672248, 'P2w': 1.7374203222...

Perform Bayesian inference

Now that we have narrowed down the candidate circuits to a few good ones, we can perform Bayesian inference to find the ones that are statistically most plausible.

[7]:
mcmc_results = ae.core.perform_bayesian_inference(circuits, freq, Z)
mcmcs, status = zip(*mcmc_results)

Visualize results

Now, let’s take a look at the results. perform_bayesian_inference returns a list of (MCMC, status) pairs, one per circuit. Each MCMC object contains all the information about the Bayesian inference, including the posterior samples, the underlying probabilistic model (priors and likelihood), the trace, and the summary statistics; status indicates whether the inference succeeded (0) or failed.

Before we visualize the results, let’s take a look at the summary statistics: the mean, standard deviation, median, and the 5th and 95th percentiles of each parameter’s posterior distribution. The summary statistics are useful for quickly gauging the uncertainty of the parameters.

[8]:
for mcmc, stat, circuit in zip(mcmcs, status, circuits.circuitstring):
    if stat == 0:
        ae.visualization.print_summary_statistics(mcmc, circuit)
                   [P1,R2]-R3, 0/1000 divergences                    
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P1n │ 6.27e-01 │ 1.33e-03 │ 6.27e-01 │ 6.25e-01 │ 6.30e-01 │
│        P1w │ 1.44e-06 │ 4.33e-09 │ 1.44e-06 │ 1.44e-06 │ 1.45e-06 │
│         R2 │ 7.39e+06 │ 2.66e+04 │ 7.39e+06 │ 7.35e+06 │ 7.43e+06 │
│         R3  2.35e+01  2.85e+01  1.37e+01  8.83e-01  8.09e+01 │
│ sigma_imag │ 1.11e+04 │ 7.28e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
│ sigma_real │ 1.11e+04 │ 7.11e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                  R1-P2-[P3,R4], 0/1000 divergences                  
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P2n │ 3.51e-01 │ 6.14e-04 │ 3.51e-01 │ 3.50e-01 │ 3.52e-01 │
│        P2w │ 1.00e-06 │ 2.78e-09 │ 1.00e-06 │ 1.00e-06 │ 1.01e-06 │
│        P3n │ 5.15e-01 │ 2.89e-01 │ 5.14e-01 │ 6.31e-02 │ 9.53e-01 │
│        P3w  1.34e+10  6.63e+10  9.91e+08  2.84e+07  3.74e+10 │
│         R1  2.40e+01  3.15e+01  1.20e+01  5.47e-01  8.10e+01 │
│         R4  2.31e+09  1.70e+10  1.15e+08  2.44e+06  5.16e+09 │
│ sigma_imag │ 1.58e+04 │ 7.70e+01 │ 1.58e+04 │ 1.57e+04 │ 1.59e+04 │
│ sigma_real │ 1.95e+04 │ 8.15e+01 │ 1.95e+04 │ 1.94e+04 │ 1.97e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
              P1-R2-[P3,R4]-[L5,R6], 0/1000 divergences              
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L5  4.77e-19  3.07e-18  3.31e-20  7.40e-22  1.99e-18 │
│        P1n │ 5.55e-01 │ 1.67e-03 │ 5.55e-01 │ 5.52e-01 │ 5.58e-01 │
│        P1w │ 5.92e-06 │ 6.75e-08 │ 5.92e-06 │ 5.82e-06 │ 6.04e-06 │
│        P3n │ 8.94e-01 │ 1.38e-03 │ 8.94e-01 │ 8.92e-01 │ 8.97e-01 │
│        P3w │ 2.41e-06 │ 1.21e-08 │ 2.41e-06 │ 2.39e-06 │ 2.43e-06 │
│         R2  1.52e+01  1.75e+01  8.91e+00  7.71e-01  5.35e+01 │
│         R4 │ 3.21e+06 │ 1.16e+04 │ 3.21e+06 │ 3.19e+06 │ 3.23e+06 │
│         R6  9.39e+09  4.72e+10  8.11e+08  2.37e+07  3.01e+10 │
│ sigma_imag │ 3.50e+03 │ 3.93e+01 │ 3.50e+03 │ 3.44e+03 │ 3.56e+03 │
│ sigma_real │ 5.27e+03 │ 4.20e+01 │ 5.27e+03 │ 5.20e+03 │ 5.34e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                  R1-[R2-L3,P4], 0/1000 divergences                  
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L3  7.06e-08  5.42e-07  3.47e-09  7.25e-11  1.51e-07 │
│        P4n │ 6.27e-01 │ 1.42e-03 │ 6.27e-01 │ 6.25e-01 │ 6.30e-01 │
│        P4w │ 1.44e-06 │ 4.80e-09 │ 1.44e-06 │ 1.44e-06 │ 1.45e-06 │
│         R1  2.40e+01  3.31e+01  1.19e+01  8.29e-01  8.69e+01 │
│         R2 │ 7.39e+06 │ 2.68e+04 │ 7.39e+06 │ 7.35e+06 │ 7.43e+06 │
│ sigma_imag │ 1.11e+04 │ 7.10e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
│ sigma_real │ 1.11e+04 │ 6.88e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
           R1-[R2-[R3,P4],[L5,R6]-P7], 0/1000 divergences            
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L5  1.28e-15  7.33e-15  1.04e-16  3.32e-18  3.93e-15 │
│        P4n │ 6.08e-01 │ 3.02e-03 │ 6.08e-01 │ 6.03e-01 │ 6.13e-01 │
│        P4w │ 6.48e-06 │ 1.01e-07 │ 6.48e-06 │ 6.33e-06 │ 6.66e-06 │
│        P7n │ 9.28e-01 │ 6.65e-04 │ 9.28e-01 │ 9.27e-01 │ 9.29e-01 │
│        P7w │ 1.82e-06 │ 1.20e-09 │ 1.82e-06 │ 1.82e-06 │ 1.83e-06 │
│         R1  4.17e+01  5.46e+01  2.24e+01  1.06e+00  1.37e+02 │
│         R2 │ 3.53e+06 │ 9.86e+03 │ 3.53e+06 │ 3.51e+06 │ 3.54e+06 │
│         R3 │ 1.22e+07 │ 1.89e+05 │ 1.22e+07 │ 1.19e+07 │ 1.25e+07 │
│         R6  1.01e+10  5.67e+10  8.35e+08  2.30e+07  3.39e+10 │
│ sigma_imag │ 1.67e+03 │ 3.01e+01 │ 1.67e+03 │ 1.62e+03 │ 1.72e+03 │
│ sigma_real │ 1.52e+03 │ 3.13e+01 │ 1.52e+03 │ 1.47e+03 │ 1.57e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                P1-[P2,R3]-L4-R5, 0/1000 divergences                 
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L4  1.05e-19  6.07e-19  6.46e-21  1.66e-22  2.77e-19 │
│        P1n │ 5.55e-01 │ 1.60e-03 │ 5.55e-01 │ 5.52e-01 │ 5.58e-01 │
│        P1w │ 5.93e-06 │ 6.47e-08 │ 5.93e-06 │ 5.82e-06 │ 6.03e-06 │
│        P2n │ 8.94e-01 │ 1.28e-03 │ 8.94e-01 │ 8.92e-01 │ 8.96e-01 │
│        P2w │ 2.41e-06 │ 1.12e-08 │ 2.41e-06 │ 2.39e-06 │ 2.43e-06 │
│         R3 │ 3.21e+06 │ 1.10e+04 │ 3.21e+06 │ 3.19e+06 │ 3.23e+06 │
│         R5  1.54e+01  1.80e+01  9.89e+00  5.43e-01  5.06e+01 │
│ sigma_imag │ 3.50e+03 │ 3.80e+01 │ 3.50e+03 │ 3.44e+03 │ 3.57e+03 │
│ sigma_real │ 5.27e+03 │ 4.47e+01 │ 5.27e+03 │ 5.20e+03 │ 5.35e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
               [P1,P2]-[P3,R4]-R5, 1/1000 divergences                
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P1n │ 3.61e-01 │ 1.82e-03 │ 3.61e-01 │ 3.58e-01 │ 3.64e-01 │
│        P1w │ 1.62e-06 │ 2.56e-08 │ 1.62e-06 │ 1.58e-06 │ 1.66e-06 │
│        P2n │ 9.97e-01 │ 3.28e-03 │ 9.98e-01 │ 9.90e-01 │ 1.00e+00 │
│        P2w │ 3.87e-06 │ 2.42e-07 │ 3.85e-06 │ 3.54e-06 │ 4.28e-06 │
│        P3n │ 9.41e-01 │ 3.37e-03 │ 9.41e-01 │ 9.36e-01 │ 9.47e-01 │
│        P3w │ 2.95e-06 │ 8.82e-08 │ 2.95e-06 │ 2.81e-06 │ 3.09e-06 │
│         R4 │ 2.23e+06 │ 4.70e+04 │ 2.23e+06 │ 2.16e+06 │ 2.31e+06 │
│         R5  2.00e+02  2.03e+02  1.35e+02  4.46e+00  6.18e+02 │
│ sigma_imag │ 1.85e+03 │ 3.00e+01 │ 1.85e+03 │ 1.80e+03 │ 1.90e+03 │
│ sigma_real │ 1.69e+03 │ 2.82e+01 │ 1.69e+03 │ 1.64e+03 │ 1.73e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                  R1-[R2,P3-R4], 0/1000 divergences                  
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P3n │ 6.27e-01 │ 1.37e-03 │ 6.27e-01 │ 6.25e-01 │ 6.29e-01 │
│        P3w │ 1.44e-06 │ 4.54e-09 │ 1.44e-06 │ 1.44e-06 │ 1.45e-06 │
│         R1  1.66e+01  2.19e+01  7.97e+00  5.33e-01  5.86e+01 │
│         R2 │ 7.39e+06 │ 2.58e+04 │ 7.39e+06 │ 7.35e+06 │ 7.44e+06 │
│         R4  1.99e+01  2.40e+01  1.15e+01  5.89e-01  6.95e+01 │
│ sigma_imag │ 1.11e+04 │ 7.02e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
│ sigma_real │ 1.11e+04 │ 7.06e+01 │ 1.11e+04 │ 1.10e+04 │ 1.13e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                P1-R2-[P3,R4]-R5, 0/1000 divergences                 
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P1n │ 3.51e-01 │ 6.04e-04 │ 3.51e-01 │ 3.50e-01 │ 3.52e-01 │
│        P1w │ 1.00e-06 │ 2.69e-09 │ 1.00e-06 │ 1.00e-06 │ 1.01e-06 │
│        P3n │ 5.13e-01 │ 2.90e-01 │ 5.14e-01 │ 6.68e-02 │ 9.55e-01 │
│        P3w  5.72e+09  2.39e+10  4.50e+08  9.38e+06  2.27e+10 │
│         R2  1.90e+01  2.32e+01  1.11e+01  7.42e-01  6.70e+01 │
│         R4  5.89e+09  4.72e+10  3.48e+08  9.95e+06  1.35e+10 │
│         R5  1.80e+01  2.67e+01  8.44e+00  4.83e-01  7.25e+01 │
│ sigma_imag │ 1.58e+04 │ 7.68e+01 │ 1.58e+04 │ 1.57e+04 │ 1.59e+04 │
│ sigma_real │ 1.95e+04 │ 8.67e+01 │ 1.95e+04 │ 1.94e+04 │ 1.97e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
            L1-[P2,R3]-[R4,L5]-P6-R7, 0/1000 divergences             
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L1  1.26e-18  1.51e-17  6.96e-20  2.12e-21  2.85e-18 │
│         L5  4.82e-19  2.24e-18  4.73e-20  1.29e-21  2.08e-18 │
│        P2n │ 8.94e-01 │ 1.42e-03 │ 8.94e-01 │ 8.92e-01 │ 8.97e-01 │
│        P2w │ 2.41e-06 │ 1.24e-08 │ 2.41e-06 │ 2.39e-06 │ 2.43e-06 │
│        P6n │ 5.55e-01 │ 1.73e-03 │ 5.55e-01 │ 5.52e-01 │ 5.58e-01 │
│        P6w │ 5.93e-06 │ 7.04e-08 │ 5.93e-06 │ 5.81e-06 │ 6.04e-06 │
│         R3 │ 3.21e+06 │ 1.20e+04 │ 3.21e+06 │ 3.19e+06 │ 3.23e+06 │
│         R4  2.83e+10  3.92e+11  9.25e+08  1.87e+07  4.66e+10 │
│         R7  1.51e+01  1.78e+01  9.27e+00  6.46e-01  4.93e+01 │
│ sigma_imag │ 3.50e+03 │ 3.75e+01 │ 3.50e+03 │ 3.44e+03 │ 3.56e+03 │
│ sigma_real │ 5.27e+03 │ 4.38e+01 │ 5.27e+03 │ 5.21e+03 │ 5.35e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                  [P1,P2-R3]-R4, 0/1000 divergences                  
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P1n │ 9.55e-01 │ 7.99e-04 │ 9.55e-01 │ 9.54e-01 │ 9.56e-01 │
│        P1w │ 1.81e-06 │ 1.59e-09 │ 1.81e-06 │ 1.81e-06 │ 1.81e-06 │
│        P2n │ 4.19e-01 │ 1.14e-03 │ 4.19e-01 │ 4.17e-01 │ 4.21e-01 │
│        P2w │ 2.54e-06 │ 2.03e-08 │ 2.54e-06 │ 2.51e-06 │ 2.57e-06 │
│         R3 │ 2.86e+06 │ 1.03e+04 │ 2.86e+06 │ 2.84e+06 │ 2.88e+06 │
│         R4 │ 1.55e+03 │ 3.09e+02 │ 1.56e+03 │ 1.06e+03 │ 2.04e+03 │
│ sigma_imag │ 2.08e+03 │ 3.18e+01 │ 2.08e+03 │ 2.03e+03 │ 2.13e+03 │
│ sigma_real │ 1.84e+03 │ 3.04e+01 │ 1.84e+03 │ 1.79e+03 │ 1.89e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
              R1-[P2-R3,[L4,R5]-P6], 0/1000 divergences              
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L4  1.02e-19  9.80e-19  9.43e-21  2.58e-22  3.05e-19 │
│        P2n │ 4.19e-01 │ 1.15e-03 │ 4.19e-01 │ 4.17e-01 │ 4.20e-01 │
│        P2w │ 2.54e-06 │ 2.05e-08 │ 2.54e-06 │ 2.50e-06 │ 2.57e-06 │
│        P6n │ 9.55e-01 │ 8.15e-04 │ 9.55e-01 │ 9.54e-01 │ 9.57e-01 │
│        P6w │ 1.81e-06 │ 1.57e-09 │ 1.81e-06 │ 1.81e-06 │ 1.81e-06 │
│         R1 │ 1.53e+03 │ 3.19e+02 │ 1.53e+03 │ 1.01e+03 │ 2.08e+03 │
│         R3 │ 2.86e+06 │ 1.04e+04 │ 2.86e+06 │ 2.84e+06 │ 2.88e+06 │
│         R5  8.75e+09  5.72e+10  7.28e+08  1.40e+07  3.01e+10 │
│ sigma_imag │ 2.08e+03 │ 3.10e+01 │ 2.08e+03 │ 2.03e+03 │ 2.13e+03 │
│ sigma_real │ 1.84e+03 │ 2.66e+01 │ 1.84e+03 │ 1.79e+03 │ 1.88e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                  R1-[R2,P3-L4], 0/1000 divergences                  
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L4  4.76e-20  2.48e-19  4.48e-21  1.19e-22  1.77e-19 │
│        P3n │ 6.27e-01 │ 1.34e-03 │ 6.27e-01 │ 6.25e-01 │ 6.29e-01 │
│        P3w │ 1.44e-06 │ 4.57e-09 │ 1.44e-06 │ 1.44e-06 │ 1.45e-06 │
│         R1  2.37e+01  2.90e+01  1.24e+01  9.99e-01  7.64e+01 │
│         R2 │ 7.39e+06 │ 2.53e+04 │ 7.39e+06 │ 7.35e+06 │ 7.44e+06 │
│ sigma_imag │ 1.11e+04 │ 7.09e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
│ sigma_real │ 1.11e+04 │ 6.81e+01 │ 1.11e+04 │ 1.10e+04 │ 1.12e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
               [P1,P2-[R3,L4]]-R5, 0/1000 divergences                
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│         L4  6.78e+01  3.30e+02  4.73e+00  1.12e-01  2.62e+02 │
│        P1n │ 4.96e-01 │ 2.97e-01 │ 4.96e-01 │ 4.78e-02 │ 9.48e-01 │
│        P1w  3.06e+02  4.79e+03  8.80e+00  1.80e-01  4.36e+02 │
│        P2n │ 5.06e-01 │ 2.92e-01 │ 5.12e-01 │ 5.60e-02 │ 9.42e-01 │
│        P2w  7.71e+09  2.89e+10  8.68e+08  2.39e+07  2.81e+10 │
│         R3  1.56e+10  1.02e+11  9.84e+08  2.90e+07  4.18e+10 │
│         R5 │ 7.31e+05 │ 6.64e+03 │ 7.31e+05 │ 7.20e+05 │ 7.42e+05 │
│ sigma_imag │ 3.68e+04 │ 1.09e+02 │ 3.68e+04 │ 3.66e+04 │ 3.70e+04 │
│ sigma_real │ 4.95e+04 │ 1.27e+02 │ 4.95e+04 │ 4.93e+04 │ 4.97e+04 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘
                 R1-[P2,[R3,P4]], 0/1000 divergences                 
┏━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃  Parameter      Mean       Std    Median      5.0%     95.0% ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│        P2n │ 1.00e+00 │ 2.46e-05 │ 1.00e+00 │ 1.00e+00 │ 1.00e+00 │
│        P2w │ 1.55e-06 │ 2.94e-09 │ 1.55e-06 │ 1.54e-06 │ 1.55e-06 │
│        P4n │ 2.08e-01 │ 3.25e-04 │ 2.08e-01 │ 2.07e-01 │ 2.08e-01 │
│        P4w │ 4.98e-07 │ 6.69e-10 │ 4.98e-07 │ 4.97e-07 │ 4.99e-07 │
│         R1  4.05e+01  5.09e+01  2.16e+01  1.51e+00  1.45e+02 │
│         R3  2.15e+11  4.76e+11  8.85e+10  2.21e+10  7.43e+11 │
│ sigma_imag │ 5.07e+03 │ 4.59e+01 │ 5.07e+03 │ 4.99e+03 │ 5.14e+03 │
│ sigma_real │ 4.71e+03 │ 4.60e+01 │ 4.71e+03 │ 4.64e+03 │ 4.79e+03 │
└────────────┴──────────┴──────────┴──────────┴──────────┴──────────┘

Note that some rows have been highlighted in yellow, indicating that the standard deviation is greater than the mean. This is not necessarily a bad thing, but it screams “caution” due to the high uncertainty. In this case, we need to check the data and the model to see if there is anything wrong. For example, the data may contain outliers, or the model may be overparameterized.
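If you’d rather flag such cases programmatically than scan the tables by eye, the following sketch applies the same criterion (posterior standard deviation larger than the posterior mean) directly to the MCMC samples:

# Flag parameters whose posterior standard deviation exceeds the posterior mean
for mcmc, stat, circuit in zip(mcmcs, status, circuits.circuitstring):
    if stat != 0:  # skip circuits for which inference failed
        continue
    samples = mcmc.get_samples()
    flagged = [name for name, values in samples.items() if np.std(values) > np.mean(values)]
    if flagged:
        print(f"{circuit}: high-uncertainty parameters -> {flagged}")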

Before we investigate the posteriors of the individual circuit components, let’s take a bird’s-eye view of the results so you get a general feel for which circuits are better and which are worse. For this purpose, we first need to evaluate the circuits based on some common metrics and then rank them accordingly:

[9]:
# We first need to augment the circuits dataframe with MCMC results
circuits["MCMC"] = mcmcs
circuits["success"] = list(map(lambda x: x == 0, status))
circuits["divergences"] = [m.get_extra_fields()["diverging"].sum() for m in mcmcs]

# Now, we can compute the fitness metrics, then rank/visualize accordingly
circuits = ae.core.compute_fitness_metrics(circuits, freq, Z)
ae.visualization.print_inference_results(circuits)
[9]:
                                           Inference results                                           
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━┓
┃                    Circuit  WAIC (re)  WAIC (im)  R2 (re)  R2 (im)  MAPE (re)  MAPE (im)  Np ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━┩
│ R1-[R2-[R3,P4],[L5,R6]-P7] │  3.91e+03 │  3.67e+03 │   1.000 │   1.000 │  3.15e+02 │  1.18e+03 │  9 │
│         [P1,P2]-[P3,R4]-R5 │  4.36e+03 │  4.13e+03 │   1.000 │   1.000 │  3.48e+02 │  1.30e+03 │  8 │
│      R1-[P2-R3,[L4,R5]-P6] │  4.93e+03 │  4.06e+03 │   1.000 │   1.000 │  4.83e+02 │  1.47e+03 │  8 │
│              [P1,P2-R3]-R4 │  5.03e+03 │  4.07e+03 │   1.000 │   1.000 │  4.84e+02 │  1.47e+03 │  6 │
│           P1-[P2,R3]-L4-R5 │  8.73e+03 │  6.47e+03 │   0.999 │   0.999 │  1.28e+03 │  2.52e+03 │  7 │
│      P1-R2-[P3,R4]-[L5,R6] │  8.87e+03 │  6.37e+03 │   0.999 │   0.999 │  1.28e+03 │  2.52e+03 │  8 │
│   L1-[P2,R3]-[R4,L5]-P6-R7 │  9.12e+03 │  6.50e+03 │   0.999 │   0.999 │  1.28e+03 │  2.52e+03 │  9 │
│            R1-[P2,[R3,P4]] │  7.84e+03 │  1.05e+04 │   0.999 │   0.996 │  9.66e+02 │  3.57e+03 │  6 │
│              R1-[R2,P3-L4] │  1.59e+04 │  2.20e+04 │   0.988 │   0.960 │  2.79e+03 │  7.93e+03 │  5 │
│              R1-[R2,P3-R4] │  1.59e+04 │  2.22e+04 │   0.988 │   0.960 │  2.79e+03 │  7.94e+03 │  5 │
│                 [P1,R2]-R3 │  1.58e+04 │  2.28e+04 │   0.988 │   0.959 │  2.79e+03 │  7.94e+03 │  4 │
│              R1-[R2-L3,P4] │  1.59e+04 │  2.28e+04 │   0.988 │   0.959 │  2.79e+03 │  7.94e+03 │  5 │
│              R1-P2-[P3,R4] │  2.34e+04 │  2.21e+04 │   0.938 │   0.885 │  1.43e+04 │  1.50e+04 │  6 │
│           P1-R2-[P3,R4]-R5 │  2.34e+04 │  2.23e+04 │   0.938 │   0.885 │  1.43e+04 │  1.50e+04 │  7 │
│         [P1,P2-[R3,L4]]-R5 │  5.90e+04 │  4.10e+04 │  -0.001 │  -0.455 │  1.87e+05 │  2.60e+04 │  7 │
└────────────────────────────┴───────────┴───────────┴─────────┴─────────┴───────────┴───────────┴────┘

Now, let’s go one step further and visualize the results. To get an overview, we can plot the posterior distributions of the parameters as well as the trace plots. It’s an oversimplification, but basically, a good posterior distribution should be unimodal and symmetric, and the trace plot should be stationary. In probabilistic terms, this means that, given the circuit model, the data are informative about the parameters, and the MCMC algorithm has converged.

On the other hand, if the posterior distribution is multimodal or skewed, or the trace plot is not stationary, the data may not be informative about the parameters, or the MCMC algorithm may not have converged. In this case, we need to check the data and the model to see if anything is wrong. For example, the data may contain outliers, or the model may be overparameterized.

Note

For the following cell to work, you need to set interactive=True at the beginning of the notebook. It’s turned off by default since GitHub doesn’t render interactive plots.

[10]:
def plot_trace(samples):
    """Plots the posterior and trace of a variable in the MCMC sampler."""
    output = widgets.Output()
    with output:
        fig, ax = plt.subplots(ncols=2, figsize=(9, 3))
        log_scale = bool(np.std(samples) / np.median(samples) > 2)
        kwargs_hist = {
            "stat": "density",
            "log_scale": log_scale,
            "color": "lightblue",
            "bins": 25,
        }
        # ax[0] -> posterior, ax[1] -> trace
        sns.histplot(samples, **kwargs_hist, ax=ax[0])
        kwargs_kde = {"log_scale": log_scale, "color": "red"}
        sns.kdeplot(samples, **kwargs_kde, ax=ax[0])
        # Plot trace
        ax[1].plot(samples, alpha=0.5)
        ax[1].set_yscale("log" if log_scale else "linear")
        plt.show(fig)
    return output


def plot_trace_all(mcmc: "numpyro.MCMC", circuit: str):
    """Plots the posterior and trace of all variables in the MCMC sampler."""
    variables = ae.parser.get_parameter_labels(circuit)
    samples = mcmc.get_samples()
    children = [plot_trace(samples[var]) for var in variables]
    tab = widgets.Tab()
    tab.children = children
    tab.titles = variables
    return tab


def dropdown_trace_plots():
    """Creates a dropdown menu to select a circuit and plot its trace."""

    def on_dropdown_clicked(change):
        with output:
            output.clear_output()
            idx = circuits_list.index(change.new)
            plot = trace_plots[idx]
            display(plot)

    dropdown = widgets.Dropdown(
        description="Circuit:", options=circuits_list, value=circuits_list[0]
    )
    output = widgets.Output()
    dropdown.observe(on_dropdown_clicked, names="value")
    display(dropdown, output)

    # Default to the first circuit
    with output:
        display(trace_plots[0])


# Cache rendered plots to avoid re-rendering
circuits_list = circuits["circuitstring"].tolist()
trace_plots = []

for i, row in circuits.iterrows():
    circuit = row["circuitstring"]
    mcmc = row["MCMC"]
    if row["success"]:
        trace_plots.append(plot_trace_all(mcmc, circuit))
    else:
        trace_plots.append("Inference failed")

if interactive:
    dropdown_trace_plots()

The functions defined in the cell above build the interactive dropdown menu, which lets you select a circuit model and shows the posterior distributions of its parameters as well as the trace plots. This is handy for quickly comparing the results of different circuit models. Running the cell for the first time may take a while (roughly 5 seconds per circuit), but once run, all the plots are cached.

The distributions for the most part look fine, although in some cases (like R2 and R4 in the first circuit) the span is quite large (a few orders of magnitude). Nevertheless, the distributions are bell-shaped, and the trace plots look stationary.
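Visual inspection can also be backed up by quantitative convergence diagnostics; NumPyro’s MCMC objects can print the effective sample size (n_eff) and split R-hat for every parameter, e.g., for the first circuit:

# A well-mixed chain has a large n_eff and an r_hat close to 1.0 for every parameter
mcmcs[0].print_summary()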

Now, let’s take a look at the posterior predictive distributions. “Posterior predictive” is a fancy term for “model prediction”: after performing Bayesian inference, we can use the posterior distribution to make predictions. In this case, we use it to predict the impedance spectrum and compare it with our measurements to see how well they match. After all, the posteriors might all look good (bell-shaped, no multimodality, etc.), but if the model predictions don’t match the data, the model is not good.

Note

For the following cell to work, you need to set interactive=True at the beginning of the notebook. It’s turned off by default since GitHub doesn’t render interactive plots.

[11]:
def plot_nyquist(mcmc: "numpyro.MCMC", circuit: str):
    """Plots Nyquist plot of the circuit using the median of the posteriors."""
    # Compute circuit impedance using median of posteriors
    samples = mcmc.get_samples()
    variables = ae.parser.get_parameter_labels(circuit)
    percentiles = [10, 50, 90]
    params_list = [[np.percentile(samples[v], p) for v in variables] for p in percentiles]
    circuit_fn = ae.utils.generate_circuit_fn(circuit)
    Zsim_list = [circuit_fn(freq, params) for params in params_list]
    # Plot Nyquist plot
    fig, ax = plt.subplots(figsize=(5.5, 4))
    for p, Zsim in zip(percentiles, Zsim_list):
        ae.visualization.plot_nyquist(Zsim, fmt="-", label=f"model ({p}%)", ax=ax)
    ae.visualization.plot_nyquist(Z, "o", label="measured", ax=ax)
    # Close the figure so it isn't displayed here; it will be shown later via the dropdown
    plt.close(fig)
    return fig


def dropdown_nyquist_plots():
    """Creates a dropdown menu to select a circuit and plot its Nyquist plot."""

    def on_change(change):
        with output:
            output.clear_output()
            idx = circuits_list.index(change.new)
            fig = nyquist_plots[idx]
            display(fig)

    output = widgets.Output()
    dropdown = widgets.Dropdown(
        options=circuits_list, value=circuits_list[0], description="Circuit:"
    )
    dropdown.observe(on_change, names="value")
    display(dropdown, output)

    # Default to the first circuit
    with output:
        display(nyquist_plots[0])


# Cache rendered plots to avoid re-rendering
circuits_list = circuits["circuitstring"].tolist()
nyquist_plots = []

for i, row in circuits.iterrows():
    circuit = row["circuitstring"]
    mcmc = row["MCMC"]
    if row["success"]:
        nyquist_plots.append(plot_nyquist(mcmc, circuit))
    else:
        nyquist_plots.append("Inference failed")

if interactive:
    dropdown_nyquist_plots()