The configuration is loaded upon import from a YAML file in the directory where PyPhi is run: pyphi_config.yml. If no file is found, the default configuration is used.

The various options are listed here with their defaults:

>>> import pyphi
>>> defaults = pyphi.config.DEFAULTS

It is also possible to manually load a YAML configuration file within your script:

>>> pyphi.config.load_config_file('pyphi_config.yml')

Or load a dictionary of configuration values:

>>> pyphi.config.load_config_dict({'SOME_CONFIG': 'value'})
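For illustration, here is a hypothetical configuration dict and its YAML equivalent. The option names come from the documentation below; the values are illustrative, not necessarily the defaults:

```python
# Hypothetical example: option names are documented below; the values
# shown are illustrative, not necessarily the defaults.
config = {
    'PARALLEL_CUT_EVALUATION': True,  # evaluate system cuts in parallel
    'NUMBER_OF_CORES': -1,            # -1 means "use all available cores"
    'LOG_STDOUT_LEVEL': 'WARNING',
}

# The equivalent contents of a `pyphi_config.yml` file:
yaml_text = '\n'.join(f'{key}: {value}' for key, value in config.items())
print(yaml_text)

# With PyPhi installed, the dict could be loaded with:
# pyphi.config.load_config_dict(config)
```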

Theoretical approximations

This section deals with assumptions that speed up computation at the cost of theoretical accuracy.

  • pyphi.config.ASSUME_CUTS_CANNOT_CREATE_NEW_CONCEPTS: In certain cases, making a cut can actually cause a previously reducible concept to become a proper, irreducible concept. Assuming this can never happen can increase performance significantly; however, the results obtained are then not strictly accurate.

  • pyphi.config.CUT_ONE_APPROXIMATION: When determining the MIP for \(\Phi\), this restricts the set of system cuts that are considered to only those that cut the inputs or outputs of a single node. This restricted set of cuts scales linearly with the size of the system; the full set of all possible bipartitions scales exponentially. This approximation is more likely to give theoretically accurate results with modular, sparsely-connected, or homogeneous networks.

    >>> defaults['CUT_ONE_APPROXIMATION']

  • pyphi.config.MEASURE: The measure to use when computing distances between repertoires and concepts. The default is EMD, the Earth Mover's Distance. KLD is the Kullback-Leibler divergence. L1 is the L1 distance. ENTROPY_DIFFERENCE is the absolute value of the difference in entropy of the two distributions, abs(entropy(a) - entropy(b)). KLD cannot be used as a measure when performing big-phi computations because of its asymmetry.

    >>> defaults['MEASURE']
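To see why the cut-one approximation pays off, compare the size of the restricted cut set with the number of bipartitions. A rough combinatorial sketch (assuming one cut per node per direction; PyPhi's exact cut set may differ):

```python
def num_bipartitions(n):
    """Ways to split n nodes into two non-empty parts: 2**(n-1) - 1."""
    return 2 ** (n - 1) - 1

def num_cut_one_cuts(n):
    """Cutting either the inputs or the outputs of each node: 2n cuts."""
    return 2 * n

# The restricted set grows linearly; the full set grows exponentially.
for n in (4, 8, 16):
    print(n, num_cut_one_cuts(n), num_bipartitions(n))
```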

System resources

These settings control how much processing power and memory are available for PyPhi to use. The default values may not be appropriate for your use case or machine, so please check these settings before running anything. Otherwise, there is a risk that simulations might crash (potentially after running for a long time!), resulting in data loss.

  • pyphi.config.PARALLEL_CONCEPT_EVALUATION: Control whether concepts are evaluated in parallel when computing constellations.

  • pyphi.config.PARALLEL_CUT_EVALUATION: Control whether system cuts are evaluated in parallel, which requires more memory. If cuts are evaluated sequentially, only two BigMip instances need to be in memory at once.

    >>> defaults['PARALLEL_CUT_EVALUATION']

  • pyphi.config.PARALLEL_COMPLEX_EVALUATION: Control whether systems are evaluated in parallel when computing complexes.



    Only one of PARALLEL_CONCEPT_EVALUATION, PARALLEL_CUT_EVALUATION, and PARALLEL_COMPLEX_EVALUATION can be set to True at a time. For maximal efficiency, you should parallelize the highest-level computations possible: e.g., parallelize complex evaluation instead of cut evaluation, but only if you are actually computing complexes. Parallelize concept evaluation only if you are just computing constellations.

  • pyphi.config.NUMBER_OF_CORES: Control the number of CPU cores used to evaluate unidirectional cuts. Negative numbers count backwards from the total number of available cores, with -1 meaning “use all available cores.”

    >>> defaults['NUMBER_OF_CORES']

  • pyphi.config.MAXIMUM_CACHE_MEMORY_PERCENTAGE: PyPhi employs several in-memory caches to speed up computation. However, these can quickly consume a lot of memory for large networks or large numbers of them; to avoid thrashing, this option limits the percentage of the system's RAM that the caches can collectively use.
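The mutual-exclusivity rule for the parallelism options can be expressed as a small check. This is a hypothetical helper for illustration, not part of PyPhi's API:

```python
# Hypothetical helper, not part of PyPhi's API: at most one of the
# parallelism options may be enabled at a time.
PARALLEL_OPTIONS = (
    'PARALLEL_CONCEPT_EVALUATION',
    'PARALLEL_CUT_EVALUATION',
    'PARALLEL_COMPLEX_EVALUATION',
)

def parallel_options_valid(config):
    """Return True if at most one parallelism option is set to True."""
    return sum(bool(config.get(name)) for name in PARALLEL_OPTIONS) <= 1

parallel_options_valid({'PARALLEL_CUT_EVALUATION': True})       # True
parallel_options_valid({'PARALLEL_CUT_EVALUATION': True,
                        'PARALLEL_COMPLEX_EVALUATION': True})   # False
```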



Memoization and caching

PyPhi is equipped with a transparent caching system for BigMip objects, which stores them as they are computed to avoid having to recompute them later. This makes it easy to play around interactively with the program, or to accumulate results with minimal effort. For larger projects, however, it is recommended that you manage the results explicitly, rather than relying on the cache. For this reason it is disabled by default.

  • pyphi.config.CACHE_BIGMIPS: Control whether BigMip objects are cached and automatically retrieved.

    >>> defaults['CACHE_BIGMIPS']

  • pyphi.config.CACHE_POTENTIAL_PURVIEWS: Controls whether the potential purviews of mechanisms of a network are cached. Caching speeds up computations by not recomputing expensive reducibility checks, but uses additional memory.

    >>> defaults['CACHE_POTENTIAL_PURVIEWS']

  • pyphi.config.CACHING_BACKEND: Control whether precomputed results are stored in and read from a database or from a local filesystem-based cache in the current directory. Set this to 'fs' for the filesystem, or 'db' for the database. Caching results on the filesystem is the easiest to use but the least robust caching system. Caching results in a database is more robust and allows for caching individual concepts, but requires installing MongoDB.

    >>> defaults['CACHING_BACKEND']

  • pyphi.config.FS_CACHE_VERBOSITY: Control how much caching information is printed. Takes a value between 0 and 11. Note that printing during a loop iteration can slow down the loop considerably.

    >>> defaults['FS_CACHE_VERBOSITY']

  • pyphi.config.FS_CACHE_DIRECTORY: If the caching backend is set to use the filesystem, the cache will be stored in this directory. This directory can be copied and moved around if you want to reuse results, e.g. on another computer, but it must be in the same directory from which PyPhi is being run.

    >>> defaults['FS_CACHE_DIRECTORY']

  • pyphi.config.MONGODB_CONFIG: Set the configuration for the MongoDB database backend. This only has an effect if the caching backend is set to use the database.

    >>> defaults['MONGODB_CONFIG']['host']
    >>> defaults['MONGODB_CONFIG']['port']
    >>> defaults['MONGODB_CONFIG']['database_name']
    >>> defaults['MONGODB_CONFIG']['collection_name']

  • pyphi.config.REDIS_CACHE: Specifies whether to use Redis to cache Mice.

    >>> defaults['REDIS_CACHE']

  • pyphi.config.REDIS_CONFIG: Configure the Redis database backend. These are the defaults in the provided redis.conf file.

    >>> defaults['REDIS_CONFIG']['host']
    >>> defaults['REDIS_CONFIG']['port']


Logging

These settings control how PyPhi handles log messages. Logs can be written to standard output, a file, both, or none. If these simple default controls are not flexible enough for you, you can override the entire logging configuration. See the documentation on Python's logging module for more information.

  • pyphi.config.LOG_STDOUT_LEVEL: Controls the level of log messages written to standard output. Can be one of 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', or None. 'DEBUG' is the least restrictive level and will show the most log messages. 'CRITICAL' is the most restrictive level and will only display information about unrecoverable errors. If set to None, logging to standard output will be disabled entirely.

    >>> defaults['LOG_STDOUT_LEVEL']

  • pyphi.config.LOG_FILE_LEVEL: Controls the level of log messages written to the log file. This option has the same possible values as LOG_STDOUT_LEVEL.

    >>> defaults['LOG_FILE_LEVEL']

  • pyphi.config.LOG_FILE: Control the name of the logfile.

    >>> defaults['LOG_FILE']

  • pyphi.config.LOG_CONFIG_ON_IMPORT: Controls whether the current configuration is printed when PyPhi is imported.

    >>> defaults['LOG_CONFIG_ON_IMPORT']

  • pyphi.config.PROGRESS_BARS: Controls whether to show progress bars on the console.

    >>> defaults['PROGRESS_BARS']
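As a sketch of what overriding the entire logging configuration can look like with the standard library (this is generic Python logging, not a PyPhi API; the levels shown are examples mirroring the semantics of LOG_STDOUT_LEVEL and LOG_FILE_LEVEL):

```python
import logging
import sys

# One handler per destination, each with its own level, mirroring the
# semantics of LOG_STDOUT_LEVEL and LOG_FILE_LEVEL.
logger = logging.getLogger('pyphi')
logger.setLevel(logging.DEBUG)  # let the handlers do the filtering

stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.WARNING)  # e.g. LOG_STDOUT_LEVEL = 'WARNING'
logger.addHandler(stdout_handler)

# delay=True defers opening the file until the first message is logged.
file_handler = logging.FileHandler('pyphi.log', delay=True)
file_handler.setLevel(logging.INFO)       # e.g. LOG_FILE_LEVEL = 'INFO'
logger.addHandler(file_handler)

logger.warning('goes to both stdout and the file')
logger.info('goes to the file only')
```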

Numerical precision

  • pyphi.config.PRECISION: Computations in PyPhi rely on finding the Earth Mover's Distance. This is done via an external C++ library that uses flow optimization to find a good approximation of the EMD. Consequently, systems with zero \(\Phi\) will sometimes be computed to have a small but non-zero amount. This setting controls the number of decimal places to which PyPhi will consider EMD calculations accurate. Values of \(\Phi\) lower than \(10^{-PRECISION}\) will be considered insignificant and treated as zero. The default value is about as accurate as the EMD computations get.

    >>> defaults['PRECISION']
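In other words, \(\Phi\) comparisons are made only up to PRECISION decimal places. A sketch, assuming a precision of 6:

```python
PRECISION = 6  # assumed value for illustration; see defaults['PRECISION']

def eq(a, b, precision=PRECISION):
    """True if a and b are equal up to the configured precision."""
    return abs(a - b) < 10 ** -precision

def phi_is_zero(phi, precision=PRECISION):
    """Sub-precision EMD noise is treated as zero phi."""
    return eq(phi, 0.0, precision)

phi_is_zero(1e-9)  # True: below the 10**-6 threshold
phi_is_zero(1e-3)  # False
```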


  • pyphi.config.VALIDATE_SUBSYSTEM_STATES: Control whether PyPhi checks if the subsystem's state is possible (reachable from some past state), given the subsystem's TPM (which is conditioned on background conditions). If this is turned off, then calculated \(\Phi\) values may not be valid, since they may be associated with a subsystem that could never be in the given state.

  • pyphi.config.SINGLE_NODES_WITH_SELFLOOPS_HAVE_PHI: If set to True, this defines the \(\Phi\) value of subsystems containing only a single node with a self-loop to be 0.5. If set to False, their \(\Phi\) will actually be computed (to be zero, in this implementation).

  • pyphi.config.REPR_VERBOSITY: Controls the verbosity of __repr__ methods on PyPhi objects. Can be set to 0, 1, or 2. If set to 1, calling repr on PyPhi objects will return pretty-formatted and legible strings, excluding repertoires. If set to 2, repr calls also include repertoires.

    Although this breaks the convention that __repr__ methods should return a representation which can reconstruct the object, readable representations are convenient since the Python REPL calls repr to represent all objects in the shell and PyPhi is often used interactively with the REPL. If set to 0, repr returns more traditional object representations.

    >>> defaults['REPR_VERBOSITY']

  • pyphi.config.PARTITION_TYPE: Controls the type of partition used for \(\varphi\) computations.

    If set to 'BI', partitions will have two parts.

    If set to 'TRI', partitions will have three parts. In addition, computations will only consider partitions that strictly partition the mechanism. That is, for the mechanism (A, B) and purview (B, C, D) the partition

    AB   []
    -- X --
    B    CD

    is not considered, but

    A    B
    -- X --
    B    CD

    is. The following is also valid:

    AB   []
    -- X ---
    []   BCD

    In addition, this option introduces wedge tripartitions of the form

    A    B   []
    -- X - X --
    B    C   D

    where the mechanism in the third part is always empty.

    In addition, in the case of a \(\varphi\)-tie when computing MICE, the 'TRI' setting chooses the MIP with the smallest purview instead of the largest (which is the default).

    Finally, if set to 'ALL', all possible partitions will be tested.

    >>> defaults['PARTITION_TYPE']

  • pyphi.config.PICK_SMALLEST_PURVIEW: When computing MICE, it is possible for several MIPs to have the same \(\varphi\) value. If this option is set to True, the MIP with the smallest purview will be chosen; otherwise, the MIP with the largest purview will be selected.

    >>> defaults['PICK_SMALLEST_PURVIEW']

  • pyphi.config.USE_SMALL_PHI_DIFFERENCE_FOR_CONSTELLATION_DISTANCE: If set to True, the distance between constellations (when computing a BigMip) is calculated using the difference between the sums of \(\varphi\) in the constellations instead of the extended EMD.
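The \(\varphi\)-tie-breaking behavior controlled by PICK_SMALLEST_PURVIEW can be sketched as follows. The Mip stand-in here is hypothetical; PyPhi's actual MIP objects are richer:

```python
from collections import namedtuple

# Hypothetical stand-in for a MIP: just a phi value and a purview.
Mip = namedtuple('Mip', ['phi', 'purview'])

def break_tie(tied_mips, pick_smallest_purview=False):
    """Given MIPs with equal phi, select one by purview size."""
    if pick_smallest_purview:
        return min(tied_mips, key=lambda mip: len(mip.purview))
    return max(tied_mips, key=lambda mip: len(mip.purview))

a = Mip(phi=0.25, purview=('B',))
b = Mip(phi=0.25, purview=('B', 'C', 'D'))
break_tie([a, b])                              # b: the largest purview wins
break_tie([a, b], pick_smallest_purview=True)  # a: the smallest purview wins
```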


pyphi.config.load_config_dict(config)

Load configuration values.

Parameters: config (dict) – The dict of config to load.

pyphi.config.load_config_file(filename)

Load config from a YAML file.


Load default config values.


Return a string representation of the currently loaded configuration.


Print the current configuration.


Configure PyPhi logging based on the loaded configuration.

Note: if PyPhi config options that control logging are changed after they are loaded (e.g., in testing), the Python logging configuration will stay the same unless you manually reconfigure the logging by calling this function.

TODO: call this in config.override?

class pyphi.config.override(**new_conf)

Decorator and context manager to override config values.

The initial configuration values are reset after the decorated function returns or the context manager completes its block, even if the function or block raises an exception. This is intended to be used by test cases which require specific configuration values.


>>> from pyphi import config
>>> @config.override(PRECISION=20000)
... def test_something():
...     assert config.PRECISION == 20000
>>> test_something()
>>> with config.override(PRECISION=100):
...     assert config.PRECISION == 100

Save original config values; override with new ones.


Reset config to initial values; reraise any exceptions.
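The save/override/restore behavior described above can be sketched generically with contextlib. This is a simplified model using a plain dict, not PyPhi's actual implementation:

```python
from contextlib import contextmanager

# A plain-dict stand-in for the configuration store.
config = {'PRECISION': 6, 'PROGRESS_BARS': True}

@contextmanager
def override(**new_values):
    """Temporarily override config values; always restore on exit."""
    saved = {key: config[key] for key in new_values}
    config.update(new_values)
    try:
        yield
    finally:
        # Reset to the initial values even if the block raised;
        # the exception itself propagates unchanged.
        config.update(saved)

with override(PRECISION=100):
    assert config['PRECISION'] == 100
assert config['PRECISION'] == 6
```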