NequIP

class graph_pes.models.NequIP(
elements,
direct_force_predictions=False,
cutoff=5.0,
layers=3,
features={'channels': 16, 'l_max': 2, 'use_odd_parity': True},
self_interaction='tensor_product',
prune_last_layer=True,
neighbour_aggregation='sum',
radial_features=8,
)[source]

NequIP architecture from "E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials".

If you use this model in your research, please cite the original work:

@article{Batzner-22-05,
    title = {
              E(3)-Equivariant Graph Neural Networks for
              Data-Efficient and Accurate Interatomic Potentials
            },
    author = {
              Batzner, Simon and Musaelian, Albert and Sun, Lixin
              and Geiger, Mario and Mailoa, Jonathan P.
              and Kornbluth, Mordechai and Molinari, Nicola
              and Smidt, Tess E. and Kozinsky, Boris
            },
    year = {2022},
    journal = {Nature Communications},
    volume = {13},
    number = {1},
    pages = {2453},
    doi = {10.1038/s41467-022-29939-5},
    copyright = {2022 The Author(s)}
}
Parameters:
  • elements (list[str]) – The elements that the model will encounter in the training data. This is used to create the atomic one-hot embedding. If you intend to fine-tune this model on additional data, you must ensure that all elements you will encounter in both pre-training and fine-tuning are present in this list. (See ZEmbeddingNequIP for an alternative that allows for arbitrary atomic numbers.)

  • direct_force_predictions (bool) – Whether to predict forces directly. If True, the model outputs forces directly (rather than inferring them from the energy) by using a LinearReadOut to map the final-layer node embeddings to per-atom force predictions.

  • cutoff (float) – The cutoff radius for the model.

  • layers (int) – The number of message-passing layers.

  • features (SimpleIrrepSpec | CompleteIrrepSpec) – A specification of the irreps to use for the node and edge embeddings. Can be either a SimpleIrrepSpec or a CompleteIrrepSpec.

  • self_interaction (Literal['linear', 'tensor_product'] | None) – The kind of self-interaction to use. If None, no self-interaction is applied. If "linear", a linear self-interaction is applied to update the node embeddings along the residual path. If "tensor_product", a tensor product combining the old node embedding with an embedding of the atomic number is applied. As first noted by the authors of SevenNet, using linear self-interactions greatly reduces the number of parameters in the model and helps to prevent overfitting.

  • prune_last_layer (bool) – Whether to prune irrep communication pathways in the final layer that do not contribute to the 0e output embedding.

  • neighbour_aggregation (NeighbourAggregationMode) – The neighbour aggregation mode. See NeighbourAggregationMode for more details. Note that "mean" or "sqrt" aggregations lead to unphysical discontinuities in the energy function as atoms enter and leave the cutoff radius of the model.

  • radial_features (int) – The number of features to expand the radial distances into. These features are then passed through an MLP to generate distance-conditioned weights for the message tensor product.
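To make the role of radial_features concrete, the sketch below expands a distance into a sine-Bessel basis of the kind used in the original NequIP paper. This is an illustration only: the function name bessel_basis is hypothetical, and graph-pes may use a different radial basis internally.

```python
import math

def bessel_basis(r: float, cutoff: float, n_features: int) -> list[float]:
    """Expand a distance r into n_features sine-Bessel functions,
    sqrt(2/cutoff) * sin(n*pi*r/cutoff) / r, each of which vanishes
    exactly at the cutoff (illustrative, not the graph-pes internals)."""
    return [
        math.sqrt(2.0 / cutoff) * math.sin(n * math.pi * r / cutoff) / r
        for n in range(1, n_features + 1)
    ]

# one feature vector per edge; an MLP would then map these
# features to the weights of the message tensor product
features = bessel_basis(r=2.5, cutoff=5.0, n_features=8)
```

Every basis function is zero at r = cutoff, which is what keeps the resulting energy continuous as atoms cross the cutoff sphere.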

Examples

Configure a NequIP model for use with graph-pes-train:

model:
  +NequIP:
    elements: [C, H, O]
    cutoff: 5.0

    # use 2 message passing layers
    layers: 2

    # using SimpleIrrepSpec
    features:
      channels: [64, 32, 8]
      l_max: 2
      use_odd_parity: true

    # scale the aggregation by the avg. number of
    # neighbours in the training set
    neighbour_aggregation: constant_fixed

The irreps used for the hidden-layer and edge embeddings can be controlled using either a SimpleIrrepSpec or a CompleteIrrepSpec:

>>> from graph_pes.models import NequIP
>>> model = NequIP(
...     elements=["C", "H", "O"],
...     cutoff=5.0,
...     features={
...         "channels": [16, 8, 4],
...         "l_max": 2,
...         "use_odd_parity": True
...     },
...     layers=3,
... )
>>> for layer in model.layers:
...     print(layer.irreps_in, "->", layer.irreps_out)
16x0e -> 16x0e+8x1o+4x2e
16x0e+8x1o+4x2e -> 16x0e+8x1o+8x1e+4x2o+4x2e
16x0e+8x1o+8x1e+4x2o+4x2e -> 16x0e
>>> from graph_pes.models import NequIP
>>> model = NequIP(
...     elements=["C", "H", "O"],
...     cutoff=5.0,
...     features={
...         "node_irreps": "32x0e + 16x1o + 8x2e",
...         "edge_irreps": "1x0e + 1x1o + 1x2e"
...     },
...     layers=3,
... )
>>> for layer in model.layers:
...     print(layer.irreps_in, "->", layer.irreps_out)
32x0e -> 32x0e+16x1o+8x2e
32x0e+16x1o+8x2e -> 32x0e+16x1o+8x2e
32x0e+16x1o+8x2e -> 32x0e

Observe the drop in parameter count when the last layer is pruned and the tensor-product self-interactions are replaced with linear layers:

>>> from graph_pes.models import NequIP
>>> # vanilla NequIP
>>> vanilla = NequIP(
...     elements=["C", "H", "O"],
...     cutoff=5.0,
...     features={"channels": 128, "l_max": 2, "use_odd_parity": True},
...     layers=3,
...     self_interaction="tensor_product",
...     prune_last_layer=False,
... )
>>> sum(p.numel() for p in vanilla.parameters())
2308720
>>> # SevenNet-flavoured NequIP
>>> smaller = NequIP(
...     elements=["C", "H", "O"],
...     cutoff=5.0,
...     features={"channels": 128, "l_max": 2, "use_odd_parity": True},
...     layers=3,
...     self_interaction="linear",
...     prune_last_layer=True,
... )
>>> sum(p.numel() for p in smaller.parameters())
965232
class graph_pes.models.ZEmbeddingNequIP(
cutoff=5.0,
direct_force_predictions=False,
Z_embed_dim=8,
features={'channels': 16, 'l_max': 2, 'use_odd_parity': True},
layers=3,
self_interaction='tensor_product',
prune_last_layer=True,
neighbour_aggregation='sum',
radial_features=8,
)[source]

A modified version of the NequIP architecture that embeds atomic numbers into a learnable embedding rather than using an atomic one-hot encoding.

This circumvents the need to know in advance all the elements that the model will encounter in any pre-training or fine-tuning datasets.

Relevant differences from NequIP:

  • The elements argument is removed.

  • The Z_embed_dim (int) argument controls the size of the atomic number embedding (default: 8).

For all other options, see NequIP.
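The difference between the two element-embedding strategies can be sketched as follows. This is a toy illustration only: one_hot and ZEmbedding are hypothetical stand-ins, not graph-pes internals, and a real embedding table is trained rather than randomly initialised and fixed.

```python
import random

def one_hot(symbol: str, elements: list[str]) -> list[float]:
    """One-hot encoding, fixed to the elements known at construction
    (the NequIP constraint): unseen symbols raise a ValueError."""
    index = elements.index(symbol)
    return [1.0 if i == index else 0.0 for i in range(len(elements))]

class ZEmbedding:
    """A table of vectors indexed by atomic number Z, so any element
    up to max_z is accepted without an explicit element list."""
    def __init__(self, dim: int = 8, max_z: int = 118):
        self.table = [
            [random.gauss(0.0, 1.0) for _ in range(dim)]
            for _ in range(max_z + 1)
        ]

    def __call__(self, z: int) -> list[float]:
        return self.table[z]

embed = ZEmbedding(dim=8)
carbon, gold = embed(6), embed(79)  # no element list required
```

This is why ZEmbeddingNequIP needs no elements argument: the embedding is a function of the atomic number itself, so elements absent from pre-training can still be embedded during fine-tuning.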

Utilities

class graph_pes.models.e3nn.nequip.SimpleIrrepSpec[source]

Bases: TypedDict

A simple specification of the node and edge feature irreps for NequIP.

Parameters:
  • channels (int | list[int]) – The number of channels for the node embedding. If an integer, all \(l\)-order irreps will have the same number of channels. If a list, the \(l\)-order irreps will have the number of channels specified by the \(l\)-th entry in the list.

  • l_max (int) – The maximum angular momentum for the edge embedding.

  • use_odd_parity (bool) – Whether to allow odd parity for the edge embedding.

Examples

>>> from graph_pes.models.e3nn.nequip import SimpleIrrepSpec
>>> SimpleIrrepSpec(channels=16, l_max=2, use_odd_parity=True)
{'channels': 16, 'l_max': 2, 'use_odd_parity': True}
class graph_pes.models.e3nn.nequip.CompleteIrrepSpec[source]

Bases: TypedDict

A complete specification of the node and edge feature irreps for NequIP.

Parameters:
  • node_irreps (str) – The node feature irreps.

  • edge_irreps (str) – The edge feature irreps.

Examples

>>> from graph_pes.models.e3nn.nequip import CompleteIrrepSpec
>>> CompleteIrrepSpec(
...     node_irreps="32x0e + 16x1o + 8x2e",
...     edge_irreps="0e + 1o + 2e"
... )
{'node_irreps': '32x0e + 16x1o + 8x2e', 'edge_irreps': '0e + 1o + 2e'}