MACE
MACE Models
- class graph_pes.models.MACE(
- elements,
- cutoff=5.0,
- n_radial=8,
- radial_expansion='Bessel',
- weights_mlp={'activation': 'SiLU', 'hidden_depth': 3, 'hidden_features': 64},
- channels=128,
- hidden_irreps='0e + 1o',
- l_max=3,
- layers=2,
- correlation=3,
- aggregation='constant_fixed',
- self_connection=True,
- readout_width=16,
)
The MACE architecture.
One-hot encodings of the atomic numbers are used to condition the TensorProduct update in the residual connection of the message passing layers, as well as the contractions in the message passing layers.

Following the notation used in ACEsuite/mace, the first layer in this model is a RealAgnosticInteractionBlock. Subsequent layers are RealAgnosticResidualInteractionBlocks.

Please cite the following if you use this model in your research:
@misc{Batatia2022MACE,
    title = {MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
    author = {Batatia, Ilyes and Kov{\'a}cs, D{\'a}vid P{\'e}ter and Simm, Gregor N. C. and Ortner, Christoph and Cs{\'a}nyi, G{\'a}bor},
    year = {2022},
    doi = {10.48550/arXiv.2206.07697},
}
- Parameters:
elements (list[str]) – list of elements that this MACE model will be able to handle.
cutoff (float) – radial cutoff (in Å) for the radial expansion (and message passing)
n_radial (int) – number of bases to expand the radial distances into
radial_expansion (type[DistanceExpansion] | str) – type of radial expansion to use. See DistanceExpansion for available options
weights_mlp (MLPConfig) – configuration for the MLPs that map the radial basis functions to the weights of the interactions' tensor products
channels (int) – the multiplicity of the node features corresponding to each irrep specified in hidden_irreps
hidden_irreps (str | list[str]) – string representations of the e3nn.o3.Irreps to use for representing the node features between each message passing layer
l_max (int) – the highest order to consider in both the spherical harmonics expansion of the neighbour vectors and the irreps of the node features used within each message passing layer
layers (int) – number of message passing layers
correlation (int) – maximum correlation (body-order) of the messages
aggregation (NeighbourAggregationMode) – the type of aggregation to use when creating total messages from neighbour messages \(m_{j \rightarrow i}\)
self_connection (bool) – whether to use self-connections in the message passing layers
readout_width (int) – the width of the MLP used to read out the per-atom energies after the final message passing layer
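To make the relationship between channels, hidden_irreps and l_max concrete, the snippet below is a non-authoritative sketch using e3nn directly (graph-pes's internal construction may differ): it builds the per-layer node-feature irreps, in which each hidden irrep appears with multiplicity channels, and the spherical harmonics irreps used to expand neighbour vectors.

>>> from e3nn import o3
>>> channels, hidden_irreps, l_max = 16, ["0e", "1o"], 3
>>> # each hidden irrep appears with multiplicity `channels`:
>>> o3.Irreps(" + ".join(f"{channels}x{ir}" for ir in hidden_irreps))
16x0e+16x1o
>>> # neighbour vectors are expanded in spherical harmonics up to l_max:
>>> o3.Irreps.spherical_harmonics(l_max)
1x0e+1x1o+1x2e+1x3o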
Examples
Basic usage:
>>> from graph_pes.models import MACE
>>> model = MACE(
...     elements=["H", "C", "N", "O"],
...     cutoff=5.0,
...     channels=16,
...     radial_expansion="Bessel",
... )
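Once constructed, the model can be evaluated on an atomic structure. The following is a minimal sketch, assuming the AtomicGraph.from_ase helper and the model's predict_energy method behave as in graph-pes's quickstart; check your installed version's API:

>>> from ase.build import molecule
>>> from graph_pes.atomic_graph import AtomicGraph
>>> # convert an ASE Atoms object into a graph using the model's cutoff
>>> graph = AtomicGraph.from_ase(molecule("CH3CH2OH"), cutoff=5.0)
>>> energy = model.predict_energy(graph)  # scalar tensor (untrained weights)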
Specification in a YAML file:
model:
  +MACE:
    elements: [H, C, N, O]
    cutoff: 5.0
    radial_expansion: Bessel
    # change from the default MLP config:
    weights_mlp:
      hidden_depth: 2
      hidden_features: 16
      activation: SiLU
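Both examples above use the default Bessel radial expansion. For reference, a common form of such a basis is sin(nπr/r_c)/r; the sketch below illustrates how n_radial and cutoff shape it, though graph-pes's DistanceExpansion implementations may normalise differently:

import torch

def bessel_expansion(r: torch.Tensor, n_radial: int = 8, cutoff: float = 5.0):
    """Expand distances r (shape [E]) into n_radial Bessel basis functions."""
    n = torch.arange(1, n_radial + 1, dtype=r.dtype)
    r = r.unsqueeze(-1)  # shape [E, 1]
    return torch.sin(n * torch.pi * r / cutoff) / r  # shape [E, n_radial]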
- class graph_pes.models.ZEmbeddingMACE(
- z_embed_dim=4,
- cutoff=5.0,
- n_radial=8,
- radial_expansion='Bessel',
- weights_mlp={'activation': 'SiLU', 'hidden_depth': 3, 'hidden_features': 64},
- channels=128,
- hidden_irreps='0e + 1o',
- l_max=3,
- layers=2,
- correlation=3,
- aggregation='constant_fixed',
- self_connection=True,
- readout_width=16,
)
A variant of MACE that uses a fixed-size (z_embed_dim) per-element embedding of the atomic numbers to condition the TensorProduct update in the residual connection of the message passing layers, as well as the contractions in the message passing layers.

Please cite the following if you use this model in your research:
@misc{Batatia2022MACE,
    title = {MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
    author = {Batatia, Ilyes and Kov{\'a}cs, D{\'a}vid P{\'e}ter and Simm, Gregor N. C. and Ortner, Christoph and Cs{\'a}nyi, G{\'a}bor},
    year = {2022},
    doi = {10.48550/arXiv.2206.07697},
}
All parameters are identical to MACE, except for the following:

- elements is not required or used here
- z_embed_dim controls the size of the per-element embedding
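For example, mirroring the MACE usage above and using only parameters from the signature:

>>> from graph_pes.models import ZEmbeddingMACE
>>> model = ZEmbeddingMACE(
...     z_embed_dim=4,
...     cutoff=5.0,
...     channels=16,
... )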
ScaleShiftMACE?
To replicate a ScaleShiftMACE model as defined in the reference MACE implementation, you could use the following config:
model:
  offset:
    +LearnableOffset: {}
  many-body:
    +MACE:
      elements: [H, C, N, O]
      ...
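This composes a learnable per-element energy offset with the many-body MACE term, and the two contributions are summed to give the total energy. A hypothetical Python equivalent, assuming graph-pes's AdditionModel accepts named components as keyword arguments (check the library's documentation for the exact constructor):

from graph_pes.models import MACE, LearnableOffset, AdditionModel

model = AdditionModel(
    offset=LearnableOffset(),  # learnable per-element energy offsets
    many_body=MACE(elements=["H", "C", "N", "O"]),  # the many-body term
)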