Orb

Train this architecture on your own data with the graph-pes-train CLI, using e.g. the following config:

model:
   +Orb:
     channels: 32
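
Any of the constructor arguments documented below can be set the same way. For instance (the values here are purely illustrative, not recommended settings):

```yaml
model:
   +Orb:
     channels: 32
     layers: 3
     cutoff: 5.0
     conservative: true
     l_max: 2
```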

Definition

class graph_pes.models.Orb(
cutoff=5.0,
conservative=False,
channels=256,
layers=5,
radial_features=8,
mlp_layers=2,
mlp_hidden_dim=1024,
l_max=3,
edge_outer_product=True,
activation='silu',
norm_type='layer',
attention_gate='sigmoid',
distance_smoothing=True,
max_neighbours=None,
)[source]

Bases: GraphPESModel

The Orb-v3 architecture.

Citation:

@misc{Rhodes-25-04,
    title = {Orb-v3: Atomistic Simulation at Scale},
    author = {
        Rhodes, Benjamin and Vandenhaute, Sander
        and {\v S}imkus, Vaidotas and Gin, James and Godwin, Jonathan
        and Duignan, Tim and Neumann, Mark
    },
    year = {2025},
    publisher = {arXiv},
    doi = {10.48550/arXiv.2504.06231},
}
Parameters:
  • cutoff (float) – The cutoff radius for interatomic interactions.

  • conservative (bool) – If True, the model will generate force predictions as the negative gradient of the energy with respect to atomic positions. If False, the model will have a separate force prediction head.

  • channels (int) – The number of channels in the model.

  • layers (int) – The number of message passing layers.

  • radial_features (int) – The number of radial basis functions to use.

  • mlp_layers (int) – The number of layers in the MLPs.

  • mlp_hidden_dim (int) – The hidden dimension of the MLPs.

  • l_max (int) – The maximum degree of spherical harmonics to use.

  • edge_outer_product (bool) – If True, use the outer product of radial and angular features for edge embeddings. If False, concatenate radial and angular features.

  • activation (str) – The activation function to use in the MLPs.

  • norm_type (NormType) – The type of normalization to use in the MLPs. Either "layer" for torch.nn.LayerNorm or "rms" for torch.nn.RMSNorm.

  • attention_gate (AttentionGate) – The type of attention gating to use in message passing layers. Either "sigmoid" for element-wise sigmoid gating or "softmax" for normalising attention weights over neighbours.

  • distance_smoothing (bool) – If True, apply a polynomial envelope to attention weights based on interatomic distances. If False, do not apply any distance-based smoothing.

  • max_neighbours (int | None) – If set, limit the number of neighbours per atom to this value by keeping only the closest ones.
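To illustrate what conservative=True means, here is a standalone sketch (not graph-pes internals) of forces obtained as the negative gradient of a shared energy, using a toy harmonic bond energy and a numerical gradient:

```python
import numpy as np

# Toy illustration of "conservative" forces: F = -dE/dR.
# The harmonic pair energy below stands in for the model's predicted
# energy; graph-pes uses automatic differentiation rather than
# finite differences.

def energy(positions, k=1.0, r0=1.0):
    """Harmonic bond energy between consecutive atoms."""
    diffs = positions[1:] - positions[:-1]   # (N-1, 3) bond vectors
    r = np.linalg.norm(diffs, axis=1)        # bond lengths
    return 0.5 * k * np.sum((r - r0) ** 2)

def conservative_forces(positions, eps=1e-5):
    """Forces as the negative numerical gradient of the energy."""
    F = np.zeros_like(positions)
    for i in range(positions.shape[0]):
        for d in range(3):
            plus = positions.copy()
            minus = positions.copy()
            plus[i, d] += eps
            minus[i, d] -= eps
            F[i, d] = -(energy(plus) - energy(minus)) / (2 * eps)
    return F

positions = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
F = conservative_forces(positions)
# Because both forces come from one shared energy, they are equal
# and opposite: momentum is conserved by construction. A separate
# force head (conservative=False) offers no such guarantee.
```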

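The effect of edge_outer_product on the edge feature size can be sketched with plain NumPy (an illustration of the two combination schemes, not the graph-pes implementation; the feature counts are hypothetical):

```python
import numpy as np

# Radial features: one per radial basis function (radial_features = 8).
radial = np.random.rand(8)
# Angular features: e.g. (l_max + 1)^2 = 16 spherical-harmonic
# components for l_max = 3.
angular = np.random.rand(16)

# edge_outer_product=True: outer product -> 8 * 16 = 128 features.
outer = np.outer(radial, angular).reshape(-1)

# edge_outer_product=False: concatenation -> 8 + 16 = 24 features.
concat = np.concatenate([radial, angular])
```

The outer product grows multiplicatively with both feature counts, giving a richer but larger edge embedding than concatenation.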
Helpers

class graph_pes.models.orb.NormType

A type alias for Literal["layer", "rms"].

class graph_pes.models.orb.AttentionGate

A type alias for Literal["sigmoid", "softmax"].
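
These aliases behave like standard typing.Literal aliases; a self-contained sketch of how such aliases are defined and inspected (the actual definitions live in graph_pes.models.orb):

```python
from typing import Literal, get_args

# Equivalent definitions of the two aliases documented above.
NormType = Literal["layer", "rms"]
AttentionGate = Literal["sigmoid", "softmax"]

# get_args recovers the allowed string values at runtime.
print(get_args(NormType))        # ('layer', 'rms')
print(get_args(AttentionGate))   # ('sigmoid', 'softmax')
```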