HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics

1ETH Zürich, 2Max Planck ETH Center for Learning Systems, 3Max Planck Institute for Intelligent Systems, Tübingen, Germany

By training just one model to mimic the behaviour of fabric, we enable it to generalize to a wide variety of garments, including unseen ones.

Abstract

We propose a method that leverages graph neural networks, multi-level message passing, and unsupervised training to enable real-time prediction of realistic clothing dynamics.

Whereas existing methods based on linear blend skinning must be trained for specific garments, our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing. Our method furthermore handles changes in topology (e.g., garments with buttons or zippers) and material properties at inference time.

As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes while preserving local detail. We empirically show that our method outperforms strong baselines quantitatively and that its results are perceived as more realistic than those of state-of-the-art methods.

Method

We extend the graph-based message-passing architecture of MeshGraphNets with a hierarchical component. Several levels of long-range edges allow the signal from each garment node to propagate farther, enabling the model to better capture long-range dependencies in large garments.
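To make the hierarchy concrete, below is a minimal PyTorch sketch (not the released implementation) of how fine-level message passing could be interleaved with message passing over coarser, long-range edge sets. The class and argument names (EdgeMessagePassing, coarse_edges_per_level, etc.) are illustrative assumptions.

import torch
import torch.nn as nn


class EdgeMessagePassing(nn.Module):
    # One round of message passing along a given set of edges.
    def __init__(self, dim):
        super().__init__()
        self.msg_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.upd_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, edges):
        src, dst = edges                                          # edges: [2, E] sender/receiver indices
        messages = self.msg_mlp(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, messages)   # sum-aggregate messages at receivers
        return x + self.upd_mlp(torch.cat([x, agg], dim=-1))     # residual node update


class HierarchicalMessagePassing(nn.Module):
    # Interleaves fine-level steps with long-range steps on coarser edge sets.
    def __init__(self, dim, num_levels, steps):
        super().__init__()
        self.fine = nn.ModuleList([EdgeMessagePassing(dim) for _ in range(steps)])
        self.coarse = nn.ModuleList([EdgeMessagePassing(dim) for _ in range(steps * num_levels)])
        self.num_levels = num_levels

    def forward(self, x, fine_edges, coarse_edges_per_level):
        for s, fine_layer in enumerate(self.fine):
            x = fine_layer(x, fine_edges)
            # Long-range edges let stiff stretching signals cross the garment
            # in a few steps instead of one hop per mesh edge.
            for level in range(self.num_levels):
                x = self.coarse[s * self.num_levels + level](x, coarse_edges_per_level[level])
        return x


# Usage with random data: 1000 garment nodes, 64-dim features.
# x = torch.randn(1000, 64)
# model = HierarchicalMessagePassing(dim=64, num_levels=2, steps=3)
# x = model(x, fine_edges, [coarse_edges_l1, coarse_edges_l2])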

Dynamic Topology

Its graph-based nature allows HOOD to model garments with changing topology by toggling specific edges in the garment mesh.
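As an illustration, the following sketch (an assumed helper, not the released API) shows how a boolean mask over "optional" edges, e.g. those realizing a closed zipper or fastened buttons, could be used to switch topology at inference time.

import torch


def active_edges(edges, optional_mask, enabled):
    # edges:         [2, E] sender/receiver indices of the garment graph
    # optional_mask: [E] boolean, True for edges that can be toggled
    # enabled:       whether the optional edges (zipper/buttons) are active
    if enabled:
        return edges                     # closed: message passing uses the full graph
    return edges[:, ~optional_mask]      # open: message passing skips the toggled edges


# Usage: keep the jacket zipped for the first 100 frames, then unzip it.
# edges_t = active_edges(edges, zipper_mask, enabled=(frame < 100))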

Local Material

By controlling local material parameters, we can model garments made of different fabrics.
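A hedged example of what per-node material conditioning could look like: local material parameters are appended to each node's input features, so different regions of the same garment can behave like different fabrics. The feature layout and values below are assumptions for illustration, not the authors' exact encoding.

import torch

num_nodes = 5000
# Per-node material vector, e.g. [stretch stiffness, bending stiffness, density].
material = torch.tensor([1.0e4, 1.0e-2, 0.2]).repeat(num_nodes, 1)
material[:500, 0] *= 10.0   # make the first 500 nodes (say, a stiff collar) stretch less

positions = torch.randn(num_nodes, 3)
velocities = torch.zeros(num_nodes, 3)

# Node input features fed to the GNN encoder: current state plus local material
# (log-scaling the material values is one common normalization choice).
node_features = torch.cat([positions, velocities, material.log()], dim=-1)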

Results

Comparisons

Against SNUG

Against SSCH

Against ARCSim

BibTeX


      @inproceedings{grigorev2022hood,
        author    = {Grigorev, Artur and Thomaszewski, Bernhard and Black, Michael J. and Hilliges, Otmar},
        title     = {{HOOD}: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics},
        booktitle = {Computer Vision and Pattern Recognition (CVPR)},
        year      = {2023}
      }