Introducing OWAMcap

A Universal Standard for Desktop Interaction Data

What is OWAMcap?

OWAMcap is a specification for using the open-source MCAP container file format with Open World Agents (OWA) message definitions. It defines how to structure multimodal desktop log data within standard MCAP files using OWA-specific message schemas.

Key Characteristics:

  • Standard MCAP file format with the `owa` profile designation
  • OWA's predefined message types for desktop interaction data (mouse, keyboard, screen, etc.)
  • Optimized storage strategies (e.g., external video files referenced from the MCAP file)

Understanding MCAP:

MCAP is a modular container file format for heterogeneous, timestamped data. Designed for robotics and autonomous systems, it is well suited to recording diverse data streams such as sensor inputs, logs, and system state. OWAMcap leverages this robust format by defining message schemas tailored to desktop interaction data, enabling precise recording and replay of user activities.
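As a rough mental model, the record-and-replay idea can be sketched in plain Python. The class, topic, and field names below are illustrative only; the real format stores these messages in an MCAP container using OWA schemas rather than an in-memory list:

```python
# A minimal stdlib sketch of OWAMcap's logical model: heterogeneous,
# timestamped messages on named topics, replayed in time order.
from dataclasses import dataclass, field
import json

@dataclass
class Message:
    topic: str        # e.g. "mouse", "keyboard", "screen", "window"
    log_time_ns: int  # nanosecond timestamp; the key to synchronization
    payload: dict     # schema-specific fields

@dataclass
class Log:
    messages: list = field(default_factory=list)

    def write(self, topic, log_time_ns, **payload):
        self.messages.append(Message(topic, log_time_ns, payload))

    def read(self, topics=None):
        # Iterate in time order, optionally filtered by topic,
        # mirroring how an MCAP reader replays a recording.
        for m in sorted(self.messages, key=lambda m: m.log_time_ns):
            if topics is None or m.topic in topics:
                yield m

log = Log()
log.write("mouse", 1_000, event_type="move", x=1597, y=1112)
log.write("keyboard", 2_000, event_type="release", vk=162)
log.write("screen", 1_500, path="example.mkv", pts=0)

for m in log.read(topics={"mouse", "screen"}):
    print(m.topic, json.dumps(m.payload))
```

The essential property this models is that streams of different types share one timeline, so any subset can be replayed in order.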

The Challenge: Data Fragmentation

The primary obstacle to advancing desktop automation with foundation models is data fragmentation. Research groups often collect data in proprietary formats with varying internal structures, making dataset combination nearly impossible and mirroring costly inefficiencies seen in the robotics community.

The Open-X Embodiment Herculean Effort

The Open-X Embodiment project highlights this issue. Researchers had to:

  • Manually convert 22 different datasets
  • Spend months writing custom parsers
  • Standardize action spaces, observations, and metadata across diverse configurations
  • Validate data integrity across varied sources
  • Maintain numerous complex conversion scripts

This massive undertaking underscores the critical need for a unified data standard in desktop automation.

Conversion Complexity Illustrated

Example: Diverse Dataset Configurations


# Excerpt from oxe_dataset_configs.py
# Source: https://github.com/octo-models/octo/blob/main/octo/data/oxe/oxe_dataset_configs.py
# ProprioEncoding and ActionEncoding are enums defined in octo's data utilities.
OXE_DATASET_CONFIGS = {
  "fractal20220817_data": {
    "image_obs_keys": {"primary": "image", "secondary": None, "wrist": None},
    "depth_obs_keys": {"primary": None, "secondary": None, "wrist": None},
    "proprio_encoding": ProprioEncoding.POS_QUAT,
    "action_encoding": ActionEncoding.EEF_POS,
  },
  "bridge_dataset": {
    "image_obs_keys": {"primary": "image_0", "secondary": "image_1", "wrist": None},
    "depth_obs_keys": {"primary": None, "secondary": None, "wrist": None},
    "proprio_encoding": ProprioEncoding.POS_EULER,
    "action_encoding": ActionEncoding.EEF_POS,
  },
  "taco_play": {
    "image_obs_keys": {
        "primary": "rgb_static",
        "secondary": None,
        "wrist": "rgb_gripper",
    },
    "depth_obs_keys": {
        "primary": "depth_static",
        "secondary": None,
        "wrist": "depth_gripper",
    },
    "proprio_encoding": ProprioEncoding.POS_EULER,
    "action_encoding": ActionEncoding.EEF_POS,
  },
  # ... and many more configurations ...
}

Without a shared standard, each dataset requires its own parsing and mapping code for observation keys, proprioception, and action encodings, dramatically increasing conversion effort.
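To make the burden concrete, here is a hedged sketch of the mapping layer each dataset forces you to write. The config shape follows the excerpt above, but `normalize_example` and the sample data are invented for illustration, not taken from the octo codebase:

```python
# Illustration of why per-dataset configs are costly: every dataset needs
# its own key mapping before examples look alike.
def normalize_example(example: dict, config: dict) -> dict:
    """Map dataset-specific observation keys to a shared layout."""
    images = {
        view: example[key]
        for view, key in config["image_obs_keys"].items()
        if key is not None  # drop camera views this dataset lacks
    }
    return {"images": images}

taco_config = {
    "image_obs_keys": {"primary": "rgb_static", "secondary": None, "wrist": "rgb_gripper"},
}
raw = {"rgb_static": "<static frame>", "rgb_gripper": "<gripper frame>"}
print(normalize_example(raw, taco_config))
```

With a shared standard, this per-dataset mapping layer disappears entirely.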

The Solution: OWAMcap as the Standard

OWAMcap establishes a unified foundation, enabling seamless data integration and accelerating foundation model development for desktop automation.

Before OWAMcap: Fragmented Silos

Dataset A (Proprietary Format & Config)
Dataset B (Proprietary Format & Config)
Dataset C (Proprietary Format & Config)
⬇ Costly, Complex Conversions ⬇
Limited, Inconsistent Data for Models

After OWAMcap: Unified Ecosystem

Dataset A (OWAMcap)
Dataset B (OWAMcap)
Dataset C (OWAMcap)
⬇ Direct Combination ⬇
OWAMcap Unified Format
⬇ Efficient Access ⬇
Large-Scale Foundation Models

Key Benefits of Standardization:

  • 🔗 Seamless Data Integration: Directly combine datasets from different sources without costly custom conversions.
  • 🚀 Foundation Model Enablement: Provide aggregated, diverse data in a unified format for efficient model training.
  • 👥 Breaking Down Data Silos: Foster collaboration by enabling easy sharing and combination of desktop interaction data.

By establishing OWAMcap, resources shift from data wrangling to actual research and model development.

Technical Innovation: Hybrid Storage

OWAMcap's innovative hybrid storage strategy optimizes for both efficiency and usability by separating video data from metadata.

Hybrid Storage Architecture

Desktop Events (Mouse, Keyboard, Window)
Screen Capture (Video Stream)
🡇
OWAMcap Recorder / Writer
🡇

MCAP File (`.mcap`): Metadata, Timestamps, Frame References
External Video File (`.mkv`): Efficiently Encoded Video Data

🡅
OWAMcap Reader (Lazy Loading from both)

This approach stores bulky video data in optimized external files (e.g., `.mkv`), while lightweight metadata, timestamps, and frame references reside in the `.mcap` file. This results in minimal `.mcap` file sizes and frame-accurate synchronization.

Benefits of Hybrid Storage:

  • Storage Efficiency: Significantly smaller mcap files.
  • Library Compatibility: Leverages existing, highly optimized video codecs and tools.
  • Lazy Loading: Enables on-demand loading of specific video frames, crucial for large datasets.
  • Tool Integration: Seamless use with standard video processing libraries.
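The lazy-loading idea can be sketched as follows. `FrameRef`, `LazyFrame`, and the decoder callback are illustrative stand-ins, not the actual OWAMcap reader API; a real implementation would decode via a video library such as PyAV:

```python
# Sketch of the hybrid-storage idea: the metadata file stores only frame
# references; pixels stay in an external video and are decoded on demand.
from dataclasses import dataclass

@dataclass
class FrameRef:
    path: str     # external video file, e.g. "example.mkv"
    pts: int      # presentation timestamp within that file
    utc_ns: int   # wall-clock time for cross-stream sync

class LazyFrame:
    def __init__(self, ref: FrameRef, decoder):
        self.ref = ref
        self._decoder = decoder
        self._pixels = None

    @property
    def pixels(self):
        # Decode only on first access; references stay cheap until then.
        if self._pixels is None:
            self._pixels = self._decoder(self.ref.path, self.ref.pts)
        return self._pixels

decoded = []
def fake_decoder(path, pts):
    decoded.append((path, pts))
    return f"frame@{pts}"

frames = [LazyFrame(FrameRef("example.mkv", pts, 0), fake_decoder) for pts in (0, 33, 66)]
print(len(decoded))      # 0: nothing decoded yet
print(frames[1].pixels)  # decodes exactly one frame
```

Because references are tiny, a reader can scan an entire recording's metadata and decode only the frames a training batch actually needs.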

Storage Efficiency Comparison

Conceptual comparison of storage for raw data vs. OWAMcap's metadata-only mcap file.

OWAMcap in Action

An example dataset (`example.mcap` and `example.mkv`) demonstrates the structure and efficiency of OWAMcap.

File Overview: example.mcap

library:     mcap-owa-support 0.1.0; mcap 1.2.2
profile:     owa
messages:    518
duration:    6.86s
compression: zstd (80.44% reduction)
channels:
  (1) window           7 msgs (1.02 Hz): WindowInfo
  (2) keyboard/state   7 msgs (1.02 Hz): KeyboardState
  (3) mouse/state      7 msgs (1.02 Hz): MouseState
  (4) mouse          115 msgs (16.77 Hz): MouseEvent
  (5) screen         362 msgs (52.80 Hz): ScreenEmitted
  (6) keyboard        20 msgs (2.92 Hz): KeyboardEvent

Key Insight:

Only 21 KiB for 6.86 seconds of rich, multimodal interaction data, thanks to external video storage and efficient compression (80.44% reduction).
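A quick back-of-envelope check on these figures, using only the numbers from the overview above:

```python
# Sanity-check the reported metadata footprint: 21 KiB compressed,
# 518 messages over 6.86 s, 80.44% zstd reduction.
compressed_bytes = 21 * 1024
messages = 518
duration_s = 6.86
reduction = 0.8044

print(round(compressed_bytes / messages, 1))                # ~41.5 bytes per message
print(round(messages / duration_s, 1))                      # ~75.5 messages per second
print(round(compressed_bytes / (1 - reduction) / 1024, 1))  # ~107.4 KiB uncompressed
```

Roughly 40 bytes of compressed metadata per message is what makes it practical to keep pixels out of the `.mcap` file entirely.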

Message Distribution (518 Total)

Distribution of message types within the example.mcap file.

Example Messages:

Topic: window, Message: {'title': 'ZType – Typing Game - Chromium', 'rect': [389, 10, 955, 1022]}
Topic: mouse, Message: {'event_type': 'move', 'x': 1597, 'y': 1112}
Topic: screen, Message: {'path': 'example.mkv', 'pts': 14866666666, 'utc_ns': 1741628814056571100}
Topic: keyboard, Message: {'event_type': 'release', 'vk': 162}

This structured, timestamped data enables precise reconstruction of user interactions synchronized with screen captures. Crucially, it also allows direct combination with datasets from other sources that follow the OWAMcap standard.
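One way such synchronization might be implemented is a nearest-timestamp lookup against the screen channel. The timestamps below are toy values rather than the `utc_ns` figures from the example messages:

```python
# Sketch of cross-stream synchronization: find the screen frame nearest
# in time to an input event. bisect keeps each lookup O(log n) over a
# sorted list of frame timestamps.
from bisect import bisect_left

screen_ts = [100, 200, 300, 400]  # sorted frame timestamps (toy values)

def nearest_frame(event_ts: int) -> int:
    i = bisect_left(screen_ts, event_ts)
    # Only the neighbors around the insertion point can be nearest.
    candidates = screen_ts[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - event_ts))

print(nearest_frame(260))  # → 300, since |260-300| < |260-200|
```

The same lookup works for aligning mouse, keyboard, and window events to frames, because every channel shares the one recording timeline.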

Why OWAMcap Matters

The design choices behind OWAMcap are deliberate, aiming to create a practical and powerful standard for the desktop automation community.

Why MCAP?

MCAP is self-contained, supports heterogeneous timestamped data, and is optimized for random access, which is critical for training large vision-language-action (VLA) models. Unlike ROS bag files, it carries no heavy framework dependencies.

Why External Video?

  • Highly optimized video codecs (H.264, etc.).
  • Maintains compatibility with existing video tools.
  • Enables selective frame loading for large datasets.
  • Prevents metadata files from becoming unwieldy.

Why Standardization?

Without a standard like OWAMcap, the desktop automation field risks repeating robotics' costly mistakes: fragmented datasets, wasted conversion efforts, and limited foundation model potential. Standardization enables collaborative progress.

The Bottom Line

OWAMcap transforms desktop interaction data from isolated, proprietary collections into a unified, accessible resource. It's not just a file format—it's the foundational infrastructure for collaborative progress and building the next generation of foundation models in desktop automation.