Meta announced on Thursday the launch of Meta Motivo, an artificial intelligence (AI) model designed to enable human-like movements in digital agents.
The model is expected to improve the realism of avatars and non-playable characters (NPCs) in the Metaverse, aiming to deliver more immersive and lifelike experiences.
The development is part of Meta’s broader push into AI and Metaverse technologies. The company projects record-high capital expenditure of $37 billion to $40 billion for 2024. Meta has also adopted an open approach, releasing many of its AI models for developers to use free of charge, betting that openness will spur innovation and improve its own tools.
“We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, the democratization of character animation, and new types of immersive experiences,” Meta said.
Meta Motivo addresses a persistent challenge in digital avatars: controlling a virtual body so that its movements look realistic and human-like. By tackling these body-control problems, the model aims to deepen user engagement in virtual environments.
In addition to Meta Motivo, the company unveiled the Large Concept Model (LCM), a new architecture that reimagines language modeling.
Unlike traditional large language models (LLMs) that predict the next word or token, the LCM predicts high-level ideas or concepts represented as full sentences in a multilingual and multimodal embedding space.
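To make the contrast concrete, here is a minimal sketch of concept-level prediction: instead of producing a probability distribution over a token vocabulary, the model regresses the embedding of the next sentence. The architecture, dimensions, and names below (e.g., ConceptPredictor) are illustrative assumptions, not Meta’s published LCM design.

```python
import torch
import torch.nn as nn

EMB_DIM = 1024  # assumed size of the sentence ("concept") embedding space

class ConceptPredictor(nn.Module):
    """Autoregressive model over sentence embeddings instead of tokens."""
    def __init__(self, dim=EMB_DIM, heads=8, layers=4):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, dim)  # regress the next concept embedding

    def forward(self, concepts):  # concepts: (batch, n_sentences, dim)
        causal = nn.Transformer.generate_square_subsequent_mask(concepts.size(1))
        hidden = self.backbone(concepts, mask=causal)
        return self.head(hidden[:, -1])  # embedding of the predicted next sentence

# Each sentence of a document is first encoded into one vector (random
# stand-ins here); the model then predicts the vector of the sentence
# that should follow, which a decoder would turn back into text.
doc = torch.randn(1, 5, EMB_DIM)        # 5 sentence embeddings
next_concept = ConceptPredictor()(doc)  # shape: (1, EMB_DIM)
```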
Meta also introduced Video Seal, an AI tool that embeds invisible watermarks into videos to ensure traceability without affecting the viewing experience.
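For intuition only, a long-standing technique for imperceptible watermarking is spread-spectrum embedding: a faint pseudo-random pattern derived from a secret key is added to each frame, and its presence is later verified by correlation. The toy sketch below illustrates that general idea; it is an assumption for illustration, not Video Seal’s actual algorithm.

```python
import numpy as np

def embed_mark(frame: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint key-derived noise pattern to one video frame."""
    pattern = np.random.default_rng(key).standard_normal(frame.shape)
    return np.clip(frame + strength * pattern, 0, 255)

def detect_mark(frame: np.ndarray, key: int) -> float:
    """Correlate the frame with the key's pattern; high only if marked."""
    pattern = np.random.default_rng(key).standard_normal(frame.shape)
    return float(np.mean((frame - frame.mean()) * pattern))

frame = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed_mark(frame, key=42)
print(detect_mark(marked, key=42))  # ~2.0: watermark detected
print(detect_mark(frame, key=42))   # ~0.0: no watermark
```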
These releases reflect Meta’s commitment to advancing AI and Metaverse capabilities while fostering open collaboration across the tech ecosystem.