Anthropic
Anthropic Unicorn News - September 25, 2024
Meta has launched Llama 3.2, a multimodal AI model capable of processing both visual and textual data. Introduced at the Meta Connect event, the new model aims to rival major players like OpenAI and Anthropic with improved image recognition and integrated vision-language capabilities.
Introduction

Meta Advances AI with Launch of Llama 3.2, Aiming to Rival OpenAI and Anthropic

Meta has introduced Llama 3.2 at the Meta Connect event, a significant advancement in its large language models. CEO Mark Zuckerberg presented the model, which can understand both images and text. Llama 3.2 is available in 11B and 90B parameter versions and is Meta's first open-source multimodal model. It is designed to handle complex visual tasks such as understanding charts, captioning images, and identifying objects from descriptions, integrating visual data processing with textual understanding. The model competes with Anthropic's Claude 3 Haiku and OpenAI's GPT-4o mini, particularly on image recognition tasks. Meta has also released lighter-weight 1B and 3B text-only versions for mobile and edge devices, expanding its potential applications. Users can access Llama 3.2 through llama.com, Hugging Face, and Meta's partner platforms, opening the door to a wide range of uses across sectors.
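For readers who want to try the model, the snippet below is a minimal sketch of loading the 11B vision-instruct checkpoint from Hugging Face with the transformers library and asking it to describe an image. The model ID, the Mllama class name, and the example image URL come from the public Hugging Face integration rather than from this article, and the checkpoint is gated, so downloading it requires accepting Meta's license on the model page first.

    # Minimal sketch: Llama 3.2 11B Vision via Hugging Face transformers.
    # Assumes transformers >= 4.45 (which added Mllama support) and an
    # authenticated Hugging Face account with access to the gated model.
    import requests
    import torch
    from PIL import Image
    from transformers import AutoProcessor, MllamaForConditionalGeneration

    model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

    model = MllamaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    processor = AutoProcessor.from_pretrained(model_id)

    # Any reachable image URL works; this Wikimedia URL is a placeholder.
    url = "https://upload.wikimedia.org/wikipedia/commons/0/0b/Cat_poster_1.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    # The chat template interleaves an image slot with a text prompt,
    # mirroring the captioning and chart-understanding tasks described above.
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(image, prompt, add_special_tokens=False,
                       return_tensors="pt").to(model.device)

    output = model.generate(**inputs, max_new_tokens=60)
    print(processor.decode(output[0], skip_special_tokens=True))

The same prompt format works with the 90B checkpoint by swapping the model ID, at a correspondingly larger memory cost; the lighter 1B and 3B variants are text-only and use the standard text-generation pipeline instead.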