Meta’s first dedicated AI app is here with Llama 4 — but it’s more consumer than productivity or business oriented

Facebook parent company Meta Platforms, Inc. has officially launched its own, free standalone Meta AI app, a move aimed at delivering a more personal and integrated AI experience across mobile devices, the web, and Ray-Ban Meta smart glasses.

The app is available on iOS through the Apple App Store and on the web — with no word yet on when an Android version will arrive.

Powered by a version of its new, divisive, quasi-open-source Llama 4 mixture-of-experts and reasoning model family, the new Meta AI app focuses on learning user preferences, maintaining conversation context, and providing seamless voice-first interaction. It requires a Meta account to log in, though users can sign in with their existing Facebook or Instagram profiles.

It comes ahead of the kickoff of LlamaCon 2025, Meta’s first AI developer conference, taking place this week at its headquarters campus in Menlo Park, California, and centered on its Llama model family and broader AI developer tools and advances.

With the rise of more AI model challengers in the open source and proprietary domains — including everyone from OpenAI with ChatGPT to Google with its Gemini 2.5 model family and lesser-known (at least, to Western audiences) brands like Alibaba’s new Qwen 3 — Meta is keen to show off the power and capabilities of its own, in-house Llama 4 models.

It is also seeking to make the case to third-party software developers that Llama 4 is a powerful and flexible open(ish) source model family they can trust to build their enterprise products atop. However, judging by this new Meta AI app launch, I’m not sure it is the most successful example. More on that below.

Text, image, and voice out-of-the-box — with document editing coming

The Meta AI app represents a new way for users to interact with Meta’s AI assistant beyond existing integrations with WhatsApp, Instagram, Facebook, and Messenger.

It enables users to have natural, back-and-forth voice conversations with AI, edit and generate images, and discover new use cases through a curated Discover feed featuring prompts and ideas shared by the community.

Alongside traditional text interaction, Meta AI now supports voice functionality while multitasking. An early full-duplex voice demo allows users to experience natural, flowing conversations where the AI generates speech directly, rather than simply reading text aloud.

However, the demo does not access real-time web information and may display occasional technical inconsistencies. Voice features, including the full-duplex demo, are currently available in the United States, Canada, Australia, and New Zealand.

On the web, meta.ai has been revamped to mirror the mobile experience, offering voice interaction, access to the Discover feed, and an improved image generation tool with enhanced style, mood, and lighting controls.

The web version seems especially powerful and capable for image creation, with many pre-set styles and aspect ratios to choose from. In my brief hands-on tests with the mobile app, the image creation tools seemed far more limited, and I wasn’t able to find a way to switch the aspect ratio. In both formats, the image quality was far lower than that of dedicated rival AI image generators such as Midjourney or OpenAI’s GPT-4o native image generation.

Meta is also testing a rich document editor and document analysis features in select countries.

Discover what other users are doing and creating with AI

A standout feature of the app is its “Discover” section, available by swiping up from the main chatbot interface, where users can browse and remix prompts, ideas, and creative outputs shared by others.

This feed highlights how people are using Meta AI to brainstorm, write, analyze social media content, create stylized images, and explore playful concepts — such as designing pixel-art scenes or seeking AI-generated companions.

Posts from creators include both text-based prompts and image results, giving others a starting point to experiment with the AI in new ways. It also coincides with tech journalist Alex Kantrowitz’s (Big Technology) recent observation in a LinkedIn post that AI is steadily replacing social media as a means of entertainment and content discovery for a growing number of users.

This peer-sharing dynamic aligns with Meta’s intent to make AI not only useful but culturally engaging, offering a social layer to what is traditionally a one-on-one assistant interaction.

Seeing the future

For users of the Ray-Ban Meta smart glasses, the Meta AI app replaces the former Meta View app.

Existing device pairings, settings, and media content will migrate automatically upon updating.

This integration lets users move from interacting with their glasses to the app, maintaining conversation history and access across devices and the web — although conversations cannot yet be initiated in the app and resumed on the glasses.

Memory and personalization

Personalization stands at the core of the new Meta AI experience.

Users can instruct Meta AI to remember certain interests and preferences, and the assistant also draws from user profiles and engagement history on Meta platforms to tailor responses.

This feature is currently available in the U.S. and Canada. Users who link their Facebook and Instagram accounts through the Meta Accounts Center can benefit from deeper personalization.

When I downloaded the app to try it, it automatically suggested and pre-filled my Instagram account login.

Quick hands-on test

My initial tests of the Meta AI app interface reveal both the impressive functionality of Llama 4 and its current limitations in everyday tasks.

On the one hand, the assistant is capable of generating helpful responses, offering analysis and advice, and generating images rapidly.

However, some interactions expose severe limitations that have been mostly solved in other AI apps and the large language models (LLMs) powering them behind the scenes.

In one case, Meta AI initially miscounted the number of ‘M’s in the word “Mommy,” correcting itself only after being prompted to review its answer.

A similar pattern occurred when counting the letter ‘R’ in “Strawberry,” where it first responded with 2 before correcting to 3 after further clarification.

Another response incorrectly evaluated which of 9.11 and 9.9 is larger, a basic decimal-comparison task.

These moments underscore the model’s limitations when it comes to attention to detail in short factual reasoning tasks — a known area where even advanced language models can falter.
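What makes these failures striking is that they are trivially checkable: a few lines of Python produce the ground truth the model initially missed (this snippet is purely illustrative — it is not how Meta AI processes the prompts internally):

```python
# Ground-truth checks for the prompts Meta AI initially answered incorrectly.

def count_letter(word: str, letter: str) -> int:
    """Case-insensitive count of how many times a letter appears in a word."""
    return word.lower().count(letter.lower())

print(count_letter("Mommy", "m"))       # 3 (the model first miscounted)
print(count_letter("Strawberry", "r"))  # 3 (the model first said 2)
print(max(9.11, 9.9))                   # 9.9 (the model first picked 9.11)
```

The gap exists because a language model predicts tokens from learned patterns rather than executing character-level or numeric operations, which is why rival assistants increasingly route such questions to a code or calculator tool.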

The assistant’s ability to acknowledge mistakes, explain its reasoning, and offer transparent corrections reflects progress toward more interactive and self-correcting AI experiences.

But overall, I can’t recommend it for workplace use right now.

Speaking about the broader strategy in a video and audio interview with AI-focused YouTuber and podcaster Dwarkesh Patel, Meta CEO Mark Zuckerberg emphasized that personalization and seamless, low-latency interaction are priorities for the company’s AI development.

“If you fast-forward a few years, I think we’re just going to be talking to AI throughout the day about different things we’re wondering about,” Zuckerberg said.

He highlighted the company’s focus on building systems that are quick, natively multimodal, and integrated deeply into daily life.

Zuckerberg also discussed Meta’s approach to open-source AI, noting that Llama 4 models are designed to balance efficiency, intelligence, and accessibility.

He reinforced that Meta’s commitment to open-source AI development aims to ensure broad innovation while maintaining American leadership in AI model standards, particularly in securing values and system integrity.

Everyday AI?

As Meta positions itself to compete in the increasingly crowded personal AI market, the launch of the Meta AI app marks a significant step toward making intelligent, personalized assistants part of everyday life for millions of users worldwide.

With active user feedback and a growing repository of shared use cases through the Discover feed, Meta is clearly investing in an ecosystem where AI evolves in tandem with community engagement and real-world demands.

While the Meta AI app is designed first and foremost for consumers, its launch carries broader implications for businesses across every sector.

Meta, with an audience of nearly 4 billion users globally across its apps and hardware products, has the scale to fundamentally shift public expectations around technology.

Even if only a small fraction of its users download and engage with the Meta AI app, it will introduce millions — possibly hundreds of millions — of non-technical consumers to regular, casual interaction with AI, and showcase the possibilities for conversational interaction in text and voice, rapid image generation, and problem solving.

This mainstream exposure will likely accelerate a shift in what people expect not just from consumer apps, but from workplaces, service providers, retailers, and every kind of merchant or vendor they interact with.

When individuals grow accustomed to personalized, conversational AI that can understand context, anticipate needs, and assist with creative or informational tasks, they will expect and demand similar functionality everywhere — from their own workplaces to the businesses and enterprises they buy from.

Businesses that do not offer accessible, responsive AI-driven experiences risk feeling outdated or unresponsive compared to what consumers increasingly take for granted.

In effect, Meta’s new app may not simply compete with other AI offerings; it could redefine the baseline for digital interaction standards across industries.

Enterprises, regardless of size or sector, will need to rethink how they incorporate AI into customer experiences, service channels, and even internal operations if they want to meet the new cultural expectations this widespread consumer familiarity with AI is about to establish.