OpenAI might change app design forever
When Apple launched the App Store in 2008, it proved enormously influential to the future of the internet. The all-powerful world wide web of the Web 2.0 era was sliced and diced into bite-sized apps. What followed was not just software that fit in your pocket. From TikTok to Uber, these camera-wielding, GPS-integrated, cloud-connected platforms changed the way we lived.
Now, in the wake of AI, the app store is arising anew. But instead of being built as tappable icons inside a mobile OS, apps are plugging directly into the conversations of LLMs like Microsoft Copilot and Anthropic’s Claude.
Today, OpenAI, the largest AI platform with 800 million weekly users, is opening ChatGPT for any developer to integrate their app right into the flow of conversation (pending review and approval). Following a pilot earlier this year, developers can now plug in their own apps to be suggested contextually during any chat, or summoned by a user @’ing the app’s name.
Early partners like Adobe will let you edit images right in the flow of conversation (complete with sliders to tweak them), while Target will pull up any manner of product to buy. No matter your feelings on AI, the tools seem destined to change the way we think about apps and even multitasking, by shifting us from software based upon nouns (Canva, Figma) to verbs (“build a slide deck”).
“It’s not going to feel like you’re entering through a front door anymore. You’re kind of meeting these [users] at a very specific moment in time,” says Bryant Jow, a designer at OpenAI overseeing app integration. “I really think one of the most important things is that it should not feel like there’s a learning curve or that you have to re-anchor yourself. It should just kind of feel like immediately and instantly intuitive.”
Indeed, the promise from all the partner companies I spoke to is to fulfill what LLMs generally only tease. We brainstorm all sorts of ideas inside AI chats, but when it’s time to bring them to life, we can hit a wall. This is where integrated apps can show up, offering their more finely tuned services. But the devil is in the details. And very few of the details have been fully worked out.
“If you remember the first apps that people made on the App Store, like the beer drinking app [iBeer], they were like, whatever, right? A lot of people took a moment to figure out how do we behave in this ecosystem? What do we build? How do we provide utility? And how do we optimize for that?” says Gui Seiz, who leads product design on the AI team at Figma. “I think we’re still at that stage.”
What ChatGPT apps can actually do, and how they do it
To be entirely frank, the AI model providers are creating something of an ouroboros with connected apps. You talk to ChatGPT. It recommends you connect with an app. That app, however, is likely powered by AI models that could be from OpenAI. And so its part-OpenAI-powered agent, filled with specialized knowledge, then shows back up on OpenAI’s platform ChatGPT. It’s our agents-talking-to-agents future, happening now.
However, the secret sauce behind these connections isn’t merely the typical pile of APIs that have connected apps for years. It’s a rapidly growing new standard called MCP (Model Context Protocol). Originally developed by Anthropic in 2024, it’s now open source under the Linux Foundation.
When a company runs an MCP server, it’s essentially opening a door to make everything it wants grokkable by AI (sharing data, tools, and memory) in one consolidated, automated process. While model companies originally brute-forced their way across the internet, smashing and grabbing the data sets needed to build their systems, MCP is the equivalent of a butler asking them to wipe their feet and welcoming the AI in.
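For the technically curious, the mechanics are simpler than they sound. The sketch below is my own illustration, not code from OpenAI or any of the partners: it uses the open-source Python MCP SDK, and the search_products tool and its fields are hypothetical stand-ins for whatever a retailer would actually expose. But it shows roughly what “running an MCP server” means: define a tool, describe it, and let any MCP-speaking assistant discover and call it.

```python
# A minimal MCP server sketch using the open-source Python SDK ("mcp" package).
# The search_products tool and its fields are hypothetical stand-ins for
# whatever a retailer would actually expose.
from mcp.server.fastmcp import FastMCP

server = FastMCP("example-store")

@server.tool()
def search_products(query: str, max_results: int = 5) -> list[dict]:
    """Return products matching a shopper's query."""
    # A real server would query the retailer's catalog here.
    return [
        {"name": f"Placeholder result {i} for {query!r}", "price_usd": 19.99}
        for i in range(1, max_results + 1)
    ]

if __name__ == "__main__":
    # Any assistant that speaks MCP (ChatGPT, Claude, Copilot) can now discover
    # this tool and call it, no bespoke integration required.
    server.run()
```

The point isn’t the particulars; it’s that the same server can field requests from any AI client that supports the protocol, which is why partners can stand these integrations up so quickly.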
For Target, MCP meant that its initial launch on ChatGPT happened fast: a mere four weeks from when discussions with OpenAI kicked off to when Target was selling on the platform.
But what’s it like to shop Target on an LLM? At the moment, you can type @target and ask to shop for, in my case, “lego deals for xmas.” It generates a thumbnail grid of options, all with prices. Tap one, and you’re ushered to a new page with more info, just like you’re on its website. There, you can add it to your cart. Target, like all of the partners I spoke to, promised more features will arrive fast, on the scale of weeks rather than months.
Canva and Figma both offer tools to create slide decks, turning a brainstorm, or pretty much anything you want to paste into ChatGPT, into a presentation. Both services dip into their own templates to build visual assets previewed as thumbnails. From there, you can tap into any preview to see the whole slideshow. The catch is that, in either case, you can’t really edit these slides further through conversation; the integration kicks you back to stock ChatGPT following the query. Instead, the preview, like Target’s, refers you back to the respective app.
It’s why the most ambitious integration seems to be Adobe’s, which brings in tools from Adobe Express, Photoshop, and Acrobat. Adobe actually built its own lightweight front-end experience into ChatGPT, so if you ask it to brighten a photo, a few sliders appear on screen that control only exposure and black-and-white levels. That way you can get the image exactly as bright as you like, rather than telling the AI, “a little brighter, wait, no, a little darker.”
This UI is intentionally granular, built to surface only what you need for a task and nothing more.
“That’s what makes this incredibly exciting,” argues Govind Balakrishnan, SVP and GM of Adobe Express. “You’re no longer dealing with the entirety of the Photoshop interface. You’re just dealing with those sliders that give you what you’re trying to do.”
Discovering new apps will be the new SEO
App discoverability could still use work, though. And this represents both a short-term and a long-term challenge for the company.
In the short term, conversational discovery just stinks.
To be honest, summoning these apps can be frustrating and buggy. OpenAI needs to do some cleanup work on its front end, too, adding the creature comforts we expect. For instance, when you @ any available app, it autofills that app like an Instagram handle, but only after you’ve already paired the app successfully once. In the case of Adobe, this gets extra tricky, as you summon specific functions via separate apps like @Photoshop and @AdobeAcrobat (and don’t ask to build a PDF in Photoshop). That’s unnecessarily messy and should be sorted by the LLM, not the user.
Meanwhile, you shouldn’t have to call out apps all the time; they’re supposed to be suggested casually by the LLM in what the company calls “indirect invocation.” I’m not seeing much, if any, of that working yet.
When I was too casual, saying “I’d like to shop at Target” instead of “@target find me X,” ChatGPT listed nearby Target stores and then offered me shopping advice. When I said I was “hoping to work with the Target app on ChatGPT right now,” it explained that I could do that, along with everything I could do in Target. But it was always up to me to invoke the aforementioned secret code (@Target, in this case) to make my query.
It’s an easy enough affordance to learn, no different from tagging a handle on X or Threads, but the whole point of a friendly conversational interface is that it isn’t a speakeasy. I was continuously surprised by the lack of contextual understanding (and OpenAI says apps are not currently live for all users).
But this feels rapidly fixable.
The greater existential question for OpenAI is how and why it would recommend one app over another app that offers similar features with similar quality.
Make no mistake, each company wants to be the app that’s summoned on command. I myself wondered why some companies would even bother to plug into ChatGPT. As soon as they hand over their capabilities to a generalized AI, aren’t they diluting their own value? Target makes money with every sale, sure, and Canva still carefully offers its free items for free and its paid items behind a subscription. But Adobe, for instance, is offering all of its ChatGPT tools for free rather than upselling you to a subscription.
“At some level, we believe that the more users we have…leveraging the breadth and strength of our applications, the better off we will be over time,” says Balakrishnan. “Monetization will sort of work its way out.”
For now, it helps that all of these media-generation services link you back to their respective apps, with full interfaces, to finish work you may only start in ChatGPT. Indeed, Canva shared early data from running its own MCP servers to field Claude, Copilot, and ChatGPT requests since July. It has served 2.6 million users who’ve created more than 11 million designs, and the integration has been working as a tool to attract attention: Canva notes that referral traffic from LLMs is rising at a faster rate than from any other source.
But bigger picture, everyone seems to agree that baking apps into LLMs should be about more than just porting an app to a chat interface. It should unlock new workflows, functions, and UIs we haven’t imagined yet.
“There’s some stuff that, for whatever reason, the modality that Figma offers isn’t ideal to do that specific thing,” says Seiz. “I wonder what kind of new use cases or new things people are going to be trying to do.”
Finding AI’s next big modality
For Target, which launched just in time for Black Friday, one of the biggest surprises was a new shopping behavior: people uploaded handwritten lists instead of typing things in. That was interesting, and Target doesn’t know what’s possible from that yet, but it’s one of many data points that could inform its future thinking.
“We wanted to be early and have a role in how that path evolves,” says Purvi Shah, VP of UX Design, Research and Accessibility at Target.
The greater concern for the companies I talked to was not whether they would be commoditized by plugging into a vast AI platform, but how they would be discovered in all that noise. It’s no secret that Adobe, Canva, and Figma are each competitors, much like Target and Walmart (which was also early to integrate shopping with ChatGPT). Suggesting any of them contextually, in conversation, means that OpenAI needs to decide which competing service is right for any given moment. Naturally, they all want to own that moment.
When I ask OpenAI how it will manage this issue, Jow admits it’s “definitely one of the hardest challenges” facing the team. When I ask if we’ll see paid placement, like the search ads that have driven Google’s business for years, he says, “We’ll see.”
In the meantime, app developers shared their own nervousness about how this will develop, and agree we are likely to see an era of AI platform optimization, much like sites classically optimized themselves to be discovered by Google, in order to rise to the top of ChatGPT and other LLMs. For now, all developers can do is serve “quality and relevant” responses to any prompt, according to Seiz, so that OpenAI is incentivized to keep recommending their service.
“It’s certainly inevitable that there will be multiple adjacent experiences that offer a really great tool for that use case,” says Jow. “And I do think that what we want to really ensure is that those options are displayed to the user in a very transparent way, so the user can decide which tool is best suited for them.”
