The tl;dr on what happened in AI last week


Greetings from San Francisco! Has everyone been keeping up with all the announcements? Last week was pretty wild. 🤠

In just one week, we’ve seen an incredible number of announcements.

  • GitHub → Copilot X

  • Nvidia → Generative AI cloud services

  • Microsoft → Bing Image Creator

  • Adobe → Firefly

  • Google opened up access to → Bard

  • Canva → a new set of AI tools

  • Unity → Unity AI Beta

and perhaps the biggest announcement:

  • OpenAI → Plugins

Today’s edition follows a special format. I’m diving into each of these announcements, providing a quick summary of what was announced and a few thoughts on how it could impact the industry, and wrapping up with some personal thoughts on what all these announcements could mean for builders and investors.

The ultimate summary of what happened last week 🫣


GitHub announced GitHub Copilot X 🚄

The facts

  • GitHub announced Copilot X, an AI assistant for the entire development lifecycle

  • Copilot X comes with a chat window that allows for context-aware conversations

    • You can use it to explain existing code, help fix bugs, and even learn new concepts

  • They also announced

    • Tailored Docs, personalized documentation to support learning

    • Voice-activated instructions: use your voice to instruct the copilot what to do

    • Auto-generated PR descriptions

    • Copilot CLI: a Copilot assistant for writing terminal commands

  • Access:

What excites me?

The chat window is probably the most exciting feature: GitHub is turning Copilot into a true coding companion, giving you all the assistance you need right in your IDE. Productivity on steroids.

The voice control is also very exciting. Providing natural language instructions via voice to a machine that can write code for you is pretty SciFi. You can start writing code from your walks in the park or from behind the wheel. And you know that moment on a date when you have one minute on your own? We can turn that into a productive minute. 🤓

Nvidia announced generative AI cloud services ☁️

The facts

  • Nvidia announced a set of cloud services designed to help businesses build and run generative AI models trained on custom data and created for “domain-specific tasks,” such as writing ad copy

  • The services include pre-trained models, frameworks for data processing, APIs and support from Nvidia engineering staff

    • NeMo is their service for language models, Picasso is the service for image, video, and 3D content generation, and BioNeMo is a service offering AI model training and inference to accelerate research and drug discovery

  • “Nvidia AI Foundations let enterprises customize foundation models with their own data to generate humanity’s most valuable resources — intelligence and creativity,” said Nvidia founder and CEO Jensen Huang

  • Access:

    • Apply for early access to NeMo here and to Picasso here

What excites me?

LLMs as a service are a compelling proposition for enterprise clients. This will increase the adoption of generative models inside the enterprise, as companies aim to use their own proprietary data and insights to create unique advantages. It’s a mirror of the cloud/on-premise dynamic 🙂. I'm also very excited about BioNeMo. Although not as popular as other use cases, drug discovery powered by generative AI is bound to radically transform the pharma and healthcare industries.

Adobe launched Firefly 🦋

The facts

  • Adobe launched its own AI image generator, called Adobe Firefly

    • Firefly currently has two tools: an image generator similar to DALL-E or Midjourney, and a tool that generates stylized text, similar to an AI-powered WordArt (fashion is cyclical, right? :))

  • In the future, Adobe plans to add vector variations and generated sketches in Illustrator, AI-generated outpainting in Photoshop and AI-powered video edits in Premiere

  • According to Adobe, everything fed to its models is either out of copyright, licensed for training, or sourced from the Adobe Stock library. This is a different approach from what we've seen with models like DALL-E, Midjourney, or Stable Diffusion.

  • Adobe plans to allow artists to train the system on their own work so that it can assist them in generating content in their personal style (it's like Michelangelo getting AI apprentices)

  • Access:

    • Join the beta here 

What excites me?

Not the WordArt, for sure. I can’t say anything is hugely impressive so far, but it’s interesting to see Adobe offer built-in options for things like art styles, lighting, and aspect ratio. This could be an improvement over controlling everything via long prompts. I look forward to seeing this turn into more of a “creative co-pilot,” though.

Unity launched Unity AI 🎮

The facts

  • Unity announced they are building an AI ecosystem that will put AI-powered game-development tools in the hands of creators, enabling them to create and deliver real-time 3D content and experiences

  • The promo video was released at GDC (Game Developers Conference) last week

  • Access:

    • Sign up for the Unity AI beta here 

What excites me?

The potential is huge here, but there’s skepticism around Unity. Their announcement is all tell, no show, which could be an indication that the project is still in its infancy. Unity also doesn’t have the best reputation for delivering on its promises, and some people are speculating that a third party might do a better job than Unity at creating these tools. We shall see :) The technology is getting there, so I can’t wait to see those first generated game assets.

Google opened up access to Bard 🎭

The facts

  • Google opened up access to its chatbot solution, Bard

    • The model used in Bard is based on Google’s own LaMDA (Language Model for Dialogue Applications) — the company is using a lightweight and optimized version of LaMDA

  • Unlike Microsoft’s Bing chatbot, Bard doesn’t have footnotes with web sources

  • TechCrunch reporters concluded Bard lags behind GPT-4 and Claude

  • Access:

    • Sign up for the waitlist here 

What excites me?

The race is on. A few weeks ago, we only had ChatGPT. Now we’ve got Claude and Bard. There’s a lot at stake for Google, and my internal sources there (yes, I have insiders everywhere) tell me the only thing execs talk about is AI. It currently feels like Bard is completely separate from Google Search, which isn’t surprising, as Google doesn’t want to cannibalize its revenue, but I suspect it’s only a matter of time before Bard becomes the main act and Google Search the old warm-up act that only a few people want to watch.

Canva released a new set of AI tools 🎭

The facts

  • Canva introduced a new set of AI features to its product suite

  • The company launched Assistant, which lets users search for design elements and provides quick access to features

    • The tool can also provide recommendations for graphics and styles that match your current design

    • Assistant provides quick access to AI-powered design tools, such as Magic Write, the platform's AI-powered copywriting assistant

  • Canva has also launched a new way to automatically generate presentations and a set of AI image-editing tools

What excites me?

Canva has so far shown more than Adobe on this front. With AI-generated presentations, they want to give Tome a run for its money, and they already have a significant subscriber base. I’m impressed by how many AI-powered features they’ve shipped already and excited to see what comes next.

OpenAI released plugins 🔌

The facts

  • OpenAI launched plugins for ChatGPT, which extend the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web

  • OpenAI’s first-party web-browsing plugin allows ChatGPT to draw data from around the web to answer the various questions posed to it

  • A number of early collaborators, including Expedia, Instacart, Kayak, Klarna, OpenTable, Shopify, Slack, Wolfram, and Zapier developed plugins for ChatGPT

  • Access:

    • Sign up for access here 

What excites me?

The plugins turn ChatGPT into a marketplace and allow companies to innovate on top of it. By combining multiple plugins, you can do some pretty awesome things. The example of cooking dinner, which starts with sourcing a recipe and ends with you ordering everything on Instacart, is impressive. Composability and third-party innovation will allow completely new applications to be built. Adding voice control on top of this will push us into a proper SciFi future.
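For the curious: a plugin hooks into ChatGPT through a small manifest file that describes the service and points to its API spec, which the model then reads to decide when and how to call it. Here’s a rough sketch of what such a manifest looks like — the field names follow OpenAI’s published plugin format, but the service, descriptions, and URLs are entirely made up for illustration:

```json
{
  "schema_version": "v1",
  "name_for_human": "Recipe Finder",
  "name_for_model": "recipe_finder",
  "description_for_human": "Find recipes and build shopping lists.",
  "description_for_model": "Use this plugin when the user wants recipe ideas or an ingredient list they can order from a grocery service.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Notice that the `description_for_model` field is essentially a prompt: it tells the model when to use the plugin, which is part of what makes the composability story (recipe → shopping list → Instacart order) work.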

How does this impact builders?


If we go by the rumors in the Valley and on Twitter, a significant batch of YC companies has just been killed by OpenAI's announcements. Furthermore, everyone building tools for graphic creators will have a tough time after the recent moves from Adobe and Canva.

With an open ecosystem in which moats are difficult to build and with incumbents integrating AI at a very fast pace, what type of startups can succeed and how should founders think about moats and differentiation?

I've been in SF and NY in the last couple of weeks and I've heard the following themes from investors and founders:

  • Where possible, focus on embedding your solution deeply in existing workflows

  • Unique data and data flywheels can still act as moats

  • One clear trend is that people are building, so middleware tech supporting the app-layer ecosystem continues to be of interest to investors

  • Strong UI and UX can act as differentiators, at least in the early stages

Over the weekend I attended a hackathon in SF organized by my friend Ivan, who runs the Cerebral Valley community. It was incredible to see how much talented developers can build in only 24 hours. It's great that access to LLMs creates a level playing field for all builders, but who will succeed in building venture-backed businesses?

Here are a few of my own thoughts on how to think through the opportunity space:

  • To re-envision industries that have yet to be disrupted (education is a prime example), we need to think from first principles. "AI-generated courses" is not a business. Learning is a complex process with complex dynamics. The forces that drive us to learn and facilitate learning need to be understood for strong solutions to be created

  • Consumer behavior is changing. Building for the next generation of consumer experiences requires understanding the changes. How much of the UX is a guided experience versus a prompt-led experience? What drives people to keep using a tool: is it utility or the dopamine that hits us when we generate something new and original?

  • A good idea is to look for opportunities where AI replaces a manual activity in a market where incumbents don't have a distribution advantage

It's a great time to build and there are incredible opportunities for new businesses to be built and new markets to be created, so let's go! 🔥

Want access to the Generative AI investors database? Share and subscribe to AI In the Middle (below) to receive it! ⬇️

Before you go

How we got to photorealistic generated images.

And scene. That’s all for today! Thank you for always reading, sharing, and subscribing. Want to share something with us? Slide into our DMs; they’re open.

— Calin Drimbau (@calindrimbau)