AI is a battlefield: unpacking the battles


Happy National Popcorn Day 🍿 everyone! Calin here, bringing you another binge-worthy edition.

I have an "edition of battles" in store for you. On the education front, people are debating the use of chatbots: should they be banned or integrated into the education process? The second battle is between open and closed systems. Who will win between OpenAI's "gated" approach to deploying large language models and Stability's open model? Finally, in the courts, visual artists are challenging the big players in image generation.

I recommend you settle into your chair, popcorn in hand, and take a stance in each debate.

Ban or integrate? The choice for educators


The education world has had two reactions to the emergence of chatbots. The first was to ban them in schools. Earlier this month, the New York City Education Department blocked ChatGPT on school devices and networks. In an effort to protect original work, AI-text detection tools have started to emerge.

Universities are starting to adapt their teaching methods to prevent students from using chatbots like ChatGPT to “cheat” by producing higher-quality writing than they could on their own. Changes range from making students write first drafts in class to involving ChatGPT in the classroom by asking students to evaluate the chatbot’s responses.

I'm confident that the current wave of technology will have the most significant impact on education that we have seen in the past few decades. I believe that new pedagogical methods will be developed, with educators utilizing prompt teaching and human evaluation of AI outputs as part of their toolkit. This marks the beginning of the generative learning era.

With great technology comes great responsibility - or not


Time recently published an investigation showcasing what goes on behind the scenes of OpenAI's efforts to protect the world from misuse of its technology. For technology to know what is "disallowed", someone needs to label the data that algorithms are trained on. This process involves going through graphic text and images and labeling them as toxic.

The article examines OpenAI's partnership with a Kenyan outsourcing company, which was terminated prematurely due to the "traumatic nature" of the work and its effects on employees. They had to read text that included "child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest".

The alternative is to install no guardrails at all. Stability, OpenAI's competitor, has opted for an open-source model. This approach puts the technology in more hands, leading to faster progress, but it also opens the door to use cases like pornography and deepfakes. Emad Mostaque, Stability’s CEO, doesn’t seem too fazed, responding to criticism by saying “a percentage of people are simply unpleasant and weird, but that’s humanity”.

Will generative AI be a weapon of mass instruction or a weapon of mass destruction? Tune in for the next episode; this series has just begun.

Artists fight back

A trio of artists has filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt, claiming these platforms have infringed the rights of "millions of artists" by training their AI tools on images scraped from the web without the artists' consent.

The lawsuit argues the artists’ work is used to fuel money-making machines that have the ability to generate artwork in the style of specific artists. Experts argue it’s a complicated copyright problem that will need to be sorted out in the courts.

Friedberg reminds us of this clip from mentalist Derren Brown, highlighting the fact that humans are not as "creative" as they may think.

This is not the only legal action taking place in this field. Getty Images is suing Stability AI, the maker of Stable Diffusion, for scraping its content. Additionally, in the coding world, Microsoft, GitHub, and OpenAI are being sued for allegedly violating copyright law by using open-source code to train AI models.

And whilst we’re on the subject of trials, we’re about to see the first AI “lawyer” argue a legal case in court. An AI language model will be used to defend someone in a real case.

Seems like we’ll soon see an AI model fight another AI model in court over AI-related allegations.

Gen AI Deals that make your eyes (and mouth) water 💰

What caught our eye? 👀 In a round led by Vertex Ventures and Sequoia Capital, CloseFactor comes out to tackle an age-old problem for salespeople: automating the manual work involved in target account research. Is this the beginning of the end for the Junior Sales role?

What caught our eye? 👀 Paul Ehlinger, the CEO, previously said the company is a first-of-its-kind, fully AI-driven design studio for influencer marketing content.

New developments to spam your #random Slack channel 💬

  • 💰 ChatGPT is getting an API! Sign up here

  • 🔮 Elad Gil chatting to Reid Hoffman about the future of AI

  • ⚖️ Claude AI from Anthropic passes Law and Economics exam

  • 🍺 A brewery in Detroit releases an AI-made beer. Marketing stunt or the start of a new industry? We’d have to taste it to answer that. Are any of our readers in Detroit?

Things to learn when you need a raise

Want access to the Generative AI investors database? Share and subscribe to AI In the Middle (below) to receive it! ⬇️

Before you go

For all the Batman fans in the audience. This is genuinely mind-blowing.

Curtain call. Hope you enjoyed today’s edition!

— Calin Drimbau (@calindrimbau)