Ever since generative AI tools like Midjourney became available to the public in 2022, curious users and AI fanatics alike have been experimenting with the technology. But for tech aficionados and AI enthusiasts like Justin Meyer and Maxfield Hulker, Midjourney’s closed-source model wasn’t enough — they wanted to go deeper. That’s why Justin and Max created Civitai, an open-source generative AI tool and social platform where users can create, share, and experiment with new image generation models. They sit down with Bilawal to discuss why community is so important to open-source development, the future of algorithmic personalization, and the famous so-called “dead internet theory.” They also unpack the risks of open-source development, and emphasize the importance of setting boundaries to keep users safe — while acknowledging the important role that “not-safe-for-work” content has played in the evolution of these powerful tools.
For transcripts for The TED AI Show, visit go.ted.com/TTAIS-transcripts
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:00] TED Audio Collective.
[00:00:33] Ever since the early days of the internet, we've been logging on, sometimes simply to look at all the crazy stuff the web has to offer.
[00:00:39] And since then, that need has been fulfilled by online communities, where creatives come together to inspire each other and share their techniques.
[00:00:48] Sites like DeviantArt gave artists a space to showcase their diverse digital creations.
[00:00:53] Newgrounds became a hub for quirky flash games and animations.
[00:00:57] And Vimeo began as a humble community for aspiring filmmakers.
[00:01:01] In recent years, you probably noticed AI-generated art popping up on your social feeds.
[00:01:07] Those eerily surreal portraits or fantastical landscapes that look almost too perfect to be real.
[00:01:15] And with a new type of creation, a new platform has emerged for this style of art, Civitai.
[00:01:21] It's a hub where beginners, professional artists, and engineers alike are experimenting with the latest models,
[00:01:28] like Stable Diffusion and Flux.
[00:01:30] Tweaking them, developing new techniques, and sharing their workflows.
[00:01:35] As the community has grown, so have the tools.
[00:01:38] Becoming increasingly accessible and blurring the lines between all the roles one can play within this ecosystem.
[00:01:45] With just a few clicks, anyone can go from consumer to creator.
[00:01:48] But what does this really mean for the value of art?
[00:01:52] And what are the risks and rewards of democratizing technology capable of creating almost anything?
[00:02:00] I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything.
[00:02:09] How will humans and machines work together in the future?
[00:02:20] We spend so much time discussing how the world's changing.
[00:02:24] It would be absolutely absurd to believe the role of the CEO is not.
[00:02:28] This is Imagine This, a podcast from BCG that helps CEOs consider possible futures for our world and their businesses.
[00:02:37] Listen wherever you get your podcasts.
[00:02:44] Hey, everybody. This is John Wygel, the host of The Hustle Daily Show, a short daily podcast that brings you the latest news in business and tech.
[00:02:51] Every day, I'm joined by amazing writers of The Hustle Newsletter, and we cover the biggest business headlines in 10 minutes or less and explain why you should care about them.
[00:02:59] We're not your typical boring news podcast full of stock picks or quarterly performance breakdowns.
[00:03:03] We have a lot of fun.
[00:03:04] We're irreverent, and we find that sweet spot where business news meets culture and tech.
[00:03:08] We go deep on topics like how a man won the lottery 14 times or why it's nearly impossible to buy an original Bob Ross painting.
[00:03:14] If nothing else, you'll walk away with an interesting piece of news to talk about with your friends that makes you sound way smarter than you actually are.
[00:03:20] So search for The Hustle Daily Show on Apple Podcasts, Spotify, or wherever you listen to your podcasts.
[00:03:27] To help unpack these questions, we're joined by co-founders Justin Meyer and Maxfield Hulker.
[00:03:33] Together, we'll explore the future of content creation and consumption, the dead internet theory, and why not safe for work content remains on their platform.
[00:03:43] All right, Justin and Maxfield, welcome to the show.
[00:03:46] Thanks again for having us. Yeah, good to be here.
[00:03:48] So everyone's got their origin story.
[00:03:51] And I'm curious, what first drew each of you into the world of AI and creative tech?
[00:03:55] Do you want to go first, Justin?
[00:03:57] So Max introduced me to Midjourney in August of 2022.
[00:04:02] And I don't know, I've been watching the development of AI image for some time.
[00:04:06] I was fascinated by what I saw with DALL-E.
[00:04:09] And I've always been a creative person. Engineering always came more naturally to me.
[00:04:13] But being able to use kind of my engineering skills to modify prompts and go back and forth with AI was just game changing for me, empowering.
[00:04:23] How about you, Maxfield?
[00:04:25] Yeah. I started playing with DALL-E way back before Midjourney was a thing, in like just an open Google form that you could actually mess around with it in.
[00:04:33] And I made some of my first images of just myself like melting into a landscape image that was just like completely like pixelated and destroyed.
[00:04:42] And it was really interesting to kind of like see this kind of new art form that was less human made and more computer made.
[00:04:48] Yeah, it's interesting. I think Midjourney for me as well, and I think this is what 2022 Midjourney V3, I believe, when people really started playing with it to like just be able to query this sort of distillation of human creativity and get something back was super exciting.
[00:05:04] And of course, that is also the year open source AI really took off, right? We had Stable Diffusion and image generators pop up. So I'm curious, you launched Civitai a few years ago, and now it's drawing in millions of visitors every month. What did you fix in the world of AI creation? Like what itch were you scratching when you first launched this product?
[00:05:25] Can I take that one, Max?
[00:05:26] Yeah, please. It's your vision.
[00:05:28] I made that leap from Midjourney into open source image generation because Midjourney's unlimited plan was only kind of unlimited. I had to go slower and slower. And so I needed to find a new outlet for my hobby, and Stable Diffusion had just been released. And so I got really active in that community. And every time somebody would post an image, people asked, what's your prompt? How'd you make it? Right? And people started to make custom models. They'd add new concepts. They'd add new styles. They'd have new ideas.
[00:05:55] They'd add new characters, things like that. And so Civitai was really aimed at solving that problem. We wanted to make it so that people could find all of the models, all of the resources that they needed to make something in one place. And that anytime they posted an image, we would capture all of the information about how it was made, including the models that were used to make that picture. And a whole lot of social features kind of grew out of that because, you know, people needed to be able to talk about these awesome things that they were making. So that was the itch.
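[Editor's note: As an illustration of the "capture how it was made" idea Justin describes here: many Stable Diffusion tools embed an A1111-style "parameters" string in each image's PNG metadata, listing the prompt, negative prompt, and generation settings. The sketch below is a hypothetical parser written for this transcript, not Civitai's actual code, showing how that string can be turned back into a reproducible recipe.]

```python
# Hypothetical parser for the A1111-style "parameters" string commonly
# embedded in Stable Diffusion PNG metadata. Last line carries the
# comma-separated settings; earlier lines carry prompt/negative prompt.
def parse_generation_metadata(parameters: str) -> dict:
    lines = [ln for ln in parameters.strip().splitlines() if ln.strip()]
    meta = {"prompt": "", "negative_prompt": "", "settings": {}}
    # The final line holds "Key: value" settings like Steps, Seed, Sampler.
    settings_line = lines[-1] if (":" in lines[-1] and "," in lines[-1]) else ""
    body = lines[:-1] if settings_line else lines
    for part in filter(None, (p.strip() for p in settings_line.split(","))):
        if ":" in part:
            key, _, value = part.partition(":")
            meta["settings"][key.strip()] = value.strip()
    for ln in body:
        if ln.startswith("Negative prompt:"):
            meta["negative_prompt"] = ln[len("Negative prompt:"):].strip()
        else:
            meta["prompt"] = (meta["prompt"] + " " + ln).strip()
    return meta

example = """a forest made of felt, Studio Ghibli style
Negative prompt: blurry, lowres
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345, Size: 512x512"""
meta = parse_generation_metadata(example)
print(meta["settings"]["Seed"])  # → 12345
```

With the seed, sampler, model, and prompt recovered this way, another creator can rerun and remix the exact generation, which is the "full chain of custody" the hosts discuss next.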
[00:06:23] Yeah, I mean, it's really interesting because you're totally right. Everyone was very obsessed with what prompts you're using, but it's more than that. And so your platform kind of has the full chain of custody, the full workflow that was used to generate that image. For the uninitiated, could you talk a little bit about open source AI and what it means to sort of create these fine-tuned models? And, you know, people throw around terms like LoRAs, like, can you just break that down simply for those listening?
[00:06:50] How would you break that down, Max? I think that you can break it down more simply than I can. I love complexity. It's a problem.
[00:06:56] Yeah, absolutely. Yeah, let me think about that. So let's see, you have general base models that are usually made by big corporations that have millions of dollars to be able to throw at that because they just require so much training data.
[00:07:08] LoRAs and embeddings are fine-tunes that you're able to make off of those base models, for use with those base models, to be able to further tweak and kind of get that last 10% of image generation of what you're trying to get concept-wise without, you know, having to have kind of the deep payroll and pockets to be able to fund that.
[00:07:24] Like I said, that's like probably the highest level section of like what that allows.
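[Editor's note: For readers who want the mechanics behind Max's explanation: a LoRA (Low-Rank Adaptation) leaves the base model's large weight matrix W untouched and learns two small matrices A and B whose product is a low-rank update, W' = W + scale * (B @ A). Stacking several fine-tunes is just summing several such updates. This is a minimal numpy sketch of that idea under those assumptions, not any particular library's implementation; all names are illustrative.]

```python
# Minimal sketch of the low-rank-update idea behind LoRA fine-tunes.
# Instead of retraining the full (d_out x d_in) weight W, each LoRA
# stores a small pair B (d_out x r) and A (r x d_in) with rank r << d_in.
import numpy as np

def apply_loras(W, loras):
    """Apply a stack of LoRAs: W' = W + sum(scale_i * B_i @ A_i)."""
    W_adapted = W.copy()
    for B, A, scale in loras:
        W_adapted += scale * (B @ A)  # rank-r update, cheap to store and train
    return W_adapted

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4  # rank-4 updates on a 64x64 weight
W = rng.standard_normal((d_out, d_in))
# Two hypothetical fine-tunes, e.g. a "style" LoRA and a "concept" LoRA,
# each with its own strength, stacked just like the world-morph + Ghibli example.
style = (rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in)), 0.8)
concept = (rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in)), 0.5)

W2 = apply_loras(W, [style, concept])
# Each LoRA stores 2 * d * r = 512 numbers here, versus d * d = 4096 for W.
print(W2.shape)  # → (64, 64)
```

The stacking Max describes later in the conversation falls out of the math: because each adapter is an additive update, several of them can be applied to the same base weights with independent strengths.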
[00:07:29] Building on that, Max, do you want to just break down maybe some examples of the kind of fine-tunes your community is making?
[00:07:36] Like, and some of like maybe the cooler examples that you've come across recently.
[00:07:40] World morphs are probably the thing that has been the most interesting to me since the beginning, which is this idea of, you know, what would the world look like if everything was waffles, or if everything was made of wires, or if everything was made of coffee cups?
[00:07:52] Yeah, exactly. It was all felt. And then the ability to be able to stack that with different styles, right?
[00:07:57] So here's like a realistic image of everything was made of felt.
[00:07:59] And then here's what that image where everything is made of felt looks like if it was done in, like, a Studio Ghibli anime style as well.
[00:08:05] And the layering of those kind of complex concepts on top of one another is really where open source shines because you can't do something like that really reliably with any of these closed source tools.
[00:08:15] And the ability to do, like, specific facets of human expression is really interesting.
[00:08:21] I think that's something that a lot of generative AI struggles with is ability to do specific facial expressions and being able to have like trained concepts like on facial expressions is really, really cool.
[00:08:31] Because again, you can kind of get exactly what you want out of that.
[00:08:33] So is it fair to say it's kind of like a fancy way of, like, filtering or focusing what you get out of these models and getting, like, a consistent aesthetic or style that somebody else could then reproduce?
[00:08:45] And I love your point about stacking those pieces together too, because it does feel like there are all these like Lego primitives available on the open source side that you don't really see on the closed source side.
[00:08:56] Well, they eventually end up having some way to do it. Midjourney has their, like, style reference thing; it slowly comes about.
[00:09:03] But the moment a paper drops, you know, you see this coming out in open source almost immediately.
[00:09:10] So I'm kind of curious, like before you built this, Justin, like, what was like the ecosystem?
[00:09:15] Like where was collaboration happening?
[00:09:17] You mentioned Reddit, you want to break down sort of what was the center of gravity for the community before Civitai and then after?
[00:09:25] Yeah, I think the issue was there kind of wasn't a center of gravity.
[00:09:29] It was so distributed, right?
[00:09:30] Like there were different communities kind of all around the world.
[00:09:34] You know, some of the people were focused on tooling and some people were focused on model development.
[00:09:38] Some people were focused on, you know, whatever that latest research paper was.
[00:09:41] And it all came together on Civitai.
[00:09:44] I mean, it totally makes sense that community is a focus because Civitai is derived from the Latin word for community.
[00:09:51] But what about online community is so important for making open source AI work, unlike perhaps all the other approaches to building out these AI creation tools?
[00:10:02] With closed source development and things like that, it's really up to the small teams inside of a company and all of the resources that they have behind them to push something forward.
[00:10:11] With open source development, really what it comes down to is how productive a group of people can be in continuing the development of something.
[00:10:20] And this is kind of like a new type of open source movement where the way that people push things forward is, you know, not necessarily through software, but through training.
[00:10:30] So it kind of creates this new type of distributed training that doesn't really exist inside of these, you know, companies inside these closed source models.
[00:10:38] Instead, it's allowing a lot of people to kind of add what they think needs to be added and then people to grab what they want from that pile.
[00:10:46] And so community is critical because it essentially makes it so that people can find and provide that niche that they do best.
[00:10:54] Basically, without a community, open source development doesn't really happen.
[00:10:57] Yeah, it makes sense, right?
[00:10:58] Like, why trust a small number of anointed product managers to make decisions about what to build for absolutely everything?
[00:11:05] Let the community decide.
[00:11:06] Yeah, exactly.
[00:11:07] And Max?
[00:11:08] No, I was going to say it's less about open source AI than it is about open source content as a whole.
[00:11:14] Like, what's interesting about this is it's pushing a new boundary in terms of just generally new content, which is that it's no longer static and more interactive.
[00:11:21] It's more customizable and personalized to, you know, the person who's actually viewing it.
[00:11:25] But the interesting thing about that is that when you break away from the tools themselves, it doesn't matter if they're open source or closed source or anything else.
[00:11:32] What matters is how easily can you replicate that content because of how much of it has been cataloged.
[00:11:37] So the art itself has like a full stepping stone.
[00:11:39] I mean, it's a whole new idea.
[00:11:41] Imagine if you had a YouTube video and it showed, like, every piece of equipment that was used, every shooting location, every angle, every lighting feature, every rough setting on every piece of equipment used to make that video.
[00:11:50] So you could really accurately go out and recreate it exactly.
[00:11:54] That's kind of like what this is, is the ability to completely recreate and then be able to then remix and change media on the fly, which is cool.
[00:12:02] That's open source content.
[00:12:03] It's a really interesting way of putting it, Max.
[00:12:05] It's like we saw like, you know, a sliver of this with perhaps like, you know, the rise of TikTok and the ability to like remix content essentially or like splice stuff together.
[00:12:16] And it always reminded me of, I don't know if y'all remember Photoshop tennis on like Reddit communities where somebody would upload a photo and then somebody else would add something to it.
[00:12:25] And then like through the course of that conversation, you could see this like meme basically like, you know, come to life.
[00:12:31] And we saw that in a more static, rudimentary sense, again, with these short form platforms.
[00:12:37] But yeah, what you're describing now, the ability to look at something and then basically take it and like, yeah, remixing it is like isn't even fully capturing, like recreating it and then taking it in whatever direction you want is super exciting.
[00:12:52] And yeah, it seems like there's nothing else out there quite like it.
[00:12:56] But in my mind, it's like totally the new age of media, right?
[00:12:59] I mean, even this podcast, if you think about 10 years from now, it'd be like, I don't like the guests that are on here.
[00:13:04] Let me just replace them or let me replace the concept or let me replace the people who are on it.
[00:13:09] Like, I don't want this Max guy talking about-
[00:13:11] Yeah, I can't stand Justin's voice.
[00:13:11] Let me get that.
[00:13:12] Let me replace that with, let me get like, you know, David Attenborough voice of this guy instead.
[00:13:18] You know, that's a really cool idea and I like that a lot.
[00:13:20] And maybe this is foreshadowing because we'll definitely come back to the lines between reality and imagination blurring and the various implications therein.
[00:13:29] But kind of bringing it back to Civitai, it seems like open source was so fundamental and Max, to your point, kind of the inherent remixability and being able to see sort of the full, you know, ingredient list.
[00:13:42] And, you know, the set of instructions to reproduce and remix something is really cool.
[00:13:47] But clearly, y'all just announced Spine and you're also embracing closed source tools like Eleven Labs, Kling, UDO.
[00:13:56] How do you think about the open source and closed source movement?
[00:13:59] And how does Civitai evolve in this sort of world where we have to live alongside both type of tools?
[00:14:05] I think ultimately what we've seen is that the best content isn't using just one tool.
[00:14:09] You know, they'll start generating their image using an open source model like Flux that has a few LoRAs on it.
[00:14:16] And then after they make that image, they'll go take it into a video tool like Luma or Kling.
[00:14:22] And then they'll add music to it using, you know, Suno or UDO.
[00:14:26] And what we saw is that essentially, hey, people want to be able to use all these tools to make content.
[00:14:31] With the end goal of being able to support this new medium that Max is talking about, where like we can completely recreate content and make it remixable.
[00:14:39] You describe Spine as: what AOL was to the internet, Spine is to AI art.
[00:14:45] Can you elaborate on that?
[00:14:47] And please do keep in mind that some of our listeners are younger than 30 and do not remember a world with...
[00:14:52] What is AOL?
[00:14:53] Or AOL installation disks, for crying out loud?
[00:14:56] Yeah.
[00:14:56] Yeah.
[00:14:57] So I guess for those that aren't familiar with AOL, it basically made it so that the internet was all accessible in one place, right?
[00:15:02] You could do your instant messaging.
[00:15:04] You could do your email.
[00:15:05] You could find your stocks.
[00:15:07] All of the stuff that people were kind of doing on the internet at the time.
[00:15:10] And in a way, essentially what we're doing now is trying to bring together all of the tools that people are using to create content using AI into one place in the same way.
[00:15:20] So that now, rather than having to know all of the different things or knowing what the best tool is at a current time and trying to figure that out, we can help you just find something that you think is interesting.
[00:15:30] We'll allow you to swap out the video that was used with a different animation or an animation style or something like that.
[00:15:37] And now, rather than having to figure out where to go to do a specific thing, you can come to one place that kind of brings it all together and see, you know, based on the best content that's being made, what's right for what it is that you're trying to make as well.
[00:15:51] I love it.
[00:15:51] Love it.
[00:15:52] Okay.
[00:15:52] So it's interesting, right?
[00:15:53] The essence of being able to have visibility into how something was created stays the same.
[00:15:58] But since people are using closed source tools, you can now kind of encapsulate that information as well and kind of make those workflows accessible to other creators.
[00:16:08] Like I've talked to designers at big tech companies too that are like, yeah, I'm on Civitai all the time downloading models.
[00:16:14] And they're also like, I got to be really careful about what I have on my screen.
[00:16:18] So people think that I'm actually working.
[00:16:20] And so this is one of the interesting things about open source, right?
[00:16:25] Not safe for work content has been, interestingly, a huge driver of many aspects of open source AI art.
[00:16:32] It's a part of Civitai.
[00:16:34] Can you explain why to our audience, you know, not safe for work content has been important in the evolution of open source AI?
[00:16:42] A couple things.
[00:16:43] I mean, I think first and foremost, it's a cliche, right, that all new media and formats are pushed forward by porn.
[00:16:49] But it's a true cliche.
[00:16:51] It's a cliche for a reason.
[00:16:52] And in a lot of ways, the people who are pushing this actually have like a deep need and desire to want to make this stuff.
[00:16:57] And they're also the ones that are pushing the technology forward.
[00:16:59] In our mind, we wrestled a lot when we were first starting Civitai.
[00:17:02] Do we have a separate section for not safe for work?
[00:17:05] Or do we not allow not safe for work?
[00:17:07] The thing that cinched us for it and that like we needed to maintain it was the multi-usability of these resources, right?
[00:17:13] How many different things it can be used on?
[00:17:14] For instance, at the time, one of the best models that was for doing anatomy was actually a porn model because it had been trained on the most amount of like, you know, human bodies, obviously.
[00:17:25] So it was being used by tons of people who were not doing anything that was not safe for work.
[00:17:29] But they were using this porn model because they were just getting the most accurate representations of like human figures and different poses.
[00:17:34] And at that point, I think, you know, Justin and I had a conversation where like we can't close this stuff out.
[00:17:39] Like it has so many more uses beyond just what it's being advertised for and what it could potentially be used for.
[00:17:44] I don't know.
[00:17:45] Would you add anything to that, Justin?
[00:17:46] Yeah, I just agree with you.
[00:17:48] Essentially, what we saw is that this energy that people are putting into producing, you know, adult content, mature content with AI was ultimately pushing the quality of the model forward.
[00:17:58] And so it was important for us on that side of things.
[00:18:01] We wanted to make sure that we could support everything that people were making with AI.
[00:18:05] And like you said, part of the draw of open source tech was those limits that were being placed on you by these closed source platforms didn't exist, right?
[00:18:14] You could make all kinds of things, good and bad.
[00:18:17] And, you know, being able to be a space that could support that hasn't come without challenges.
[00:18:22] And we've continued to learn and grow and do our best to kind of put the rails in place, at least through our site as well.
[00:18:29] But it was definitely a difficult decision.
[00:18:33] But I'm happy that we made it because I think that it kind of makes a unique experience and it allows us to be the hub of all things AI, not just, you know, half of it, if you will.
[00:18:42] It is interesting that the earlier versions of stable diffusion were really good at anatomy.
[00:18:49] And then when Stability went through scrubbing pornography from their training data, suddenly there was this massive degradation, you know, in how it reconstructed anatomy.
[00:18:57] But I have to imagine there were some like moral and ethical quandaries that you all had to deal with.
[00:19:02] For example, CSAM. Like, I know you all have made a huge attempt to reduce the amount of not-safe-for-work content on the platform itself.
[00:19:11] And there were instances of people using some of these models to create, you know, CSAM content.
[00:19:16] What was that journey and experience like, you know, kind of putting the rails back on there?
[00:19:20] I'd love for you to go into a little bit more detail.
[00:19:23] Yeah, it was an interesting learning experience for us because obviously when we were putting it together, we were like, oh, you know, I guess we weren't really aware of how depraved people could potentially get with this.
[00:19:32] So like CSAM hadn't really even entered our minds when we were putting together this site.
[00:19:35] And when we were first confronted with it, we're like, oh, OK, got it.
[00:19:38] So we should we need to start making some moves here really quick to be able to kind of limit this.
[00:19:42] So one of the first things we did was put together some content policies that we considered common sense, around not only, you know, obviously legally what you can put on the Internet and what we're allowed to host, but also ethically what we thought was right, both around content for real people and content around minors.
[00:20:01] For instance, one of the things that's still a rule today is that we don't allow any photorealistic depictions of minors on the platform, period.
[00:20:06] And that's purely just for the reasons of, like, again, it has too many possibilities of potentially being misconstrued or abused by people.
[00:20:15] And then we don't allow any depictions of real people that's not in, like, a work- or school-type setting, essentially, anything that you wouldn't allow to be used in those environments.
[00:20:25] And that includes poses as well as facial expressions.
[00:20:28] And our main thought process around that was kind of like, hey, we're both dads and we're both married.
[00:20:33] And it's like, do we want to have our own kids or our own spouses, like, portrayed in this way?
[00:20:39] No.
[00:20:40] So let's try and let's try and put some rules in place to prevent that.
[00:20:44] And you're, of course, talking about sort of the non consensual deep fake problem, right?
[00:20:48] Mm hmm.
[00:20:48] Yeah.
[00:20:49] Yep.
[00:20:50] Cool.
[00:20:50] Justin, anything to add?
[00:20:51] Yeah.
[00:20:52] Yeah.
[00:20:53] We hit new milestones all the time of the amount of content being uploaded.
[00:20:56] And so having to kind of handle the volume has not been without challenges.
[00:21:01] And we've had to make, you know, unique models of our own to help detect stuff.
[00:21:05] And it's been a journey and we're constantly improving it.
[00:21:08] One of the things that we committed to, and I think we kicked it off in like September of last year, actually, was we started working with Thorn along with a few other AI companies in the space to prepare a white paper called Safety by Design, with the intent to really start to establish kind of the norms for reporting on what we see and kind of handling these models, so that we can kind of help, I don't know, make things safer in the future.
[00:21:35] Things are inherently not very safe right now.
[00:21:37] It's the Wild West.
[00:21:38] It's a whole new frontier, right?
[00:21:40] And that doesn't come without risk.
[00:21:41] It doesn't come without danger.
[00:21:43] And so really what we're trying to do is think about, okay, if this is where we're at right now and this is the wilderness that we're living in, what does it look like in the future?
[00:21:50] How can we make things safer in the future?
[00:21:52] How can we start to handle this volume that's coming at us as, you know, people are generating billions of images a month now?
[00:21:58] We do actively work with government organizations like NCMEC as well as others to kind of combat CSAM in general.
[00:22:04] And it's been helpful to have their support as well and kind of understand how we can do our part to help fight this problem as it kind of takes on a new shape, if you will.
[00:22:15] There is this dichotomy you often have with open source, right?
[00:22:18] Like you trust people and give them agency to do whatever they can with minimal guardrails.
[00:22:24] And you want to put minimal guardrails there because that's how you foster innovation, right?
[00:22:29] And maybe on the other end of the spectrum is you have a bunch of these closed source models who might – where I might add there are some other ethical quandaries where clearly they're training on all this type of data.
[00:22:39] They don't just let you prompt it.
[00:22:41] If I take IP as an example, I can't ask DALL-E for Godzilla, but if I ask for a dinosaur, you know, with a really long tail, like I'm going to basically get Godzilla.
[00:22:51] And so, you know, so the question is where does enforcement happen?
[00:22:55] Does it happen at the point of model creation?
[00:22:58] Does it happen at the point of generation?
[00:23:00] Does it happen at the point of distribution?
[00:23:02] And I'm kind of curious if you have any thoughts about that since you are really deep on the model creation side, but now you do have, you know, inference capabilities and you're offering workflows.
[00:23:12] But then again, you can't really control distribution.
[00:23:16] Right, right.
[00:23:17] Yeah, I think ultimately it's going to have to be both.
[00:23:20] One of the interesting challenges with open source tech is you can't really control what people do once they have the model in their hands.
[00:23:26] So the best you can do is kind of control what you're training on.
[00:23:29] We actually kicked off something called the Open Model Initiative back in June, May, shortly after SD3 was released.
[00:23:36] And people are like, hey, we've got to start building an alternative of our own, driven by the community, with an open license.
[00:23:42] And one of the decisions that we made early on as we were kind of talking about what that data set should look like is sure, we can have mature content.
[00:23:49] We need to be able to capture human anatomy, but let's keep that separate from the ability to create children.
[00:23:54] Right.
[00:23:55] So there's already some work that needs to be done kind of at the model development side of things to make sure that, you know, at least you're producing and releasing something safe.
[00:24:04] Now, what people do after that, that's the joys of open source.
[00:24:08] Could be dangerous.
[00:24:09] But that's where that second part comes in, right?
[00:24:11] And where we're enforcing things like the ability to control what comes out of those models.
[00:24:16] So really, ultimately, I'd say that it kind of needs to be both.
[00:24:20] But I think thinking even further down the pipeline, what are people doing with that content?
[00:24:24] Really where we should be managing this stuff is on the sites where it's getting posted, on the tools where it's being shared.
[00:24:30] Can we instead enforce policies around what people are doing with the stuff that they're making?
[00:24:34] If they have the ability to somehow get around filters that are put on by TikTok and Microsoft and be able to produce content that they shouldn't, shouldn't it be on the platforms where they're sharing them to like prevent the spread of that content that shouldn't be made?
[00:24:47] So I think that that's kind of like where I'm imagining the ultimate filter should get applied.
[00:24:52] But certainly got to do stuff on both sides of that equation.
[00:24:55] Totally.
[00:24:56] It sounds like we need a full stack solution.
[00:24:57] But to your point about, you know, what the community is doing with these models and these capabilities, in 2023, there were about 10,000 unique creators that contributed models monthly to Civitai.
[00:25:10] What drives that level of engagement, like, you know, from this small minority of ultra super users, if you will?
[00:25:16] I think Max would jump in and say clout as the first answer.
[00:25:21] I could see it in his mind.
[00:25:23] We have leaderboards.
[00:25:25] I'm a big gamer myself.
[00:25:26] Max is as well.
[00:25:27] And so we understand how incentives work and we want to encourage the community to participate and engage and give them the means to continue to create.
[00:25:37] And that's kind of where Buzz comes into play.
[00:25:39] But before even Buzz was introduced, I would say that the first thing that motivated people was, you know, hey, I want my name to be up in lights.
[00:25:47] So to access certain tools on Civitai, you use a currency called Buzz.
[00:25:52] You can purchase Buzz with real currency or earn it through content creation or contributing to the platform.
[00:25:58] How does this work?
[00:26:00] And how does it play into the creator economy that y'all are constructing, Max?
[00:26:04] Yeah, yeah, absolutely.
[00:26:05] When we were transitioning from having everything be free to, because originally, you know, like all generation, everything on the site was free for anybody who wanted to use it.
[00:26:12] And we realized, obviously, that's not sustainable.
[00:26:14] We have to actually, you know, be charging a fee, especially if you want to start supporting a creator economy.
[00:26:18] Then the other thing was like, okay, well, you can give value by actually, you know, interacting.
[00:26:23] You can give value by giving your feedback on the models you've used, by producing images.
[00:26:27] Every image that you produce, you're giving some value back to the community itself, whether people use that for training data, whether they use that to actually, like, better hone prompts, or whether they just use that as feedback on their own resources that they've created.
[00:26:38] Every interaction you do in the community has value, and we should, you know, be giving you value accordingly for that.
[00:26:43] And our solution to that was Buzz, right?
[00:26:46] So the idea was like, let's give them Buzz as a way of saying, hey, thank you for interacting with the community.
[00:26:51] And also, here's how you can then use that to now use this now paid generation service, because it uses the Buzz platform.
[00:26:57] And then we can use that to then fund the creator economy.
[00:27:01] The creator economy as a whole is like one, I think, one of the core goals that we had when we were setting out for getting Civitai going.
[00:27:07] We noticed that, I mean, if you've talked to any of these creators, you know how much time and sweat and tears and often cash goes into creating some of these resources, especially the really high-end ones.
[00:27:17] It takes a lot of money and a lot of time to do this well.
[00:27:21] And a lot of these people are just doing it for free, and it's like, that doesn't make a lot of sense.
[00:27:24] We've got to get them some kind of payment here, especially with all the people who are benefiting from it.
[00:27:28] And the creator economy really stemmed from that idea of like, okay, we've got to make sure that these people are getting compensated correctly.
[00:27:34] That led us into the Buzz system and how we've kind of distributed that in two ways.
[00:27:38] So creators have two ways of earning on the platform right now.
[00:27:40] They have an early access system where they can put a resource up kind of in the spirit of open source rather than making it just purely behind a paywall.
[00:27:48] You can put it up for 15 days essentially as early access.
[00:27:51] And then we also allow a split for all generations that are done on the platform itself.
[00:27:55] So creators get, I think, 25% now, 25% of all Buzz that's spent on any single image generation.
[00:28:03] But yeah, no, we have more plans for more ways for creators to be able to monetize in the future.
[00:28:07] But those are like the two main ones right now, and they've been going pretty well.
[00:28:10] We've been getting some good feedback from people who've been using them.
[00:28:12] You're totally right.
[00:28:13] Like an open source, people are putting a disproportionate amount of time into essentially like, you know, a public commons community resource.
[00:28:21] So to see, you know, a community where you can make money and get a slice of even your model being used to create an image, that's really, really cool.
[00:28:30] Are there different personas for the types of users that you end up seeing?
[00:28:34] Like I have to imagine some are like the regular model contributors.
[00:28:37] Some are there just like, I don't know, paying and downloading models.
[00:28:41] Others are creating content on the platform itself.
[00:28:44] How do you all think about the various stakeholders that you serve as Civitai?
[00:28:49] Yeah, pretty early on, we kind of divided them into three categories.
[00:28:53] And they each kind of serve a different purpose, and they kind of build on each other.
[00:28:57] The first class of user was what we called our creators.
[00:29:00] They were the people making models that then attracted our next class of users called enthusiasts,
[00:29:05] who would then go and take those models and create images, create content,
[00:29:10] which would then attract the next class of users, which is called consumers.
[00:29:14] So we kind of treat them in those layers in that way.
[00:29:17] And what we found is that, you know, there's a ton of consumers.
[00:29:21] There's basically, you know, 90% of our users are consumers.
[00:29:24] The next 9% are enthusiasts, and then the top 1% are creators, which I hear is pretty common for these kind of...
[00:29:30] The power law.
[00:29:31] Yeah, yeah, exactly.
[00:29:32] So one of the interesting things about this, though, is that there's kind of less barriers than ever before to move up that stack.
[00:29:38] And so one of the things that we're constantly pushing for is figuring out ways to help consumers become enthusiasts and to encourage enthusiasts to become creators.
[00:29:46] Because you don't need to be, you know, some super technical person to figure this stuff out.
[00:29:51] Because at this point, now anybody can create.
[00:29:53] You just need to have a good idea, and AI can kind of hold your hand to make that thing awesome.
[00:29:57] Speaking of money, I have to ask.
[00:30:16] So y'all raised a $5.1 million seed round last year, led by Andreessen Horowitz.
[00:30:22] The AI landscape has shifted a lot over the last year.
[00:30:26] How do investors view your refined mission now?
[00:30:29] And are you having to deal with this pressure to perhaps be profitable versus your original mission that's, you know, focused on art and community?
[00:30:38] Like, are those incentives perfectly aligned, or is there some tension there you're having to contend with?
[00:30:45] It's interesting.
[00:30:46] Yeah.
[00:30:47] I mean, there is pressure to be profitable, and there is pressure to be a business.
[00:30:52] The cool thing about this medium, though, is that unlike traditional art where, you know, it's on a single person to go produce a piece of work,
[00:30:59] and there's not really ways for other people to put money into it.
[00:31:02] It's different, right?
[00:31:03] Like, we have lots of room to monetize on top of services that people want to use.
[00:31:08] And so reaching profitability is something that I think will only become more and more sustainable.
[00:31:14] It's actually funny.
[00:31:15] Part of the pitch that we were sharing was that when we added monetization, when we turned on the buzz system and started charging for generation,
[00:31:22] we actually saw an increase in the amount of engagement, an increase in the amount of creators within the community.
[00:31:28] So it's cool because it actually makes art and community sustainable in a way that kind of doesn't exist right now,
[00:31:36] because it adds this whole new way to participate that naturally attracts dollars.
[00:31:40] So I'm hoping that we can keep it that way, that we can make it sustainable.
[00:31:45] And so far, it seems like we're on track, but certainly getting that funding helped us.
[00:31:50] We had thought we could bootstrap it initially, but doing that for a million people at once is just a hard thing to do when you're a small company.
[00:31:57] Love it.
[00:31:57] Anything to add, Max?
[00:31:59] Yeah, I was going to say one thing that's interesting is that everybody looks for comparables, especially in the VC world.
[00:32:04] Like, you know, what industry are you upsetting?
[00:32:07] Or who currently are you, like, turning over?
[00:32:09] Are you replacing?
[00:32:10] Are you doing whatever with?
[00:32:11] And it's very hard to come to.
[00:32:12] It's like, oh, no, we're actually, like, designing a whole new form of content and content consumption.
[00:32:17] It does feel like it's not just about creation.
[00:32:20] It totally is about consumption, too, and how that's shifting.
[00:32:23] You talked about this notion of remixability.
[00:32:25] It's kind of making creation more accessible in the sense that you don't have this blank canvas problem, right?
[00:32:31] Like, suddenly you have a starting point or you have multiple, you know, primitives that you can put together to make something else entirely.
[00:32:38] And one of the things I think about is, going back to the shorts analogy of remixability, is how quickly these platforms, like, kind of reverse engineer your soul.
[00:32:47] Like, they figure out what kind of content you like.
[00:32:49] And, of course, you've got, you know, user-generated content on one end.
[00:32:52] You've got, you know, users on the other end and an algorithm that sort of does the matching.
[00:32:56] And I can't help but imagine in this future that we're headed to where content will be, you know, personalized, disposable, created just in time for you.
[00:33:05] How do you all think the future of consumption is going to evolve with the kind of tools and capabilities you're building?
[00:33:11] I love the way that you're thinking about it.
[00:33:13] I completely agree.
[00:33:14] I think that, I mean, with so much content that can be created now, there's no reason that all of it won't be personalized for you.
[00:33:21] Like, even if it was something that was made by somebody else, if it was made in a language that's not native to you, it'll be translated.
[00:33:27] I mean, why wouldn't it be, right?
[00:33:29] So, I think that, you know, it's going to be interesting to watch the bubble that we're kind of already sitting inside of with these algorithms serving us exactly what we want to see taken even to another level as it's further personalized to you.
[00:33:43] And kind of what those limits might be.
[00:33:46] Are all advertisements going to include pictures of me or pictures of family or pictures of the person that it thinks that I'm going to be most attracted to or something?
[00:33:55] It's kind of wild to think about what those constraints might be and how we still kind of have a collaborative experience inside of that, right?
[00:34:04] Like, if everything's personalized, how do we connect?
[00:34:07] Can we view the same content but have it have minor differences in there?
[00:34:12] Can I still connect with you around the story of, you know, Breaking Bad or something like that?
[00:34:19] Even though there was parts of the series that I saw that were completely different than yours.
[00:34:23] It's going to be interesting to see kind of how the world shifts as content as we see it today is more like a universe than a snapshot.
[00:34:31] So, I'm looking forward to that.
[00:34:33] The universe analogy is an interesting one.
[00:34:35] And I did hear your quote, don't build movies, build universes.
[00:34:39] And, yeah, it's at the heart of the question that I wonder about.
[00:34:43] It's like, what is the future of shared experiences?
[00:34:45] Did you watch season, you know, 17 of CSI Miami or whatever?
[00:34:52] Is that going to be the case?
[00:34:53] Like, these kind of shared stories and experiences that bring us together versus kind of being lost in our own islands of personalized content?
[00:35:02] And one of the things that kind of fuels into this lately is every time I ask somebody like, hey, what's your favorite YouTuber?
[00:35:09] I'll get like three new names and look them up.
[00:35:12] They all have like millions of followers and I have never heard of them.
[00:35:15] And so then I just imagine layering generative AI on top of that.
[00:35:19] And it's just like this turtles all the way down infinite fractal.
[00:35:23] Yeah, I'm curious if that like evokes any responses in y'all because it seems to be headed that way.
[00:35:29] Yeah, no.
[00:35:31] Justin and I have talked about this a lot.
[00:35:32] My personal view is, yes, we essentially just go into rabbit holes of our own media creation and we never emerge.
[00:35:40] Because there's going to be no incentive to.
[00:35:42] I mean, why would you, right?
[00:35:43] Yeah.
[00:35:44] It's like if you look at the popularity of something like TikTok and TikTok's entirely like it's an algorithm that also happens to have content, right?
[00:35:51] Like that is what TikTok's value is.
[00:35:53] And you give TikTok the ability to then generate this content on demand, on the fly for everything you want to see ever.
[00:35:58] Like you just never will get people off of that.
[00:36:01] It'll be inescapable.
[00:36:02] And I think that's going to be the fate for large portions of the population for sure.
[00:36:09] It reminds me of that scene in WALL-E where it's just like people like floating around with their like headsets and it's just Mountain Dew straight to the vein.
[00:36:18] And honestly, this also brings up a really fun, you know, kind of chemistry both of y'all have as co-founders here is you are like a bit of an odd couple where, Justin, you strike me as far more of an AI optimist.
[00:36:31] And Max, like I don't want to call you a pessimist, but certainly you have a more pragmatic perspective.
[00:36:37] I consider myself a realistic optimist is the word I use.
[00:36:40] There we go.
[00:36:41] Which means pessimism, yeah.
[00:36:42] It's a nice way of saying pessimism.
[00:36:45] I am curious, like how does that dynamic play out when you're building this company and making product decisions and figuring out where to take stuff?
[00:36:53] I think it works really well, personally.
[00:36:55] Most of the features that go into the platform itself are Justin's brainchild.
[00:37:00] And he gets there from getting feedback from the community.
[00:37:03] We spend a lot of time getting feedback from the community.
[00:37:05] And then it's always fun because then he'll be like, oh, we should do it this way.
[00:37:09] We should make it this way.
[00:37:09] And it's like, no, people will absolutely abuse that.
[00:37:12] Like we will be laundering money if we implement that, you know, like we can't do that.
[00:37:16] And so it's...
[00:37:17] It's nice.
[00:37:18] Max basically will tell me all of the ways that people are going to abuse it because he has got that pessimistic view.
[00:37:23] And it's like, oh.
[00:37:24] It's like the red-team hat, you know.
[00:37:26] Saves us tons of legal fees, right?
[00:37:28] Because I just do all the live.
[00:37:29] You're right.
[00:37:30] You're right.
[00:37:30] We can't do that.
[00:37:32] So it's great for like platform building because it's a good balance in terms of like, okay, well, like this is something we really want to put in.
[00:37:38] But, you know, how could it go awry, right?
[00:37:39] And then most of the time, you know, even though I think one of the plus sides of my pessimism is like part of me really wants to see it go awry.
[00:37:47] So I still am like, oh, let's just do it.
[00:37:48] Anyway, let's push it regardless because I want to see what breaks.
[00:37:51] And that means that we end up pushing out a lot of cool features really quickly.
[00:37:55] So it works well for the platform, I think.
[00:37:57] Love that.
[00:38:00] Max, does that worldview contribute to your belief in the dead internet theory?
[00:38:04] And can you explain what that is to the uninitiated?
[00:38:07] Sure, yeah.
[00:38:08] My personal definition of dead internet theory is this idea that we replace enough of the content on the internet with content that merely mimics other people, or that is indistinguishable from content coming from other people.
[00:38:20] Or you just don't care if it comes from other people to the point where it loses all inherent value.
[00:38:25] And there's no real reason to be on the internet as a whole except just as an entertainment device.
[00:38:30] Yeah, no, I think we're actively contributing to it.
[00:38:32] I think AI is actively contributing to it.
[00:38:34] And I personally find that to be a great thing.
[00:38:37] I've been disillusioned with the internet for years.
[00:38:39] And I would like the entire thing to burn down if possible.
[00:38:42] So if we can help it, then I'm all for it.
[00:38:44] Justin, what do you think of Maxfield's belief in this?
[00:38:48] I mean, I sit here laughing because he's talked to me about this so many times and he's not wrong.
[00:38:54] I mean, it's definitely going to be a challenge.
[00:38:55] It's so easy to make so much content, so much AI slop as they call it these days that it's just the beginning of it all too.
[00:39:03] It's going to require us to kind of think differently and the internet's going to have to change.
[00:39:07] Like, how long until only 10% of the content of the internet is something that was made by a human?
[00:39:14] And I guess the other part is like, does it really even matter?
[00:39:17] It won't matter.
[00:39:18] There's people that I already wish I didn't actually have to talk to that I could just like say,
[00:39:22] hey, LLM, be me to talk to this person so that I don't have to manage this relationship.
[00:39:28] Like a digital twin of Justin delegated to manage this.
[00:39:31] Yeah, can we have the Justin agent take care of things for me?
[00:39:34] That would be great.
[00:39:35] Every time I'm like forced to go on Twitter, I'm like, you know, this would actually be better if these were AI chatbots.
[00:39:40] Like I think it would actually be a better experience if the entirety of Twitter was just AI chatbots.
[00:39:44] Because as it is, it's like, why am I forced to just experience this like slop all of the time?
[00:39:50] So I agree with you.
[00:39:52] It is inevitable.
[00:39:54] And like, it's this weird thing where, you know, I'm doing some prompting to take my few bullet points to turn it into an email.
[00:40:01] And then the other person's using an LLM, the new Apple intelligence to summarize it.
[00:40:06] It's like, wait, couldn't we have just sent the bullet points?
[00:40:08] And so it's this like compression and decompression that's happening, but still kind of human intent at the core of it.
[00:40:16] But you could easily see that change.
[00:40:18] A la the fully generative TikTok feed example that we're, you know, kind of speculating about.
[00:40:23] So I have to ask, what do you think creation looks like in three years from now?
[00:40:29] I think the thing that makes it still difficult is I look at how quickly video has moved over just the last year.
[00:40:35] I don't know if you guys have seen like that Will Smith eating spaghetti from like a year ago compared to like today where it's like, yeah, that could actually be Will Smith eating spaghetti.
[00:40:44] I don't know.
[00:40:45] The auto-generated TikTok thing could be real.
[00:40:48] I wouldn't be surprised to hear that there was already automated generated shorts in three years.
[00:40:52] And maybe they're not fully personalized yet, but that they're probably working towards that.
[00:40:57] I think the other one that's really interesting to me is probably going to be game development.
[00:41:01] It feels like we're still thinking about, you know, how do we make things cost less?
[00:41:05] But really, there's only so much cost cutting that you can do before you, you know, change from looking at efficiency into looking at, okay, well, what interesting stuff can we do with this?
[00:41:15] And how does that kind of change the game?
[00:41:17] Can you double click on that gaming point?
[00:41:19] Because it is interesting.
[00:41:20] I did get a chance to ask Jensen about this at GTC, his prediction that everything will be generated and not rendered in the future.
[00:41:27] And I think it's interesting to think about what happens when the model isn't the means for creation, but the model is the content in itself that you're experiencing.
[00:41:37] Does that spark any thoughts in you?
[00:41:39] Yeah, a few different things.
[00:41:42] I had the opportunity to make my own little AI game about two or three months ago over a weekend.
[00:41:48] And using AI agents as essentially people that were managing the game, people that were characters in the game, people that were creating content for the game.
[00:41:57] It became pretty clear that like, hey, if we can already do this at this stage, and I did this in a weekend, definitely we're going to have these AI generated games where it's like you come in, maybe some of it's been structured by somebody else, maybe not.
[00:42:11] But everything can be made on demand and it can adjust itself to kind of fit what you're doing.
[00:42:16] And I think what I'm imagining is the majority of content that's going to be generated in the future is going to be stuff for games where people aren't there to create something.
[00:42:26] It's not about a creative act.
[00:42:27] It's about enjoying content.
[00:42:29] It's about exploring.
[00:42:30] It's about making choices and kind of seeing the outcomes.
[00:42:32] I think it'll probably change gaming and probably make a lot more people gamers that weren't before because now it'll fit whatever it is that you're interested in.
[00:42:40] Totally.
[00:42:40] Back to personalization.
[00:42:42] Yeah.
[00:42:43] So, Max, everyone is talking about making movies with AI, right?
[00:42:47] Like that seems to be very popular on Twitter.
[00:42:49] But again, when I look at what content people are actually consuming with their eyeballs and driving watch time, it's a lot of short form content.
[00:42:57] It's a lot of memes that I think really drive a lot of attention.
[00:43:01] What do you suspect is going to be like the most dominant form of consumption in a couple of years?
[00:43:08] Same thing as now.
[00:43:09] Short form content, right?
[00:43:10] I mean, our attention spans are only going to get shorter if not already.
[00:43:13] I mean, one of the interesting things about Google when Google came out of the scene, right, is we outsourced a lot of our long-term memory to the internet.
[00:43:20] And we just kind of stopped enhancing that ability within ourselves.
[00:43:24] And as ChatGPT and other generative tools become more prevalent in our lives, we're going to be outsourcing more and more of our ability to think, create, and really actually engage.
[00:43:34] I mean, one of the most popular forms of content that you can see on YouTube is after a new movie comes out is a bunch of explanation videos.
[00:43:40] Yeah.
[00:43:40] Or recaps or reviews, right?
[00:43:43] Because people are like, I don't want to watch this movie.
[00:43:44] I want to watch an eight-minute recap of it and have somebody tell me what it was about at the end.
[00:43:48] I don't want to have to think about what it was about.
[00:43:50] And in my mind, I think that's how like 90% of people engage with content now in the world is they want to just be given the short-form section of it.
[00:43:57] Not because they're lazy, but just because they've conditioned their mind to be like, hey, this is like the easiest way I can get this dopamine hit.
[00:44:03] I'm already at the point where I don't consume any media that's not at two times speed now just because it's too long.
[00:44:09] I can't be bothered to watch anything that's not at two times speed.
[00:44:13] Yeah.
[00:44:13] And it's also the volume of content is growing, right?
[00:44:15] So there's so much more for us to choose from versus like the list of books, movies, TV shows, and certainly social media content is absolutely exploding.
[00:44:25] So this is like one way to make sense of it is just to speed run it all.
[00:44:28] And it was really interesting for me to see the explosion of Notebook LM and people getting podcast summaries.
[00:44:35] So it's like whatever modality or medium or format you're most comfortable consuming in, you can kind of translate any content into any content.
[00:44:44] And it has that remixability aspect that I see on your platforms as well.
[00:44:49] Perhaps it's more on the creation side, but I could totally imagine it tending more towards consumption too.
[00:44:54] One of the more sci-fi extrapolations of this is, you know, I now have like at least one AI tool that records every single meeting I'm in and checks every single email I send and is building a database of me, right?
[00:45:07] I could see a very, very possible future where rather than having me on a podcast, I'm just like, hey, take my AI equivalent and we'll do it.
[00:45:16] And then rather than even consuming a podcast, why would you even bother with that?
[00:45:18] You can have someone's AI equivalent and just ask them questions if you were interested in those questions.
[00:45:22] Or, you know, I could take an AI equivalent of you and have you ask questions of my AI equivalent for me.
[00:45:27] Totally.
[00:45:27] And then get a different AI to summarize it for me.
[00:45:29] And then it's like, okay, well, you know, at this point, at this point, you know, like what even is like the most summarized form of content that I could possibly get out of all that, right?
[00:45:37] Do I even need to bother with any of that?
[00:45:38] Or can I just get a bulleted list being like, this is what this person believes.
[00:45:42] Move on.
[00:45:43] I think in a sense, it's almost inevitable.
[00:45:45] We are going to have a bunch of agents that we delegate to take on a bunch of interactions and they're going to be talking to each other, negotiating, doing all sorts of interesting things.
[00:45:55] So as we wrap things up, I'm curious for each of you, what gives art and creativity in this new world deep value?
[00:46:04] And how does Civitai amplify that value for your community?
[00:46:08] Do you want to go first, Justin?
[00:46:10] Give me a second.
[00:46:11] Sure, sure.
[00:46:12] Take your O1 thinking time.
[00:46:17] I'll have to report exactly what I'm thinking for you.
[00:46:19] Exactly.
[00:46:20] No, it's cool.
[00:46:21] That's obfuscated.
[00:46:22] We can't see the real chain of thought.
[00:46:23] That's not allowed.
[00:46:26] I'll do a quick pass while you're thinking, Justin, if you want.
[00:46:30] Go for it.
[00:46:30] Yeah.
[00:46:31] Yeah.
[00:46:31] No, I mean, I think the thing that's really interesting to me about all this creativity-wise is the less human aspect of it, right?
[00:46:37] It's the idea of, sure, humans are giving some sort of general guidance, but really what it is is almost like kind of a ghost in the machine aspect.
[00:46:46] My favorite art that comes out of any of this is when people combine these resources and they give it no prompt whatsoever.
[00:46:51] They give it absolutely nothing to go off of as to what it should make and what it pops out is ethereal and strange.
[00:46:59] And obviously, it's just an amalgamation of the training data.
[00:47:01] But it's different than what I would think a human artist would think of making.
[00:47:06] It has less intent to it to a degree, which is funny to me because it's like the artists that I think we hold up on the highest pedestal are the ones that are able to convey emotion in the most transcendent kind of way.
[00:47:19] So you're able to get a feeling of some sort without actually having it being spelled out for you in some kind of art form.
[00:47:24] And that personally is kind of the feeling I get from a lot of this just direct robot creation that has no human intervention whatsoever.
[00:47:34] As these models get better, you lose a lot of that.
[00:47:37] You lose more and more of this kind of static and you get more and more intent from people.
[00:47:43] And that's interesting too because it's cool to see these people who obviously have no classical art skills whatsoever would never be able to make these things otherwise are able to convey an idea that has value and meaning to them.
[00:47:55] And I feel like you're able to communicate with somebody on a wavelength that you otherwise wouldn't maybe just through a conversation.
[00:48:02] Love that.
[00:48:03] What do you think, Justin?
[00:48:04] Yeah, I think the thing I would say is really along with the end of what Max was saying there.
[00:48:09] I think the thing that I'm most excited about with what we're doing here is empowering every individual to be able to kind of create and communicate in a way that was very limited before.
[00:48:21] That required, you know, decades of training and experience and exploration.
[00:48:26] And now you can, you know, see somebody else make something and make something of your own that's like that or better in 30 seconds.
[00:48:34] You know, I think that that fundamentally changes the capacity for humanity to communicate.
[00:48:39] And I think that improving our ability to communicate is only going to help us work together better.
[00:48:46] And that's really what I want to do is I want to help us kind of move towards the utopia that I dream of rather than the possible dystopia that maybe Max is telling me is going to happen.
[00:48:58] Max, Justin, thank you so much for joining us.
[00:49:00] Thanks again for having us.
[00:49:01] Thanks so much for having us.
[00:49:01] All right.
[00:49:05] So talking with Justin and Max, I can't help but feel we're at this fascinating inflection point.
[00:49:10] What stands out most is how Civitai essentially baked the entire creative recipe, every model, every prompt, every single step of creation right into the content itself.
[00:49:21] It's like having the tutorial embedded in the artwork, creating this unprecedented level of remixability and transparency.
[00:49:29] Whether you're looking at an image or video, you can see exactly how it was made and can build upon it.
[00:49:35] But this also raises deeper questions about where we're headed.
[00:49:39] As content becomes less static and more dynamic, almost personalized to each viewer, what happens to shared experiences?
[00:49:47] Are we headed towards individual bubbles of AI-generated content, each perfectly tailored to our tastes?
[00:49:52] Or will we find new ways to build shared universes together, co-creating and remixing in ways we can't yet imagine?
[00:50:00] And while Justin and Max might disagree on whether we're headed towards utopia or the dead internet,
[00:50:05] they're both helping build the tools that will define how we express ourselves and connect in this new era.
[00:50:11] As the lines between human and AI-generated content continue to blur,
[00:50:16] platforms like Civitai remind us that community and creativity will be crucial to whatever comes next.
[00:50:26] The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard.
[00:50:32] Our producers are Dominic Girard and Alex Higgins.
[00:50:36] Our editor is Banban Cheng.
[00:50:38] Our showrunner is Ivana Tucker.
[00:50:41] And our engineer is Asia Pilar Simpson.
[00:50:43] Our researcher and fact-checker is Christian Aparthe.
[00:50:46] Our technical director is Jacob Winnick.
[00:50:49] And our executive producer is Eliza Smith.
[00:50:51] And I'm Bilawal Sidhu.
[00:50:53] Don't forget to rate and comment, and I'll see you in the next one.

