AI and Content Creation: A Leader's Guide

Written by Simen Svale

I'm a big believer in the current wave of AI. As I experiment with large language models, I keep discovering amazing use cases. At this point, it's almost exhausting; the technology is such a gold mine, but harnessing it effectively in the real world can be challenging.

But on May 8, we'll introduce a deceptively simple yet powerful product related to content creation and AI. To tease that announcement (it's taking everything not to reveal it to you now!), I'm here to share some of the principles we've discovered for the effective deployment of AI in content creation.

Don't miss the unveiling of our new product on May 8th and learn how to get early access!

Register now

Using language models is not like programming

As programmers, we're not innately the best prompt writers. We think imperatively, in terms of commands and neat, clean constraints. We strive to be precise, terse, and direct because we see our subjects as perfectly predictable automatons.

Language models are none of those things. One thing you learn early on when using them is the importance of giving them at least a little context. You have to imagine that every conversation is like talking to a brilliant child who has read every book in existence but, apart from that, has no idea about anything, especially not you and your particular situation.

When people complain about hallucinations and generic content, the cause is often that they provided too little background information on the task at hand while, at the same time, being too prescriptive in how they framed it.

Language models are tuned to follow instructions, and they will do so to a fault. If you ask one to write a dossier on a made-up company, it will, inventing all the facts it needs to fulfill your task.

I once asked an early version of GPT to explain what "topo 00 flour" was, misspelling the name of a special class of fine-grained wheat flour used in pizza baking, "Tipo 00." It told me it was a "coarse-grained flour, named after the craggy nature of the Italian landscape." That totally makes sense, since "topo" in many languages is associated with "place" or "locality." But it's a completely made-up "fact," caused by early versions of this technology not being able to know when they don't know. (Topo, by the way, just means mouse in Italian.)

So, approaching AI prompting is very similar to how you would empower another person to help you with a task:

Be clear about the goal, but also the context of the goal and why the goal matters

So, for example, when briefing an AI model on writing a blog post, it's not enough to just say "write a blog post about our new product". You need to provide background: what is the product, what problem does it solve, who is the audience, and what the goal of the post is. The more context you can give, the more effective the AI will be in crafting content that resonates. And the clearer you express what you are trying to achieve, the better the AI can leverage all its training data to help you in ways you might not even have considered.

Be clear about your intentions, share your ideas

When I prompt AI, I think broadly in three categories of context. There is the "raw material," the facts and background information that the AI can draw upon when solving my task. This can sometimes be a lot of text from different sources. While writing this particular blog post, I gave it a stack of engineering notes and marketing text about the product we're launching so it understands what kind of product we're teasing.

Then there is the goal for the task, usually a pretty short statement. For this particular post, I came up with this as a kind of mission statement:

I am writing a blog post for sanity.io/blog as Simen Svale Skogsrud, CTO and Co-founder. The goal is to tease the launch event by talking about AI and content in a way that inspires people, especially copywriters, content creators, and UX writers, to attend the event.

But this is not enough. I think the content we (AI and I) would create together would still be quite generic. It is my job as the leader of this process to go out on a limb and present some high-level ideas and intentions that I want to communicate. It is my job to have a point of view. So I wrote this:

Points I want to make:

- AI is treated like magic, but without context, content will be anemic
- Insight, a point of view, and intentionality are still your job, and are what creates content that connects
- AI can contribute to the shaping and adaptation of content, but it needs raw material
- We see a future where humans are more focused on designing the context, the insights, the intentions, but AI is increasingly able to co-create the final form of a specific bit of content
- Working with AI is more like being a leader than being a programmer
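The three kinds of context described above can be sketched as a simple prompt-assembly step. This is an illustrative sketch only; the function name and all the strings are placeholders, not the actual prompt used for this post.

```python
# A minimal sketch of combining the three kinds of context discussed above:
# raw material, a goal statement, and the author's own points.
# All names and example strings here are illustrative assumptions.

def build_prompt(raw_material: str, goal: str, points: list[str]) -> str:
    """Combine background facts, a mission statement, and a point of view
    into a single prompt for a language model."""
    bullet_points = "\n".join(f"- {p}" for p in points)
    return (
        f"Background material:\n{raw_material}\n\n"
        f"Goal:\n{goal}\n\n"
        f"Points I want to make:\n{bullet_points}\n"
    )

prompt = build_prompt(
    raw_material="Engineering notes and marketing text about the product.",
    goal="Tease the launch event by talking about AI and content.",
    points=[
        "AI is treated like magic, but without context, content will be anemic",
        "Insight and intentionality are still your job",
    ],
)
```

The point of separating the three parts is that each can be iterated on independently: swap in new raw material, keep the goal and points stable, and the output stays on message.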

Working with AI is not like programming, it's like being a leader

The core point I am making here is that working with language models is like being a good leader. Set clear goals. Give rich context. Share information in a way that allows for creativity and flexibility in solving the task. Share your intentions, and give the model license to be bold by going out on a limb yourself and sharing where you want to go.

Another interesting "trick" you can borrow from good leadership practices is giving room for failure. I have found that it often helps to state that I don't expect the model to know everything or to perform every possible task perfectly, and that if it needs more information or feels uncertain about a task, it should come to me and ask for help, as we are solving this together. This seems to reduce the amount of hallucination and thrashing in the output.

As one example, I often use this as a background prompt:

"If you don't know something, I prefer you just let me know. We will figure things out together as collaborators."
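In a chat-style interface, that background line naturally lives in the system message, so it applies to every turn of the conversation. The sketch below uses the common "system"/"user" role convention; the function is a hypothetical helper, not a reference to any particular vendor's API.

```python
# A sketch of prepending the "permission to fail" note to every conversation,
# using the common chat-message role convention. Illustrative only.

COLLABORATOR_NOTE = (
    "If you don't know something, I prefer you just let me know. "
    "We will figure things out together as collaborators."
)

def make_messages(task: str) -> list[dict]:
    """Build a message list with the background note as the system message."""
    return [
        {"role": "system", "content": COLLABORATOR_NOTE},
        {"role": "user", "content": task},
    ]

messages = make_messages("Summarize these engineering notes for a blog post.")
```

Because the note sits in the system message rather than in each user turn, it keeps applying even as the conversation grows.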

You can do it, little language model!

I was once experimenting with generating 3D models assisted by a language model. Specifically, I was creating mountain ranges based on the faces of my coworkers, Mount Rushmore style. I thought it would be neat if I could 3D print some of them, so I asked the AI to export the 3D models as STL files, a file format commonly used for 3D printing.

It declined, saying it didn't have the right exporter available. I responded, "I think the STL file format is quite simple, and I believe you can create an exporter for it yourself." It agreed that it was indeed a simple format, proceeded to write the required exporter, and gave me STL files that worked on the first try.
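The model was right that the format is simple. As a rough illustration (a sketch of the idea, not the exporter it actually wrote), an ASCII STL file is just a named list of triangles, each with a normal and three vertices:

```python
# A minimal ASCII STL writer, illustrating why the format is simple enough
# for a language model to implement from memory. Each facet is a triangle
# given by a normal vector and three vertices.

def write_ascii_stl(path: str, name: str, triangles: list) -> None:
    """triangles: list of (normal, (v1, v2, v3)) tuples of 3-float tuples."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, verts in triangles:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One flat triangle in the XY plane, normal pointing up.
tri = ((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
write_ascii_stl("triangle.stl", "demo", [tri])
```

A real exporter would also triangulate arbitrary meshes and compute the normals, but the file format itself is little more than this.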

Now, I didn't necessarily mean for "AI prompting as leadership" to be taken this literally. But it turns out that showing trust in the abilities of a language model and boosting its self-confidence actually works sometimes, just as it does for us humans.

The future of AI in content operations

At Sanity, we see a future where content creators are more focused on designing the context: the intentions, the insights, the strategy behind the content. AI, in turn, becomes an increasingly capable co-creator, able to take that context and cast the final forms of the content.

This is the vision we'll showcase at our upcoming AI and content launch event: empowering content creators with AI tools that understand the context of what they're creating.

If you are excited about the potential of AI in content creation, I encourage you to join us on May 8. It's an opportunity to see the future of content, and to take part in shaping it.

I look forward to seeing you there!

Join us online for Sanity Connect to explore the future of content creation. You'll see exciting product announcements, hear from inspirational customers, and learn from a former Pixar storyteller about creating compelling content with AI.

Register now