The autoBlog Project

I have FOMO watching my dev colleagues play with all the latest AI tools.

As a hobbyist dev, I wanted to have a play with the OpenAI API.

So, as a marketer, I thought it would be cool to generate unique content automatically and see how AI-generated content performs in terms of search engine optimisation.

autoBlog

The biggest limitation of this project is that it's built using free tools. Everything from site speed to image quality suffers a little as a result.

I've also chosen a non-commercial, esoteric subject matter – topics within philosophy – to try to minimise the potential negative impact of AI-generated content (more on this later).

The architecture

Back end:

  • OpenAI API: Used the text-davinci-003 model to generate content and the DALL-E model for images.
  • Express.js: To build a simple API
  • Node.js: Server for handling API requests, running cron jobs and for content generation
  • Octokit: GitHub's REST API – used for creating and retrieving posts
  • Cyclic: Serverless hosting. Fast (and free) way to host the Node.js back end.

Front end:

  • Next.js: A React-based front-end framework for building server-rendered applications. I used a template, and its static site generation gives better performance and SEO.
  • Tailwind: Utility-first CSS framework.
  • Vercel: Hosting platform

How it works

I started by designing a backend server in Node.js.

The server is responsible for generating and saving blog posts with AI.

The server uses a bunch of libraries and APIs, including OpenAI's API to access the DaVinci text model, Octokit for interacting with GitHub, and node-cron for scheduling the blog post generation.
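The setup at the top of the server looks roughly like this – the packages are the ones listed above, but the exact environment variable names are just illustrative:

const express = require('express');
const cron = require('node-cron');
const axios = require('axios');
const { Octokit } = require('octokit');
const { Configuration, OpenAIApi } = require('openai');

const app = express();

// API clients used throughout the server (key/token names are placeholders)
const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });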

Under the hood, here's what the server is doing.

1. The Cron Job Runs a Function to Generate and Save a Blog Post:

Here's an abridged version of the function that sequences the asynchronous steps required to generate and save all of the content for each blog post.

async function generateAndSaveBlogPost() {
  try {
    // Generate the title first – the image prompt and post content both depend on it
    const outputTitle = await generateTitle();
    const outputDallePrompt = await generateDallePrompt(outputTitle);

    // The image and the post content don't depend on each other, so generate them in parallel
    const [outputImage, outputContent] = await Promise.all([
      generateImage(outputDallePrompt),
      generateContent(outputTitle),
    ]);
    const imageBase64 = await downloadImageToBase64(outputImage);

    // Extra logic

    // Save the image and the markdown post to GitHub in parallel
    await Promise.all([
      saveImage(imagePath, imageBase64),
      saveBlogPost(filePath, markdownString, outputTitle),
    ]);

  } catch (error) {
    // Handle error
  }
}

Let's look at the main steps one by one.

2. Generating Unique Blog Titles:

I use OpenAI's GPT-3.5 API to generate a new title based on a list of philosophical topics. I use a bit of JavaScript logic to create a slug at the same time.

async function generateTitle(topic, approach, prompt) {
  // `prompt` is assembled from the topic and approach beforehand (omitted here)

  const openAITitleResponse = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: prompt,
    temperature: 0.9,
    max_tokens: 40,
    top_p: 1.0,
  });

  return openAITitleResponse.data.choices[0].text.trim();
}
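
The slug is just a URL-friendly version of the title. Simplified (and the helper name here is illustrative), it looks something like this:

function createSlug(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // strip punctuation
    .trim()
    .replace(/\s+/g, '-');        // spaces become hyphens
}

// createSlug("What Is the Ship of Theseus?") => "what-is-the-ship-of-theseus"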

3. Creating Visuals with DALL-E:

DALL-E generates images based on text prompts. I create a relevant prompt using the GPT-3.5 API in the same way as before. Then, I generate an image URL with the generateImage() function and download it as a base64-encoded string.

async function generateImage(prompt) {
  const openAIPromptResponse = await openai.createImage({
    prompt: prompt,
    n: 1,
    size: "512x512" // to avoid using too many tokens
  });

  return openAIPromptResponse.data.data[0].url;
}

async function downloadImageToBase64(url) {
  const response = await axios.get(url, { responseType: 'arraybuffer' });
  const base64 = Buffer.from(response.data, 'binary').toString('base64');
  return base64;
}
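
The generateDallePrompt() function that feeds generateImage() works just like generateTitle() – it asks the DaVinci model to describe an illustration for the post. Roughly (the prompt wording here is simplified, not the exact text in the repo):

async function generateDallePrompt(title) {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    // Ask the text model to describe an image that suits the post
    prompt: `Describe a simple, vivid illustration for a blog post titled "${title}".`,
    temperature: 0.8,
    max_tokens: 60,
  });

  return response.data.choices[0].text.trim();
}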

4. Generating Blog Post Content:

The GPT-3.5 API creates the post content in markdown based on the title, in the same way as before.

A generateExcerpt() function extracts the first two sentences to be used as a meta description and summary.
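
Simplified again – the real prompt is longer – but the shape of these two functions is roughly:

async function generateContent(title) {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: `Write a blog post in markdown titled "${title}".`,
    temperature: 0.7,
    max_tokens: 1500,
  });

  return response.data.choices[0].text.trim();
}

function generateExcerpt(content) {
  // Strip markdown headings, then take the first two sentences
  const sentences = content.replace(/^#.*$/gm, '').trim().match(/[^.!?]+[.!?]/g) || [];
  return sentences.slice(0, 2).join(' ').trim();
}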

5. Saving the Blog Post and Image to GitHub:

The saveBlogPost() function saves the blog post to the GitHub repo, while the saveImage() function saves the generated DALL-E image.

Here's roughly what's happening with saveBlogPost...

async function saveBlogPost(filePath, markdownString, outputTitle) {
  const base64EncodedContent = Buffer.from(markdownString).toString('base64');

  //...
  // Check to see if the file already exists by checking the `sha`
  //...

  const createOrUpdateFileRequest = {
    owner: repoOwner,
    repo: repoName,
    path: filePath,
    message: `Add new blog post: ${outputTitle}`,
    content: base64EncodedContent,
    branch: branch
  };

  await octokit.request('PUT /repos/{owner}/{repo}/contents/{path}', createOrUpdateFileRequest);
}
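
saveImage() does the same thing for the base64-encoded image. Simplified (the real version also handles the existing-file check):

async function saveImage(imagePath, imageBase64) {
  await octokit.request('PUT /repos/{owner}/{repo}/contents/{path}', {
    owner: repoOwner,
    repo: repoName,
    path: imagePath,
    message: 'Add image for new blog post',
    content: imageBase64, // already base64-encoded, as the GitHub contents API expects
    branch: branch
  });
}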

Ta-dah! The GitHub repo is updated with the new blog post and image, and this automatically triggers a rebuild of the website hosted on Vercel.

A blog post on autoBlog. Credit: Author

6. Automating Blog Post Generation:

The generateAndSaveBlogPost() function orchestrates the entire process. The cron job schedules blog post generation every day at 9AM. I'm going to change this to every three days soon to avoid using too many tokens.

generateAndSaveBlogPost();
cron.schedule('0 9 * * *', generateAndSaveBlogPost);
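
When I switch to every three days, it'll just be a different cron expression, something like:

// 9AM on every third day of the month instead of every day
cron.schedule('0 9 */3 * *', generateAndSaveBlogPost);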

Ethical considerations and mitigation

Those of us who build with AI must consider the ethical implications of what we make. I want to take a moment to talk about how I've approached this in autoBlog.

I don't expect autoBlog to generate high-quality content. I wrote about the real-life shortcomings – and dangers – of AI-generated content here.

To that end, I've made sure it's clear to users that it's an experiment with content generated by AI. I've included a banner explaining this at the top and bottom of the site, and before and after every piece of content.

This way, questioning the credibility and relevance of the content will be the default for users.

autoBlog is intended as a fun personal experiment. That said, AI can have insidious social consequences in terms of opportunity and inequality.

So I decided to create a blog that can never write about any commercial or social issues. I've set it up to focus on topics within academic philosophy. It's an area I know pretty well, and it's generally non-commercial and pretty esoteric in nature. I'll keep an eye on it, though.

That'll really limit the visitor potential of the site, but it's worth it to reduce the risk of unintended consequences.

What results I'm interested in

As the autoBlog project unfolds, despite its limitations, I'm looking forward to seeing how it performs. I'll be keeping an eye on:

  • Is the DaVinci model capable of generating decent content without human input?
  • How could the prompts be optimised?
  • How could chaining prompts improve the output?
  • Organic search traffic: Can hands-free AI-generated content rank and drive any traffic at all?

I'll report back soon!

Check it out

If you'd like to peek, check out the autoBlog project here.

View the back end code on GitHub

View the front end code on GitHub

Spot anything wrong? Please let me know.