Adding comments to your Astro blog

A guide to adding comments to your Astro blog

May 21, 2023

A blog without comments is like a party without guests. It is not fun. So today I am going to write a simple comment system for my blog.

Because I am hosting my blog with Cloudflare Pages, I’ve been considering using Cloudflare D1 for a while now. I saw it was still in open alpha and currently free, and I assumed it would cost a lot of money once it left beta. But I was wrong: it is free for up to 5M reads and 100K writes per month, with 1GB of storage. That is more than enough for my blog, so I decided to give it a try.

In this guide I’ll be using Kysely as the SQL query builder. I really wanted to use Prisma, but after some research I found out that it is slow, not well suited for serverless, and does not support D1 yet. So I decided to use Kysely instead. It is fast because it is just an SQL builder, and I can use kysely-codegen to generate types for my database schema from the database itself.

Enable SSR with Cloudflare Pages

Astro is a static site generator, but it can also be used as a server-side rendering framework. To enable SSR, we need to change the output option in our astro.config.mjs file to 'server'.

astro.config.mjs

import { defineConfig } from 'astro/config';

export default defineConfig({
+  output: 'server'
});

And add an adapter to be used at runtime. Since I am hosting on Cloudflare Pages, that’s the @astrojs/cloudflare adapter.

npx astro add cloudflare
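
After running the command, your astro.config.mjs should end up looking roughly like this (a sketch of what astro add produces; the exact formatting may differ):

astro.config.mjs

import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'server',
  adapter: cloudflare(),
});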

Disable SSR for static pages

By having output set to 'server', Astro will treat all pages as server-side rendered. But as this site is just a blog, most of the content is static. I’ll disable SSR for everything I currently have, and only enable it for the comment-related pages.

This is to take advantage of Cloudflare’s free static site hosting and to only pay for Function invocations when the comment-related pages are accessed.

You can disable SSR by exporting a prerender property from your page component.

src/pages/index.astro

---
export const prerender = true;
---
<meta http-equiv="refresh" content="0;url=/en/" />

Custom _routes.json

By default, the @astrojs/cloudflare adapter will automatically create a _routes.json file that contains all the routes that will be handled by Cloudflare Pages Function invocations; routes not included in it are treated as static pages. But if you include everything, the static pages will eat into your invocation quota, so I’ll only include the comment-related routes.

If you create a _routes.json file inside your public folder, it will be used instead of the one generated by the @astrojs/cloudflare adapter.

public/_routes.json

{
  "version": 1,
  "include": [
    "/en/blog/*/comment",
    "/th/blog/*/comment"
  ],
  "exclude": []
}

Enabling Preview

Install wrangler as a dev dependency.

npm install wrangler --save-dev

After installing Wrangler, the first time you run a command while unauthenticated you will be directed to a web page asking you to log in to the Cloudflare dashboard. If that does not happen, you can run npx wrangler login to log in manually.

Edit the preview script to use the wrangler pages dev command.

package.json

{
  "scripts": {
-   "preview": "astro preview",
+   "preview": "wrangler pages dev ./dist",
  }
}

To preview your site, you need to build it first before running the preview script.

npm run build
npm run preview

It will start a local server at http://localhost:8788.

Adding D1 to your project

  1. Create a new D1 database using wrangler CLI.
$ npx wrangler d1 create <DATABASE_NAME>

 Successfully created DB '<DATABASE_NAME>'

[[d1_databases]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "<DATABASE_NAME>"
database_id = "<unique-ID-for-your-database>"

A good database name is:

  • 32 characters max, alphanumeric, and uses dashes (-) instead of spaces.
  • Descriptive of the use-case and environment - for example, “staging-db-web” or “production-db-backend”
  • Only used for describing the database, and is not directly referenced in code.

This will create a new D1 database, and output the binding configuration needed in the next step.

  2. Create a wrangler.toml configuration file and add the binding configuration to it.

wrangler.toml

name = "xiaz-tv_astro" # name of your project
main = "dist/_worker.js" # the entry point for your Workers script, this file must exist but is not being used by Pages
compatibility_date = "2023-05-19" # the earliest date that the Workers runtime will be updated to the next breaking version

# binding configuration, copied from the output of `wrangler d1 create` command above
[[d1_databases]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "<DATABASE_NAME>"
database_id = "<unique-ID-for-your-database>"

  3. Generate types from the binding configuration.

npx wrangler types

This will generate a worker-configuration.d.ts file in your project root folder with all the types needed to use D1 and any other bindings you have (a sketch of the generated file is shown after this list). It requires @cloudflare/workers-types, so install it as a dev dependency.

npm install @cloudflare/workers-types --save-dev

  4. Add Wrangler files to the .gitignore file.

.gitignore

# wrangler
wrangler.toml
.wrangler/
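
For reference, the worker-configuration.d.ts generated in step 3 should contain roughly the following when you only have a single D1 binding (a sketch; the exact output depends on your bindings):

worker-configuration.d.ts

// Generated by `npx wrangler types`
interface Env {
  DB: D1Database;
}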

Creating a database migration

Cloudflare D1 comes with a built-in migration tool, so we don’t need to use any third-party migration tool. We can use npx wrangler d1 migrations command to create a new migration.

npx wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>

This will create a new migration file in the migrations folder. We can then edit the migration file to create our database schema.

DROP TABLE IF EXISTS comments;
CREATE TABLE IF NOT EXISTS comments (
  id integer PRIMARY KEY AUTOINCREMENT,
  posted timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  author text NOT NULL,
  body text NOT NULL,
  post_slug text NOT NULL
);
CREATE INDEX idx_comments_post_slug ON comments (post_slug);

To list all unapplied migrations, run npx wrangler d1 migrations list <DATABASE_NAME>

To apply all unapplied migrations, run npx wrangler d1 migrations apply <DATABASE_NAME>

We will apply the migration on local database first, so we can test our code before deploying it to production.

npx wrangler d1 migrations apply <DATABASE_NAME> --local

When we are ready to deploy our code to production, we just run this again without the --local flag.

Generate types from database schema

We can write the types for Kysely manually, but it is easier to just generate them from the database schema. We can use kysely-codegen to do that for us.

First, we need to install kysely-codegen as a dev dependency.

npm install kysely-codegen --save-dev

kysely-codegen looks for the database connection string in the .env file, so we need to create one. Cloudflare D1 is a SQLite database, and the local database file is located at .wrangler/state/v3/d1/<DATABASE_ID>/db.sqlite, so we need to add the following to the .env file.

.env

DATABASE_URL=.wrangler/state/v3/d1/<DATABASE_ID>/db.sqlite

You can find your database ID in wrangler.toml file, or by running npx wrangler d1 list command.

Also add it to the src/env.d.ts file so IntelliSense can pick it up.

src/env.d.ts

interface ImportMetaEnv {
  readonly DATABASE_URL: string;
}

interface ImportMeta {
  readonly env: ImportMetaEnv;
}

Add a generate script to package.json file.

package.json

{
  "scripts": {
    "generate": "kysely-codegen"
  }
}

Then we can generate types by running npm run generate command.

npm run generate

kysely-codegen will generate a .d.ts file inside its own package folder, which we can then import into our project.
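
For the comments table above, the generated file should look roughly like this (a sketch based on the migration schema; column types such as posted may come out differently depending on the dialect kysely-codegen detects):

import type { ColumnType } from "kysely";

// Columns with defaults (id, posted) are optional on insert
export type Generated<T> = T extends ColumnType<infer S, infer I, infer U>
  ? ColumnType<S, I | undefined, U>
  : ColumnType<T, T | undefined, T>;

export interface Comments {
  id: Generated<number>;
  posted: Generated<string>;
  author: string;
  body: string;
  post_slug: string;
}

export interface DB {
  comments: Comments;
}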

Creating a database client

Install kysely and kysely-d1 as dependencies.

npm install kysely kysely-d1

The Cloudflare runtime can be accessed with the getRuntime function from the @astrojs/cloudflare/runtime package. We can use it to get the database binding from the Request object.

Create a getDB helper function that we can use to get a database client from the Request object anywhere in our project.

src/db/index.ts

import { getRuntime } from "@astrojs/cloudflare/runtime";
import { Kysely } from "kysely";
import { D1Dialect } from "kysely-d1";
import type { DB } from "kysely-codegen";

export const getDB = async (request: Request) => {
  const runtime = getRuntime<Env>(request);

  // Create a Kysely instance with kysely-d1
  const db = new Kysely<DB>({
    dialect: new D1Dialect({ database: runtime.env.DB }),
  });

  return db;
}

Add a path alias to the tsconfig.json file so we can import @db from anywhere in our project.

tsconfig.json

{
  "extends": "astro/tsconfigs/strict",
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@i18n/*": ["src/i18n/*"],
      "@components/*": ["src/components/*"],
      "@layouts/*": ["src/layouts/*"],
+     "@db": ["src/db/index.ts"],
+   },
+   "types": ["@cloudflare/workers-types"]
  }
}

Enable database access in the Vite dev environment

In order for the npm run dev command to work, we need a separate database driver for when we run in the Vite dev environment. We can use better-sqlite3 for this; install it as a dev dependency.

npm install better-sqlite3 --save-dev

Update the getDB helper function to use better-sqlite3 when running in the Vite dev environment. To keep ESBuild from bundling it, we can load it with a dynamic import. This worked fine when I ran npm run build, but for some reason Wrangler, which also uses ESBuild, still tried to bundle it. So I load the driver package name from an .env.development file instead.

src/db/index.ts

import { getRuntime } from "@astrojs/cloudflare/runtime";
import { Kysely, SqliteDialect } from "kysely";
import { D1Dialect } from "kysely-d1";
import type { DB } from "kysely-codegen";

export const getDB = async (request: Request) => {
  const runtime = getRuntime<Env>(request);

+ if (!runtime) {
+   // This means we are in Vite dev mode
+   // Load the database driver from env to avoid bundling it
+   // The content of `.env.development` is: DATABASE_PACKAGE=better-sqlite3
+   const driverPackage = import.meta.env.DATABASE_PACKAGE;
+   const Database = (await import(`${driverPackage}`)).default;
+   const DATABASE_URL = import.meta.env.DATABASE_URL;
+   // Create a Kysely instance with SqliteDialect
+   const db = new Kysely<DB>({
+     dialect: new SqliteDialect({
+       database: new Database(DATABASE_URL),
+     }),
+   });
+   return db;
+ }

  // Create a Kysely instance with kysely-d1
  const db = new Kysely<DB>({
    dialect: new D1Dialect({ database: runtime.env.DB }),
  });
  return db;
}

.env.development

DATABASE_PACKAGE=better-sqlite3

Note: Do not add this file to .gitignore; we need it to be included in the repo.

Creating endpoints

Now that we have a database client, we can start creating endpoints for our comment system. But first, to make life easier, we create two helper functions, error and success.

src/components/util.ts

export const error = (status: number, message: any = {}) => {
  return new Response(JSON.stringify(message), {
    status: status,
    headers: {
      "Content-Type": "application/json",
    },
  });
}

export const success = (message: any) => {
  return new Response(JSON.stringify(message), {
    status: 200,
    headers: {
      "Content-Type": "application/json",
    },
  });
}

I’ll put our endpoints on the same route as the blog post, so the comment endpoint will be /<lang>/blog/<slug>/comment

GET /[lang]/blog/[slug]/comment

To create a GET endpoint, we need to create a comment.ts file inside the src/pages/[lang]/blog/[slug] folder.

src/pages/[lang]/blog/[slug]/comment.ts

import type { APIRoute } from "astro";
import { getCollection } from "astro:content";
import { getDB } from "@db";
import { error, success } from "@components/util";

export const get: APIRoute = async function get({ request, params: { lang, slug } }) {
  return success("Hello World");
}

Try to access the endpoint at http://localhost:3000/en/blog/<slug>/comment and you should see a 200 response with the message “Hello World”.

This get endpoint will be used to get all the comments for a blog post. To implement this, first we check if the post exists.

if (!slug) {
  return error(400, {
    message: "Missing slug",
  });
}

const blogs = await getCollection("blog", (post) => post.data.lang === lang);
const blog = blogs.find((post) => post.slug === slug);

if (!blog) {
  return error(404, {
    message: "Post not found",
  });
}

If all checks pass, we get all the comments for that post from the database.

const db = await getDB(request)
const comments = await db
  .selectFrom("comments")
  .select(["id", "author", "body", "posted", "post_slug"])
  .where("post_slug", "=", slug)
  .orderBy("posted", "desc")
  .execute();

You will notice that every method from Kysely is typed, and you can easily select the correct column name from the auto-complete list. If you are using VS Code, you can press Ctrl + Space to show the auto-complete list. It is really helpful.

Then we return the comments as JSON.

return success(comments);
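
Putting the pieces together, the complete get handler looks roughly like this (assembled from the snippets above):

src/pages/[lang]/blog/[slug]/comment.ts

import type { APIRoute } from "astro";
import { getCollection } from "astro:content";
import { getDB } from "@db";
import { error, success } from "@components/util";

export const get: APIRoute = async function get({ request, params: { lang, slug } }) {
  if (!slug) {
    return error(400, { message: "Missing slug" });
  }

  // Make sure the post actually exists before querying for its comments
  const blogs = await getCollection("blog", (post) => post.data.lang === lang);
  const blog = blogs.find((post) => post.slug === slug);
  if (!blog) {
    return error(404, { message: "Post not found" });
  }

  // Fetch all comments for this post, newest first
  const db = await getDB(request);
  const comments = await db
    .selectFrom("comments")
    .select(["id", "author", "body", "posted", "post_slug"])
    .where("post_slug", "=", slug)
    .orderBy("posted", "desc")
    .execute();

  return success(comments);
}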

POST /[lang]/blog/[slug]/comment

For the POST method, it’s almost the same as the get endpoint.

src/pages/[lang]/blog/[slug]/comment.ts

export const post: APIRoute = async function post({ request, params: { lang, slug } }) {
  return success("Hello Again");
}

Try accessing the endpoint again, but this time use an API client like Hoppscotch to test it with the POST method. You should see a 200 response with the message “Hello Again”.

We can reuse the blog validation from the get endpoint and change the database query to insert the comment instead.

interface NewComment {
  author: string;
  body: string;
}
const { author, body } = await request.json<NewComment>();
const posted = new Date().toISOString();

if (!author || !body) {
  return error(400, {
    message: "Missing author or body",
  });
}

if (author.length < 3 || body.length < 3) {
  return error(400, {
    message: "Author or body too short",
  });
}

if (author.length > 50 || body.length > 500) {
  return error(400, {
    message: "Author or body too long",
  });
}

const db = await getDB(request);
const newComment = {
  id: "0",
  author,
  body,
  posted,
  post_slug: slug,
}
const result = await db
  .insertInto("comments")
  .values({
    author,
    body,
    posted,
    post_slug: slug,
  })
  .executeTakeFirst();
if (!result || !result.insertId) {
  return error(500, {
    message: "Error inserting comment",
  });
}
newComment.id = result.insertId.toString();

Then we return the new comment as JSON.

return success(newComment);
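
For completeness, the full post handler assembled from the snippets above looks roughly like this, with the same post-existence check reused from the get endpoint:

src/pages/[lang]/blog/[slug]/comment.ts

interface NewComment {
  author: string;
  body: string;
}

export const post: APIRoute = async function post({ request, params: { lang, slug } }) {
  if (!slug) {
    return error(400, { message: "Missing slug" });
  }

  // Reuse the blog validation from the get endpoint
  const blogs = await getCollection("blog", (post) => post.data.lang === lang);
  const blog = blogs.find((post) => post.slug === slug);
  if (!blog) {
    return error(404, { message: "Post not found" });
  }

  // Parse and validate the request body
  const { author, body } = await request.json<NewComment>();
  const posted = new Date().toISOString();
  if (!author || !body) {
    return error(400, { message: "Missing author or body" });
  }
  if (author.length < 3 || body.length < 3) {
    return error(400, { message: "Author or body too short" });
  }
  if (author.length > 50 || body.length > 500) {
    return error(400, { message: "Author or body too long" });
  }

  // Insert the comment and return it with its new id
  const db = await getDB(request);
  const result = await db
    .insertInto("comments")
    .values({ author, body, posted, post_slug: slug })
    .executeTakeFirst();
  if (!result || !result.insertId) {
    return error(500, { message: "Error inserting comment" });
  }

  return success({
    id: result.insertId.toString(),
    author,
    body,
    posted,
    post_slug: slug,
  });
}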

Now, if you try to post a comment with invalid data, you should see a 400 response with the error message. If you try to post a comment with valid data, you should see a 200 response with the new comment.

Creating a comment component

My plan is to create a Svelte component that loads the comments on mount, and only hydrates when it is visible. This way we can reduce the number of Function invocations and improve the performance of the site.

I’ll create a Comment component that accepts a slug prop and loads the comments from the server when it is mounted.

src/components/Comment.svelte

<script lang="ts">
  import { onMount } from "svelte";
  import type { Comments } from 'kysely-codegen';

  export let lang: string = 'en';
  export let slug: string;

  let comments: Comments[] = [];

  let newComment = {
    author: '',
    body: '',
  };

  let message = '';

  onMount(async () => {
    const response = await fetch(`/${lang}/blog/${slug}/comment`);
    const data = await response.json<Comments[]>();
    comments = data;
  });

  async function post() {
    message = '';
    const response = await fetch(`/${lang}/blog/${slug}/comment`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(newComment),
    });
    
    if (!response.ok) {
      const data = await response.json<{ message: string }>();
      message = data.message;
      return;
    }
    
    const data = await response.json<Comments>();
    comments = [data, ...comments];
    newComment.author = '';
    newComment.body = '';
  }
</script>

<div class="bg-stone-800 rounded shadow px-4 py-3">
  <form on:submit|preventDefault={post}>
    <label class="block">
      <textarea class="w-full h-24 px-2 py-1 bg-stone-700 text-stone-200" placeholder="Write a comment..." bind:value={newComment.body} />
    </label>
    <label class="block">
      <input type="text" class="w-full px-2 py-1 bg-stone-700 text-stone-200" placeholder="Name" bind:value={newComment.author} />
    </label>
    {#if message}
      <div class="text-red-500">{message}</div>
    {/if}
    <button type="submit" class="mt-2 px-2 py-1 rounded shadow bg-stone-700 text-stone-100">Post</button> 
  </form>

  {#if comments.length === 0}
    <p>No comments yet.</p>
  {/if}
  {#each comments as comment}
  {@const dateStr = (new Date(comment.posted.toString())).toLocaleString(lang)}
    <div class="mt-2">
      <div>{comment.author} <span class="text-stone-400 text-xs">commented on {dateStr}</span></div>
      <div>{comment.body}</div>
    </div>
  {/each}
</div>

Note: We can use the Comments type generated by kysely-codegen to type the comments variable.

Adding the comment component to the blog post

We can just append it to the MarkdownPostLayout we created a while ago.

src/layouts/MarkdownPostLayout.astro

---
...
+import Comment from '@components/Comment.svelte';
...
---
<BaseLayout ...>
  <div class="max-w-5xl mx-auto px-8 py-4">
    <article class="blog-layout">
      ...
    </article>
    <Comment lang={blogEntry.data.lang} slug={blogEntry.slug} client:visible />
  </div>
</BaseLayout>

Note: We use the client:visible Client Directive to only hydrate the component when it is visible. This uses an IntersectionObserver internally to keep track of visibility.

Deploying to Cloudflare Pages

Before we can deploy, we must first migrate the production database.

npx wrangler d1 migrations apply <DATABASE_NAME>

Note: This is without the --local flag.

And don’t forget to add the D1 database bindings to your Pages.

  1. Go to your Pages project’s settings in the Cloudflare dashboard.
  2. Click on the “Functions” tab on the left.
  3. Scroll down to “D1 database bindings” section.
  4. Click on “Edit binding” button.
  5. Enter your binding name (the one you refer to in code; in this example we use DB).
  6. Select your database from the dropdown.
  7. Click on “Save” button.
  8. (Optional) Do the same for “Preview” tab.

Then we can deploy our site to Cloudflare Pages. I have set up Pages so that when I push to the main branch, it will automatically deploy to production. So I just need to push my code to GitHub.

git add .
git commit -m "Add comment system"
git push

TODO

  • I may have to change the way I load the comments in the future, because right now it loads all the comments for a post at once, and if a post has a lot of comments, it will take a long time to load. I may have to load the comments in chunks, or only load them when the user scrolls down to the comment section.
  • I may also have to add some sort of rate limiting to the comment endpoint, so that a user can’t spam the endpoint and eat up all my Function invocation quota.
  • Maybe add some authentication to the comment endpoint?
  • Spam filter? Akismet?

Conclusion

And that’s it for now. We now have a comment system for our blog. You can see it in action down below. If you have any questions or suggestions, let me know in the comment section, or on our Discord server.
