It's easy to get caught in the trap of over-complicating things as engineers. We design complex architectures, add a bunch of dependencies, set up database clusters and container orchestration, and think about how to scale to millions of users. All the while, we forget that we're building a simple web application that will be used by a handful of people.
Yet, there are moments when we recognize we've gone too far with over-engineering. We may not act on this realization immediately, but it plants the seed for a simpler approach. Imagine cutting down on dependencies, forgoing the database cluster, stripping away some abstraction layers, or questioning the need for that hefty framework. Imagine how much simpler our lives would've been and how much faster we could've shipped our product.
What if, for the next project, we accepted that a million users aren't going to show up overnight, and instead focused on building a simple, robust, and maintainable solution that gets the job done? Something we would actually enjoy working on day in and day out. Something that lets us focus on the problem at hand, and not the unnecessary complexity we've created.
This blog post is about my journey toward simplifying my approach. I'll introduce you to what works for me as my Tiny Stack and walk you through it in case you find yourself on a similar path.
The Tiny Stack Mindset
The Tiny Stack focuses on what we really need instead of what we could have. It's a stack that's simple to set up, easy to maintain, devoid of unnecessary complexity, and yet powerful enough to build a wide range of applications. The beauty of the Tiny Stack is that it's not a specific set of tools; it's rather a mindset that makes us question the need for each tool we add to our stack.
This is very similar to the idea of intentional simplicity in design. We start with the bare minimum and add only what we need. We don't add anything just because we can. We don't add anything just because it's cool, hot, or trending. We add only what we need.
The Tiny Stack is inherently personal and adaptable. Mine might be completely different from yours, depending on our unique preferences and requirements, and that's completely fine. In fact, I encourage you to question each of the tools I use and decide for yourself if they make sense for your project.
My Version of the Tiny Stack
Ok, with my pretentious inspirational speech out of the way (I hope you enjoyed it), let's get to what you came here for: my version of the Tiny Stack.
The Components
My version of the Tiny Stack is composed of three core components: Astro, SQLite, and Litestream. Each of these tools was chosen for its simplicity, performance, and the overall developer experience when they're used together.
Astro
Astro is a modern meta-framework that allows you to build faster websites with less client-side JavaScript. It's a tool that's gaining traction for its performance benefits and developer-friendly experience. Astro allows you to write components using your favorite framework (React, Vue, Svelte, etc.), but only sends the necessary JavaScript to the client, resulting in faster load times.
It's very simple to set up and use, and I find it to be a great alternative to other more complex frameworks. I also really like the fact that it's framework-agnostic, which means that you can use this stack whether you're a React, Vue, or Svelte developer.
SQLite
SQLite is a self-contained, high-reliability, embedded, full-featured, public-domain, SQL database engine. (That's a mouthful!) It's astonishingly lightweight and can handle a surprising amount of load with proper tuning. SQLite is perfect for small to medium-sized projects that don't require the horsepower of larger database systems like PostgreSQL or MySQL.
The beauty of SQLite lies in its simplicity: it's just a file. There's no need to set up a separate database server, manage connections, or configure replication. It's also cross-platform and has bindings for almost every programming language, making it highly accessible.
Additionally, because SQLite runs in the same process as your application, you don't have to worry about network latency or connection failures. It also takes the sting out of the classic N+1 query problem: issuing many small queries is cheap when each one is a local function call rather than a network round-trip.
Overall, SQLite deserves a lot more credit than it gets. It's a fantastic tool that's often overlooked because of its simplicity. I would highly recommend giving it a try before jumping to a more complex database system.
Litestream
Litestream is a real-time streaming replication tool for SQLite. It complements SQLite by providing the replication capabilities often needed for production environments. With Litestream, your SQLite database is continuously backed up to a separate location (like S3), and in the event of a catastrophe, you can restore your database to any point in time.
Litestream's integration is seamless and does not require changes to your application code. It's a game-changer for using SQLite in production, as it addresses the main concern people have: durability and disaster recovery.
There is a lot more to Litestream which I'll cover later in this article, but for now, let's move on to the next section.
Why This Combination?
You might be wondering why I chose these particular tools for my Tiny Stack. The answer is a combination of simplicity and effectiveness. Each tool is minimal in its own right, but put together, they form a powerful combination for building and deploying web applications. Astro provides the foundation for building the user interface and handling server-side logic, SQLite acts as the persistent storage mechanism, and Litestream ensures that the data is safe and recoverable.
The beautiful thing is that we can package all of these components into a single Docker container and deploy it anywhere! You see, SQLite is just a file, and Litestream is just a tiny binary that runs in the background; add Astro on top of that, and we have a fully functional web application.
A huge benefit of this approach is that we can easily set up local and staging environments identical to production, and our entire stack is as portable as it gets. We can run it on our local machine, on a Raspberry Pi, or on a beefy server. It's all the same, no vendor lock-in, no complicated setup, no headaches.
This combination allows me to focus on writing the application rather than spending time configuring and managing infrastructure. It's a lean stack that can be scaled up with additional tools if necessary, but for many applications, this trio is more than sufficient.
The Tiny Stack in Action
I could probably drop a link to my GitHub repository and call it a day, but I would prefer to walk you through the setup process and explain how each component is set up and configured. I'll also drop a link to the repository at the end of this article, so you can check it out for yourself. So buckle up, we're going in!
Throughout the rest of this article, I'm going to build a simple application that lets us post comments and displays them in a list. It's very basic, but it's enough to demonstrate how the Tiny Stack works.
Setting Up Astro
First, create a new Astro application by running the following command:
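The scaffolding prompts change a little between Astro releases, but the command itself is the standard one:

```sh
# Scaffold a new Astro project in the current directory (follow the prompts)
npm create astro@latest .
```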
This will create a new Astro application in the current directory. You can learn more about Astro in their documentation. At this point, you can run `npm run dev` to start the development server and navigate to `http://localhost:4321` to see the application running.
Once the app is created, we need to install the Node.js adapter, so we can run it in a Docker container. This, again, is as simple as running the following command. This will add the necessary dependencies and configuration to our project.
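Astro has an `astro add` helper for exactly this; depending on your version it may also prompt you to set `output: 'server'` in `astro.config.mjs`, which is what we want, since our pages need to be rendered on demand:

```sh
# Add the Node.js adapter and update astro.config.mjs
npx astro add node
```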
Now, we can run `npm run build` to build our application. This will create a `dist` folder with all the necessary files. We can then run `node ./dist/server/entry.mjs` to start the server for production.
Based on my experience, it sometimes asks to install some additional dependencies, such as `sharp` for image processing. If you run into this issue, just install the missing dependencies and try again.
Setting Up SQLite with Drizzle
Now with our Astro application set up, we can move on to adding SQLite. There are a ton of options when it comes to query builders and ORMs for SQLite in Node.js, but for the purpose of this article, I'm going to use Drizzle because our good friend Ben can't stop talking about it. (Ben, I hope you're happy now and can finally shut up about it.)
Setting up Drizzle was quite simple! First, we have to install a few packages:
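At the time of writing, that boils down to Drizzle itself, the `better-sqlite3` driver, and `drizzle-kit` for generating migrations; package names do shift occasionally, so check the Drizzle docs if anything fails to install:

```sh
npm install drizzle-orm better-sqlite3
npm install -D drizzle-kit @types/better-sqlite3
```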
Then, we need to create a `drizzle.config.ts` file in the root of our project with the following contents:
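The exact keys have moved around between `drizzle-kit` releases, so treat this as a sketch and check it against the version you installed; the schema path under `src/db/` is simply where I chose to keep things:

```ts
// drizzle.config.ts
import type { Config } from "drizzle-kit";

export default {
  schema: "./src/db/schema.ts",
  out: "./migrations",
  driver: "better-sqlite",
  dbCredentials: {
    // Absolute path in production (mounted as a Docker volume), local file in development
    url: process.env.NODE_ENV === "production" ? "/data/db.sqlite3" : "db.sqlite3",
  },
} satisfies Config;
```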
This file tells Drizzle where to find our database schema (more on that in a second), where to output the migration files, and what driver we want to use. In this case, we're going to use `better-sqlite`. We're also telling Drizzle where to find our database file. In production, we want to use an absolute path, so we're using the `/data` directory, which is where our SQLite database will be stored. This is important because we're going to mount this directory as a volume in our Docker container to ensure that the data is persisted.
Now, we need to create the `schema.ts` file that we referenced in our `drizzle.config.ts` file. This file will contain our database schema. For this example, we're going to create a simple table for storing comments:
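Using Drizzle's SQLite column builders, a table matching that description looks roughly like this:

```ts
// src/db/schema.ts
import { sqliteTable, integer, text } from "drizzle-orm/sqlite-core";

export const comments = sqliteTable("comments", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  author: text("author").notNull(),
  content: text("content").notNull(),
});
```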
This is quite self-explanatory if you have any experience with SQL. We're creating a table called `comments` with three columns: `id`, `author`, and `content`. The `id` column is our primary key, and the other two columns are `text` columns that cannot be null.
I'm going to create another file called `types.ts` in the same folder with the following contents:
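Drizzle can infer the row type straight from the table definition; depending on your `drizzle-orm` version the helper is `InferSelectModel` (older versions call it `InferModel`):

```ts
// src/db/types.ts
import type { InferSelectModel } from "drizzle-orm";
import { comments } from "./schema";

export type Comment = InferSelectModel<typeof comments>;
```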
Here, I'm just exporting the type that Drizzle generates for our `comments` table. This will come in handy later when we're writing our queries and need to pass the correct type to our components.
Now, we need to set up our database. To do so, we first need to generate the migration files by running `drizzle-kit generate:sqlite`. This will create a `migrations` folder and a `sql` file that contains the SQL statements for creating our database. We're going to be using this command a lot, so I have added it to the `scripts` section of our `package.json` file:
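I'm calling the script `db:generate` here, but the name is entirely up to you; your existing Astro scripts stay as they are:

```json
{
  "scripts": {
    "dev": "astro dev",
    "build": "astro build",
    "preview": "astro preview",
    "db:generate": "drizzle-kit generate:sqlite"
  }
}
```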
With that in place, all that's left is to create a database instance and run the migration. I'm going to create a `db.ts` file in the `src/utils` folder with the following contents:
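A minimal version looks something like this; the database path mirrors the one in `drizzle.config.ts`, and running the migration on every startup is a simplification that's perfectly fine for a small app:

```ts
// src/utils/db.ts
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { migrate } from "drizzle-orm/better-sqlite3/migrator";
import * as schema from "../db/schema";

// Same path logic as drizzle.config.ts: absolute /data path in production, local file in dev
const dbPath = import.meta.env.PROD ? "/data/db.sqlite3" : "db.sqlite3";
const sqlite = new Database(dbPath);

// Hand the raw connection and our schema to Drizzle for fully typed queries
export const db = drizzle(sqlite, { schema });

// Apply any pending migrations on startup so the tables always exist
migrate(db, { migrationsFolder: "./migrations" });
```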
This file creates a new SQLite database instance and passes it to Drizzle, along with our schema, so we get fully typed queries. Finally, we're running the migration to ensure that our database is up to date.
If you run `npm run dev`, you should see a `db.sqlite3` file in your project root. This is our database file, and it contains the `comments` table we created. If you want to inspect the database, you can use a tool like TablePlus.
Rendering the Comments with Astro
As I mentioned earlier, I'm going to create a simple comment board application. This means that we're going to be able to post comments and view them in a list. You can, of course, add delete and update operations, but that's going to make this article longer than it needs to be.
To keep this organized, let's create a new Astro component that is responsible for displaying the comments. We can do that by creating a `Comment.astro` file in the `src/components` folder with the following contents:
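Something along these lines; the Tailwind classes assume you've added Astro's Tailwind integration (`npx astro add tailwind`), so feel free to drop them:

```astro
---
// src/components/Comment.astro
import type { Comment } from "../db/types";

interface Props {
  comment: Comment;
}

const { comment } = Astro.props;
---

<li class="rounded-lg border border-gray-200 p-4">
  <p class="font-semibold">{comment.author}</p>
  <p class="mt-1 text-gray-700">{comment.content}</p>
</li>
```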
I've added some Tailwind CSS classes here to make it look nice, but you can omit them if you want. The important part is that we're importing the `Comment` type from our `types.ts` file and using it to type the `comment` prop. This will ensure that we're passing the correct type to our component.
Let's move to our `index.astro` file, fetch the comments from the database, and pass them to our `Comment` component. We can do that by adding the following code to our `index.astro` file:
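Here's a pared-down version; I'm skipping layout components to keep the sketch short, and the markup is just one way to do it:

```astro
---
// src/pages/index.astro
import Comment from "../components/Comment.astro";
import { db } from "../utils/db";
import { comments } from "../db/schema";

// Fetch every comment (fine for a demo, paginate in a real app)
const allComments = db.select().from(comments).all();
---

<html lang="en">
  <body class="mx-auto max-w-xl p-8">
    <h1 class="text-2xl font-bold">Comments</h1>
    <ul class="mt-4 space-y-2">
      {allComments.map((comment) => <Comment comment={comment} />)}
    </ul>
  </body>
</html>
```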
As you can see, I'm importing the `db` instance we created earlier and using it to fetch all the comments from the database. I'm then looping over the comments and passing them to the `Comment` component.
Again, this is a very trivial example; ideally, you would want to make sure you're properly handling errors, not fetching too many comments at once, and so on. But for the purpose of this article, this is enough.
If you run `npm run dev` and navigate to `http://localhost:4321`, you should see an empty list. This is because we haven't added any comments yet. In the next section, we're going to add a form that allows us to post comments.
Adding a Form to Post Comments
I'm going to create another component that will be responsible for posting comments. We can do that by creating a `CommentForm.astro` file in the `src/components` folder with the following contents:
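Again, the styling is optional; the parts that matter are `method="POST"` and the field names:

```astro
---
// src/components/CommentForm.astro
---

<form method="POST" class="mt-8 flex flex-col gap-2">
  <input
    type="text"
    name="author"
    placeholder="Your name"
    required
    class="rounded border border-gray-300 p-2"
  />
  <textarea
    name="content"
    placeholder="Your comment"
    required
    class="rounded border border-gray-300 p-2"
  ></textarea>
  <button type="submit" class="rounded bg-gray-900 p-2 text-white">
    Post comment
  </button>
</form>
```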
This is a simple form with two fields: `author` and `content`. We're going to use this form to post comments to our database. Let's then add this form to our `index.astro` file:
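Only the new pieces are shown here; everything else in the page stays as before:

```astro
---
// src/pages/index.astro (add this import to the frontmatter)
import CommentForm from "../components/CommentForm.astro";
---

<!-- ...and render the form below the list of comments -->
<CommentForm />
```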
Now, if you run `npm run dev` and navigate to `http://localhost:4321`, you should see the form at the bottom of the page. But if you try to submit the form, you'll notice that nothing happens. This is because we haven't added any logic to our form yet. Let's do that now.
Adding Logic to the Form
Adding logic to our form is actually very simple! Since our form has no `action` attribute, submitting it sends a `POST` request to the same page. This means that we can handle the form submission right in our `index.astro` file. Let's do that now:
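Here's the page with the `POST` handling added to the frontmatter (the type checks on the form values are a bare-minimum guard, not real validation):

```astro
---
// src/pages/index.astro
import Comment from "../components/Comment.astro";
import CommentForm from "../components/CommentForm.astro";
import { db } from "../utils/db";
import { comments } from "../db/schema";

if (Astro.request.method === "POST") {
  const formData = await Astro.request.formData();
  const author = formData.get("author");
  const content = formData.get("content");

  // Only insert if both fields are plain strings
  if (typeof author === "string" && typeof content === "string") {
    db.insert(comments).values({ author, content }).run();
  }
}

// Fetch the comments after handling the submission so new ones show up immediately
const allComments = db.select().from(comments).all();
---

<html lang="en">
  <body class="mx-auto max-w-xl p-8">
    <h1 class="text-2xl font-bold">Comments</h1>
    <ul class="mt-4 space-y-2">
      {allComments.map((comment) => <Comment comment={comment} />)}
    </ul>
    <CommentForm />
  </body>
</html>
```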
As you can see, we're checking if the request method is `POST`, and if it is, we're fetching the form data and inserting a new comment into the database. We're then fetching all the comments from the database and passing them to our `Comment` component. This means that when we submit the form, the page will reload, and we should see the new comment in the list.
Again, you would want to add some error handling here; ideally, you would want to make sure users are authenticated and not spamming your database, but that's beyond the scope of this article.
At this point, we have a working application that allows us to post comments and view them in a list. This is done using Astro's built-in server-side rendering feature and SQLite as our database. Next, we're going to Dockerize our application and set up Litestream for backups and replication.
Setting Up Litestream
Before I get into the details of setting up Litestream, let me explain how it works. Litestream uses SQLite's write-ahead log (aka WAL) to replicate the database to a separate location (Cloudflare R2 Storage, S3, GCS, etc.). This process allows for efficient and incremental updates, providing a durable and consistent backup without the need for a full database snapshot every time.
Litestream is written in Go, which means that we can compile it into a single binary and run it anywhere. In addition, it comes with a handy `-exec` flag that allows it to supervise our main process. This means that we can run Litestream in the same container as our application and have it automatically shut down when our application closes. You can learn more about Litestream in their documentation.
Now that we know how Litestream works, let's set it up. For this article, I'm going to skip the local setup and jump straight to the Docker setup. Again, you can refer to their documentation for the local setup.
Setting Up Litestream Configuration
First, let's create a `litestream.yml` file in the root of our project with the following contents:
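A minimal config along these lines should do; Litestream expands environment variables in the config file, and it picks up `LITESTREAM_ACCESS_KEY_ID` / `LITESTREAM_SECRET_ACCESS_KEY` from the environment on its own, so the credentials don't need to appear here:

```yaml
# litestream.yml
dbs:
  - path: /data/db.sqlite3
    replicas:
      - type: s3
        bucket: sqlite
        path: db
        endpoint: ${REPLICA_URL}
```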
This file tells Litestream where to find our database file and where to replicate it. In this case, we're replicating it to an S3-compatible bucket. I'm going to use Cloudflare R2 Storage, so I have created a bucket there called `sqlite` and an API token with the necessary permissions. From there, you get the `Access Key ID` and `Secret Access Key`, which correspond to the `LITESTREAM_ACCESS_KEY_ID` and `LITESTREAM_SECRET_ACCESS_KEY` environment variables.
From the bucket itself, you can get the `Endpoint` URL, which corresponds to the `REPLICA_URL` environment variable. Please note that the bucket name is defined separately in the `litestream.yml` file, so make sure you are not passing the bucket name in the `REPLICA_URL` environment variable.
If you're using a different provider, you can refer to the documentation for the correct configuration.
Setting Up Litestream Entrypoint
Now that we have our `litestream.yml` file set up, we need to create an entrypoint script that will start Litestream and our application.
Let's create a `run.sh` file in the `scripts` folder with the following contents:
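Something like this; the `-if-replica-exists` flag keeps the restore from failing on the very first boot, before any replica exists (double-check the flag names against the Litestream version you pin):

```sh
#!/bin/sh
set -e

DB_PATH=/data/db.sqlite3

# Restore the database from the replica if we don't have a local copy yet
if [ ! -f "$DB_PATH" ]; then
  echo "No local database found, attempting restore from replica..."
  litestream restore -if-replica-exists -config /etc/litestream.yml "$DB_PATH"
fi

# Run the Astro server as a subprocess supervised by Litestream
exec litestream replicate -config /etc/litestream.yml -exec "node ./dist/server/entry.mjs"
```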
This script first checks if the database file exists, and if it doesn't, it restores it from the replica. This is useful for the initial setup, but it also ensures that we don't overwrite our database if we restart the container.
Finally, it runs Litestream with our application as a subprocess, so Litestream supervises the application while continuously replicating the database to the replica, which in our case is Cloudflare R2 Storage.
Setting Up Dockerfile
With our application, SQLite, and Litestream set up, we can now create a Dockerfile that will package everything together. Let's create a `Dockerfile` in the root of our project with the following contents:
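Here's a sketch of what that can look like; pin whichever Litestream release you want (I'm using v0.3.13 here) and adjust the Node version to taste:

```dockerfile
# ---- base: download the Litestream binary for the target architecture ----
FROM node:20-slim AS base
ARG TARGETARCH
ADD https://github.com/benbjohnson/litestream/releases/download/v0.3.13/litestream-v0.3.13-linux-${TARGETARCH}.tar.gz /tmp/litestream.tar.gz
RUN tar -C /usr/local/bin -xzf /tmp/litestream.tar.gz

# ---- build: install dependencies and build the Astro app ----
FROM base AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# ---- runtime: copy only what we need and run via the entrypoint script ----
FROM base AS runtime
WORKDIR /app
# Copying node_modules wholesale keeps the sketch simple; prune dev deps if image size matters
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY --from=build /app/migrations ./migrations
COPY litestream.yml /etc/litestream.yml
COPY scripts/run.sh ./run.sh
RUN chmod +x ./run.sh

# The Astro Node adapter reads HOST and PORT at runtime
ENV HOST=0.0.0.0
ENV PORT=4321
EXPOSE 4321

CMD ["./run.sh"]
```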
This Dockerfile is a bit more complicated than what you might be used to, so let's break it down.
First, we have a `base` stage that installs Litestream. We're using a multi-stage build to ensure that we don't have any unnecessary dependencies in our final image. We're also using the `TARGETARCH` argument to ensure that we're installing the correct version of Litestream for our architecture. This is important in case you want to run the container on Apple Silicon or a Raspberry Pi.
Next, we have a `build` stage that installs our dependencies and builds our application. Finally, we have a `runtime` stage that copies the necessary files from the `build` stage, sets up some environment variables, and runs our application using the `run.sh` script we created earlier.
Building and Running the Container
Now that we have our Dockerfile set up, we can build our container and run it. Let's first build the container by running the following command:
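Nothing fancy here, just a regular image build tagged with the name we'll use in a moment:

```sh
docker build -t tiny-stack .
```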
This will build the container and tag it as `tiny-stack`. Now, we need to export the necessary environment variables and run the container. We can do that by running the following command:
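For local testing, exporting them in the shell is enough; the endpoint format shown is Cloudflare R2's, and the placeholders are exactly that:

```sh
export REPLICA_URL="https://<account-id>.r2.cloudflarestorage.com"
export LITESTREAM_ACCESS_KEY_ID="<your-access-key-id>"
export LITESTREAM_SECRET_ACCESS_KEY="<your-secret-access-key>"
```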
Make sure to replace the values with your own. Now, we can run the container by running the following command:
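Here the local `data` folder is mounted to `/data`, the three Litestream variables are passed through from the shell, and the Astro port is published:

```sh
docker run -d \
  -p 4321:4321 \
  -v "$(pwd)/data:/data" \
  -e REPLICA_URL \
  -e LITESTREAM_ACCESS_KEY_ID \
  -e LITESTREAM_SECRET_ACCESS_KEY \
  tiny-stack
```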
This will run the container and mount the `data` directory to the `/data` directory in the container. This is important because we want to ensure that the database is persisted even if we restart the container.
Now, if you navigate to `http://localhost:4321`, you should see the application running. If you post a comment, you should see the database file in the `data` directory. You can also check the S3 bucket to see if the database is being replicated.
Finally, if you restart the container, it should continue using the same database from the `data` directory, since the database lives on the mounted volume. If you want to test the restore functionality, you can delete the database file from the `data` directory and restart the container. This should restore the database from the replica.
A Little Note on Deployments
Having a containerized application gives you a lot of flexibility when it comes to deployments. It's one of the most portable ways to deploy an application, and there are tons of options when it comes to hosting providers. You can deploy it directly to a VM using something like Kamal, or you can use a managed service like Railway or DigitalOcean's App Platform. The choice is yours!
There are a few things to keep in mind when deploying this stack, though. First, you need to make sure that the `data` directory is persisted between deployments. You can get away without a persisted volume, but that would mean some downtime when you deploy a new version of your application, as the database would have to be restored from the replica. Persistent volumes are very simple to set up, and most hosting providers support them out of the box.
Second, you need to make sure that the `REPLICA_URL`, `LITESTREAM_ACCESS_KEY_ID`, and `LITESTREAM_SECRET_ACCESS_KEY` environment variables are set correctly. This is usually done through the hosting provider's dashboard, but you can also set them manually if you're deploying to a VM.
Finally, you would need a proxy server to handle SSL termination and routing. Again, there are a lot of options here; most providers offer a managed solution out of the box, but if you want to set it up yourself, I would suggest looking into Cloudflare, or, if you really want to stay in the Tiny Stack mindset, Caddy is a great and simple option.
Conclusion
This was a long post, I know, but there was a lot to cover. We went from setting up Astro to setting up SQLite and Drizzle, and finally, we set up Litestream for backups and replication. We then Dockerized our application and covered the basics of deploying it.
Trust me though, it's not as complicated as it seems; it's mostly things that you set up once and then forget about. You may also replace some of the tools I used with your own. For example, you may use a different ORM or decide to add a few more libraries to the mix to improve your development experience.
I highly encourage you to try this stack out for yourself and see if it works for you. You can find the source code for this project on GitHub.
Until next time, happy coding!