portfolio dev (part 2): ci/cd and self hosting
originally posted on 2024-12-16
tags: [coding, portfolio-dev, project]
- - -
man, i love self hosting. i love having things under my control.
i love the idea of having my own website.
i love the idea of having my own ci/cd pipeline.
i love the idea of having my own everything.
that's why...i decided to self host my website.
that's right. i sacrificed my sleep schedule (and by proxy, a couple class attendances) to do something completely personal.
motivations
at the time of this endeavor, i was using vercel. vercel is great, and it's free. however...
i was in the middle of deploying my secret santa website (blog here),
and i realized something stupid: i was using a json database (like, literally just a json file), and vercel's serverless filesystem is read-only and ephemeral, so there was no way to dynamically read and write that file.
this was a problem. i could have switched to a real database, but i decided to take this as an opportunity to do something
i'd wanted to do for a while - self host.
this would also give me the opportunity to learn about how websites are hosted, how to use a reverse proxy, and how to use docker.
what could go wrong?
game plan
in order to host both of these websites, i needed to use a reverse proxy.
i decided to use nginx for this.
first of all, i had to learn what a reverse proxy was. i had a vague idea, but i needed to know the specifics.
a reverse proxy (as i understand it) is a server that sits between the client and the website(s). it takes the client's request
and, based on things like the hostname or path, forwards it to the right place. it's literally the reverse of a regular proxy - a forward proxy hides many clients behind one server, while a reverse proxy is one server standing in for many backends.
with this in mind, i wanted to dockerize my websites. this way, i could easily deploy them and keep them portable.
at the moment, my server is a ugreen nas. the line was released recently, and the hardware isn't half bad.
it's a small server that i use for file storage and other things.
i decided to put this server on a separate vlan so that it would be isolated from the rest of my network.
in order to fulfill this...drawing, i used docker. below is the architecture i used for this endeavor:
i made a nginx docker container that would listen on port 80. i then made a configuration file that would forward requests to the right place.
at the moment, i had two websites: my portfolio and the secret santa website for my friends (blog here :3).
each of these websites had their own docker container and their own port. they also had their own docker files.
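the nginx configuration routes between them - something along these lines (the container names, ports, and the /santa path here are placeholders, not my exact config):

```nginx
# default.conf - a minimal sketch (container names, ports, and paths are hypothetical)
server {
    listen 80;
    server_name bryanchan.org;

    # portfolio container answers at the root
    location / {
        proxy_pass http://portfolio:3000;
        proxy_set_header Host $host;
    }

    # secret santa container answers under its own base path
    location /santa/ {
        proxy_pass http://secret-santa:3000;
        proxy_set_header Host $host;
    }
}
```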
in order to make sure that the secret santa website was functional, i had to adjust the nextjs configuration for that app - specifically, the base url (nextjs calls it basePath) so that it matches the path nginx serves it under.
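a minimal sketch of that, assuming /santa as the path:

```js
// next.config.js - a sketch, assuming the app lives under /santa
/** @type {import('next').NextConfig} */
module.exports = {
  basePath: '/santa',
};
```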
after this, i made a docker compose file that would run all of these containers.
all of these containers are run on the same network so that they can communicate with each other.
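the compose file is roughly this shape (service names and paths are placeholders; compose already puts every service on a shared default network, which is how nginx can reach the other containers by name):

```yaml
# docker-compose.yml - a sketch (service names and paths are hypothetical)
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"                 # the only port exposed to the outside world
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - portfolio
      - secret-santa

  portfolio:
    build: ./portfolio          # each site has its own dockerfile

  secret-santa:
    build: ./secret-santa
```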
simply running `docker compose up` would start all of these containers seamlessly.
so far, this was fine. i could run the containers on my local machine, and the websites would be reachable on localhost.
but...i wanted to make this accessible to the world.
the domain
the domain of this website (as of right now, bryanchan.org) is one that i have owned for a while.
when i deployed my portfolio website on vercel, i just changed my dns records to point to vercel's servers.
however, i wanted to self host.
this meant that i had to learn how dns records work - how to set up an a record, a cname record, a txt record, and so on.
here's what i learned!
dns is the system that translates domain names to ip addresses. it works through a hierarchy of servers, usually run by companies and located in data centers
all around the world. when you type in a domain name, your computer asks its configured dns resolver, and the resolver (if it doesn't already have the answer cached)
works its way down the hierarchy: it queries the root name servers, which point it to the tld servers (.org in my case), which point it to
the authoritative name servers - the servers actually responsible for the domain. the authoritative name servers then return the ip address
of the domain.
in order to set up the domain, i had to set up an a record. an a record is the record type that maps a domain name to an ipv4 address.
i simply pointed my domain at my network's public ip address.
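once that's saved, you can sanity check it with dig (the ip below is a placeholder, not my real address):

```sh
# ask for the a record directly; it should print the network's public ip
$ dig +short bryanchan.org A
203.0.113.42
```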
my network uses unifi (a router system), so i had to set up a port forwarding rule. this rule would forward all traffic on port 80 to my server's ip address.
after this, it was just a matter of waiting for the dns records to propagate. now, my website is accessible to the world! that's pretty cool.
okay, but what about ci/cd?
oh right, i almost forgot about that.
i wanted to make sure that my website was always up to date. i wanted to make sure that i could deploy my website with a single command.
this is where ci/cd comes in. ci/cd stands for continuous integration and continuous deployment - the practice of
automatically integrating, building, and deploying code.
in other words, i wanted to be able to update my website automatically whenever i made changes to the code.
at the moment, all of my source code is stored on github. it would be nice if i could just push to github and have my website update on my server.
thankfully, github actions exists.
github actions is a feature of github that allows you to automate workflows. i used this to automate the deployment of my website.
i made a short script that would ssh into my server and pull the latest changes from github and rebuild the portfolio container.
by extension, this also meant that i had to forward a port on my server for ssh. this was a bit of a security risk, but i was willing to take it (at least for now).
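the script itself was nothing fancy - something in this spirit (the host, user, paths, and env var names are placeholders, not my real setup):

```sh
#!/usr/bin/env bash
# deploy.sh - a sketch, not my exact script
set -euo pipefail

# ssh in, pull the latest code, and rebuild just the portfolio container
ssh "deploy@${DEPLOY_HOST}" <<'EOF'
  cd ~/sites/portfolio
  git pull origin main
  docker compose up -d --build portfolio
EOF
```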
i then made a github action that would run this script whenever i pushed to the main branch. the reason i used a separate script (instead of writing everything inline in the workflow) was that i did not want to debug it by repeatedly pushing to github - that would be quite a hassle. keeping it in its own script meant i could test it locally before running it online.
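the action ends up being a thin wrapper around the script - roughly this (the secret names are whatever you choose, not anything official):

```yaml
# .github/workflows/deploy.yml - a sketch (secret names are placeholders)
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: set up the ssh key from repo secrets
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan "${{ secrets.DEPLOY_HOST }}" >> ~/.ssh/known_hosts

      - name: run the deploy script
        env:
          DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}
        run: bash deploy.sh
```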
the secrets were stored in the github repository's secrets (settings > secrets and variables > actions). this way, i could keep the secrets out of the code.
of course, if any changes were made to the more private parts of the website (like environment variables), i would have to
update that manually.
i was really excited when i saw my website update automatically for the first time. it was like magic.
with all of this set up, i was able to self host my website. i was able to update it with a single push to github. there was only one problem...
ssl certificates
aaaaauuuuuuuggghhhhh. i forgot about ssl certificates.
to give a brief rundown, secure sockets layer (ssl - these days technically tls) certificates let a server prove its identity to clients,
which in turn allows the client and server to set up an encrypted connection.
without ssl, the connection between the client and the server is not secure and can be intercepted. for example, if you use
a public wifi network and log into a website without ssl, your login information can be stolen. this is...really bad.
in the context of my website, my secret santa website has a login system. without ssl, someone could intercept someone's login info.
i really didn't want that.
thankfully, there is a service called letsencrypt that provides free ssl certificates.
i used a docker container running certbot (letsencrypt's client tool) to obtain these certificates.
first, i added a new service to my docker compose file. this service would run the certbot container.
the certbot container and the nginx container share a few volumes. this is so that the certbot container can write the
certificates for the nginx container to read and use.
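in compose terms, that's something like this (the volume paths are placeholders; the important part is that nginx mounts the same two volumes on its side):

```yaml
# added under services: in docker-compose.yml - a sketch
  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/www:/var/www/certbot    # webroot where challenge files get written
      - ./certbot/conf:/etc/letsencrypt   # issued certificates end up here
```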
after this, i had to edit the nginx configuration file so that i could obtain the certificates - letsencrypt verifies you control the domain by fetching a challenge file over http, and nginx has to serve it.
this temporary configuration completely removed the proxy pass to my portfolio (for now).
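it just serves the challenge files out of the shared volume - roughly:

```nginx
# temporary config used only while obtaining the certificate - a sketch
server {
    listen 80;
    server_name bryanchan.org;

    # certbot drops challenge files here via the shared volume
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
```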
after this, i had to restart my other containers and run a command on the certbot container
in order to get my certificates.
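the command was roughly this - certbot's webroot mode, matching the shared volume from earlier:

```sh
docker compose run --rm certbot certonly \
  --webroot -w /var/www/certbot \
  -d [domain]
```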
in this case, [domain] would be bryanchan.org. this took a couple of tries, since i didn't realize that nginx had to be up and running to serve the challenge files.
however, in the end, i was able to generate my first certificate.
finally, i edited the nginx configuration file to use the certificates.
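the final config swaps in an https server block pointing at certbot's output (the cert paths follow letsencrypt's default layout; the proxy details are the same sketch as before):

```nginx
# https server block - a sketch
server {
    listen 443 ssl;
    server_name bryanchan.org;

    # read straight out of the volume certbot writes to
    ssl_certificate     /etc/letsencrypt/live/bryanchan.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bryanchan.org/privkey.pem;

    location / {
        proxy_pass http://portfolio:3000;
        proxy_set_header Host $host;
    }
}
```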
after this, i restarted the nginx container and...it worked! i had ssl certificates on my website.
look at that beautiful lock.
and...that's where we are now!
conclusion
i am now self hosting my website.
i have a ci/cd pipeline set up.
i have my own ssl certificates.
i am now in control of my website.
to you - the user - nothing has changed. the website still looks the same. heck, it might even take longer to load, since
there's no cdn in front of it anymore. however, i am happy. i am proud that i was able to do this.
to the future!