Server – closingtags </>
Categories
Automation Homelab Security Server

Automating Manual Nextcloud Upgrades

The #Nextcloud upgrade process never works for me, and I always end up walking through the manual upgrade steps one by one. After yet another failed update, I decided it was time to #automate the process using #ansible.

Categories
Javascript Programming Security Server Svelte

2022 Recap

I know it’s cliché to ask, but I’m going to anyway: where did the year go? It feels like just yesterday I was writing about setting up a Node & MongoDB development environment with Docker Compose.

#2022 #Recap #FullStack #WebDev

Categories
Automation CSS HTML5 Javascript Linux Mobile PHP Programming Security Server Svelte WordPress

BUY! BUY! BUY!

I’ve been writing on this blog for nearly 9 years and I’ve learned so much since I started. The style and the content have come a long way, and while I cringe every time I read old posts, I thoroughly enjoy seeing how I’ve grown as a developer. I’ve interacted with people that I never would have had the chance to otherwise. It’s been a wonderful learning experience.
While my intentions have always been for this site to exist as a sort of journal/wiki/knowledgebase/playground, I’ve always secretly wanted to become a billionaire tech influencer. And now you can help me achieve that goal by buying my merchandise!
BUY! BUY! BUY!
By purchasing merchandise from my shop, you can support this site financially by giving me real money that you’ve earned for your “hard work.” While donations are always appreciated, I understand that you may want something in return; something tangible, something you can see and smell, something to keep you comfortable while you cry yourself to sleep. And since nobody actually donates to strangers on the internet, I opened a shop.

All of the designs are completely original and there are many, many more to come. The pricing is affordable for all budgets, and the catalog will only grow with more options. Be sure to keep up with posts here by following the social media channels or the RSS feed; there may just be coupon codes hidden in future posts 😉.
So if you’re ready to showcase the fact that you know what HTML is and like the look of monospaced fonts, then you should go check out the new closingtags merch shop. Once you’ve got the closingtags swag (closingswag 🤔), be prepared to have people you barely know ask if you “work with computers” or tell you about their genius new app idea.
Don’t forget to buy, buy, buy!

Categories
Automation Javascript Server

Deploying Node.js Apps with Ansible

‘Automating’ comes from the roots ‘auto-‘ meaning ‘self-‘, and ‘mating’, meaning ‘screwing’.

Categories
Automation Linux Security Server

Writing to Bind Mounts from Unprivileged LXC Containers

Update: As evidenced by the comments on this post, this method may not work for all installs. The issue seems to be with SMB shares; this post outlines the process with NFS shares. Thanks to Alex in the comments for these findings. If you find other issues or learn why this is the case, leave a comment below or fill out the contact form.

Whenever I run into an issue during GNU/Linux server administration, nine times out of ten it’s due to permissions. By this point, it’s only frustrating when I realize I didn’t check the permissions first. Since starting my homelab years ago, one issue that has plagued me has been giving my unprivileged LXC containers write access to shared storage.
I could sidestep this problem by spinning up a VM instead, but I like containers. Why? Containers are great because they reduce resources consumed, segment logic, and are quickly reproduced. This is all accomplished using existing features of the Linux kernel and its user space. The host machine already has a kernel (unlike a VM, which is given its own), so when running a container, the host’s kernel is shared with the container, and the container is managed by the host like another user on the system.

By design, unprivileged LXC containers (henceforth known as unpriv LXC) have no permissions on the host machine. They are relegated to the nobody user and nogroup group, which ensures that if an attacker were to compromise the container, they would have no permissions on the host. That’s all well and good, but what if you want to share storage across your unprivileged containers? There are presumably ways you could punch holes in AppArmor, but Proxmox does, in fact, make it simple.
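To see that isolation for yourself (a hypothetical check, not a required step in this guide), look at the UID range Proxmox shifts unprivileged containers into on the host:

# On the Proxmox host: unprivileged containers are shifted into a high
# UID/GID range (commonly 100000+), so container root is not host root.
cat /etc/subuid   # e.g. root:100000:65536
cat /etc/subgid   # e.g. root:100000:65536

# Files written from inside the container show up on the host owned by a
# shifted UID like 100000, rather than by any real host user.
ls -ln /host/shared_dir_location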
In my example, I have an unpriv LXC running Plex. I don’t want this container to be able to do anything on the host but I would like it to be able to read and write from a shared media folder on my network.
Mounting NFS
The first step is making sure the Proxmox host has access to the Network File System (NFS) share. This is incredibly simple via the GUI. Under Storage View, click the Datacenter, then click Storage. Click the “Add” button and select the appropriate storage option. In your case, it may be SMB/CIFS or Directory, but in my case, it’s NFS.
Update: As noted by commenters, this process does not work for SMB shares.

The ID will be the name of the storage, the server is your NFS IP address, export is the NFS path, and the content is what Proxmox will use this storage for.
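If you’d rather script it, the same storage can be added from the Proxmox CLI with pvesm. This is a sketch with placeholder values; double-check the options against pvesm help nfs on your version:

# Add an NFS storage named "media" and verify it mounted
pvesm add nfs media --server 192.168.1.50 --export /export/media --content images
pvesm status   # NFS storages get mounted under /mnt/pve/<ID>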

Bind Mount
Once our host has access to the NFS share, we need to give the container access to that data via a bind mount. A bind mount is a folder on the host that is mapped inside the container. To create the bind mount, open the Proxmox CLI and run
pct set 100 -mp0 /host/shared_dir_location,mp=/path/in/container

pct is the Proxmox Container Toolkit
set tells pct we’re going to set an option
100 is the container ID we’ll be working on
-mp0 is the name of the mount point
the first path listed is the directory on your host you’re attempting to share with the container
,mp=/path is the path where we want that directory mounted within the container
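You can confirm the mount point was registered by dumping the container’s configuration (using the same hypothetical container ID and paths as above):

pct config 100
# ...
# mp0: /host/shared_dir_location,mp=/path/in/container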

Permissions
From here, open the Proxmox GUI (web interface) and within the Server View, click Datacenter > Permissions > Groups.

Permissions management for a Proxmox Datacenter

After that, click the “Create” button to make a new group permission. Give it a name like “shared_file_access” and a description so you know what it does. We’ll also go on to select “Roles” below “Groups” and create a new role. Give the new role a name like “DataAccessRole” and assign it the storage-related privileges, which are the ones that start with “Datastore.”, as well as “Pool.Allocate” and “Pool.Audit”.
Once that’s complete, we can select our container from the server list on the left, navigate to its permissions, and click “Add” to assign our group permission and the role. We’ll want to do the same thing for the storage point we mounted earlier on the host. That can be found under Datacenter > Storage. Select your storage point and navigate to its permissions. From there, assign the same group and role you gave the container.
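The same group, role, and ACLs can also be created from the CLI with pveum. This is a sketch using the example names above; privilege names and subcommand syntax vary a little between Proxmox versions, so verify with pveum role list:

pveum group add shared_file_access --comment "Shared storage access"
pveum role add DataAccessRole --privs "Datastore.Audit,Datastore.AllocateSpace,Pool.Allocate,Pool.Audit"
pveum acl modify /vms/100 --groups shared_file_access --roles DataAccessRole
pveum acl modify /storage/media --groups shared_file_access --roles DataAccessRole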
If your storage location is properly mounted inside Proxmox, your unpriv LXC should now be able to read and write to the location we mounted earlier! You can test this by opening a terminal in your unpriv LXC, navigating to the bind mount point, and attempting to create a file there, e.g. touch test.txt.
Comment below with how well this worked for you, and if you liked this post, share it around.

Categories
Automation Javascript Server

Node + MongoDB development environment w/Docker Compose

I’ve recently done some work on a personal project that I have put off for far too long. The project itself is a chat bot for moderating Telegram chat groups but this post isn’t about that project. Instead, it will focus on how I simplified setup of the development environment using Docker Compose.
I don’t have much experience with Docker or Docker Compose because in the past, I’ve used Vagrant for creating my environments. And Vagrant has been great, but over the years, I’ve run into a few issues:

virtual machines can take up to a minute to start, sometimes longer
new boxes (images) are slow to download
software versions fall out of sync with production environments
unexpected power outages have led to corrupted VMs

These issues are likely addressable in each configuration, but between custom shell scripts and my limited experience with Ruby (Vagrantfiles are written in Ruby), managing it all can quickly become unwieldy. Plus, Docker has become a de facto industry standard, so it’s about time I learned something about it.
Ideally, this project would have a single file that makes spinning up a consistent development environment quick and painless, which is where Docker Compose comes in. A docker-compose.yml file, located in the root directory of the project, makes starting the environment as simple as running a single command: docker-compose up -d. The -d flag runs the containers in detached mode (in the background), which frees up your terminal.
MongoDB
My first goal was to get MongoDB working with the project. This was relatively easy since the Docker community maintains an official MongoDB image with an example docker-compose.yml:
# Use root/example as user/password credentials
version: '3.1'

services:

  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
      ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/
Since I wanted to start simple, I ran Node locally and connected to the MongoDB container as if it were another machine. To do that, ports needed to be exposed to the host machine. I also needed a database set up, and for that same database to persist. I decided to keep the Mongo Express container, as it provides a useful web interface for checking data.
version: '3.1'
services:
  mongo:
    container_name: toximongo
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: toxipass
      MONGO_INITDB_DATABASE: toxichatdb
    ports:
      - 27017:27017
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
  mongo-express:
    container_name: toximongo-express
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: toxipass
      ME_CONFIG_MONGODB_URL: mongodb://root:toxipass@mongo:27017/toxichatdb
Some of the changes to take note of:

name containers to make them easy to differentiate and access
usernames + passwords set
expose MongoDB to local machine on port 27017
create a persistent volume at ./mongo-volume/ in the root of the project
run /docker-entrypoint-initdb.d/init-mongo.js to create a user and initialize the database

init-mongo.js is a very simple script that runs when the container is started for the first time.
db.createUser({
  user: 'root',
  pwd: 'toxipass',
  roles: [
    {
      role: 'readWrite',
      db: 'toxichatdb',
    },
  ],
});
Node
After getting access to a database, the next step was to include Node. Since Node versions change frequently, it’s great to ensure that all developers are using the same version. It also simplifies getting started on the project, since a developer isn’t expected to install anything else first. This was also straightforward, since the Node.js Docker team supplies an official image with extensive documentation.
version: '3.1'

services:
  node:
    container_name: toxinode
    image: "node:16"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=development
    volumes:
      - ./:/home/node/app
    expose:
      - "8080"
    command: [sh, -c, "yarn && yarn start"]

  mongo:
    container_name: toximongo
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: toxipass
      MONGO_INITDB_DATABASE: toxichatdb
    ports:
      - 27017:27017
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db

  mongo-express:
    container_name: toximongo-express
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: toxipass
      ME_CONFIG_MONGODB_URL: mongodb://root:toxipass@toximongo:27017/toxichatdb
Again, I’ve given the container a custom name. Not much else has changed, except for the command, which opens a shell inside the container, calls Yarn to install dependencies, and then starts the application.
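A quick usage example (the service names match the file above; note that older mongo images ship the mongo shell rather than mongosh):

docker-compose up -d                # start node, mongo, and mongo-express
docker-compose logs -f node         # follow the app's output
docker-compose exec mongo mongosh   # poke at the database directly
docker-compose down                 # tear it all down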
Summary
Since incorporating Docker Compose into this project, I’ve been impressed by how quickly my development environment can start. This setup doesn’t include a Dockerfile, but since it isn’t building custom images, one isn’t necessary. I’m certain there are ways to improve it and I’ve got lots to learn, especially if I’d like to deploy this to production environments. If you’ve got tips or suggestions, let me know!

Categories
Linux Server

Backup containers w/Ansible

The one thing that you should absolutely be doing routinely is backing up your stuff. Yet you’ll notice in my previous post about using Ansible to automate my homelab, there is no mention of backing up the environment. If you’re wondering why that is, it’s because I am both lazy and forgetful.
Having been bitten one too many times by my fancy automated updates, I decided I’d had enough. I was spending too much time connecting to the Proxmox web GUI, starting a backup, then running updates so if (when) things broke, I’d have access to a more recent backup. I needed to be able to take backups within Ansible as well as run my updates.
This is the solution I came up with:

- hosts: host
  remote_user: root

  vars_prompt:
    - name: ctid
      prompt: 'Specify container ID or leave blank for all'
      private: no

  tasks:
    - name: Backup all containers.
      command:
        cmd: vzdump --all --maxfiles 5 --compress gzip
      when: ctid == ""

    - name: Backup specified container.
      command:
        cmd: vzdump {{ ctid }} --maxfiles 5 --compress gzip
      when: ctid != ""

This playbook should be easy enough to understand without too much explaining, but I’ll sum it up: if a container ID is specified when run, use the Proxmox tool vzdump to back up that container; otherwise, back up all containers (compressing the files and keeping only the 5 most recent). Please borrow, tweak, share, and critique.
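Running it looks something like this (hypothetical inventory and playbook file names):

ansible-playbook -i inventory backup-containers.yml
# Specify container ID or leave blank for all:

One caveat: newer Proxmox releases deprecate vzdump’s --maxfiles flag in favor of --prune-backups, so check man vzdump on your host before borrowing this verbatim.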

Categories
Linux Server Yii2

Starting Systemd Services with Vagrant machines

I recently ran into a minor inconvenience with a configuration on one of my Vagrant machines. You see, I’m implementing a queueing service on an application, and I needed that service to start whenever the machine boots. Normally, this is quite simple and is done by creating a file named myservice@.service in /etc/systemd/system with content like so:
[Unit]
Description=My Queue Worker %I
After=network.target

[Service]
User=www-data
Group=www-data
ExecStart=/usr/bin/php /var/www/my_project/my_script --verbose
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then run systemctl daemon-reload and systemctl enable myservice@1 myservice@2 to start two workers on system boot. Reboot your system, run systemctl status myservice@* and you should see both of those services running.
The problem with doing this on Vagrant occurs when the file your service runs lives in the shared folder, which doesn’t get mounted until the system has already started all of its services. But Vagrant has that handy Vagrantfile for provisioning, and it can do all sorts of neat things, like run shell commands after provisioning. To get these very same services to start up in a Vagrant VM, you simply need to add this bit to your Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.provision "shell", run: "always",
    inline: "systemctl start myservice@1 myservice@2"
end
Because this is a change to the Vagrantfile, you’ll have to re-provision the VM with vagrant reload --provision. This shuts down the currently running VM and re-runs all provisioners. Normally, provisioners only run during Vagrant’s provisioning stage, but because we added the run: "always" flag, this snippet will run every time the machine starts. Now, once you’ve booted your VM with vagrant up, SSH into it with vagrant ssh and you should be able to run systemctl status myservice@* to see all of your services running.

Categories
Linux Server

Automating Proxmox LXC Container Creation, Updates, and more with Ansible

I love tinkering with my homelab, but there is always a fear in the back of my mind that one of my servers is running an outdated package that is being actively exploited. I don’t want to spend my free time cleaning up a mess that some nefarious party has made of my servers and network; I want to tinker! I like to keep everything up to date to prevent that, but I hate having to navigate into each and every server to run updates manually. I’m a fan of automating anything I have to do more than a couple of times, so I started researching Ansible.
If you’re doing any sort of server management or application deployment, you really ought to look into this tool. It’s simple to get set up with, and once you put in the initial time investment, it will undoubtedly save you time. Now when I want to update all of my servers, I can run two commands from my terminal. But it doesn’t stop at updating servers, oh no. If I want to create a new server, it’s a matter of copying/pasting an existing configuration from one of my “playbooks,” changing a couple of variables, and running the playbook. Just like that, I’ve got a brand new server running on my network.
By now, you’ve agreed that Ansible is great and that you should be using it, so how can you get started? I’ve got a repo set up on GitHub where I’ve shared what I have so far. If you’re looking to start using Ansible to automate Proxmox, I’ve done some of the heavy lifting already. A lot of what I have there is taken from Nathan Curry’s post on his website. Give that a read first, then come back to my repo where you can tweak to your heart’s desire.
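For a taste of what that looks like, here is a minimal sketch of an update playbook (assuming Debian-based hosts and hypothetical group/file names; it is not the exact playbook from my repo):

- hosts: containers
  remote_user: root

  tasks:
    - name: Update the apt cache and upgrade all packages.
      apt:
        update_cache: yes
        upgrade: dist

Run it with ansible-playbook -i inventory update.yml and every host in the containers group gets updated in one pass.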

Categories
Server

DDNS with Cloudflare API

Cloudflare is fantastic but they aren’t a common option for setting up DDNS on your average home router like other services are. Fortunately, it’s super simple to get around this minor inconvenience with the help of their thoroughly documented API. If you’re interested in hosting a website on your own network, are using Cloudflare’s DNS service, and have the problem of your IP changing frequently, then look no further; I’ve done the legwork already.
I took the liberty of borrowing some code I found from Rohan Jain. His post is great and explains how to get the script running as a systemd service. I was lazy and just set the script below to run every 5 minutes in a cron job because it’s simple.
To get your DIY DDNS solution up and running follow these steps:

save the bash script from below to a machine on your network
get the appropriate credentials for the script (Cloudflare Email, Auth token, Zone ID, DNS ID, and domain name)
put credentials in the script
create a cron job that runs the script at your desired interval
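The full script isn’t reproduced here, but a minimal sketch of the idea looks like this (using Cloudflare’s v4 API; every variable is a placeholder you’d fill in with the credentials gathered above):

#!/bin/bash
# Look up the current public IP
IP=$(curl -s https://api.ipify.org)

# Push it to the existing A record via the Cloudflare v4 API
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$DNS_ID" \
  -H "X-Auth-Email: $CF_EMAIL" \
  -H "X-Auth-Key: $CF_AUTH_KEY" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"$DOMAIN\",\"content\":\"$IP\",\"ttl\":1,\"proxied\":false}"

A crontab entry like */5 * * * * /path/to/cloudflare-ddns.sh covers the “desired interval” from step 4.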