Introducing a collectively owned and operated digital solutions team
#webdev #coop #code #smallbiz
The #Nextcloud upgrade process never works for me and I always end up having to walk through the manual upgrade process step-by-step. After yet another failed update, I decided it was time to #automate the process using #ansible.
I’ve been writing on this blog for nearly 9 years and I’ve learned so much since I started. The style and the content have come a long way, and while I cringe every time I read old posts, I thoroughly enjoy seeing how I’ve grown as a developer. I’ve interacted with people that I never would have had the chance to otherwise. It’s been a wonderful learning experience.
While my intentions have always been for this site to exist as a sort of journal/wiki/knowledgebase/playground, I’ve always secretly wanted to become a billionaire tech influencer. And now you can help me achieve that goal by buying my merchandise!
BUY! BUY! BUY!
By purchasing merchandise from my shop, you can support this site financially by giving me real money that you’ve earned for your “hard work.” While donations are always appreciated, I understand that you may want something in return: something tangible, something you can see and smell, something to keep you comfortable while you cry yourself to sleep. And since nobody actually donates to strangers on the internet, I opened a shop.
All of the designs are completely original and there are many, many more to come. The pricing is affordable for all budgets, and the selection will only expand with more options. Be sure to check back here often by following the social media channels or the RSS feed. There may just be coupon codes hidden in future posts 😉.
So if you’re ready to showcase the fact that you know what HTML is and like the look of monospaced fonts, then you should go check out the new closingtags merch shop. Once you’ve got the closingtags swag (closingswag 🤔), be prepared to have people you barely know ask if you “work with computers” or tell you about their genius new app idea.
Don’t forget to buy, buy, buy!
‘Automating’ comes from the roots ‘auto-‘ meaning ‘self-‘, and ‘mating’, meaning ‘screwing’.
Update: As evidenced by the comments on this post, this method may not work for all installs. The issue seems to be with SMB shares; this post outlines the process with NFS shares. Thanks to Alex in the comments for these findings. If you find other issues or learn why this is the case, leave a comment below or fill out the contact form.
Whenever I run into an issue during GNU/Linux server administration, 9 times out of 10 it’s due to permissions. By this point, it’s only frustrating when I realize that I didn’t check the permissions first. Since starting my homelab years ago, one issue that has plagued me has been giving my unprivileged LXC containers write access to shared storage.
I could sidestep this problem by running a VM instead, but I like containers. Why? Containers are great because they consume fewer resources, segment logic, and are quickly reproduced. This is all accomplished using existing features of the Linux kernel and its user space. The host machine already has a kernel (unlike a VM, which is given its own), so when running a container, the host’s kernel is shared with the container, and the container is managed by the host as just another user on the system. By design, unprivileged LXC containers (henceforth known as unpriv LXC) have no permissions on the host machine. They are relegated to the nobody user and nogroup group. This ensures that if an attacker were to compromise the container, they would have no permissions on the host machine. That’s all well and good, but what if you want to share storage across your unprivileged containers? There are presumably ways you could punch holes in AppArmor, but Proxmox does, in fact, make it simple.
In my example, I have an unpriv LXC running Plex. I don’t want this container to be able to do anything on the host, but I would like it to be able to read from and write to a shared media folder on my network.
Mounting NFS
The first step is making sure the Proxmox host has access to the NFS (Network File System) share. This is incredibly simple via the GUI. Under Storage View, click the Datacenter, then click Storage. Click the “Add” button and select the appropriate storage option. In your case, it may be SMB/CIFS or Directory, but in my case, it’s NFS.
Update: As noted by commenters, this process does not work for SMB shares.
The ID will be the name of the storage, the server is your NFS server’s IP address, the export is the NFS path, and the content is what Proxmox will use this storage for.
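If you’d rather do this from the shell, the same storage can be added with pvesm, the Proxmox storage manager. This is only a rough sketch; the storage ID, server address, export path, and content types below are placeholder values to adapt to your own setup:
# add an NFS share as Proxmox storage (example values throughout)
pvesm add nfs media --server 192.168.1.50 --export /mnt/tank/media --content images,rootdir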
Bind Mount
Once our host has access to the NFS share, we need to give the container access to that data via a bind mount. A bind mount is a folder on the host that is mapped inside the container. To create the bind mount, open the Proxmox CLI and run the following (a filled-in example follows the breakdown below):
pct set 100 -mp0 /host/shared_dir_location,mp=/path/in/container
pct is the Proxmox Container Toolkit
set tells pct we’re going to set an option
100 is the container ID we’ll be working on
-mp0 is the name of the mount point
the first path listed is the directory on your host you’re attempting to share with the container
,mp=/path is the path where we want that directory mounted within the container
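Putting it together: if the NFS storage from the previous step is mounted on the host at /mnt/pve/media (Proxmox mounts NFS storage under /mnt/pve/<storage-ID> by default) and the Plex container has ID 100, a filled-in version might look like this (the paths and ID here are illustrative):
# 100 = container ID, /mnt/pve/media = host directory, /mnt/media = path inside the container (example values)
pct set 100 -mp0 /mnt/pve/media,mp=/mnt/media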
Permissions
From here, open the Proxmox GUI (web interface) and within the Server View, click Datacenter > Permissions > Groups.
[Screenshot: Permissions management for a Proxmox Datacenter]
After that, click the “Create” button to make a new group permission. Give it a name like “shared_file_access” and a description so you know what it does. We’ll also go on to select “Roles” below “Groups” and create a new role. Give the new role a name like “DataAccessRole” and assign it the storage related privileges, which are the ones that start with “Datastore.” as well as “Pool.Allocate” and “Pool.Audit.”
Once that’s complete, we can select our container from the server list on the left, navigate to its permissions, and click “Add” to give it our group permission and the role. We’ll want to do the same thing for the storage point we mounted earlier on the host. That can be found under Datacenter > Storage. Select your storage point, navigate to its permissions, and from there, assign the same group and role you gave the container.
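For reference, the equivalent group, role, and permission assignments can also be made from the shell with pveum. Treat this as an approximate sketch; the group name, role name, privilege list, container ID, and storage ID mirror the examples above and will need adjusting for your setup:
# create the group and a role with storage-related privileges (example names)
pveum group add shared_file_access --comment "Shared storage access"
pveum role add DataAccessRole --privs "Datastore.Audit,Datastore.AllocateSpace,Pool.Allocate,Pool.Audit"
# grant the role to the group on the container (ID 100) and on the storage (ID media)
pveum acl modify /vms/100 --groups shared_file_access --roles DataAccessRole
pveum acl modify /storage/media --groups shared_file_access --roles DataAccessRole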
If your storage location is properly mounted inside Proxmox, your unpriv LXC should now be able to read and write to the location we mounted earlier! You can test this by opening a terminal in the unpriv LXC, navigating to the bind mount point, and attempting to create a file there, e.g. touch test.txt.
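A quick check from inside the container looks something like this (assuming the example mount path from earlier):
# inside the unprivileged container
cd /mnt/media                  # the mp= path from the pct set command
touch test.txt && ls -l test.txt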
Comment below with how well this worked for you, and if you liked this post, share it around.
I’ve recently done some work on a personal project that I have put off for far too long. The project itself is a chat bot for moderating Telegram chat groups but this post isn’t about that project. Instead, it will focus on how I simplified setup of the development environment using Docker Compose.
I don’t have much experience with Docker or Docker Compose because in the past, I’ve used Vagrant for creating my environments. And Vagrant has been great, but over the years, I’ve run into a few issues:
virtual machines can take up to a minute to start, sometimes longer
new boxes (images) are slow to download
software versions fall out of sync with production environments
unexpected power outages have led to corrupted VMs
These issues are likely addressable in each configuration, but between custom shell scripts and my limited experience with Ruby (Vagrantfiles are written in Ruby), managing it all can quickly become unwieldy. Plus, Docker has become a de facto industry standard, so it’s about time I learned something about it.
Ideally, this project would have a single file that makes spinning up a consistent development environment quick and painless, which is where Docker Compose comes in. A docker-compose.yml file located in the root directory of the project makes starting the environment as simple as running a single command: docker-compose up -d. The -d flag runs the containers in detached mode (in the background), which frees up your terminal.
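For the day-to-day workflow, the handful of Compose commands I reach for looks roughly like this:
docker-compose up -d      # pull images if needed and start everything in the background
docker-compose logs -f    # follow the output of all services
docker-compose down       # stop and remove the containers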
MongoDB
My first goal was to get MongoDB working with the project. This was relatively easy since the Docker community maintains an official MongoDB image with an example docker-compose.yml:
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
      ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/
Since I wanted to start simple, I ran Node locally and connected to the MongoDB container as if it were another machine. To do that, ports needed to be exposed to the host machine. I also needed a database set up, and for that same database to persist. I decided to keep the Mongo Express container since it provides a useful web interface for checking data.
version: '3.1'
services:
  mongo:
    container_name: toximongo
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: toxipass
      MONGO_INITDB_DATABASE: toxichatdb
    ports:
      - 27017:27017
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
  mongo-express:
    container_name: toximongo-express
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: toxipass
      ME_CONFIG_MONGODB_URL: mongodb://root:toxipass@mongo:27017/toxichatdb
Some of the changes to take note of:
name containers to make them easy to differentiate and access
usernames + passwords set
expose MongoDB to local machine on port 27017
create a persistent volume at ./mongo-volume/ in the root of the project
run /docker-entrypoint-initdb.d/init-mongo.js to create a user and initialize the database
init-mongo.js is a very simple script that runs when the container is started for the first time.
db.createUser({
  user: 'root',
  pwd: 'toxipass',
  roles: [
    {
      role: 'readWrite',
      db: 'toxichatdb',
    },
  ],
});
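To sanity-check that the container came up and the credentials work, you can connect with the Mongo shell. This is a hedged example; older mongo images ship the mongo shell while newer ones ship mongosh, so substitute whichever your image includes:
# connect inside the toximongo container
docker exec -it toximongo mongo -u root -p toxipass --authenticationDatabase admin toxichatdb
# or from the host, through the exposed port (requires a Mongo shell installed locally)
mongo "mongodb://root:toxipass@localhost:27017/toxichatdb?authSource=admin"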
Node
After getting access to a database, the next step was to include Node. Since Node versions change frequently, it’s great to ensure that all developers are running the same version. It also simplifies getting started on the project, since a developer isn’t expected to install anything else. This was also straightforward, since there is an official image supplied by the Node.js Docker team with extensive documentation.
version: '3.1'
services:
  node:
    container_name: toxinode
    image: "node:16"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=development
    volumes:
      - ./:/home/node/app
    expose:
      - "8080"
    command: [sh, -c, "yarn && yarn start"]
  mongo:
    container_name: toximongo
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: toxipass
      MONGO_INITDB_DATABASE: toxichatdb
    ports:
      - 27017:27017
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
  mongo-express:
    container_name: toximongo-express
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: toxipass
      ME_CONFIG_MONGODB_URL: mongodb://root:toxipass@toximongo:27017/toxichatdb
Again, I’ve given the container a custom name. Not much else has changed except for the command, which runs a shell inside the container, calls Yarn to install dependencies, and then starts the application.
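With the node service defined, the same Compose workflow applies. A couple of service-specific commands I’ve found handy (the service name node comes from the file above):
docker-compose logs -f node    # watch yarn install and the app's output
docker-compose exec node sh    # open a shell inside the running Node container
docker-compose restart node    # restart just the Node service after changes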
Summary
Since incorporating Docker Compose into this project, I’ve been impressed by how quickly my development environment starts. This setup doesn’t include a Dockerfile, but since it isn’t building any custom images, I didn’t think one was necessary. I’m certain there are ways to improve it and I’ve got lots to learn, especially if I’d like to deploy this to production environments. If you’ve got tips or suggestions, let me know!