Categories
Linux Server Yii2

Starting Systemd Services with Vagrant machines

I recently ran into a minor inconvenience with a configuration on one of my Vagrant machines. You see, I’m implementing a queueing service in an application and I needed that service to start whenever the machine boots. Normally, this is quite simple and is done by creating a file named myservice@.service in /etc/systemd/system with content like so:
[Unit]
Description=My Queue Worker %I
After=network.target
[Service]
User=www-data
Group=www-data
ExecStart=/usr/bin/php /var/www/my_project/my_script --verbose
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then run systemctl daemon-reload and systemctl enable myservice@1 myservice@2 to start two workers on system boot. Reboot your system, run systemctl status myservice@* and you should see both of those services running.
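For reference, the whole sequence on a normal (non-Vagrant) machine looks something like this:
# pick up the new unit file
systemctl daemon-reload
# enable two instances of the template unit so they start at boot
systemctl enable myservice@1 myservice@2
# after a reboot, check that both workers are running
systemctl status myservice@*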
The problem with doing this on Vagrant is that the file your service is trying to run lives in the shared folder, which doesn’t get mounted until after the system has already started all of its services. But Vagrant has that handy Vagrantfile for provisioning, and it can do all sorts of neat things like run shell commands after provisioning. To get these very same services to start up in a Vagrant VM, you simply need to add this bit to your Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.provision "shell", run: "always",
    inline: "systemctl start myservice@1 myservice@2"
end
Because this is a change to the Vagrantfile, you’ll have to re-provision the VM with vagrant reload --provision. This will shut down the currently running VM and re-run all provisioners. Normally, these provisioners are only run during the provisioning stage of Vagrant, but because we added the run: "always" flag, this snippet will run every time the machine starts. Now, once you’ve booted your VM with vagrant up, ssh into it with vagrant ssh and you should be able to run systemctl status myservice@* to see all of your services running.
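And the Vagrant side of the workflow, for reference:
# re-run the provisioners (including the run: "always" one) against the VM
vagrant reload --provision
# or just boot the machine; the "always" provisioner fires on every start
vagrant up
# hop inside and confirm the workers are up
vagrant ssh
systemctl status myservice@*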

Categories
Linux Server

Automating Proxmox LXC Container Creation, Updates, and more with Ansible

I love tinkering with my homelab but there is always a fear in the back of my mind that one of my servers is running an outdated package that is being actively exploited. I don’t want to spend my free time cleaning up a mess that some nefarious party has made of my servers and network; I want to tinker! I like to keep everything up to date to prevent that, but I hate having to navigate my way into each and every server to run the updates manually. I’m a fan of automating anything that I have to do more than a couple of times, so I started researching Ansible.
If you’re doing any sort of server management or application deployment, you really ought to be looking into using this tool. It’s simple to get set up with, and once you put in the initial time investment, it will undoubtedly save you time. Now when I want to update all of my servers, I can run two commands from my terminal. But it doesn’t just stop at updating servers, oh no. If I want to create a new server, it’s a matter of copying/pasting an existing configuration from one of my “playbooks,” changing a couple of variables, and running the playbook. Just like that, I’ve got a brand new server running on my network.
By now, you’ve agreed that Ansible is great and you should be using it, so how can you get started? I’ve got a repo set up on Github where I’ve shared what I have so far. If you’re looking to start using Ansible to automate Proxmox, I’ve done some of the heavy lifting already. A lot of what I have there is taken from Nathan Curry’s post on his website. Give that a read first, then come back to my repo where you can tweak to your heart’s desire.
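I won’t reproduce the playbooks here since they’re in the repo, but to give you a feel for it, the “update everything” idea boils down to something like these ad-hoc commands (assuming an inventory file named hosts and Debian/Ubuntu targets):
# make sure every host in the inventory is reachable
ansible all -i hosts -m ping
# refresh the apt cache and upgrade every package on every host
ansible all -i hosts -m apt -a "update_cache=yes upgrade=dist" --become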
 

Categories
Linux Security Server

Converting Privileged LXC Containers to Unprivileged

Not long ago, I was looking through my container configurations in the Proxmox GUI and noticed that one very important container had been running as privileged. I must’ve forgotten to check the “Unprivileged” checkbox when I was creating it. For security’s sake, I try to make all of my containers unprivileged. It makes things like sharing files between the host and containers slightly more difficult, but if that particular container is ever compromised by someone with malicious intent, it makes it much more difficult for that malicious actor to compromise the entire host. See the Proxmox documentation on unprivileged containers for more information.
To make this particular container more secure, and to avoid having to set everything up again, I thought it might be easier to simply try converting it to an unprivileged container. While you can’t just shut the container down, go into the GUI, and mark it unprivileged, you can create a backup and restore a new, unprivileged container from that backup. If you clicked the link to the Proxmox documentation earlier, you’d see just what I’m talking about. Under the Creation section, you can see that all you need to do is run
pct restore 1234 /var/lib/vz/dump/vzdump-lxc-1234-2016_03_02-02_31_03.tar.gz -ignore-unpack-errors 1 -unprivileged
where the first 1234 is your new container ID, and the second (in the backup file name) is the old container ID. You can overwrite the previous container with the restore, but it might be a safer bet to create a new container and then shut down your old one.
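If you don’t have a backup handy yet, you can create one from the Proxmox shell first; something like this, where 1234 is the old container, 1235 is the new unprivileged one, and the backup filename is illustrative:
# create a gzip-compressed backup of the privileged container
vzdump 1234 --compress gzip --dumpdir /var/lib/vz/dump
# restore it as a brand new, unprivileged container
pct restore 1235 /var/lib/vz/dump/vzdump-lxc-1234-<timestamp>.tar.gz -ignore-unpack-errors 1 -unprivileged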
You can also do this through the GUI by navigating to the backups of your container, selecting your backup, and clicking restore. However, when I ran it through the GUI, it gave errors and destroyed the container. Thank goodness for backups, right? Even when running the above command in the CLI, I received errors. Fortunately, they were easy enough to troubleshoot. If you see something like
400 Parameter verification failed.
storage: storage 'local' does not support container directories
then you’ll need to specify your storage. This is easy enough to get around by providing the --storage option and selecting the proper storage location. In my case, the entire command looked like
pct restore 1234 /var/lib/vz/dump/vzdump-lxc-1234-2018_05_25-10_29_59.tar.lzo -ignore-unpack-errors 1 -unprivileged --storage local-zfs
With that done, you can start up your new container and use it the same way you were before, but this time, it’s a little more secure.

Categories
Linux Security Server

Storj-CLI Update + Shortcuts

I hate typing the same commands over and over into the terminal. I hate it so much that every time I have CLI déjà vu, the first thing I ask myself is “How can I automate this?” Usually, it’s as simple as adding a quick alias to my `~/.bash_aliases` file, but this time I did that along with throwing together a quick bash script. If you’ve ever installed the storjshare client on an Ubuntu or Debian based machine, putting these aliases and script on your system might help you with a few of the commands I found myself typing frequently.
Storjshare Aliases

https://gist.github.com/Dilden/ec379a64532ec5e1d36586fd35ed0101

If these helped you out at all, let me know by starring the repo or leaving a comment here.
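The gist has the exact aliases, but the general shape is something like this in ~/.bash_aliases (the shorthand names here are just examples):
# quick checks on the storjshare daemon
alias sjs='storjshare status'
alias sjr='storjshare restart'
# fire up the daemon and start a node from its config file
alias sjd='storjshare daemon && storjshare start -c ~/.config/storjshare/configs/YOUR_STORJ_NODE_ID.json'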
UPDATE! Something that bothered me about the storjshare daemon was that it didn’t start up when the system did. Depending on how you look at this, it can be a good thing. Fewer system resources consumed during startup is nice, but if it’s the only thing you’re hosting on your machine and you’re striving for up-time, then it’s a necessity. To get around this quickly and easily, you can update the cron jobs to fire on startup.
@reboot su - YOUR_STORJ_USER_HERE -c "storjshare daemon && storjshare start -c /home/YOUR_STORJ_USER_HERE/.config/storjshare/configs/YOUR_STORJ_NODE_ID.json"
This will likely need to be put in your root user’s cron which you can gain access to with the command `crontab -e` (while logged in as root).

Categories
Linux Security Server WordPress

WordPress Backup Bash Script

I threw together a simple bash script to be run via cron jobs that will backup an entire WordPress site. Technically, it will work for a lot more than that assuming the site you’re backing up has a single directory to be backed up and a MySQL/MariaDB database. It’s a fairly simple script and it is easy to expand to work with multiple directories. Just take a look at the source for yourself!
To use, fill in the information at the top of the script (site name, database name, database user + password, path to site, and backup path), upload the script to `/etc/cron.daily/` on your server and you’ll be good to go!
Take note of the `SITE_PATH` variable at the top of the script. For whatever reason, there needs to be a space between the first “/” and the rest of the path.
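The actual script does a bit more, but the core of it is just a tar of the site directory plus a database dump; a stripped-down sketch (with illustrative values) looks like:
#!/bin/bash
# minimal sketch of a daily WordPress backup -- fill in your own values
SITE_NAME="mysite"
DB_NAME="wordpress"
DB_USER="wp_user"
DB_PASS="secret"
SITE_PATH="/var/www/mysite"
BACKUP_PATH="/var/backups/mysite"

DATE=$(date +%F)
mkdir -p "$BACKUP_PATH"

# archive the site files
tar -czf "$BACKUP_PATH/$SITE_NAME-files-$DATE.tar.gz" "$SITE_PATH"

# dump and compress the database
mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | gzip > "$BACKUP_PATH/$SITE_NAME-db-$DATE.sql.gz"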

Categories
Linux PHP Server

Deployments w/Deployer

In keeping with my previous posts discussing deployments with Git and Capistrano, I thought it appropriate to mention the latest tool I’ve been using to automate shipping code: Deployer. Since discovering Deployer, I’ve dropped Capistrano and git deployments. Those tools were fine, and if you’re developing with Ruby, I’d encourage you to stick with Capistrano, but since I’m doing most of my development with PHP, it only makes sense to use something that was made for PHP and can be easily installed alongside all of my other dependencies with Composer.
So what do you have to do to get started deploying your PHP code quickly, easily, and securely? Let’s dig in.
Installation
There are a few ways to handle this: 1) install Deployer globally with a curl/wget request, 2) install using composer on a per-project basis, or 3) install globally with composer. If you install globally, Deployer functions in much the same way a global composer install does. That is, you’ll download a .phar archive, move it into a directory on your environment’s PATH, and make it executable.
curl -LO https://deployer.org/deployer.phar
mv deployer.phar /usr/local/bin/dep
chmod +x /usr/local/bin/dep
That’s all you have to do for a global install of Deployer. Otherwise, you can install with one simple line using composer.
Run composer require deployer/deployer for a per-project install, or composer global require deployer/deployer for the global composer install.
If you did the curl install, Deployer should work using the command “dep.” If using composer, it’ll probably be “php vendor/bin/dep”, but that can be shortened by creating a quick alias in your system’s .bashrc file that looks like so:
alias dep='php vendor/bin/dep'
Usage
Once we have Deployer installed, we can use it by navigating to our project’s root directory. In there, type dep init to create a deploy.php file. We’re going to modify ours so that it looks similar to the one below. Feel free to use as needed.
<?php
namespace Deployer;

require 'recipe/common.php';

// Project name
set('application', 'PROJECT_NAME_HERE');

// Project repository
set('repository', 'YOUR_GIT_REPO_HERE');

// [Optional] Allocate tty for git clone. Default value is false.
set('git_tty', true);

// Shared files/dirs between deployments
set('shared_files', []);
set('shared_dirs', ['vendor']);
set('keep_releases', 5);

// Writable dirs by web server
set('writable_dirs', []);

// Hosts
// live is the alias of the server; this comes in handy
// when specifying which server to deploy to if I had another to include

host('live')
    ->hostname('DEPLOY_TO_THIS_SERVER')
    ->user('USER_TO_DEPLOY_AS')
    ->identityFile('~/.ssh/id_rsa')
    ->set('deploy_path', 'PATH_TO_DEPLOY_TO')
    ->set('composer_options', 'install --no-dev')
    ->set('branch', 'master');

// Tasks

// This is a sample task that I've included to show how simple
// it is to customize your deployer configuration. For now,
// it just needs to be declared and we'll call it later.

// Runs database migrations using a package called Phinx (post to come later)
desc('Phinx DB migrations');
task('deploy:migrate', function () {
    run('cd {{release_path}} && php vendor/bin/phinx migrate -e live');
});

desc('Deploy your project');
task('deploy', [
    'deploy:info',
    'deploy:prepare',
    'deploy:lock',
    'deploy:release',
    'deploy:update_code',
    'deploy:shared',
    'deploy:writable',
    'deploy:vendors',
    'deploy:migrate',
    'deploy:clear_paths',
    'deploy:symlink',
    'deploy:unlock',
    'cleanup',
    'success'
]);

// [Optional] If deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');
This file should be pretty self-explanatory so I won’t go through it line by line, but there are a couple of things worth pointing out. Firstly, the shared directories are useful so that on my production server, I don’t have 5 different vendor folders that need to be installed every time I deploy. Next, I’ve specified an alias for my server and called it live. That makes running the deployment command very simple and gives me the option to specify which host to deploy to, should I need to add another host. Thirdly, I’ve specified that for the live host, composer should run with the --no-dev flag so that dependencies like Deployer aren’t installed. And finally, my custom task deploy:migrate is called after deploy:vendors. It doesn’t necessarily need to be called there, but it does need to be called after deploy:update_code, as that is the task that pulls my code from the git repo, and I don’t want to be running an older version of the migrations.
Launch!
Now what? Deploy! Just kidding, there is actually one other thing you should check before you deploy. Some services, like Bitbucket, require that your production server be able to pull down your git repo. You may need to create an SSH key on your server and add the public key to your git repo’s access keys. Check your repo’s settings to make sure your server can pull code from there.
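If you haven’t done that before, it’s usually just a matter of generating a key on the server and pasting the public half into the repo’s access/deploy keys; roughly:
# on the production server, as the user you deploy as
ssh-keygen -t ed25519 -C "deploy@your-server"
# paste this public key into your repo's access keys
cat ~/.ssh/id_ed25519.pub
# sanity check that the server can authenticate (Bitbucket shown here)
ssh -T git@bitbucket.org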
Now, you can launch and it’s as easy as dep deploy live. Assuming you’ve pushed all of your code to your repo, you should see the latest version running on your server!

Categories
Linux Server

Custom LXC Container Templates in Proxmox 5.0

If you’re at all into homelabbing, are a server administrator, or have an interest in virtualization, you’ve probably heard of Proxmox. In my case, I just love to tinker with my homelab. One of the things I love about Proxmox is how quickly I can get a containerized server up and running. All I need to do is open the web interface, click ‘Create CT’, fill out a couple of fields, and it’s done. It takes less than a minute to get an entirely new server on my network. The downside? I have to reinstall various services like Apache/nginx, PHP, and MySQL/MariaDB every time I spin up a new container. Now, I know what you’re thinking:
Couldn’t you just use a turnkey template provided by Proxmox with everything pre-installed?
And you’re right, I could do that, but I want to have complete control over these containers. I want to know everything that’s installed on them. So what’s the next best option?
Create one container with everything that I regularly need, and use that as a template. It sounds intimidating, but it’s so ridiculously simple that I’m concerned with how short this blog post will be. Here’s what you do:

Create a new container using whichever distro you prefer. I went with Ubuntu 16.04
Start that container
SSH into that container (or you can SSH into Proxmox and use the command pct enter <container ID> to access your new container)
Install all of the services that you need. Things like nginx, PHP 7, MariaDB, Git, and the Let’s Encrypt Certbot could be useful for web dev projects.
Verify everything you need is working with this container.
Exit your container and shut it down.
In the Proxmox web GUI under Server View, select your container and navigate to Backup
Create a new backup but be sure to select GZIP compression
After your backup finishes, open a terminal to your Proxmox environment (not the container)
Find the backup you just made under /var/lib/vz/dump/<backup-name>.tar.gz and copy it to /var/lib/vz/template/cache/<new-backup-name>.tar.gz

You’ve just created a new LXC template for use with your Proxmox 5.0 environment. Now anytime you want to spin up another container, you can just select that as your template! Also, the container that you previously created is still valid so feel free to use that as well.
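For reference, steps 9 and 10 from the Proxmox shell boil down to something like this (the backup and template filenames are illustrative):
# find the backup you just created
ls /var/lib/vz/dump/
# copy it into the template cache with a friendlier name
cp /var/lib/vz/dump/vzdump-lxc-100-2017_09_01-12_00_00.tar.gz /var/lib/vz/template/cache/ubuntu-16.04-webdev_amd64.tar.gz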

Categories
Linux Python

Text to Speech in Python

I finally got around to looking at Linux Voice issue 18 and I have to say, the Summer hacks section is really cool. I’ve always been curious about recompiling a kernel and playing around with the different settings, but the project that really caught my attention was the Python script that converted IRC chat into voice. That just sounds like fun. So I tried going through with it, but ran into some issues with my Python versioning. And since I don’t really use IRC, I thought it might be fun to just whip up a quick script that converts whatever text is typed into the terminal into voice. It’s simple enough to do, so here it is!
from espeak import espeak
import time

# read a line from the terminal, speak it, then wait briefly before prompting again
while True:
    response = input(">> ")
    espeak.synth(response)
    time.sleep(1)
To use it, make sure that the package espeak is installed. To install espeak, run:
sudo apt-get install espeak espeak-data
Then, to run the script, run:
python3 scriptname.py
Here’s a link to the Gist
At some point, I think it’d be pretty cool to turn this into a Telegram Bot via their API. Maybe use this to convert all text into a voice (like Gigolo Joe’s “say” function), but it would also be cool to allow Telegram to convert something a user says into text.