
ubuntu linux server setup: file, web, tv, and gaming

advanced · apache, backup, encryption, linux, minecraft, mysql, mythtv, nfs, php, raid, samba, setup · misterhaan

the steps i went through when i installed ubuntu linux 14.04 on my “server” hecubus, which formerly ran fedora. hecubus features software raid1 through mdadm, two levels of backups using tar, cron, rsync, and cryptsetup, file sharing through samba and nfs, lamp web serving with apache, mysql, and php, tv recording with mythtv and schedules direct, and a minecraft game server. setup for ubuntu server as well as these other packages is detailed. mythtv setup includes setting up a hauppauge hvr-2250 card. sections of this guide can be skipped for setting up servers without those features.

gnu/linux (ubuntu)

the machine i use as a household server is workstation-class hardware, but since it’s only for personal use it’s able to keep up. it consists of leftover hardware from when i upgrade my workstation (typically a few generations old) plus a sata hot-swap bay and a pair of large hard drives. hardware needs will vary with usage, and more specific hardware requirements are listed at the beginning of each section.

a usb stick or cd / dvd drive and several gigabytes of hard drive space are required for installing ubuntu server 14.04.

download the latest ubuntu server disc image from ubuntu.com, or get it faster from the bittorrent section of the alternative downloads page using a bittorrent client (qbittorrent is my favorite for windows). use rufus to put the image onto a usb key and make it bootable. there are also other utilities that can do that, or burn the image to a cd or dvd.

insert the usb stick or disc and boot the machine, making sure to set the boot order in bios if necessary. it’s also a good idea to make sure quick boot is off (grub doesn’t like it) and ahci is on. i have everything connected to the computer at this point — ethernet, tv antenna, keyboard, mouse, and monitor.

the ubuntu server installer starts off asking which language it should use for the menu, which defaulted to english for me. then select the highlighted option install ubuntu server from the top of the menu. next it asks for a language and locale for the install, which it defaulted to english and united states for me. for keyboard layout it gives the option of detecting by pressing a key or selecting from a list. i tried the detection method and pressed the V key and it knew i wanted dvorak. i assume it would also work well for qwerty.

it’ll take a few seconds to detect hardware, network, and maybe some other stuff. then it prompts for a hostname. next it will prompt to set up a user account with administrative privileges by entering real name, username, password, and confirming the password. i used a username that already had a directory on my /home partition and it automatically used the same uid. choose not to encrypt the home directory so other users can read it and to make sure it’s easy enough to install a new / different linux without losing home directory contents. next it auto-detected my time zone correctly, but there’s the option to change it if it gets confused. i assume it uses the internet to figure it out based on the ip address.

it prompted me to unmount my bootable hard drive from my old linux install, so i chose yes. choose manual partitioning to make sure everything’s where i want it and none of the data i wanted to keep gets formatted over. there are software raid options here, but since i only had one of my drives for my raid1 at install time (the other one was set up standalone with data i needed to copy onto the raid) it wouldn’t let me set it up and i just left that drive unpartitioned.

when installing over an existing installation, there’s no need to create any partitions because the ones that are already there will work. edit each one though to tell it what filesystem it is, whether to format it, and where to mount it. two of my partitions need to be formatted: /boot and / (root). if /home doesn’t contain users’ files then format that one too, but definitely don’t format /files because that’s where i keep all my stuff! it doesn’t give much to go on other than physical location, previous filesystem (make sure to mark it as the same filesystem if it’s not getting formatted), and size.

for starting fresh create all the partitions. i make all of them primary and don’t use lvm. at the beginning of the system disk, create a partition at least 200 meg (i used 1 gig), type ext4, mount point /boot. at the end of the same disk, create a partition at least 1 gig (i used 4 gig), type swap. just after the /boot partition add at least 10 gig type ext4, mount point / (root). split the rest of the space between /home and /files, where /home is ext4 and /files is xfs. home is for users’ files, so that could be very small (or absorbed into /) if users won’t be storing files there (i use it as temp space and a staging area). put the rest into /files (it can be named anything that’s not normally used by linux). since i run 4 permanent drives, my system drive only has /boot, root, and swap. another drive is split between /home and /backup (for automated weekly backups), and the last two drives are a raid1 array for /files. additionally i have a fifth drive in a sata hotswap port i use for manual monthly backups.

here’s what i set up coming from a previous fedora install:

remember which /dev has /boot — usually it’s /dev/sda but for me it was /dev/sdc because only the first two sata ports on my board are 6 gbps and i wanted them for the raid.
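a quick way to double-check which drive is which (assuming the usual util-linux tools are available) is to list block devices and their mount points:

```shell
#!/bin/sh
# show each disk and partition with its size and where it's mounted;
# fall back to df (mounted filesystems only) if lsblk isn't installed
if command -v lsblk >/dev/null; then
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
else
  df -h
fi
```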

it will ask about http proxy, which i leave blank. wait while it configures apt and starts installing. it will stop occasionally to ask some questions. choose to install security updates automatically because it’s security after all. for software selection i choose openssh server (to get a server command line from other computers), lamp server (for web development and mythtv), and samba file server (for sharing files with windows). there’s more i’ll need, but no options for it here so i’ll add it later. set a password for mysql’s root user. remember this for mysql setup later.

when asked where to install grub, choose the drive that holds /boot. it chose the correct drive for me but verify it (remember which drive it was from back in the partitioning step). the next prompt is to restart and boot the newly-installed system. remember to remove the usb stick or disc.

log in as the admin user it created (it’s the only option so far). since ssh should be running, it should accept a login over ssh from a workstation by connecting to the hostname given to the server during installation. that will actually help since then it’s a whole lot easier to copy and paste from this guide. the first thing i do is allow ssh connections from my local network and enable the firewall (ufw) with these commands, replacing 192.168.1.0 with the lan ip (the /24 means the part after the last dot can be anything):

sudo ufw allow from 192.168.1.0/24 to any app OpenSSH
sudo ufw enable

next i make sure everything is fully updated. consider doing this monthly or weekly to stay up-to-date. it will show how many updates are available at each login over ssh. it may not find anything to do now if the internet was available during installation:

sudo apt-get update
sudo apt-get upgrade

upgrade may suggest running autoremove to get rid of stuff that’s no longer needed, so i always do that. it also sometimes says packages have been held back, which is more of a mystery. usually it means their dependencies have changed. take note of the list (copying to the clipboard is a good way) and then run sudo apt-get install <packages> where <packages> is the list in the same format (space-delimited) to install them with their new dependencies.

i create a group and a few more users. man groupadd or man useradd will explain exactly what these commands do. replace ### with the id of the user or group being added. be sure to give users and groups who own files the same ids as before so they get linked up appropriately. ls -l will show numeric ids instead of names for files and directories whose owner or group hasn’t been defined yet. note that these commands are all prefixed with sudo — that will require entering the user’s password if sudo hasn’t asked for it recently. the usermod command changes the primary group of the user created during installation, which will take effect next time that user logs on.

sudo groupadd -g ### groupname
sudo usermod -g groupname firstusername
sudo useradd -g groupname -G group2,group3,group4 -m -s /bin/bash -u ### username
sudo passwd username

use the groups command to list the groups the current user belongs to. to create other users with the same permissions, give them the same groups in the -g and -G arguments (lower-case -g is the primary group, and upper-case -G is all the others). all users for people in my house belong to the same primary group for easier file sharing. if i want to add people that don’t live in my house, they can be in a different group with less access.

raid (mdadm)

since i keep all my stuff in /files, i want to have more than one copy. raid 1 mirrors contents onto multiple disks, which means if one of the disks wears out i still have my files. while hardware raid controllers exist, linux has software raid support through mdadm which meets my needs. raid 1 requires at least two identical disks. i have a pair of 3-tb hgst deskstars. one of them was empty but the other started with all my files already on it.

leaving the drive with the data out for now, the empty drive can be set up as a degraded raid1 array, meaning only one of the required two disks will be present for now. use parted to set it up because most other tools have a 2.2-tb or so limit. it also needs a gpt partition table type instead of mbr / msdos for the same reason. use parted to create the partition table, create the partition, and mark it as raid. make sure to use the correct /dev and note that after the first command the others are entered at a parted prompt:

sudo parted -a optimal /dev/sda
mklabel gpt
mkpart primary 1MiB 3TiB
set 1 raid on
q

the mkpart command asks for the start and end of the partition in byte-based units like MiB or TiB. start at 1MiB because the partition can’t start before sector 40 and linux likes to align to megabytes, then end at the 3-tb mark for the full size. next create the xfs filesystem, again making sure to use the correct /dev:

sudo mkfs.xfs /dev/sda1

since i couldn’t set up raid while installing, ubuntu didn’t install mdadm (multiple device admin), so get that:

sudo apt-get install mdadm

this will also install postfix so it can send an e-mail if something goes wrong. when it asks for configuration, choose internet site and enter mail.example.com for system mail name. i’ll set this up to send through my gmail address soon.

use mdadm to create the raid1 array for two devices, using the partition we just set up and a missing drive:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

now that the raid array exists, check on it with cat /proc/mdstat, or for even more information, sudo mdadm --query --detail /dev/md0. now create its filesystem:

sudo mkfs.xfs /dev/md0

scan for the details of the new array:

sudo mdadm --detail --scan

it should show one line starting with ARRAY and ending with a uuid. copy it to the clipboard and edit the raid configuration file:

sudo vim /etc/mdadm/mdadm.conf

find “MAILADDR root” and replace “root” with the e-mail address to send alerts. i also add a line with “MAILFROM hecubus raid” which makes alert e-mails say they’re from “hecubus raid.” also paste the ARRAY line from the mdadm detail scan at the end of the file.
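after those edits, the relevant parts of mdadm.conf look something like this — the e-mail address is a placeholder, and the ARRAY line is whatever the detail scan actually printed (fields like metadata and name will vary):

```
MAILADDR you@example.com
MAILFROM hecubus raid
ARRAY /dev/md0 metadata=1.2 name=hecubus:0 UUID=00000000:00000000:00000000:00000000
```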

the e-mails won’t actually send yet since postfix isn’t set up. it’s time to set it up to send using my gmail address, which requires another package:

sudo apt-get install mailutils

edit the postfix configuration to set it to relay through gmail:

sudo vim /etc/postfix/main.cf

find the relayhost = line and replace it with the following lines (the smtp_ lines are all new):

relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes

then create the file to hold the gmail credentials:

sudo vim /etc/postfix/sasl/passwd

with these contents, substituting the actual e-mail address for username@gmail.com and the actual password for password. this works with google apps e-mails too:

[smtp.gmail.com]:587    username@gmail.com:password

protect that file so only root can see it, then finish up configuring postfix:

sudo chmod 400 /etc/postfix/sasl/passwd
sudo postmap /etc/postfix/sasl/passwd
cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
sudo /etc/init.d/postfix reload

test the postfix setup by sending a test e-mail with this (replace you@example.com with the address to send it to):

echo "test message from postfix" | mail -s "postfix test" you@example.com

a reboot now should send a degraded event e-mail since the raid array only has one of its two disks. do this anyway because it probably wants to move the raid array to md127 or something else that isn’t md0. after some more reboots mine ended up back at md0 anyway, so it’s best to use its uuid since it seems the md number can change. cat /proc/mdstat to figure out where it ended up. for me the second line starts with md127 : active. find the uuid in the format fstab needs with sudo blkid | grep md127 and copy it for pasting later. now make a mount point:

sudo mkdir /files

i also change its owner and group to the user and group i created to own my files. add the raid uuid from blkid to /etc/fstab before the line with swap like this:

UUID=raid-uuid-here /files xfs defaults 0 2

mount it with sudo mount /files, then mount the drive with the files somewhere else (create a temporary mount point and give the full mount command sudo mount -t xfs /dev/sdb1 /copyfiles). copy everything to the raid while preserving timestamps, ownership, and permissions:

sudo cp -a /copyfiles/* /files/

once everything has been copied to /files/ from the other drive of the pair, add the other drive to the raid array. it’s not necessary to reformat the drive being added unless it has a different partition size or filesystem type. for me, they’re both identical drives with one full-disk partition set up with the xfs filesystem. to make sure i have the correct device letter i check df to see which filesystems are in use and cat /proc/mdstat to see which drive is already in the array. that left me with /dev/sdb1 (which i expected due to which sata port i have it connected to), so i just add it with mdadm (make sure to use the correct /dev/md number):

sudo mdadm /dev/md0 -a /dev/sdb1

running cat /proc/mdstat will show what percent the “recovery” process is at and how long until it finishes. i thought it might go faster with less actual data on the drive, but it appeared to want to sync the entire 3 terabytes anyway, and it started out predicting about 5 hours. i went to bed and it was done the next morning.

backups (tar, rsync, cryptsetup)

even with raid mirroring backups are still important. when i was researching how to set up raid i found the phrase “raid is not for backup” just about everywhere. a backup is going to protect against accidental deletions and even the destruction of the computer (when the backup is stored somewhere else). i set up weekly automatic online backups to a disk that stays in the computer and monthly manual offline backups to a disk i store in a different building. the offline disk goes in and out of the computer using a sata hotswap bay.

the online backups require enough space for another copy of everything that gets backed up while the offline backups require a separate drive with enough space plus either a hotswap bay or a drive docking station, which should cost $25 or less. my hotswap bay fits in a 5¼″ drive bay just like a dvd drive and holds 3½″ drives. i also have a protective case for transporting and storing the drive when it’s not in the hotswap bay (which is all the time except for when it’s being updated).

during installation i mentioned i have a 200-gig xfs partition mounted at /backup, which is my online backups. this isn’t enough space for everything i have on my 3-tb raid array, so i’m being selective with my most important files. i like this being on a separate physical disk from the in-use copy so if somehow both of those disks die i probably still have this backup.

i use tar for these backups. it makes one file for each group of files and can also compress them to take up less space. the steps for each type of backup go in /backup/scripts/ as shell scripts. here’s my documents.sh as an example:

#!/bin/bash
if [ -d /files/documents ]; then
  cd /files
  tar cvjf /backup/documents.tar.bz2 documents/
else
  echo "/files/documents is not a directory" > /backup/documents.err
fi

it goes into /files before running tar so that the documents subdirectory is part of the archive. it first ensures that /files/documents exists so it doesn’t overwrite last week’s backup with an empty archive. for photos or videos which are already compressed, use cvf instead of cvjf and .tar instead of .tar.bz2 — that will skip the bzip2 compression. to restore one of these backups, get to the /files directory and use tar xjvf /backup/documents.tar.bz2, again leaving off the j and the .bz2 for photo / video backups.
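here’s that archive / restore cycle sketched against throwaway directories under /tmp (stand-ins for the real /files and /backup) so it’s safe to try:

```shell
#!/bin/sh
# demo of the backup and restore steps using scratch directories
set -e
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/files/documents /tmp/tardemo/backup /tmp/tardemo/restore
echo "important stuff" > /tmp/tardemo/files/documents/note.txt

# back up: cd to the parent first so documents/ is the top of the archive
cd /tmp/tardemo/files
tar cjf /tmp/tardemo/backup/documents.tar.bz2 documents/

# restore: cd to wherever the files should land, then extract
cd /tmp/tardemo/restore
tar xjf /tmp/tardemo/backup/documents.tar.bz2
```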

in addition to files i also back up some mysql data, using mysqldump to get the data and mysqlshow to make sure the databases exist and are accessible (fill in the correct username, password, and databasenames):

#!/bin/bash
# backup script for web databases
if service mysql status | grep -q running; then
  if mysqlshow --user=username --password=password | grep -q databasename1 || mysqlshow --user=username --password=password | grep -q databasename2; then
    if [ -w /backup/aegis.sql.bz2 ]; then
      if [ -w /backup/aegis.old.sql.bz2 ]; then
        rm /backup/aegis.old.sql.bz2
      fi
      mv /backup/aegis.sql.bz2 /backup/aegis.old.sql.bz2
    fi
    mysqldump -u username --password="password" --databases databasename1 databasename2 | bzip2 -c > /backup/aegis.sql.bz2
    chmod 640 /backup/aegis.sql.bz2
  else
    echo "could not confirm existence of databasename1 and databasename2" > /backup/aegis.err
  fi
else
  echo "mysql is not running" > /backup/aegis.err
fi

there’s no sense using tar since mysqldump only creates one file, so i just run that through bzip2 into the .sql.bz2 file. keep in mind that the password needs to be stored in the script file in order for it to run automatically. to restore from this backup (which will create the databases, delete and re-create the tables, and then load the data), use bunzip2 -ck /backup/databases.sql.bz2 | mysql -u username -p (it will prompt for the mysql password).
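the dump-and-compress pipe can be tried without a running mysql by feeding it any text in place of mysqldump output (the /tmp file names here are just for the demo):

```shell
#!/bin/sh
# stand-in for mysqldump output
printf 'CREATE TABLE demo (id INT);\n' > /tmp/demo.sql
# compress on the way to the backup file, like the script does
bzip2 -c /tmp/demo.sql > /tmp/demo.sql.bz2
# restoring decompresses and pipes the stream onward (into mysql in the real case)
bunzip2 -ck /tmp/demo.sql.bz2
```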

make sure to give the backup scripts execute permissions. chmod 755 /backup/scripts/scriptname.sh will work. to be more protective of the sql one with a password use 750 or even 700.

to automate these backups, the scripts need to be scheduled using cron. i only do one type of backup per night, and run them at 4:30 am. don’t choose a time between 1 and 3 am on a sunday because daylight saving time could run it twice or skip it. run crontab -e as the user who should own the backups (i use the same user that owns /files) to edit the cron table and enter the scripts in this format:

MAILTO=""
30 4 * * 1 /backup/scripts/documents.sh
30 4 * * 3 /backup/scripts/photos.sh
30 4 * * 5 /backup/scripts/databases.sh

the first line tells cron not to send results in an e-mail. the 30 and the 4 at the beginning mean when the minute is 30 and the hour is 4 am. the two asterisks mean any day of any month. the 1 / 3 / 5 mean monday / wednesday / friday. 0 and 7 both mean sunday and then 1 - 6 are monday through saturday. then it’s just the full path to the script to run. by default it edits in vim. as soon as the crontab is saved it’s all scheduled and automatic weekly backups are up and running.

that 1-tb drive in my hotswap bay mounted at /backup-hs is for manual monthly backups. most of the time that drive won’t actually be in the computer, which will cause problems if it needs to reboot and can’t find the disk. tell /etc/fstab that’s okay by editing that file (with sudo vim /etc/fstab) and changing the “defaults” on the /backup-hs line to “defaults,noauto,user” — noauto makes it not automatically mount the drive on boot and user allows any user with write access to the mount point to mount it.
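the edited /backup-hs line in fstab ends up looking something like this (the uuid is a placeholder for the drive’s actual uuid):

```
UUID=0abc1234-5678-90de-f012-34567890abcd /backup-hs xfs defaults,noauto,user 0 0
```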

since i store this drive in a place where other people have access to it, i encrypt the whole thing as a luks partition. that means i need to install cryptsetup:

sudo apt-get install cryptsetup

the drive needs to be reformatted for luks, so be sure to copy anything that needs to be kept somewhere else first. use sudo umount /backup-hs to unmount it, then sudo chown filesuser:filesgroup /backup-hs so filesuser will be able to mount there. verify the partition letter and number (mine’s /dev/sde1, and with a full-disk partition the number is almost always 1), then format it for luks, open it, and set up xfs inside luks:

sudo cryptsetup --verbose --verify-passphrase luksFormat /dev/sde1
sudo cryptsetup luksOpen /dev/sde1 backup-hs
sudo mkfs.xfs /dev/mapper/backup-hs

the luksFormat command will prompt for and verify the encryption passphrase, which it will then use to encrypt the disk. don’t forget the passphrase or the backups will be lost forever inside the encryption. luksOpen will then ask for the passphrase to unlock the encryption. now fstab needs to be set to look at the xfs inside luks for /backup-hs instead of trying to use the luks partition as xfs. with luks still open, run sudo blkid /dev/mapper/backup-hs to get the uuid of the xfs filesystem created previously, then edit /etc/fstab and replace the uuid on the /backup-hs line with the new uuid. it should now work to mount /backup-hs and access the freshly-formatted, encrypted xfs partition.

it’s now possible to copy files to the drive using simple copy commands or rsync. use -a with either to preserve timestamps, owners, and permissions. before removing the drive from the hotswap bay, the partition needs to be unmounted, luks needs to be suspended and then closed, and the device needs to be deleted from linux (don’t worry, it’ll come back next time it’s plugged in):

umount /backup-hs
sudo cryptsetup luksSuspend backup-hs
sudo cryptsetup luksClose backup-hs
sudo bash -c "echo 1 > /sys/block/sde/device/delete"

there are a lot of lines to remember there every month when it’s time to update my offline backup, so put it into a script that will prompt for the user’s password for sudo, prompt for the encryption passphrase to unlock luks, mount the xfs partition, update the backup, unmount, suspend and close luks, and delete the device so it’s ready to remove again. i have something like this in my files user’s home directory as updatebackup. remember to chmod it 755 so it can run.

#!/bin/bash
sudo cryptsetup luksOpen /dev/sde1 backup-hs
mount /backup-hs
cp /backup/databases.sql.bz2 /backup-hs
sudo rsync -av --delete /files/documents/ /backup-hs/documents/
sudo rsync -av --delete /files/music/ /backup-hs/music/
sudo rsync -av --delete /files/photos/ /backup-hs/photos/
sudo rsync -av --delete /files/videos/ /backup-hs/videos/
umount /backup-hs
sudo cryptsetup luksSuspend backup-hs
sudo cryptsetup luksClose backup-hs
sudo bash -c "echo 1 > /sys/block/sde/device/delete"

i’m copying my weekly database backup and then using rsync with deletion to apply any changes from certain subdirectories of /files to their backups on the offline backup drive. this drive doesn’t have as much space as the raid disks either, so i can’t back up everything here. using rsync is much faster than deleting everything and then copying it all over again because rsync only copies new or changed files (and for those, only the part that changed), and also deletes anything from the backup that no longer exists in /files. now the monthly backup update steps are: bring the drive home and plug it into the hotswap bay, log in as my files user and run ~/updatebackup, then remove the drive from the hotswap bay and return it to where i keep it when i’m not updating it.

file servers (samba, nfs)

samba server allows a linux machine to share some of its directories with windows machines. nfs allows a linux machine to share some of its directories with other linux machines. i set up both of these to be accessible only to my subnet.

the requirements for running a file server are hard drive space (the more the better — i have over 3.5 terabytes total in my server) and a network connection.

i set up samba first because there are more machines running windows than linux in my house. install samba now if samba server wasn’t selected during ubuntu setup:

sudo apt-get install samba

edit (as root) the file /etc/samba/smb.conf to set samba server options. if windows isn't using the default workgroup name WORKGROUP, set the name on the workgroup = line. disable the [printers] and [print$] shares unless the server has a printer connected to it that windows computers will need to use. uncomment the [homes] section and change read only to no so all users can store files in their home directories (i find it a convenient way to get files onto the server from windows). add other share sections to the end of the file. here’s what i end up with for sharing user home directories (minus other comments in the file), /files, and /backup:

[homes]
   comment = Home Directories
   browseable = no
   read only = no
   create mask = 0640
   directory mask = 0750

[files]
   comment = File Server
   path = /files
   read only = no
   create mask = 0640
   directory mask = 0750

[backup]
   comment = Weekly Backups
   path = /backup
   read only = no
   create mask = 0640
   directory mask = 0750

now restart both parts of samba and enable the local subnet (remember to replace 192.168.1.0 with the correct subnet if different) to connect:

sudo restart smbd
sudo restart nmbd
sudo ufw allow from 192.168.1.0/24 to any app Samba

at this point, windows machines on the lan can browse to \\servername and see everything shared over samba, as well as map shares as network drives. if the same account (username and password) is set up on windows and the samba server, it will use that automatically without asking for a login. selecting samba server during installation should sync linux usernames and passwords with samba, but if that’s not working look at smbpasswd to add or update samba users. it seems to update samba when the password is entered for login, so be sure to log in over ssh as each user that will use samba.

to more seamlessly share files with linux systems, i also share the same directories over nfs. since it wasn’t an install option, install it now:

sudo apt-get install nfs-kernel-server

its configuration is simpler than samba, with shares listed in /etc/exports one per line in this format:

/path/to/share 192.168.1.0/24(rw,insecure,sync,no_subtree_check)

getting nfs to work through the firewall requires manually setting up a number of ports. edit /etc/default/nfs-kernel-server and change the RPCMOUNTDOPTS line to this:

RPCMOUNTDOPTS="--manage-gids --port 892"

restart nfs for the changes to take effect:

sudo service nfs-kernel-server restart

nfs doesn't add itself as an app for ufw, so open the rpc.mountd port set earlier by number, then use the named service port for nfs:

sudo ufw allow from 192.168.1.0/24 to any port 892
sudo ufw allow from 192.168.1.0/24 to any port nfs

linux clients with nfs support (on ubuntu, install the nfs-common package) can now mount shares from servername:/share as nfs.
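on a client, an /etc/fstab entry like this mounts the share at boot (servername and the mount point are placeholders for whatever fits the client):

```
servername:/files /mnt/files nfs defaults 0 0
```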

web server (apache, mysql, php)

i do web development as a hobby and also for some simple in-home web applications using apache httpd, php, and mysql. hecubus is my web server, so it needs to be set up to be able to run php and mysql through apache.

requirements for running a web server are a moderate amount of disk space (a few hundred megabytes to a gigabyte) and a network connection.

i chose lamp server during install, so i have most of what i need. lamp stands for linux, apache, mysql, php. apt-get can install them if lamp server wasn’t selected at installation. here’s the extra package i needed to add even with having chosen lamp server:

sudo apt-get install php5-curl

for me, apache was running by default so open up ports 80 and 443 to the lan and then put the server name in the url bar of a browser (add https:// in front for browsers that search for the server name without):

sudo ufw allow from 192.168.1.0/24 to any app 'Apache Full'

if you only want http or only want https, you can use Apache for http or 'Apache Secure' for https.

most home internet providers don’t support hosting anything on port 80, but i still limit it at the firewall to only allow my lan. port 80 hosts the web applications i use when i’m at home. the default web server path is /var/www/html/ but i have mine on my /files partition. this can be changed in the default virtualhost file at /etc/apache2/sites-available/000-default.conf — change the DocumentRoot directive. the ServerAdmin can also be changed to an actual e-mail address. make the same changes to default-ssl.conf.
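the edited lines in 000-default.conf end up something like this (the DocumentRoot path and e-mail address here are placeholders, not my actual values):

```
<VirtualHost *:80>
    ServerAdmin webmaster@example.com
    DocumentRoot /files/web
    # ...rest of the file unchanged...
</VirtualHost>
```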

by default, apache isn’t allowed to serve files outside /var/www, so even though i set a DocumentRoot i still need to provide access. edit /etc/apache2/apache2.conf and find the Directory /var/www/ section. copy it but change the /var/www/ to the web directory on /files, and change AllowOverride from None to All so .htaccess works. the /var/www/ section will still be used for mythtv’s web interface.
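the copied section in apache2.conf looks like this, again assuming /files/web as a placeholder for the actual web directory:

```
<Directory /files/web/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
```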

some of my web sites need to be able to write to disk, so i grant permissions by running apache in the context of the linux group created in step one. edit /etc/apache2/envvars and change the value of APACHE_RUN_GROUP to the group name.

i run multiple test sites on various ports above 8080 that i do open to the internet. apache by default only listens on port 80 (and 443 if ssl is enabled), so each additional port needs to be added. there's a single config file dedicated entirely to ports, so just add new Listen lines after Listen 80 in /etc/apache2/ports.conf.

next, go into /etc/apache2/sites-available/ and copy 000-default.conf for every site. for example, sudo cp 000-default.conf track7.org.conf. inside the copied files, update the port number on the first line and then update the DocumentRoot. usually the rest can remain the same, but if any other customizations are needed they can be done here. i add a few for mythtv in the next section. enable each site’s config file with the a2ensite command, such as sudo a2ensite track7.org.conf. a2ensite will say to reload apache to activate the newly-enabled site. it also needs to be allowed through the firewall. the range of ports should match the ports with Listen lines in ports.conf:

sudo ufw allow proto tcp to any port 8080:8090

not specifying a from rule actually opens these ports to the internet (which is what i want), and also requires port forwarding setup at the router.

i also need to enable the rewrite module, which is similar to enabling sites:

sudo a2enmod rewrite
sudo service apache2 restart

php defaults to requiring <?php instead of simply <? but my older code uses <? and i still want it to work. edit /etc/php5/apache2/php.ini and change short_open_tag = Off to On. make sure to change the line that doesn't start with a semicolon, since lines starting with ; are just comments. restart the web server with sudo service apache2 restart, which is necessary after any php.ini changes.

run mysql to create databases and users for the websites. it will prompt for the mysql root password set up when it was installed. be sure to replace the all-caps words with the desired names and password.

mysql -u root -p
create database DBNAME character set utf8mb4 collate utf8mb4_unicode_ci;
grant all on DBNAME.* to 'DBUSER'@'localhost' identified by 'DBPASSWORD';
exit

since my live sites use a hostname other than localhost to connect to the database and i run the same code for the test server as the live server, i need to redirect any connections back to my test server. edit /etc/hosts and add a line like this for each mysql hostname:

127.0.0.1 mysql.example.com

while i create databases and users from the command line, i prefer phpmyadmin for most other interactions with the databases. download the latest .tar.bz2 from phpmyadmin.net (i use the english version since i don’t need any other languages). i like to copy the url and paste it to wget (inside single quotes so bash doesn’t try to interpret it) so it downloads directly. unpack it and make a generic symlink for easier upgrades:

cd /opt/
sudo wget 'https://files.phpmyadmin.net/phpMyAdmin/4.6.0/phpMyAdmin-4.6.0-english.tar.bz2'
sudo tar xjf phpMyAdmin-4.6.0-english.tar.bz2
sudo rm phpMyAdmin-4.6.0-english.tar.bz2
sudo rm -f phpmyadmin
sudo ln -s phpMyAdmin-4.6.0-english/ phpmyadmin

next add these lines to /etc/apache2/apache2.conf, probably after the last <Directory> section:

<Directory /opt/phpmyadmin>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
Alias /phpmyadmin /opt/phpmyadmin

to upgrade phpmyadmin, download, unpack, and symlink the latest phpmyadmin as before, then copy config.inc.php from the previous directory. once the new version is verified working, the old directory can be deleted.
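the swap itself can be sketched in a throwaway directory like this (the version numbers are made up — substitute the real ones, and /opt/ in place of the temp directory):

```shell
# sketch of the phpmyadmin upgrade swap, using a temp directory in place
# of /opt/ and made-up version numbers
set -e
cd "$(mktemp -d)"
mkdir phpMyAdmin-4.6.0-english phpMyAdmin-4.9.7-english
touch phpMyAdmin-4.6.0-english/config.inc.php
ln -s phpMyAdmin-4.6.0-english phpmyadmin        # the current install
# upgrade: carry the config over, then repoint the symlink
cp phpmyadmin/config.inc.php phpMyAdmin-4.9.7-english/
ln -sfn phpMyAdmin-4.9.7-english phpmyadmin      # -n replaces the link itself
readlink phpmyadmin
```

once readlink shows the new directory and the site checks out, the old directory can go.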

personal video recorder (mythtv)

since i’m setting up an always-on computer that also happens to have a large amount of storage space, it makes a lot of sense to install a tv tuner card and have it also serve as a dvr. these days paying for tv service tends to come with a decoder box that’s also a dvr, but i get my tv for free through a coat hanger antenna. i have a hauppauge hvr 2250 dual tuner card that i use with mythtv to record shows, then play them back on my htpc through a little program i wrote.

requirements for mythtv are a tv tuner card (this guide works with saa7164 digital tuners) and a large amount of hard drive space to hold the shows and movies it records. for reference, a half-hour 1080p surround show is about a gigabyte in mpeg2 format. a tv listings source subscription is also required to get the most out of mythtv. i use schedulesdirect.org which has been inexpensive and reliable, and also has a free trial period which is helpful for confirming that it works. note that i install and configure apache, php, and mysql before installing mythtv. i suspect installing mythtv will pull those in with it if they aren’t already there, but if not then back up to the previous page in this guide.

there’s a version of ubuntu (mythbuntu) that comes with mythtv installed by default, and that same team provides a ppa for each mythtv version. the development version of mythtv is available with daily updates, but the current stable version is probably a better idea. currently 0.29 is stable — check at mythtv.org in case it’s changed since i wrote this. add the ppa, update, and install mythtv:

sudo add-apt-repository ppa:mythbuntu/0.29
sudo apt-get update
sudo apt-get install mythtv-backend-master mythtv-frontend

the last step will take a while installing a large number of packages, and will ask three yes / no questions. i answer yes that other computers will run mythtv, even though i don't plan to, just so things are ready if i decide i want to. i answer no about password-protecting mythweb (i'll expose it to the internet with password protection later) and no about using the webserver only for mythweb.

maybe i accidentally answered yes about using the webserver only for mythweb, but either way the install made a copy of 000-default.conf called 000-default-mythbuntu.conf and switched to that, so i had to disable it and re-enable the original:

sudo a2dissite 000-default-mythbuntu.conf
sudo a2ensite 000-default.conf
sudo service apache2 reload

ubuntu 14.04 didn’t set up my tv tuner card for me, so i had to get the firmware and add it myself. the following gets the firmware into a new directory in /opt/ and sets it up:

cd /opt/
sudo mkdir saa7164
cd saa7164
sudo wget http://www.steventoth.net/linux/hvr22xx/22xxdrv_27086.zip
sudo wget http://www.steventoth.net/linux/hvr22xx/HVR-12x0-14x0-17x0_1_25_25271_WHQL.zip
sudo wget http://www.steventoth.net/linux/hvr22xx/firmwares/4019072/NXP7164-2010-03-10.1.fw
sudo wget http://www.steventoth.net/linux/hvr22xx/extract.sh
sudo apt-get install unzip
sudo sh extract.sh
sudo cp *.fw /lib/firmware/
sudo modprobe saa7164

there should now be a /dev/dvb/adapter0 and /dev/dvb/adapter1. i missed the NXP file my first time and the adapters didn’t show up until after a reboot, so a reboot may be necessary here anyway.

mythtv unfortunately still doesn’t seem to support setup without a gui, so i run it from a different computer running ubuntu client. it should work fine to boot a windows computer with the ubuntu live cd or usb if there’s no ubuntu client computer available. i have ubuntu in a virtual machine on my main computer. from the server, run sudo service mythtv-backend stop to stop the backend, then sudo passwd mythtv to set a password for mythtv.

now, from the client machine, run ssh -X mythtv@SERVERNAME mythtv-setup. if this is the first time connecting to the server over ssh from this machine, it will show some numbers that identify the server and ask if it should still connect. it requires the full word “yes” to continue. next it will ask for the password for the mythtv account, which is why i had to set it earlier. finally it will launch the mythtv setup gui, which is designed to be used from a tv with a remote rather than with a mouse and keyboard.

choose 1. general and change the ip address from 127.0.0.1 to the server’s lan ip so that the services api can be accessed by other machines on the network (for example, mythtv recorded programs which i use to play back mythtv recordings from windows machines). verify that the second page locale settings is correct. since i’m in the united states and using an antenna, i need tv format set to ntsc and channel frequency table set to us-bcast. back when i got my tv from cable i used the us-cable frequency table. on the next page, it’s apparently safe to uncheck delete files slowly when recordings are stored in xfs (that option will come up later). on the last screen (the next button turns into finish), check the box for automatically update program listings. the defaults are fine for the settings that show up once it’s checked, so click finish to get back to the main setup menu.

choose 2. capture cards to see an empty list. choose new capture card and it will hopefully pop in the dvb card type and list /dev/dvb/adapter0/frontend0 for dvb device. if not, entering those values should work if /dev/dvb/adapter0/ exists. use the recording options button to change max recordings to 1 since it starts out at 2. finish and then choose new capture card again to set up the second adapter. this should be the same as the first except adapter0 changes to adapter1 (frontend0 remains). press escape to get back to the main setup menu from the capture cards menu.

skip recording profiles and choose 4. video sources. choose new video source and give it a name such as “antenna tv.” listings grabber should be set to north america (schedulesdirect.org) (internal). fill in the schedulesdirect user id and password, then retrieve lineups. if the account has more than one lineup defined select one in data direct lineup, and then probably add another source for the other lineup(s). set the channel frequency table if it’s different from what was set for locale settings (this way it’s possible to have both cable and antenna) and then click finish. add other sources if there are more than one, then escape back to the main setup menu.

choose 5. input connections and see both dvb devices. select one and give it a short display name to help when setting which tuner to use (such as DVB0), then choose the video source that was created in the last step. next add channels with scan for channels or fetch channels from listings source. the default scan options should be fine, so click next and then wait a few minutes while it finds channels. when it’s done, choose to input all channels. the starting channel dropdown will show everything it found. click next and finish to get back to input connections and select the other device. choose the same video source and skip scanning for channels this time since it will find the same ones. next, finish, and escape back to the main setup menu.

choose 6. channel editor and also browse separately to schedulesdirect.org, log in, and edit the listing. each channel has a tooltip with the xmltv id. in mythtv setup, i delete the channels i don’t plan to record from by arrowing to them and pressing D. press enter on channels to keep and change the channel name to the network rather than the call letters. i also change the channel number to use - instead of _ because i find it easier to look at. fill in a path to a 4:3 aspect ratio jpeg or png image file for icon. to make sure schedulesdirect information matches up, the xmltv id value needs to match what is found on the schedulesdirect site for that channel. i leave the filters blank. i had limited success with the icon download button from the channel editor, so mostly just download channel icons from wikipedia or the website for the station and store them in a directory on my server. escape back to the main setup menu. go back into 5. input connections and pick any valid channel for each device so it won’t complain when i quit setup that it’s set to a channel that doesn’t exist (because i changed it from 3_1 to 3-1 for example).

choose 7. storage directories, then select default. this is where i move from the default recording directory to something in the /files partition that’s shared so the htpc can directly access the recorded files to play them back. choose the directory it shows by default and then change it to the desired directory. that’s all the setup i do here, so escape back to the main setup menu and escape again to quit. it will ask to start the mythtv backend, but the mythtv user doesn’t have permission so say no. same for mythfilldatabase.

back on the server, we’ll actually start the backend and fill the database:

sudo service mythtv-backend start
sudo -H -u mythtv mythfilldatabase

it’s important to run mythfilldatabase as the mythtv user or it won’t be allowed into the database. back on the ubuntu client, there’s more setup to be done in the mythtv frontend. run it with ssh -X mythtv@SERVERNAME mythfrontend. choose setup, then video. there used to be a setting to turn off recordings being eligible for expiring by default, but it’s gone now so i have to change it on each individual recording schedule i set up. on the advanced screen, i set time to record before start and past end to 120 (2 minutes) and 300 (5 minutes) to help catch the entire show when it’s a little off-schedule. for category of sports event i set past end to 45 minutes because they never seem to end on schedule. exit the frontend, then on the server remove the mythtv password:

sudo passwd -d mythtv

while it’s possible to schedule recordings (and even watch them) through the mythtv frontend, i will be scheduling through mythweb. if mythweb is the only site on the server, it may already work to visit http://servername/mythweb/ or possibly even without the mythweb at the end. because of my web development setup i need to point a mythweb alias to /var/www/html/mythweb/ and configure it correctly to run alongside. edit /etc/apache2/sites-available/000-default.conf, and add the following after the DocumentRoot line:

Alias /mythweb /var/www/html/mythweb

i also need to fix the mythweb site config /etc/apache2/mythweb.conf. uncomment the RewriteBase line. restart apache for the changes to take effect:

sudo service apache2 restart

if apache runs with a group other than www-data (as suggested in the apache section of this guide), use chown to update mythweb’s data directory. this needs to be done after each mythweb update:

sudo chown root:GROUPNAME /usr/share/mythtv/mythweb/data/
sudo chown root:GROUPNAME /var/cache/mythweb/image_cache/

http://servername/mythweb/ should now work as expected. search for shows to record and set schedule options. be sure to show advanced options and uncheck auto-expire recordings under schedule options.

it’s handy to have mythweb available over the internet in case i want to set to record something when i’m not home, but it’s important then to password-protect it so some random internet person doesn’t cancel everything i actually want to see and set me up to record dr. oz or something. set up another site by copying the default:

cd /etc/apache2/sites-available/
sudo cp 000-default.conf mythweb-internet.conf

edit the new file and change the port away from 80 to the port to use for the internet mythweb. change DocumentRoot to something like /var/www/html/mythweb-internet (i’ll create this in a bit), and add the following lines after the DocumentRoot line:

    Alias /mythweb /var/www/html/mythweb
<Directory /var/www/html/mythweb>
    AuthType Basic
    AuthName "MythTV on SERVERNAME"
    AuthUserFile /etc/apache2/.htpasswd/mythweb
    Require valid-user
</Directory>

now create that document root with sudo mkdir /var/www/html/mythweb-internet and create an index.php file with the following contents:

<?php
  header('Location: http://' . $_SERVER['HTTP_HOST'] . '/mythweb/');
?>

then create the AuthUserFile:

sudo mkdir /etc/apache2/.htpasswd
sudo apt-get install apache2-utils
sudo htpasswd -c /etc/apache2/.htpasswd/mythweb someuser
sudo htpasswd /etc/apache2/.htpasswd/mythweb anotheruser

both of the htpasswd commands will prompt for a password twice. note that only the first user added to the file needs -c to create the file. to get it going, enable the site and restart apache:

sudo a2ensite mythweb-internet.conf
sudo service apache2 reload

the mythtv services api is still being blocked by the firewall, so open it up to the lan to allow other machines to actually use it (update the lan ip block and mythtv services port if necessary):

sudo ufw allow from 192.168.1.0/24 to any port 6544

now that mythtv is recording to a directory shared through samba and the services api is accessible, install mythtv recorded programs on any windows machines that should be able to watch recordings from mythtv. mythweb provides a way to update the recording schedule both over the lan and over the internet with a password.

game server (minecraft)

i like to play minecraft with a custom selection of mods. many of them benefit from having a minecraft server that’s always running, plus it makes it easier to play with friends. there’s definitely a market out there for minecraft servers, but since i already have a computer running other types of servers, i can add a minecraft server to it for free. i’m using minecraft 1.10.2 since most of the mods i like are available.

the requirements for running a minecraft server are going to vary depending on the mix of mods, number of players logged in, etc. mine takes up a couple gigs of space, and it’ll help to have a reasonably fast cpu and 8 gigabytes of ram.

minecraft runs on java, which isn’t included with a default ubuntu server install. openjdk 7 is the latest openjdk available for ubuntu 14.04 but one of the mods i use requires openjdk 8. ubuntu 14.10 has openjdk 8, but to get it on 14.04 i need to add a ppa. i also install screen, which allows keeping a program running in a virtual screen so the minecraft server can run without an active terminal session:

sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jre-headless screen

in case a different version of java was already installed, use sudo update-alternatives --config java to select openjdk 8 as the default java. run java -version to make sure it’s using the correct version.

the forge installer gets minecraft itself and prepares for mods. visit minecraftforge.net’s 1.10.2 download page and copy the link from the information icon to the right of “installer” for the latest version. make a minecraft directory under /opt/, then some versioned directories under that (minecraft version and forge version — i’m using forge 12.18.2.2171). download the forge installer by pasting that link after wget, install minecraft and forge from the forge installer, and then remove the installer. here are all the commands:

cd /opt/
sudo mkdir minecraft
sudo chown thisuser:somegroup minecraft
cd minecraft/
mkdir 1.10.2
cd 1.10.2/
mkdir 12.18.2.2171
cd 12.18.2.2171/
wget http://files.minecraftforge.net/maven/net/minecraftforge/forge/1.10.2-12.18.2.2171/forge-1.10.2-12.18.2.2171-installer.jar
java -jar forge-1.10.2-12.18.2.2171-installer.jar --installServer
rm forge-1.10.2-12.18.2.2171-installer.jar

make sure to use an appropriate username and group for chown, and run the rest of the commands as that user. installing the minecraft server doesn’t create its configuration files — those get created the first time it launches. before it can be launched in command-line mode though, it needs to be told the eula has been accepted. accept the eula and run the minecraft server for the first time:

echo "eula=true" > eula.txt
java -Xmx4G -Xms512M -jar forge-1.10.2-12.18.2.2171-universal.jar nogui

i add myself as an op from the minecraft console using op misterhaan, then close the server with stop. now that the config files exist, it’s time to configure minecraft. starting the server for the first time created a default world, so i delete that because i’m going to create a world with the mods installed:

rm -rf world

i use the curse client for minecraft because it makes adding and updating a lot of mods easier. i typically copy the contents of mods/ from the curse minecraft installation onto the server. the same goes for scripts/ (if using minetweaker) and config/.

next edit server.properties. i like to change level-name to a different directory than world, and motd to a short description of the server. it’s sometimes helpful to set allow-flight to true so people don’t get disconnected for using some mod features that minecraft might think are cheating. to have the server generate its own world using the biomes o’ plenty mod, change level-type to BIOMESOP. set white-list to true to keep out anyone not in the whitelist, though if the server address doesn’t get posted on the internet it’s unlikely to be found anyway.
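put together, the edited settings in server.properties look something like this (the level-name and motd values are just examples):

```properties
# server.properties
level-name=modworld
motd=a modded minecraft server with biomes o' plenty
allow-flight=true
level-type=BIOMESOP
white-list=true
```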

allow minecraft through the firewall, and if playing with friends or from locations outside the server’s lan be sure to forward port 25565 from the router. it’s possible to change the port in server.properties if necessary (for example, running multiple minecraft servers):

sudo ufw allow 25565

now the minecraft server can start up with all its mods and start accepting connections:

screen -dmS minecraftforge java -Xmx4G -Xms512M -jar forge-1.10.2-12.18.2.2171-universal.jar nogui

join the running screen to access the console with screen -r minecraftforge (use screen -ls to check if it’s running or to remember its name). to leave the console without shutting it down, press ctrl-a and then d. shut down the minecraft server by joining its screen and entering “stop.”

alternatively to starting and stopping the minecraft server manually, create an init script to have linux handle it like it does for apache, mysql, and mythtv. create /etc/init.d/minecraft (using sudo) with the following contents:

#!/bin/bash
# /etc/init.d/minecraft

### BEGIN INIT INFO
# Provides:   minecraft
# Required-Start: $local_fs $remote_fs
# Required-Stop:  $local_fs $remote_fs
# Should-Start:   $network
# Should-Stop:    $network
# Default-Start:  2 3 4 5
# Default-Stop:   0 1 6
# Short-Description:    Minecraft 1.10.2 Forge
# Description:    Minecraft 1.10.2 server with the latest Forge
### END INIT INFO

#Settings
MCPATH=`ls -dv /opt/minecraft/1.10.2/* | tail -1`
SERVICE=`cd $MCPATH && ls forge-*.jar`
USERNAME="minecraft"

ME=`whoami`
as_user() {
  if [ "$ME" == "$USERNAME" ] ; then
    bash -c "$1"
  else
    su - $USERNAME -c "$1"
  fi
}

start() {
  if ps ax | grep -v grep | grep -v -i SCREEN | grep $SERVICE > /dev/null
  then
    echo "Tried to start but $SERVICE was already running!"
  else
    echo "$SERVICE was not running... starting."
    cd $MCPATH
    as_user "cd $MCPATH && screen -dmS minecraftforge java -Xmx4G -Xms512M -jar $SERVICE nogui"
    sleep 7
    if ps ax | grep -v grep | grep -v -i SCREEN | grep $SERVICE > /dev/null
    then
      echo "$SERVICE is now running."
    else
      echo "Could not start $SERVICE."
    fi
  fi
}

stop() {
  if ps ax | grep -v grep | grep -v -i SCREEN | grep $SERVICE > /dev/null
  then
    echo "$SERVICE is running... stopping."
    as_user "screen -p 0 -S minecraftforge -X eval 'stuff \"stop\"\015'"
    sleep 7
    if ps ax | grep -v grep | grep -v -i SCREEN | grep $SERVICE > /dev/null
    then
      echo "$SERVICE could not be shut down... still running."
    else
      echo "$SERVICE has shut down."
    fi
  else
    echo "$SERVICE was not running."
  fi
}

case "$1" in
  'start')
    start
    ;;
  'stop')
    stop
    ;;
  'force-reload')
    ;&
  'restart')
    stop
    start
    ;;
  'status')
    if ps ax | grep -v grep | grep -v -i SCREEN | grep $SERVICE > /dev/null
    then
      echo "$SERVICE is running."
    else
      echo "$SERVICE is not running."
    fi
    ;;
  *)
    echo "Usage: /etc/init.d/minecraft {start|stop|status|restart|force-reload}"
    exit 1
    ;;
esac

exit 0

i found a script similar to this on the internet and modified it to find the latest forge version inside /opt/minecraft/1.10.2/ and run the file named forge-*.jar — make sure to change MCPATH if minecraft is somewhere else. also change USERNAME if the owner of the minecraft directory isn’t minecraft. it may be a good idea to create a user specifically for running minecraft. once the script is set up, give it execute permission, tell linux to start and stop it as defined in the lsb header, and start the minecraft server:

sudo chmod 755 /etc/init.d/minecraft
sudo update-rc.d minecraft defaults
sudo service minecraft start

to update forge, mods, scripts, or config, stop the server with sudo service minecraft stop, make changes in /opt/minecraft/, and start the server back up with sudo service minecraft start. when updating forge i make a new subdirectory with the forge version, install the new forge, then copy world/ (or whatever level-name is set to in server.properties), mods/, config/, scripts/, eula.txt, server.properties, and ops.json from the old forge directory. use cp -r for the directories to also get the files inside them. the old forge directory can stay there — the minecraft script will find and use the latest one.
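the latest-version pick the init script relies on can be tried out safely in a temp directory. note that plain ls sorts lexically, which can pick the wrong directory once version components have different digit counts, so ls -v (version sort) is the safer choice:

```shell
# demo of picking the newest forge version directory; plain lexical sort
# would put 12.18.2.900 after 12.18.2.2171, but version sort gets it right
set -e
demo="$(mktemp -d)"
mkdir "$demo/12.18.2.900" "$demo/12.18.2.2171"
ls -dv "$demo"/* | tail -1
```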
