
Microsoft SQL Server on Arch Linux

Part of my day job involves managing a system that runs on an MSSQL database. Because of that, I like to spend some time improving my knowledge of MSSQL, and having a local installation is a great tool to play with. I recently installed Arch Linux on a Dell OptiPlex 3040 that I had lying around and decided to use it as a dev machine. This article covers how I installed Microsoft SQL Server on Arch Linux using packages from the AUR.

Creating and Installing the Packages

First up, I create a location for my AUR repositories. Then I clone the Git repositories that I need to get things up and running:

mkdir ~/aur
cd ~/aur
git clone https://aur.archlinux.org/mssql-server.git
git clone https://aur.archlinux.org/msodbcsql.git
git clone https://aur.archlinux.org/mssql-tools.git

mssql-server – the main MS SQL Server installation
msodbcsql – the ODBC driver, used for connecting to MS SQL Server
mssql-tools – the command-line tools (such as sqlcmd) for interacting with MS SQL Server

Once the Git repos have been cloned, it’s time to build the packages for installation using makepkg.

cd ~/aur/mssql-server
makepkg -sirc
cd ~/aur/msodbcsql
makepkg -sirc
cd ~/aur/mssql-tools
makepkg -sirc

When running makepkg I use the following options:

-s – installs any missing dependencies using the package manager
-i – installs the package after it has been built
-r – removes any dependencies required by the build process that aren’t required afterwards
-c – cleans up any leftover work files or directories created during the build process
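
Assuming all three builds succeed, you can confirm the packages are registered with pacman (I’m assuming here that the package names match the AUR repo names):

pacman -Qi mssql-server msodbcsql mssql-tools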

With the packages in place, the mssql-server service should now be running and enabled to start automatically. You can check by running:

systemctl status mssql-server.service

The output should hopefully look something like this:

mssql server service status screenshot

If it’s showing as “inactive (dead)”, you can easily start the service and enable it to run automatically:

sudo systemctl enable --now mssql-server.service

You can then confirm that it’s running by using the command shown previously.

Initial Setup of SQL Server

At this point it’s time to run the setup utility for SQL Server. You can view the options for this tool by running:

/opt/mssql/bin/mssql-conf --help

There are a few useful options in here, like resetting the sa (database admin) password. However, in this instance I just want to run the initial setup utility:

sudo /opt/mssql/bin/mssql-conf setup

This tool will walk you through setting the admin password, choosing which edition of SQL Server to run, setting the language and accepting the licensing terms.
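
As an aside, if you ever need to reset the sa password later on, mssql-conf has a dedicated subcommand for it. Microsoft’s documentation says to stop the service first:

# Stop SQL Server, reset the sa password interactively, then start it again
sudo systemctl stop mssql-server.service
sudo /opt/mssql/bin/mssql-conf set-sa-password
sudo systemctl start mssql-server.service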

After running that setup utility you’re done! You should now have a working installation of Microsoft SQL Server running on Arch Linux. Easy-peasy!
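
If you want to prove the installation works end to end, sqlcmd from the mssql-tools package can connect to the local instance. A quick check, assuming the AUR package puts the tools under /opt/mssql-tools/bin (they may not be added to your PATH):

# Prompts for the sa password, then prints the server version
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -Q 'SELECT @@VERSION'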

Installing Azure Data Studio

Unfortunately, Microsoft haven’t released a Linux version of SQL Server Management Studio. However, there is an alternative, albeit one that isn’t as feature-packed as SSMS. That alternative is Azure Data Studio.

Azure Data Studio is an MSSQL management tool built on top of Microsoft’s Visual Studio Code editor. It’s perfectly fine for interacting with SQL Server, but it does lack all the “wizards” that you can access from the menus in SSMS.

If you want to install Azure Data Studio there is an AUR repo for that too!

cd ~/aur
git clone https://aur.archlinux.org/azuredatastudio-bin.git
cd ~/aur/azuredatastudio-bin
makepkg -sirc

Once installed you should be able to launch Azure Data Studio, connect to your local SQL instance and start playing!

azure data studio screenshot

Deploying Django Applications with Nginx and Gunicorn

This week I spent some time working through the Mozilla Developer Network’s Django Tutorial. One of the final chapters covers the process of deploying a Django application. To my disappointment, it only covered deployment to Heroku. I frequently see Nginx and Gunicorn mentioned in discussions about Django deployment, so I wanted to try using those instead.

Although the setup covered in this post still isn’t ideal for production use, it’s hopefully enough to point you in the right direction. I took some shortcuts to prevent this post from being too long, but have left a summary of some suggested changes at the end of the post.

As a side note, the MDN Django tutorial was really good. If you’re interested in learning how to write Django apps, I’d recommend that tutorial rather than the one on the official Django website. However, this post only covers the deployment of Django apps, not the creation of them.

I should also mention that I tend to use CentOS as my server OS of choice. Command examples shown in this post were all executed on a CentOS 8 server.

Step 1: Creating a Python Virtual Environment

If you’re not familiar with Python virtual environments, they essentially allow you to have multiple Python environments running on the same machine, each with its own set of modules. This is useful if you’re running multiple Python apps on the same machine that require different versions of the same module. For example, one app may require Django 2 while another requires Django 3.

In this case I probably don’t need a Python virtual environment, as I only run the one app, but it’s a good habit to get into regardless.

# Create the project directory
mkdir ~/djangoproject

# Create the virtual environment
python3 -m venv ~/djangoproject/djangoenv

# Enable the virtual environment
source ~/djangoproject/djangoenv/bin/activate

# Upgrade PIP
pip install --upgrade pip

The third command in the set above is where I activate the virtual environment I just created. So whenever I run python or pip from this point on, it’s running the binaries stored in ~/djangoproject/djangoenv/bin. Running “deactivate” would take me out of the virtual environment and back to using the system’s /bin/python3.
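
You can see the switch happen by checking which interpreter is first on the PATH (the exact system path may differ on your distro):

# Inside the virtual environment
which python     # ~/djangoproject/djangoenv/bin/python

# Back on the system interpreter
deactivate
which python3    # e.g. /usr/bin/python3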

Step 2: Installing Django

For the purposes of this overview, I’ve just installed Django and created a new Django Project without adding to it. This is good enough to test the basic deployment.

# Install Django
pip install django

# Initialize a new Django project
django-admin startproject djangoproject ~/djangoproject/

# Run the Django development server to test it worked
python ~/djangoproject/manage.py runserver

Now when I go to http://localhost:8000 I can see that the Django app is running:

Django success screen

Step 3: Installing and Testing Gunicorn

As mentioned earlier, the Django development server is not intended for production use. One common approach is to use the Gunicorn WSGI (Web Server Gateway Interface) server in its place.

# Install Gunicorn
pip install gunicorn

# Move into the Django project directory
cd ~/djangoproject

# Have Gunicorn listen on port 8000 and serve the Django project
gunicorn --bind localhost:8000 djangoproject.wsgi:application

The last command in the block above is telling Gunicorn to listen on port 8000. It will also translate and pass on the web requests to the Django app. http://localhost:8000 returns the same page as before, but this time served via Gunicorn rather than the Django development server.

Gunicorn sits between the web server and the Django application. Client requests for static files (images, CSS, JavaScript, etc.) can be dealt with by the web server or a CDN. Any requests for the dynamic Django content will be passed from the web server to Gunicorn.
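
As an aside, for anything beyond a quick test you’d normally run Gunicorn with several worker processes, so one slow request doesn’t block all the others. A commonly quoted starting point is (2 x CPU cores) + 1 workers:

# A sketch with three workers; adjust to your core count
gunicorn --workers 3 --bind localhost:8000 djangoproject.wsgi:application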

Step 4: Running Gunicorn as a systemd Service

If this was a production server, I wouldn’t want to have to manually restart the Gunicorn server if it crashed or after a system reboot. Thankfully, systemd makes it really easy to create a service that does this for us.

The code block below shows the contents of my Gunicorn service file: /etc/systemd/system/gunicorn.service

[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=dean
Group=dean
Restart=on-failure
WorkingDirectory=/home/dean/djangoproject
ExecStart=/home/dean/djangoproject/djangoenv/bin/gunicorn --bind unix:/home/dean/djangoproject/gunicorn.sock djangoproject.wsgi:application

[Install]
WantedBy=multi-user.target

Notice that I’m running the service using my user account – “dean”. If this was a production server, I’d create a gunicorn account and use that instead. I would also move the Django project files out of my home directory.
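
As a rough sketch of both of those changes (the account name and target directory here are just examples):

# Create a dedicated, non-login system account
sudo useradd --system --shell /sbin/nologin gunicorn

# Move the project out of my home directory and hand ownership over
sudo mv /home/dean/djangoproject /srv/djangoproject
sudo chown -R gunicorn:gunicorn /srv/djangoproject

The User, Group, WorkingDirectory and ExecStart lines in the service file would then need updating to match.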

Now that the service has been defined, it can be enabled (to start automatically after a reboot) and started:

sudo systemctl enable --now gunicorn.service
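
You can then check that the service is up and that Gunicorn is answering on its socket (curl has supported Unix sockets since version 7.40, so the CentOS 8 build is fine):

systemctl status gunicorn.service

# Send a request straight to the socket, bypassing any web server
curl --unix-socket /home/dean/djangoproject/gunicorn.sock http://localhost/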

Step 5: Setting Up Nginx

To save some hassle, I’ve set SELinux to permissive mode at this point (setenforce 0). By default, SELinux doesn’t like the gunicorn.service trying to access files in my home directory. Given more time, I’d use the system audit logs to create an appropriate SELinux policy. That is a whole topic in itself.
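
For reference, the usual starting point for building such a policy is audit2allow, from the policycoreutils-python-utils package on CentOS 8. A rough sketch, with a module name I’ve made up:

# Turn recent denials for the service into a local policy module, then load it
sudo grep gunicorn /var/log/audit/audit.log | audit2allow -M gunicorn_local
sudo semodule -i gunicorn_local.pp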

The code block below shows the partial contents of my Nginx config file. I’ve added this “server” block as the first element of the “http” block in /etc/nginx/nginx.conf

...
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/home/dean/djangoproject/gunicorn.sock;
    }
}
...

The screenshot below gives some more context of where this fits in the Nginx config file. There are also a few extra lines that I’ve commented out. The first just prevents requests for the favicon being logged. The next two are for getting Nginx to serve Django’s static files, but that’s not something I’ve covered in this post.

nginx config file contents
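
If you did want Nginx to serve the static files itself, it usually amounts to a location block like the sketch below. The alias path is hypothetical; it needs to match your STATIC_ROOT setting after running manage.py collectstatic:

location /static/ {
    alias /home/dean/djangoproject/static/;
}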

Nginx is really just a reverse proxy in this example. It passes all the requests over to Gunicorn.

All that’s left to do is enable the nginx service (sudo systemctl enable --now nginx), go to http://localhost:80 and see the same webpage that we saw earlier. Exciting stuff!

Web requests hit the server. Nginx forwards them on to Gunicorn. Gunicorn forwards them on to the Django project and back comes the HTTP response.

You’ll notice that the port has changed from 8000 to 80, because Nginx is listening on port 80. The previous examples used port 8000, as this is the default port that the Django development server listens on. However, as defined in the systemd service file, Gunicorn is now using a Unix socket to communicate with Nginx rather than a TCP/IP port, so it’s no longer listening on port 8000. Because Nginx and Gunicorn are both running on the same machine, there is no real need for them to use a network socket to communicate with each other; the Unix socket is more efficient for this.

Finishing Up

If, like me, you’re new to deploying Django web apps, hopefully this helps get you started. Initially it felt like a lot of work to get running, but looking back over this post there really wasn’t much to it.

At the start of the post, I also mentioned that this setup isn’t ideal for production environments. Below are a few of the potential issues with using this setup in production:

  • Django is not configured to serve static files. Nginx can be configured to do this or the files can be hosted and served from elsewhere
  • My home directory probably isn’t the best place to store the Django project files
  • The systemd Gunicorn service could run as a dedicated gunicorn user account, with folder permissions changed accordingly (rather than running as my user)
  • Rather than leaving SELinux in permissive mode, extra time could be taken to create a custom policy for the Gunicorn service
  • Although databases weren’t covered in this post, Django should be reconfigured (djangoproject/settings.py) to use something other than SQLite
  • The site should be served over HTTPS (a rough certbot sketch is shown below)
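
On that last point, Let’s Encrypt’s certbot can obtain a certificate and patch the Nginx config for you. A sketch, assuming a real public domain (example.com is a placeholder) and the EPEL repository enabled:

# Install certbot and its Nginx plugin, then request and install a certificate
sudo dnf install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com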

Installing Zabbix Using Containers

I recently had a bit of an ordeal trying to upgrade Zabbix to version 5.0 on a CentOS 7 server, which is what led to me installing Zabbix using containers. The problem was that Zabbix 5.0 requires a newer version of PHP than CentOS 7 ships with. Somehow I managed to miss that note before I started working through the upgrade.

It got me thinking: “wouldn’t this be so much easier if Zabbix just came bundled with all its dependencies?” Then it struck me: that’s what containers do! After a quick search on Docker Hub I could see that Zabbix containers were available. I’d been wanting to upgrade that server to CentOS 8 anyway, so this seemed like a good excuse for a re-install.

Full disclaimer – I’d never worked with containers before this. My setup probably isn’t ideal for production use, but it worked as a learning exercise.

Getting started with Podman (or Docker) to manage the Zabbix containers

First up, have Podman (or Docker, the commands should be the same) download copies of the container images from Docker Hub:

podman pull docker.io/library/mariadb:latest
podman pull docker.io/zabbix/zabbix-server-mysql:centos-5.0-latest
podman pull docker.io/zabbix/zabbix-web-apache-mysql:centos-5.0-latest

Create a Dockerfile for each container. These will be used to build customised container images. You’ll obviously want to set the passwords to something more sensible than the ones in my example, and you’ll also need to change the IP addresses to fit your environment. You can get more info on what each parameter does on the Docker Hub pages.

# ~/dockerfiles/mariadb/Dockerfile

FROM docker.io/library/mariadb:latest
ENV MYSQL_ROOT_PASSWORD=password

# ~/dockerfiles/zabbix-server/Dockerfile 

FROM docker.io/zabbix/zabbix-server-mysql:centos-5.0-latest 
ENV DB_SERVER_HOST=192.168.179.128 
ENV DB_SERVER_PORT=33306 
ENV MYSQL_ROOT_PASSWORD=password
ENV MYSQL_USER=zabbix 
ENV MYSQL_PASSWORD=password
ENV MYSQL_DATABASE=zabbix 

# ~/dockerfiles/zabbix-web/Dockerfile

FROM docker.io/zabbix/zabbix-web-apache-mysql:centos-5.0-latest 
ENV ZBX_SERVER_HOST=192.168.179.128 
ENV ZBX_SERVER_PORT=10051 
ENV DB_SERVER_HOST=192.168.179.128 
ENV DB_SERVER_PORT=33306 
ENV MYSQL_USER=zabbix 
ENV MYSQL_PASSWORD=password
ENV MYSQL_DATABASE=zabbix 
ENV PHP_TZ="Europe/London" 
ENV ZBX_SERVER_NAME=zabbix 

Create the custom container images using the Dockerfiles:

podman build -t mariadb ~/dockerfiles/mariadb/
podman build -t zabbix-server ~/dockerfiles/zabbix-server/
podman build -t zabbix-web ~/dockerfiles/zabbix-web/

Run the containers, mapping the host ports to the container ports. Because I’m running in rootless mode, I can’t use port 80 or 3306 on the host, so I’m using ports 8080 and 33306 instead. If you were running as root, I don’t believe the default port mappings would be an issue (there’s also a note on the port 80 restriction after the commands below).

podman run -d -p 33306:3306 localhost/mariadb
podman run -d -p 10051:10051 localhost/zabbix-server
podman run -d -p 8080:8080 localhost/zabbix-web
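
As an aside, the rootless restriction on port 80 mentioned above (unprivileged processes can’t bind ports below 1024) can be worked around by lowering the kernel’s unprivileged port floor. It’s a host-wide setting, so use it with care, and it won’t survive a reboot unless you also add it under /etc/sysctl.d/:

# Allow unprivileged processes to bind ports from 80 upwards
sudo sysctl net.ipv4.ip_unprivileged_port_start=80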

All I had left to do at this point was add some firewall rules to allow outside access to Zabbix:

firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-port=10051/tcp --permanent
firewall-cmd --reload
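
With the firewall open, it’s worth a quick check that all three containers are still up:

podman ps --format "{{.Names}}  {{.Status}}  {{.Ports}}"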

And just like that, I had successfully finished installing Zabbix using containers!

A screenshot of the Zabbix 5.0 dashboard page after installing using containers.