Setting up the perfect web-development environment using Docker and ZOL

I have to admit I am as far from winning a world's best web-developer award as it gets, but I give it my best. I also mostly develop for myself, apart from the rare little WordPress PHP snippets I have shared with my even more technologically inept friends. Recently though, I found myself needing to put on that web-dev fedora for a project on one of my sites.

With that being said, I was rusty and even finding the right local development environment was a struggle. Usually, I just use Flywheel's LocalWP and Atom. That works well for my much smaller WordPress customisation projects that never seem to go beyond a single page of code, but this time it was a more complex project and the limits of LocalWP quickly became apparent.

The plugin involves the use of an external API. Not only should I be able to call the API but the API must also be able to reach my local server. I do have a VPS (Virtual Private Server) lying around but having to constantly push code to it is annoying given my connection’s limited upload speed.

Some people suggested I use a tool called Ngrok to expose my local site to the internet, but accessing the site using the supplied URL was painfully slow; I suspect my internet connection was to blame here.

So I turned to Docker

After going through the same hoops with XAMPP and Bitnami I just gave up and turned to Docker. A bit of overkill, I know, but I was really desperate. For those of you not familiar with Docker, it's like the proverbial Hindu elephant that allows you to quickly deploy apps in isolated containers. I already use it to manage my media server so I like to think that I am pretty good at it, even though it's much simpler than it sounds, as I will presently demonstrate.

I use Ubuntu so I am going to assume you are also using Ubuntu. I mean real Ubuntu here and not WSL Ubuntu. The first thing you need to do is install Docker. Open your terminal and run the following commands:

  • Do not install Docker from the Ubuntu repositories. Although it will probably work, it is usually a few versions behind, which sucks
  • Add Docker's official repository and install the latest version of Docker using their convenience script: wget -qO- https://get.docker.com/ | sh
  • Add the current user to the docker group: sudo usermod -aG docker $USER
  • Reboot
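
After the reboot it's worth a quick sanity check that Docker is actually up and that your user can talk to it without sudo:

# confirm the Docker daemon is running and reachable without sudo
docker --version
docker run hello-world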

That is all you need to do and you will have Docker installed. I wanted to use a LEMP environment so I simply went and found a prepackaged LEMP image on GitHub. You don't need to download anything manually; you just run a given command that pulls and uses the image. In the terminal, navigate to the folder you want to use to host your files, for example my project was in ~/source/woocommerce/ .

  • docker run -p 8080:80 -p 8888:88 -v "$(pwd)":/var/www/html --name lemp -d adhocore/lemp:7.4
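
In case the flags look like gibberish, here is roughly what each one does (nothing here is specific to my project except the folder you run it from):

# -p 8080:80  -> the container's Nginx (port 80) shows up on localhost:8080
# -p 8888:88  -> the second port the image exposes, mapped to localhost:8888
# -v "$(pwd)":/var/www/html -> mounts the current folder as the web root inside the container
# --name lemp -> a friendly name so you can refer to the container later
# -d          -> run it in the background
docker run -p 8080:80 -p 8888:88 -v "$(pwd)":/var/www/html --name lemp -d adhocore/lemp:7.4

# check that it is running
docker ps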

And just like that, you have the entire LEMP stack installed and preconfigured. Next, I downloaded and installed WordPress. This turned out to be a bit of a bummer because I couldn't install WordPress plugins due to file permission issues.

Nginx expects files to be owned by user www-data inside the container. To fix this I opened a terminal and connected to the container’s terminal. Remember the container is almost like a VPS on your computer so it comes with its own shell.

  • docker exec -it lemp /bin/bash
  • In that shell I then fixed the permission issues by running: chown -R www-data:www-data /var/www/html
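
If you don't feel like dropping into the container's shell at all, the same fix can be done in one line from the host, since docker exec just runs a command inside the running container:

# run the ownership fix inside the container without opening a shell
docker exec lemp chown -R www-data:www-data /var/www/html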

That's it: my local WordPress environment was up and running on localhost:8080. But remember, the whole purpose of doing this was to be able to expose my local dev environment to the internet.

Enter ZOL and IPv6

ZOL has been dishing out IPv6 /64 blocks. On Fibroniks you don't need to do anything; IPv6 is turned on by default. You can confirm this by Googling “my ip”. If the address that comes back contains letters and/or ":" characters you are good to go. If not, it probably means there is a misconfiguration somewhere. Since it's Fibre you can probably make do with Ngrok anyway.

On outdoor Wibroniks you will need to bridge your WiFi router. The exact instructions vary between routers but it’s not rocket science. A few button clicks and a reboot and you should be good to go. In any case, if you mess things up you can just press the router’s reset button and give it another go.

IPv6 addresses are routable, i.e. they can be reached from the internet. The protocol is all about making sure there are no barriers between devices and the internet, unlike IPv4 which makes constant use of NAT. I hate NAT. Once we have an IPv6 address, it means that with a few changes our site will be accessible from the internet.
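
If you prefer the terminal to Googling “my ip”, you can also check whether your machine has actually picked up a global IPv6 address:

# list global (publicly routable) IPv6 addresses on this machine
ip -6 addr show scope global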

Once you have enabled IPv6 on your LAN, it's time to enable it in Docker as well. To do so open your terminal:

  • Open or create a daemon.json file for Docker: sudo nano /etc/docker/daemon.json
  • Paste the following content into it:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}

  • Save and close the file: Ctrl+X, then Y, then Enter.
  • Reboot your computer
  • Now if you open your browser and visit http://[::]:8080 you should be able to see your website, in my case my WordPress site
  • You can also see your site by visiting http://[YOUR_IPV6_IP]:8080
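
Strictly speaking you don't even have to reboot the whole machine; restarting the Docker service is enough, and you can confirm the setting took effect:

# apply the daemon.json changes without a full reboot
sudo systemctl restart docker

# check that IPv6 is enabled on the default bridge network
docker network inspect bridge | grep EnableIPv6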

Now your site is available on the internet. Well, only to part of it. The thing is, like most of the internet, the API that I was working with sat on an IPv4 address and could only reach IPv4 hosts, while my machine was only reachable over IPv6. To fix this I used Cloudflare.

Cloudflare to the rescue

Cloudflare is a Web Application Firewall, which means that when a domain name such as example.com is hosted with them, other computers only see Cloudflare's IP addresses. Traffic to example.com is then first routed through Cloudflare to your computer. Cloudflare supports both IPv4 and IPv6. So I set up my domain example.com with them and created an AAAA record for foo.example.com, where foo is the subdomain I am using for my projects.
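
Once the AAAA record has propagated (and while the orange cloud is still off, so you see the real address), you can confirm it resolves to your machine from the terminal; dig is part of Ubuntu's dnsutils package:

# should print your public IPv6 address
dig AAAA foo.example.com +short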

Now when I visited foo.example.com:8080 I could see my WordPress site. But then I hit a snag: although I could access my site using this method, I couldn't turn on the orange cloud that enables proxying. So my Cloudflare workaround hadn't solved the issue yet.

To fix this I had two options:

  1. Go back to Docker and change the listening port of my LEMP server to :80 instead of :8080:
    • Then I would have to turn on proxying in Cloudflare, turn on Flexible SSL and then set up page rules redirecting all traffic to the https version of the site
    • This was simple and would not involve other software, but there is one huge drawback: all my traffic, including local traffic, would be routed through Cloudflare. The result would be a slow site, just like the Ngrok setup. Using this solution would be stupid, to say the least.
  2. Make use of a reverse proxy like Caddy, which is what I did. Setting up Caddy was trivial. I just ran the following commands:
    • echo "deb [trusted=yes] https://apt.fury.io/caddy/ /" | sudo tee -a /etc/apt/sources.list.d/caddy-fury.list
    • sudo apt update
    • sudo apt install caddy
    • Set up reverse proxying: sudo caddy reverse-proxy --from foo.example.com --to localhost:80
    • That's it. You can now just set an AAAA record in Cloudflare for foo.example.com and turn on that orange cloud
    • To prevent local traffic from being sent through Cloudflare you need to edit the /etc/hosts file and add the following two records:

127.0.0.1 foo.example.com # Local Site
::1 foo.example.com # Local Site
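
You can check that the override is being picked up; getent uses the same lookup order as the rest of the system, so after the edit it should return a loopback address rather than a Cloudflare one:

# should now print 127.0.0.1 or ::1 for foo.example.com
getent hosts foo.example.com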

Caddy will redirect all traffic through HTTPS, even locally. On the Cloudflare side, IPv4 hosts are served an IPv4 address and Cloudflare then sends the traffic to our computer via IPv6, with the person on the other end being none the wiser. Even our API can call us back using https://foo.example.com/end-point.
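
If you would rather not keep that one-off caddy command running in a terminal, the same proxying can live in /etc/caddy/Caddyfile instead; this is a minimal sketch assuming the Caddy v2 package from the repository above:

foo.example.com {
    # Caddy obtains and renews the TLS certificate automatically;
    # point this at whatever port your LEMP container is listening on
    reverse_proxy localhost:80
}

After editing the file, sudo systemctl reload caddy picks up the change.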

Finishing up

Now, with all those ZESA power cuts, and sometimes just as a matter of routine, IPv6 addresses change a lot. It would be tedious to Google our IP, copy it, log into Cloudflare and update it. Fortunately, Ubuntu was made for such things. All we need to do is download a DDNS client from GitHub and set it up via cron to periodically check the IP and update it on Cloudflare if it changes. You can check out the client that I use here.
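
If you'd rather roll your own than use a ready-made client, the same idea is a small script driven by cron. This is only a sketch: the zone ID, record ID and API token are placeholders you would fill in from your Cloudflare dashboard, and the what's-my-IP service is just one of many.

#!/bin/bash
# update-aaaa.sh - keep the Cloudflare AAAA record pointed at our current IPv6 address
ZONE_ID="your-zone-id"          # placeholder
RECORD_ID="your-dns-record-id"  # placeholder
API_TOKEN="your-api-token"      # placeholder

# ask an external service what our public IPv6 address is
IP=$(curl -6 -s https://ifconfig.co)

# push it to Cloudflare's DNS API
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"AAAA\",\"name\":\"foo.example.com\",\"content\":\"$IP\"}"

A crontab entry along the lines of */15 * * * * /home/you/update-aaaa.sh then checks every fifteen minutes.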

All that was left to do was download Atom and get to work, which I did. The dev environment worked out well. It looks long and complicated, but the entire process took me less than an hour from start to finish.

What's your current local development setup?

NB:

There was absolutely no need to turn on Docker's IPv6, as I later learnt after wasting time doing it; the -p flag already publishes the container's ports on the host itself, which listens on both IPv4 and IPv6. Sometimes you learn things the hard way. To disable IPv6 in Docker just remove the entries we made above and restart.

It's also important to note that you could just use XAMPP with Caddy without Docker and be done with it. But where is the fun in that?

If you are a part-time developer like me who rarely does this stuff, you can easily get rid of the containers, Caddy, Docker and the project folder, and your system will be back to its original untainted self.
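
For reference, the cleanup is only a handful of commands (again a sketch; adjust names to whatever you actually used):

# remove the container and the LEMP image
docker rm -f lemp
docker rmi adhocore/lemp:7.4

# remove Caddy and Docker themselves
sudo apt remove caddy
sudo apt remove docker-ce docker-ce-cli containerd.io

# and finally the project folder
rm -rf ~/source/woocommerce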


