<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Under the hood - FirePress]]></title><description><![CDATA[Publish your website & monetize your content]]></description><link>https://firepress.org/en/</link><image><url>https://firepress.org/en/favicon.png</url><title>Under the hood - FirePress</title><link>https://firepress.org/en/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Thu, 12 Mar 2026 19:01:22 GMT</lastBuildDate><atom:link href="https://firepress.org/en/tag/under-the-hood/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How do we update hundreds of Ghost's websites on Docker Swarm?]]></title><description><![CDATA[Our clients care about their Ghost updates. In this post, we will reveal how we update all our Ghost sites. Under the hood, many updates happen all the time. We maintain our own Ghost Docker image. Check it out on our GitHub release page.]]></description><link>https://firepress.org/en/how-do-we-update-hundreds-of-ghosts-websites-on-docker-swarm/</link><guid isPermaLink="false">5d1e7f3bc85611000755f343</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Thu, 27 May 2021 22:38:00 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-1.jpg" alt="How do we update hundreds of Ghost's websites on Docker Swarm?"><p>Our clients care about their <strong>Ghost updates</strong>. In this post, we will reveal how we update all our Ghost sites. Under the hood, many updates happen all the time. We maintain our own Ghost Docker image. 
Check it out on our <a href="https://github.com/firepress-org/ghostfire/releases">GitHub release</a> page.</p><p>At a high level, we manage the <strong>backend</strong> for our clients. Our whole cluster runs on top of the public cloud, Linux OS, Docker Swarm, orchestrated services (containers), CI/CD, zero-downtime deployments and many other <a href="https://firepress.org/en/how-we-update-hundreds-of-ghosts-websites-on-docker-swarm/">DevOps best practices</a>.</p><p><strong>DevOps best practices</strong> are essential to us. Many checkpoints ensure our Ghost sites run smoothly.</p><!--kg-card-begin: markdown--><p><a id="ghost-updates"></a></p>
<!--kg-card-end: markdown--><h2 id="ghost-updates">Ghost updates</h2><p>Now let's talk about the way we update Ghost for our clients. <strong>We usually wait 1-2 weeks</strong> before applying updates to our Ghost sites (and all our software packages, really).</p><p>Here is <a href="https://github.com/TryGhost/Ghost/issues/10482#issuecomment-462906913">an example</a> showing why we don't automatically update Ghost as soon as it's released. This way, as a community, we can catch any critical bugs that emerge.</p><!--kg-card-begin: markdown--><p><a id="ci-cd"></a></p>
<!--kg-card-end: markdown--><h2 id="ci-cd">CI/CD</h2><p>We take <strong>Continuous Integration</strong> and <strong>Continuous Deployment</strong> very seriously at FirePress. If you are curious to see what is happening with our Ghost builds, you can follow them here:<br><a href="https://github.com/firepress-org/ghostfire/actions/workflows/build.yml">github.com/firepress-org/ghostfire/actions/workflows/build.yml</a></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://firepress.org/en/content/images/2021/05/Screen-Shot-2021-05-28-at-4.09.36-PM.jpg" class="kg-image" alt="How do we update hundreds of Ghost's websites on Docker Swarm?"></figure><p>The <code>master branch</code> is probably the one you want to watch, as it's the build your website will run on.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/firepress-org/ghostfire/actions/workflows/build.yml"><div class="kg-bookmark-content"><div class="kg-bookmark-title">firepress-org/ghostfire</div><div class="kg-bookmark-description">Docker image 🐳 for Ghost V3.x.x. This docker image is used on FirePress.org and play-with-ghost.com - firepress-org/ghostfire</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="How do we update hundreds of Ghost's websites on Docker Swarm?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">firepress-org</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://repository-images.githubusercontent.com/131502775/4b411e00-c031-11e9-9f3a-fbdcc97713d5" alt="How do we update hundreds of Ghost's websites on Docker Swarm?"></div></a></figure><h2 id="best-practices">Best practices</h2><p>This is how we carefully push each Ghost update. 
We use three kinds of <strong>tags</strong>:</p><ol><li>site <strong>edge</strong></li><li>site <strong>stable</strong></li><li>site <strong>stable-hash</strong> (client's site)</li></ol><p>It goes like this:</p><ol><li>We use a repeatable <a href="https://github.com/firepress-org/ghostfire/blob/master/Dockerfile">Dockerfile declaration</a> that executes everything that needs to happen to have a perfect installation of the software. No manual mistake can happen here as the Dockerfile contains the OS, libraries and the <a href="https://firepress.org/en/faq/#what-is-ghost">Ghost app</a>.</li><li>A commit happens via GitHub. A webhook builds the docker image <strong>edge</strong> using Travis in a <strong>CI</strong> <em>(continuous integration)</em> system using the <strong>edge</strong> branch.</li><li>Within the <strong>CI</strong>, the system executes many tests. You can see <a href="https://travis-ci.org/firepress-org/ghostfire/jobs/489601142#L593">one of them here</a>. When the build is successful, the system tags this docker image as devmtl/ghostfire:<strong>edge</strong>. <em>If something fails, everything stops at this point, and we receive an email saying something has failed.</em></li><li>In our PROD environment, our CD <em>(continuous deployment) </em>checks every minute if a new <strong>edge</strong> image is available on Docker Hub. This is almost magic! This applies only to the site <em>edgetest.firepress.org (fake site, this is only to help you understand the concept)</em></li><li>We manually surf on <em>edgetest.firepress.org</em>. If we can navigate the site normally, we consider the docker image <strong>edge</strong> as "passed". <em>In the future, we might automate this part as well. Follow <a href="https://github.com/firepress-org/ghostfire/issues/13">this issue</a> for all details.</em></li><li>A commit happens via GitHub. 
A webhook builds the docker image <strong>stable</strong> using Travis in a <strong>CI</strong> (continuous integration) system using the <strong>stable</strong> branch.</li><li>Within the <strong>CI</strong>, the system executes many tests. You can see <a href="https://travis-ci.org/firepress-org/ghostfire/jobs/489601142#L593">one of them here</a>. When the build is successful, the system applies two specific tags: devmtl/ghostfire:<strong>stable</strong> and devmtl/ghostfire:<strong>stable-hash</strong>. <em>If something fails, everything stops at this point, and we receive an email saying something has failed.</em></li><li>In our PROD environment, our CD <em>(continuous deployment) </em>checks every minute if a new <strong>stable</strong> image is available on Docker Hub. It is still magical! This applies only to the site <em>stabletest.firepress.org (fake site, this is only to help the reader understand the concept)</em></li><li>We manually surf on <em>stabletest.firepress.org</em>. If we can navigate the site normally, we consider the docker image <strong>stable</strong> as "passed". <em>In the future, we might automate this part as well. Follow <a href="https://github.com/firepress-org/ghostfire/issues/13">this issue</a> for all details.</em></li></ol><h2 id="now-all-our-tests-are-passed-">Now all our tests have passed!</h2><ol><li>We are now ready to update all our clients' sites using the Docker image devmtl/<strong>ghostfire:stable-hash</strong>.</li><li>We need to understand that the tags devmtl/<strong>ghostfire:stable</strong> and devmtl/<strong>ghostfire:stable-hash</strong> are actually the same docker image. This makes our CD process so easy.</li><li>We update a configuration that specifies a tag (devmtl/<strong>ghostfire:stable-hash</strong>) that every site must run on. 
For a real-world example: <a href="https://api.travis-ci.org/v3/job/544570482/log.txt">see the end</a> of this log file and you will find: <code>devmtl/ghostfire:2.23.3-30da41f</code></li><li>We commit the fact that every Ghost site must now run <code>devmtl/ghostfire:2.23.3-30da41f</code></li><li>We launch the command that performs a <strong>rolling update</strong> from the previous docker image to the newest one (<code>devmtl/ghostfire:2.23.3-30da41f</code>). This means <strong>zero downtime</strong>. Not even one second down.</li><li>If something is failing, it's probably not related to the docker container. It might be a network issue, a proxy issue, a load balancer issue, or something else.</li></ol><p>Below is a screenshot of three sites running in our cluster:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://firepress.org/en/content/images/2019/06/best-practice.jpg" class="kg-image" alt="How do we update hundreds of Ghost's websites on Docker Swarm?"></figure><h2 id="our-cluster">Our cluster</h2><ul><li>There is no staging cluster. The staging is done in GitHub Actions CI. We only have "staging sites" (<strong>edge, stable</strong>) along with all other sites we manage in the cluster.</li><li>Our main cluster lives in New York on top of one of the big cloud providers.</li><li>In the future, we might run two clusters in two regions. For example, one in New York and the other in Amsterdam. This will depend on our clients' locations. Fortunately, this is not a challenge at the moment as our sites run behind a powerful <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a> powered by Cloudflare.</li></ul><h2 id="conclusion">Conclusion</h2><p>You can see how we do everything we can to avoid human errors and to ensure that you are always running a fresh build.</p><p>It's good to understand that we never update the existing docker image. We always build a new one from scratch. 
This way, all patches and security fixes are applied to your Ghost site during every update.</p><p>Cheers!<br>Pascal — FirePress Team 🔥📰</p>]]></content:encoded></item><item><title><![CDATA[How to run bash scripts like a crontab for Mac? (Big Sur)]]></title><description><![CDATA[The trick is that crontab has been deprecated on the Mac for 6-7 years. You must use launchctl. If you are comfortable running cron jobs on Linux]]></description><link>https://firepress.org/en/how-to-run-bash-scripts-like-a-crontab-for-mac-big-sur/</link><guid isPermaLink="false">60898282a41a38000772a278</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Wed, 28 Apr 2021 15:49:24 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2021/04/firepress-rg-tag-under-the-hood-v1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2021/04/firepress-rg-tag-under-the-hood-v1.jpg" alt="How to run bash scripts like a crontab for Mac? (Big Sur)"><p>The trick is... crontab has been deprecated on the Mac for 6-7 years. You must use <code>launchctl</code> to schedule a "cron job".</p><p>In my case, I simply want to run a <strong>bash script</strong> for my local backup. I guess that if you are comfortable running cron jobs on Linux, you will be at home with these instructions.</p><h3 id="how-to">How To</h3><pre><code>launchctl unload ~/Library/LaunchAgents/com.pascalandy.macbackup.plist

cd ~/Library/LaunchAgents
nano com.pascalandy.macbackup.plist

&lt;INSERT XML CODE BELOW&gt;
&lt;SAVE AND QUIT NANO&gt;

launchctl load ~/Library/LaunchAgents/com.pascalandy.macbackup.plist
launchctl list | grep 'pascalandy'
</code></pre><h3 id="code-example">Code example</h3><pre><code class="language-xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;&lt;!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&gt;
&lt;plist version="1.0"&gt;
    &lt;dict&gt;

        &lt;key&gt;Label&lt;/key&gt;
        &lt;string&gt;com.pascalandy.macbackup&lt;/string&gt;

        &lt;key&gt;ProgramArguments&lt;/key&gt;
        &lt;array&gt;
            &lt;string&gt;/bin/bash&lt;/string&gt;
            &lt;string&gt;/Volumesxyz/pascalandy/macbkp/runup.sh&lt;/string&gt;
        &lt;/array&gt;

        &lt;key&gt;LowPriorityIO&lt;/key&gt;
        &lt;true/&gt;

        &lt;key&gt;Nice&lt;/key&gt;
        &lt;integer&gt;1&lt;/integer&gt;

        &lt;key&gt;StartCalendarInterval&lt;/key&gt;
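        &lt;!-- Note: any StartCalendarInterval keys you omit act as wildcards, so with only Hour and Minute set, this job fires every day at 11:39 (local time). --&gt;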
        &lt;dict&gt;
            &lt;key&gt;Hour&lt;/key&gt;
            &lt;integer&gt;11&lt;/integer&gt;
            &lt;key&gt;Minute&lt;/key&gt;
            &lt;integer&gt;39&lt;/integer&gt;
        &lt;/dict&gt;

    &lt;/dict&gt;
&lt;/plist&gt;
</code></pre><p><em>(XML format)</em></p><h3 id="sources">Sources</h3><p>If you need more background, there you go :)</p><ul><li>https://alvinalexander.com/mac-os-x/mac-osx-startup-crontab-launchd-jobs/</li><li>https://thejandroman.wordpress.com/2013/02/13/introduction-to-launchd/</li><li>https://www.launchd.info/</li><li><a href="https://firepress.org/en/how-to-run-bash-scripts-like-a-crontab-for-mac-big-sur/">https://firepress.org/en/how-to-run-bash-scripts-like-a-crontab-for-mac-big-sur/</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Server maintenance on Nov 28]]></title><description><![CDATA[<p>To perform an upgrade on our proxies, we need to regenerate a few SSL certificates. This action might impact your website and could cause a few minutes of downtime.</p><p>Please <strong>avoid updating</strong> your website during the maintenance window. There is no action required on your</p>]]></description><link>https://firepress.org/en/server-maintenance-on-nov-28/</link><guid isPermaLink="false">5fc285d1af50ce00067cb0ac</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Sat, 28 Nov 2020 17:16:46 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1.jpg" alt="Server maintenance on Nov 28"><p>To perform an upgrade on our proxies, we need to regenerate a few SSL certificates. This action might impact your website and could cause a few minutes of downtime.</p><p>Please <strong>avoid updating</strong> your website during the maintenance window. 
There is no action required on your part.</p><figure class="kg-card kg-embed-card"><iframe src="https://player.vimeo.com/video/484853691?app_id=122963" width="1280" height="720" frameborder="0" allow="autoplay; fullscreen" allowfullscreen title="2020-11-28 Emergency Server Maintenance on FirePress"></iframe></figure><p><strong>Benefit</strong>:<br>The upgrade will also allow our clients to point their domain name to a CNAME. It's Option C, as detailed here: <a href="https://firepress.org/en/how-can-i-configure-my-domain-or-dns-to-firepress-servers/">https://firepress.org/en/how-can-i-configure-my-domain-or-dns-to-firepress-servers/</a></p><p><strong>Maintenance window</strong>:<br>this Saturday night, Montreal time zone (UTC-5, EST)</p><p><strong>Start</strong>: 2020-11-28 20h00<br><strong>End</strong>: 2020-11-29 04h00</p><p>Thank you for your patience. If you face any issues, please reach out to us.</p>]]></content:encoded></item><item><title><![CDATA[Post-mortem regarding the Cloudflare outage — 2nd July 2019]]></title><description><![CDATA[We had some hiccups on that day. We were running after our tail trying to understand what could have gone wrong on our side.
Finally, it was on Cloudflare's side. Here is what happened: https://blog.cloudflare.com/cloudflare-outage/
We also updated our checklist to better react to such events:]]></description><link>https://firepress.org/en/cloudflare-outage-impacts-2nd-july-2019/</link><guid isPermaLink="false">5d1de662c85611000755f29e</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Thu, 04 Jul 2019 11:47:16 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-11.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-11.jpg" alt="Post-mortem regarding the Cloudflare outage — 2nd July 2019"><p>We had some hiccups on that day. We were running after our tail trying to understand what could have gone wrong on our side.</p><p>Finally, it was on Cloudflare's side and it was a human mistake. Here is what happened: <a href="https://blog.cloudflare.com/cloudflare-outage/">https://blog.cloudflare.com/cloudflare-outage/</a></p><p>Not all FirePress clients were affected, as not all sites behind Cloudflare were concerned.</p><p>In any case, we are sorry for the trouble this may have caused you.</p><p>We also updated our <a href="https://trello.com/c/VxQ66G65/317-checklist-what-website-are-not-stable">process</a> to better react to such events in the future.</p><p>Cheers!<br>Pascal Andy</p>]]></content:encoded></item><item><title><![CDATA[Container as an external hard drive: sharing a common directory (BitTorrent style) using Docker Swarm]]></title><description><![CDATA[How can we share a common directory between our nodes easily? In other words, how can we make our app stateful in a cluster? 
I described the challenge in our Technical Challenges post.]]></description><link>https://firepress.org/en/container-as-an-external-hard-drive/</link><guid isPermaLink="false">5c8300c2e3d4a500066e66a5</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Sat, 09 Mar 2019 00:42:04 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-2.jpg" alt="Container as an external hard drive: sharing a common directory (BitTorrent style) using Docker Swarm"><p><strong>The question</strong>: How can we share a common directory between our nodes easily? In other words, how can we make our app stateful in a cluster? I described the challenge in our <a href="https://firepress.org/en/technical-challenges/">Technical Challenges</a> post.</p><h3 id="requirement-specs">Requirements &amp; specs</h3><p><strong>As a DevOps hero</strong>:</p><ul><li>As a DevOps hero, I'm looking for a private ZFS / GlusterFS server or whatever application that mounts a common directory between all my nodes.</li><li>As a DevOps hero, I want to have a common directory (not a docker volume) that all nodes can share. I suspect that having hundreds of docker volumes will slow things down over time. Something like /mnt/shared/permdata/</li><li>As a DevOps hero, I want to run the solution as a docker service create (...). No manual configs on each node and especially no hard-coded IPs to set up.</li><li>As a DevOps hero, I want to create a new node on my existing cluster. 
The data should sync automatically.</li><li>As a DevOps hero, I have a 3-node setup on Docker Swarm</li><li>As a DevOps hero, everything needs to happen via the CLI (no GUI operation)</li><li>No external sync to the cloud is needed (like AWS S3)</li><li>The traffic must use the swarm ingress network, not the public network</li><li>Don't use a "volume" or a plug-in. The container must do the work of syncing.</li></ul><p>For example, I would use it this way, where <em>permdata</em> is the common directory:</p><!--kg-card-begin: markdown--><pre><code>/mnt/shared/permdata/app1/
/mnt/shared/permdata/app2/
/mnt/shared/permdata/bkp/
/mnt/shared/permdata/etc/
</code></pre>
<!--kg-card-end: markdown--><h3 id="solution">Solution</h3><p>I have been using <strong><a href="https://www.resilio.com/">Resilio Sync</a></strong> for over a year and it's perfectly stable.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://firepress.org/en/content/images/2019/03/resilio.gif" class="kg-image" alt="Container as an external hard drive: sharing a common directory (BitTorrent style) using Docker Swarm"></figure><p><strong>Performance</strong>: I tested it a lot! Most of the time, a file like a SQLite3 myapp.db or a picture gets synced in <strong>under 5 seconds</strong>. Sometimes it takes longer, but most of the time it's fast.</p><h2 id="private-network">Private network</h2><p><strong>UPDATE (2020-02-25)</strong>: As expected, by default Resilio will use the public network. The good news is that you can <a href="https://trello.com/c/cUWeIVpN/410-resilio-config-update">set a configuration file</a> to limit the network to <code>eth0</code>. You might know that on Digital Ocean the private network runs over <code>eth0</code>. This way our network requirement is met!</p><p></p><figure class="kg-card kg-image-card"><img src="https://firepress.org/en/content/images/2019/03/Screen-Shot-2019-03-08-at-7.04.02-PM.jpg" class="kg-image" alt="Container as an external hard drive: sharing a common directory (BitTorrent style) using Docker Swarm"></figure><h2 id="the-stack">The stack</h2><ul><li>Have a docker swarm cluster running (3 nodes in my example).</li><li>Define our VAR:</li></ul><!--kg-card-begin: markdown--><pre><code>MNT_SOURCE_RESILIO=&quot;/mnt/shared/permdata&quot;
IMG_resilio=&quot;devmtl/resilio:2.6.3&quot;
CTN_resilio1=&quot;node1&quot;
CTN_resilio2=&quot;node2&quot;
CTN_resilio3=&quot;node3&quot;
</code></pre>
<!--kg-card-end: markdown--><ul><li>Create this network:</li></ul><!--kg-card-begin: markdown--><pre><code>NTW_RESILIO=&quot;ntw_resilio&quot;

if [ ! &quot;$(docker network ls --filter name=${NTW_RESILIO} -q)&quot; ]; then
  docker network create --driver overlay --attachable --subnet 10.23.10.0/24 --opt encrypted ${NTW_RESILIO}
else
  echo &quot;Network: ${NTW_RESILIO} already exists!&quot;
fi
</code></pre>
<!--kg-card-end: markdown--><ul><li>Create those labels:</li></ul><!--kg-card-begin: markdown--><pre><code>docker node update --label-add nodeid=1 node1
docker node update --label-add nodeid=2 node2
docker node update --label-add nodeid=3 node3
</code></pre>
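For larger clusters, the three commands above follow an obvious pattern, so you can generate them in a loop. This is a sketch; it assumes your hostnames follow the node1, node2, node3 convention used in this post:

```shell
# Emit one labeling command per node id; review, then pipe to `sh` on a manager.
label_cmds() {
  for i in 1 2 3; do
    printf 'docker node update --label-add nodeid=%s node%s\n' "$i" "$i"
  done
}
```

Run `label_cmds` alone to preview the commands, then `label_cmds | sh` to apply them on a Swarm manager.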
<!--kg-card-end: markdown--><h2 id="how-to">How to</h2><h3 id="step-1">Step #1</h3><p>Create a token:</p><!--kg-card-begin: markdown--><pre><code>docker run -dit \
-v $(pwd)/data:/data \
-p 33333:33333 \
&quot;$IMG_resilio&quot; &amp;&amp; docker ps;

# find the secret
docker logs -f NAME; echo;
</code></pre>
<!--kg-card-end: markdown--><p>Find the TOKEN that looks like: <em>A2QYBAQPK7SOP4O4ETFJEFHO5VLGHE747</em><br>What I do then is copy this token onto my nodes, outside my git repo (along with other secrets). In my case the file is here:</p><!--kg-card-begin: markdown--><pre><code>${MNT_DEPLOY_SETUP}/config/resilio/token
</code></pre>
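Rather than copying the secret by hand, you could grep it out of the container logs. A sketch, assuming the secret is a 33-character A-Z/0-9 string like the example above (the container name below is a placeholder):

```shell
# Pull the first Resilio-style secret (33 chars, A-Z and 0-9) from stdin.
extract_token() {
  grep -oE '[A-Z0-9]{33}' | head -n 1
}

# Hypothetical usage with the temporary container from the step above:
#   docker logs CTN 2>&1 | extract_token > "${MNT_DEPLOY_SETUP}/config/resilio/token"
```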
<!--kg-card-end: markdown--><p>Delete this container. It was only needed to generate a token.</p><h3 id="step-2">Step #2</h3><p>Launch a Resilio Sync instance on each node.</p><p><strong>Warning</strong>: Here we can't use --global because we need to configure different ports on each instance. This is because our service uses our public IP to sync data.</p><!--kg-card-begin: markdown--><pre><code># First host
docker service rm ${CTN_resilio1}; \

docker service create \
  --name ${CTN_resilio1} --hostname ${CTN_resilio1} \
  --network ${NTW_RESILIO} --replicas &quot;1&quot; \
  --restart-condition &quot;any&quot; --restart-max-attempts &quot;20&quot; \
  --reserve-memory &quot;192M&quot; --limit-memory &quot;512M&quot; \
  --limit-cpu &quot;0.333&quot; \
  --constraint 'node.labels.nodeid == 1' \
  --publish &quot;33331:33333&quot; \
  -e RSLSYNC_SECRET=$(cat ${MNT_DEPLOY_SETUP}/config/resilio/token) \
  --mount type=bind,source=${MNT_SOURCE_RESILIO},target=/data \
  ${IMG_resilio}


# Second host
docker service rm ${CTN_resilio2}; \

docker service create \
  --name ${CTN_resilio2} --hostname ${CTN_resilio2} \
  --network ${NTW_RESILIO} --replicas &quot;1&quot; \
  --restart-condition &quot;any&quot; --restart-max-attempts &quot;20&quot; \
  --reserve-memory &quot;192M&quot; --limit-memory &quot;512M&quot; \
  --limit-cpu &quot;0.333&quot; \
  --constraint 'node.labels.nodeid == 2' \
  --publish &quot;33332:33333&quot; \
  -e RSLSYNC_SECRET=$(cat ${MNT_DEPLOY_SETUP}/config/resilio/token) \
  --mount type=bind,source=${MNT_SOURCE_RESILIO},target=/data \
  ${IMG_resilio}

# Third host
docker service rm ${CTN_resilio3}; \

docker service create \
  --name ${CTN_resilio3} --hostname ${CTN_resilio3} \
  --network ${NTW_RESILIO} --replicas &quot;1&quot; \
  --restart-condition &quot;any&quot; --restart-max-attempts &quot;20&quot; \
  --reserve-memory &quot;192M&quot; --limit-memory &quot;512M&quot; \
  --limit-cpu &quot;0.333&quot; \
  --constraint 'node.labels.nodeid == 3' \
  --publish &quot;33333:33333&quot; \
  -e RSLSYNC_SECRET=$(cat ${MNT_DEPLOY_SETUP}/config/resilio/token) \
  --mount type=bind,source=${MNT_SOURCE_RESILIO},target=/data \
  ${IMG_resilio}
</code></pre>
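The three blocks above differ only in the node id and the published port (33330 + i), so they can be generated in a loop. A sketch, reusing the variables defined earlier in this post; the secret is left as a `$(cat ...)` substitution so it expands at deploy time on the manager:

```shell
# Emit one single-line "docker service create" per node; preview, then pipe to `sh`.
resilio_create_cmds() {
  for i in 1 2 3; do
    echo "docker service create" \
      "--name node${i} --hostname node${i}" \
      "--network ${NTW_RESILIO} --replicas 1" \
      "--restart-condition any --restart-max-attempts 20" \
      "--reserve-memory 192M --limit-memory 512M --limit-cpu 0.333" \
      "--constraint 'node.labels.nodeid == ${i}'" \
      "--publish $((33330 + i)):33333" \
      "-e RSLSYNC_SECRET=\$(cat ${MNT_DEPLOY_SETUP}/config/resilio/token)" \
      "--mount type=bind,source=${MNT_SOURCE_RESILIO},target=/data" \
      "${IMG_resilio}"
  done
}

# Preview the generated commands:   resilio_create_cmds
# Deploy them on a Swarm manager:   resilio_create_cmds | sh
```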
<!--kg-card-end: markdown--><h3 id="step-3">Step #3</h3><p>Now it's time to test it. From node1, put a file in <em><code>${MNT_SOURCE_RESILIO}</code></em>. Wait a few seconds. Check on node2 and node3.</p><p>The file should appear quickly :)</p><h3 id="step-4">Step #4</h3><p>You might want to remove files and directories in the <em><code>/mnt/shared/permdata/.sync/Archive</code></em> directory as Resilio Sync will archive everything by default.</p><p>I have a crontab script that cleans this directory every hour.</p><h3 id="step-5">Step #5</h3><p>It's now time to build your own docker image with this Dockerfile. I shared my project at <a href="https://github.com/firepress-org/resilio-in-docker">https://github.com/firepress-org/resilio-in-docker</a></p><p>Feel free to buzz me on <a href="https://twitter.com/askpascalandy">Twitter</a> or in the <a href="https://github.com/firepress-org/resilio-in-docker">GitHub repo</a>.</p><p>Cheers!<br>Pascal</p>]]></content:encoded></item><item><title><![CDATA[Technical challenges]]></title><description><![CDATA[First you might be interested in our Roadmap. This page is about deep technical & architectural challenges we have.]]></description><link>https://firepress.org/en/technical-challenges/</link><guid isPermaLink="false">5c759338f1bd970006e0a0fd</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Tue, 26 Feb 2019 19:27:57 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-3.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-3.jpg" alt="Technical challenges"><p>Hi!</p><p>First you might be interested in <a href="https://firepress.org/en/roadmap/">our Roadmap</a>. This page is about <strong>deep technical &amp; architectural challenges</strong> we have. 
</p><p>Sharing challenges feels like the right thing to do, as I get so much from the open-source community. If this can help people better understand what we are building here, I'm glad to share it.</p><p><em><strong>NOTE: </strong>The text below was written with voice recognition software. It might look funny and is not edited by a human.</em></p><hr><!--kg-card-begin: html--><a id="table-of-content"></a>
<!--kg-card-end: html--><h3 id="table-of-content"><strong>Table of Content</strong></h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li><a href="#backlog">Backlog</a></li><li><a href="#dropped">Dropped</a></li><li><a href="#up-and-running">Up and Running</a></li><li><a href="#get-involved">Get involved</a></li></ul><hr><!--kg-card-begin: html--><a id="backlog"></a><!--kg-card-end: html--><h2 id="backlog">Backlog</h2><h3 id="-container-as-an-external-hard-drive">🙊 Container as an external hard drive</h3><p><strong>User stories / specs</strong></p><p>As a DevOps hero:</p><ul><li>As a DevOps hero, I'm looking for an NFS/ZFS/GlusterFS or whatever application that mounts a common directory between all my nodes.</li><li>This needs to run as a docker service create XYZ --global with Docker Swarm. No manual configs on each node and no hard-coded IPs to set up.</li><li>As a DevOps hero, I want to create a new node on my existing cluster. The data should sync automatically.</li><li>As a DevOps hero, I want to have a common directory (not a docker volume) that all nodes can share. Something like /mnt/shared/permdata/</li></ul><p>For example, I would use it this way:</p><ul><li>/mnt/shared/permdata/app1/</li><li>/mnt/shared/permdata/app2/</li><li>/mnt/shared/permdata/bkp/</li><li>/mnt/shared/permdata/etc/</li></ul><p><strong>Workaround</strong></p><p>At the moment I use Resilio, which is great. The thing I don't like is the fact that it uses the public network to sync. There is no need for this. I want my service to use only the swarm network of my choice.</p><p>Maybe I could force Resilio to sync only within an overlay network?</p><p>by: Pascal Andy / 2019-02-26</p><h3 id="-cluster-crash-mitigation">🙊 Cluster crash mitigation</h3><p><strong>EDIT: 2019-02-26_15h05: </strong>The scenario below is well managed. It's not in prod yet only because we don't have a lot of nodes at the moment. It would be too costly for now. 
But everything is in place to make it work very quickly.</p><p><strong>Scenario:</strong></p><p>This is a big one. Let's say a whole cluster is not available for 6 hours. Whatever the reason. Shall we, as a business, cry on Twitter that our server vendor is down? Absolutely not! Remember the S3 crash in 2017? Shit happens and I don't want this to happen to us at FirePress.</p><p>The idea here is that we would have two independent clusters running in two zones (data centres).</p><ul><li>50% of our clients are in NYC</li><li>50% of our clients are in AMS</li></ul><p>Let's say NYC crashes. Fuck. OK, no panic.</p><p>Deploy 100% of our clients to AMS.</p><p>The challenge is to do this very quickly. Database merging + picture merging.</p><p>Then, when things are back to normal, redistribute 50%/50%.</p><p>This setup also allows an easy transition from one cluster to a new one. I love it. Don't patch. Scrap and start from scratch.</p><h3 id="-roadmap">🙊 Roadmap</h3><p><a href="https://trello.com/b/0fCwwzqc/firepress-roadmap">https://trello.com/b/0fCwwzqc/firepress-roadmap</a></p><p></p><p>Go back to<strong> <a href="#table-of-content">Table of content</a></strong></p><hr><!--kg-card-begin: html--><a id="dropped"></a><!--kg-card-end: html--><h2 id="dropped"><strong>Dropped</strong></h2><h3 id="caching-website-blogs">Caching website / blogs</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Challenge — Add a Varnish caching container for each blog (or maybe one for every domain we host?)</li><li>CMO, a request goes to Traefik CTN &gt; Ghost CTN &gt; MySQL CTN</li><li>FMO, I want Traefik CTN &gt; Varnish Cache &gt; (if content is not cached...) 
&gt; Ghost CTN &gt; MySQL CTN</li></ul><h3 id="minio-storage-for-our-private-docker-registry">Minio storage for our private Docker registry</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>All nodes in the cluster shall have access to the Minio bucket</li><li>Would be nice to use Backblaze B2 as the storage provider - WIP</li><li>To consider | https://github.com/cloudflavor/miniovol</li><li>Storage pricing is key. No AWS S3.</li><li>Backblaze is the best deal at the moment. I use them for our backups.</li><li><strong>maybe REX-Ray</strong></li><li>To test | https://twitter.com/askpascalandy/status/862271673072058368</li><li>https://github.com/codedellemc/labs</li><li>maybe Portworx and Minio together</li><li>https://www.youtube.com/watch?v=5gRQN9WxsIk</li></ul><h3 id="deploy-a-ha-mysql-database">Deploy a HA MySQL database</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>2019-02-26: See <a href="https://gist.github.com/pascalandy/5735bcae8257e861f29e06da46754aef">https://gist.github.com/pascalandy/5735bcae8257e861f29e06da46754aef</a></li><li>I use Percona and I should be able to do HA. I don't know how yet</li><li>Galera Cluster looks promising</li><li>MySQL 8 will support HA natively</li><li>At the moment, I run one instance of Percona (no HA). 
Resilio syncs a common directory between 3 nodes.</li><li>Still trying to find a solution to easily run a master-master-master MySQL cluster</li><li>To consider | https://github.com/pingcap/tidb</li><li>This setup looks promising but it’s not quite perfect yet.</li><li>http://severalnines.com/en/mysql-docker-deploy-homogeneous-galera-cluster-etcd</li><li>https://github.com/pingcap/docs/blob/master/op-guide/docker-deployment.md</li></ul><h3 id="monitoring-our-db-pmm">Monitoring our DB | PMM</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li><a href="https://www.percona.com/en/2017/04/21/percona-monitoring-management-1-1-3-now-available/">https://www.percona.com/en/2017/04/21/percona-monitoring-management-1-1-3-now-available/</a></li></ul><h3 id="chatops">ChatOps</h3><p><strong>DROPPED</strong>. Ops will use a terminal. That's it.</p><p>It would be nice to use Slack as a terminal. Why is that? Here is my use case.</p><p>I want to let non-technical folks (the operations) run Docker stacks without having to set up their user/pass/local environment and all the pain that comes with welcoming a new user into your DevOps stuff. I assume I could prevent them from doing some actions as well, like <code>rm *</code>.</p><p>Go back to<strong> <a href="#table-of-content">Table of content</a></strong></p><hr><!--kg-card-begin: html--><a id="up-and-running"></a><!--kg-card-end: html--><h2 id="up-and-running"><strong>Up and running</strong></h2><p>To see how we roll (technically speaking) at FirePress, please check the post <a href="https://firepress.org/en/what-kind-of-back-end-drives-firepress/">What kind of Back-End drives FirePress</a>.</p><p>In short, we have <strong>hosting challenges</strong>. Think static websites and blog/CMS (Ghost) sites. This site is actually running within a container at <a href="https://firepress.org/en/">http://firepress.org/en/</a>. 
The home page runs in another container at <a href="https://firepress.org/">http://firepress.org/</a>.</p><ul><li>✅ Our stack is <strong>cloud agnostic</strong>. No AWS/Azure/Google lock-in.</li><li>✅ We use <strong>Ubuntu servers</strong> and deploy them via the <strong>Docker Machine</strong> CLI</li><li>✅ We configure our servers via a bash script / docker-machine. No need for Terraform at the moment but we probably will some day.</li><li>✅ We set <strong>UFW</strong> rules to work along with Docker</li><li>✅ We run <strong>services</strong> via <code>docker service create</code> (well, 95% of the time).</li><li>✅ We use a <strong>Resilio</strong> service to share a common folder between all nodes. Looking to switch… see below.</li><li>✅ Reverse <strong>proxy</strong> to redirect public traffic</li><li>✅ We <strong>label</strong> Docker nodes and deploy services against those <strong>constraints</strong></li></ul><p><strong>✅ Fancy bash script to launch services like:</strong></p><ul><li>Traefik</li><li>Percona (MySQL)</li><li>Ghost</li><li>Nginx</li><li>Portainer</li><li>Sematext</li><li>rClone</li></ul><p>Most containers are built on <strong>Alpine</strong>.</p><ul><li>✅ We deploy each website via a unique ID</li><li>✅ Generate <strong>dynamic landing pages</strong> via a script from an HTML template. 
Nothing fancy yet, but great at this stage.</li></ul><p>✅ Our <strong>backup</strong> processes are solid.</p><ul><li>Via <strong>cron</strong></li><li>Interval: every 4 hours, every day</li><li>Compressed and <strong>encrypted</strong> before going outside the cluster to <strong>Backblaze B2</strong>.</li><li>Notified in <strong>Slack</strong> when the backup is done</li><li>Keeping only the last 2 backups on the DB node</li><li>Swarm (raft) is also backed up</li><li>✅ Cron <code>docker system prune --all --force</code> on each node</li><li>✅ Cron backs up the Swarm Raft</li></ul><p>✅ <strong>Docker build</strong></p><ul><li>Highly standardized for all containers</li><li>Tagging (edge, stable, version) is done automatically. We build our containers simply by running ./builder.sh + directory name</li><li>Versioning is A1. We use tags: <strong>edge</strong> and <strong>stable</strong></li><li>✅ We deploy our web apps with a <strong>PathPrefix</strong> (Traefik)</li><li>mycie.com/green/</li><li>mycie.com/blue/</li><li>mycie.com/yellow/</li><li>We use the <strong>Cloudflare CLI</strong> - Create, update, delete | Zone, A, CNAME etc. via flarectn, which runs within a sporadic container</li></ul><p>✅ We contribute to making Docker a better place</p><ul><li>Feature Request: Show --global instance numbers when docker service</li><li>Fixed — <a href="https://github.com/moby/moby/issues/27670">https://github.com/moby/moby/issues/27670</a></li><li>Scheduler limits the # of ctn to 40 per worker node (overlay network limit is 252 ctn) | Swarm 1.12.1</li><li>Fixed — <a href="https://github.com/moby/moby/issues/26702">https://github.com/moby/moby/issues/26702</a></li></ul><h3 id="monitoring-stack-swarmprom-portainer">Monitoring stack Swarmprom / Portainer</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li><strong>Metrics</strong> | Collects, processes, and publishes metrics</li><li>Intel Snap | Collects, processes, and publishes metrics</li><li>InfluxDB | Stores 
metrics</li><li>Grafana | Displays metrics visually</li><li><strong>Logs</strong> | ELK (Elasticsearch, Logstash, Kibana)</li><li><strong>Alerts</strong> management (i.e. one node is not responsive)</li><li>Monitoring Percona MySQL performance DB (in Docker, of course)</li></ul><h3 id="traefik-config">Traefik config</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Traefik is a beast. So many configs!</li><li>Traefik allows me to automatically create HTTPS for each site. But I can’t make it work along with Cloudflare's service. It’s one or the other. I’m stuck, so I don’t use SSL at the moment.</li><li>Test ACME renewal</li></ul><h3 id="dns-load-balance-before-hitting-the-swarm-cluster">DNS load balance BEFORE hitting the swarm cluster</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Challenge — At the moment, Cloudflare points to ONE node. If this node crashes, all our sites go down!</li><li>Cloudflare is working on its load balancing solution but let's be proactive. See this ticket.</li><li>We need a health check to see if our 3 managers are healthy and do round-robin sticky sessions between them. If one manager is not healthy, the round-robin system shall stop sending traffic to this node. If node Leader 1 is down, the system shall point traffic to node Leader 2 or 3 (health check).</li></ul><h3 id="zero-downtime-deployments-with-rolling-upgrades">Zero-downtime deployments with rolling upgrades</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Will be fixed by the Docker team</li><li><a href="https://github.com/moby/moby/issues/30321">https://github.com/moby/moby/issues/30321</a></li></ul><h3 id="find-the-best-practice-to-update-each-node">Find the best practice to update each node</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>At the moment the Docker daemon needs to restart... 
and the DB goes down for 1-2 minutes</li></ul><h3 id="redirect-path-to-domain-com-web-">Redirect path to domain.com/web'/'</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Known issue with Traefik. See <a href="https://github.com/containous/traefik/issues/1123#issue-205597693">https://github.com/containous/traefik/issues/1123#issue-205597693</a></li><li>Should be fixed in Traefik 1.3. <a href="https://github.com/containous/traefik/pull/1638">See PR</a></li></ul><h3 id="deploying-servers-at-scale">Deploying servers at scale</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Build a Packer / Terraform routine to deploy new nodes (see also <a href="https://www.scaleway.com/developer/scw-builder/">SCW Builder</a>)</li><li>Minimize the manual processes (of running bash scripts) to set up Docker Swarm join / Gluster, UFW rules for private networks</li><li>Better use of Docker Machine so I can use eval more efficiently instead of switching between terminal windows.</li></ul><h3 id="cicd">CI/CD</h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Of course one day it will make sense to get there</li><li>I don't feel the need for this at the moment; the Docker workflow by itself is solid enough</li><li>Would be great to rebuild images every night</li></ul><p>Go back to<strong> <a href="#table-of-content">Table of content</a></strong></p><hr><!--kg-card-begin: html--><a id="get-involved"></a><!--kg-card-end: html--><h2 id="get-involved">Get involved</h2><p>If you have solid skills 🤓 with Docker Swarm, Linux bash and the gang* and you would love to help a startup launch 🔥 a solid project, I would love to get to know you 🍻. Buzz me 👋 on Twitter <a href="https://twitter.com/askpascalandy">@askpascalandy</a>. 
You can see the things that are done and the things we have to do <a href="https://firepress.org/en/technical-challenges-we-are-facing-now/">here</a>.</p><p>I’m looking for bright and caring people to join this <a href="https://firepress.org/en/tag/from-the-heart/">journey</a> with me.</p><p>To see how we roll (technically speaking) at FirePress, please check the post <a href="https://firepress.org/en/what-kind-of-back-end-drives-firepress/">What kind of Back-End drives FirePress</a>.</p><p>We are hosting between 30 and 60 websites / services at any given moment. Not so much at this point, as we are in the Beta phase. I’m looking to define an official SLA for our stack.</p><p>In short, we have <strong>hosting challenges</strong>. Think static websites and blog/CMS (Ghost) sites. This site is actually running within a container at <a href="https://firepress.org/en/">firepress.org/en/</a>.</p><p>Thanks in advance!<br>Pascal</p><hr><p>Go back to<strong> <a href="#table-of-content">Table of content</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Adding custom elements to Ghost themes]]></title><description><![CDATA[Here is a checklist for people maintaining their own Ghost themes based on themes we maintain at FirePress.
When a new release is available, just overwrite the theme, but not these parts. This avoids the pain of comparing each of the files and minimizes human error.]]></description><link>https://firepress.org/en/adding-custom-elements-to-ghost-themes/</link><guid isPermaLink="false">5c4261a8140c550006e8b58a</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Fri, 18 Jan 2019 23:30:58 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-4.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-4.jpg" alt="Adding custom elements to Ghost themes"><p>Here is a checklist for people maintaining their own Ghost themes based on themes <a href="https://github.com/topics/firepress-ghost-theme">we maintain at FirePress</a>.</p><p>When a <strong>new release</strong> is available, just overwrite the theme, but not these parts. This avoids the pain of comparing each of the files and minimizes human error.</p><!--kg-card-begin: markdown--><h3 id="1customfiles">1) Custom files</h3>
<pre><code>/partials/custom_footer.hbs
/partials/custom_header.hbs
/package.json
</code></pre>
<h3 id="2customdirectory">2) Custom directory</h3>
<pre><code>/assets/css_firepress/
</code></pre>
<h3 id="3customdefaulthbs">3) Custom default.hbs</h3>
<p>In <code>default.hbs</code>, we do reference:</p>
<pre><code>{{&gt;custom_header}}
{{&gt;custom_footer}}
custom &lt;footer class=&quot;site-foot&quot;&gt;
</code></pre>
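<p>A minimal sketch of the update flow this checklist implies: back up the custom parts, overwrite the theme with the new release, then restore them. The paths come from the checklist above; the staging directory and archive name are hypothetical.</p>

```shell
# Sketch: preserve custom parts across a theme update (paths from the checklist).
THEME=/tmp/demo-theme   # hypothetical theme directory
mkdir -p "$THEME/partials" "$THEME/assets/css_firepress"
printf 'my custom header\n' > "$THEME/partials/custom_header.hbs"

# 1) Back up the customized files and directory.
tar -C "$THEME" -cf /tmp/custom-parts.tar partials/custom_header.hbs assets/css_firepress

# 2) Overwrite the theme with the new release (simulated here by wiping the directory).
rm -rf "$THEME"; mkdir -p "$THEME"

# 3) Restore the custom parts on top of the new release.
tar -C "$THEME" -xf /tmp/custom-parts.tar
cat "$THEME/partials/custom_header.hbs"   # prints: my custom header
```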
<!--kg-card-end: markdown--><p>Then, follow <a href="https://trello.com/c/a6jfQeCp/138-global-update-for-live-demo-on-play-with-ghost-part-2">this checklist</a> available on our Roadmap.</p>]]></content:encoded></item><item><title><![CDATA[Using TMUX on Mac]]></title><description><![CDATA[In this post, we will describe how to use TMUX on mac and how to use shortcuts, commands, scripts, configurations and more.]]></description><link>https://firepress.org/en/using-tmux-on-mac/</link><guid isPermaLink="false">5c474d85140c550006e8b5cf</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Thu, 17 Jan 2019 17:07:00 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-5.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-5.jpg" alt="Using TMUX on Mac"><p>Here is a great video that shows how to use TMUX on mac and how to use shortcuts, commands, scripts, configurations and more.</p><!--kg-card-begin: markdown--><iframe width="560" height="315" src="https://www.youtube.com/embed/BHhA_ZKjyxo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The Definitive Cheatsheet for Docker Secrets]]></title><description><![CDATA[Use Docker secrets to avoid saving sensitive credentials within your image or passing them directly on the command line.]]></description><link>https://firepress.org/en/the-definitive-cheatsheet-for-docker-secrets/</link><guid isPermaLink="false">5bfc715d21198c0007a56ce3</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Tue, 09 May 2017 17:30:09 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-6.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h4 id="goal">Goal</h4>
<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-6.jpg" alt="The Definitive Cheatsheet for Docker Secrets"><p>Use Docker secrets to avoid saving sensitive credentials within your image or passing them directly on the command line.</p>
<p>Source: <a href="https://docs.docker.com/engine/swarm/secrets/#about-secrets">https://docs.docker.com/engine/swarm/secrets/#about-secrets</a></p>
<h4 id="hashtag">#hashtag:</h4>
<p>/ origin: 	RUN FROM THE MANAGER NODE<br><br>
/ on_node: 	RUN ON THE NODE WHERE THE SERVICE WAS DEPLOYED</p>
<h1 id="basicredisexample">Basic Redis example</h1>
<h4 id="1createasecretorigin">1) Create a secret #origin</h4>
<pre><code>echo &quot;mysecret1b1ee244779d3b7a90bc80be3721fac5f26b350480680&quot; | docker secret create SECRET_POSTGRES_ROOT -;
</code></pre>
<h4 id="2rmasecretorigin">2) RM a secret #origin</h4>
<pre><code>docker secret rm SECRET_POSTGRES_ROOT;
</code></pre>
<h4 id="3createademocontainerorigin">3) Create a demo container #origin</h4>
<pre><code>docker service create --name=&quot;redis&quot; --secret=&quot;SECRET_POSTGRES_ROOT&quot; redis:alpine;
</code></pre>
<h4 id="4afindthecontaineridon_node">4a) Find the container ID #on_node</h4>
<pre><code>docker ps --filter name=redis -q;
</code></pre>
<h4 id="4bensurethatsecret_postgres_rootisavailableon_node">4b) Ensure that 'SECRET_POSTGRES_ROOT' is available #on_node</h4>
<pre><code>docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets;
</code></pre>
<h4 id="4cshowvalueforsecret_postgres_rooton_node">4c) Show value for 'SECRET_POSTGRES_ROOT' #on_node</h4>
<pre><code>docker exec $(docker ps --filter name=redis -q) cat /run/secrets/SECRET_POSTGRES_ROOT;
</code></pre>
<h4 id="5removeasecretorigin">5) Remove a secret #origin</h4>
<pre><code>docker service update --secret-rm=&quot;SECRET_POSTGRES_ROOT&quot; redis
</code></pre>
<h4 id="ensureitsremoved">Ensure it's removed</h4>
<p>See Step 4c</p>
<h4 id="6addsecrettoanexistingcontainerorigin">6) Add secret to an existing container #origin</h4>
<pre><code>docker service update --secret-add=&quot;SECRET_POSTGRES_ROOT&quot; redis;
</code></pre>
<h4 id="ensureitsavailable">Ensure it's available</h4>
<p>See Step 4c</p>
<h4 id="7cleanupstufffromthisdemo">7) Clean up stuff from this demo</h4>
<pre><code>docker service rm redis;
docker secret rm SECRET_POSTGRES_ROOT;
</code></pre>
<br>
<hr>
<h1 id="wordpressandmysqlexample">WordPress and MySQL example</h1>
<h4 id="createarandompasswordorigin">Create a random password #origin</h4>
<pre><code>openssl rand -base64 20 | docker secret create mysql_password -;
openssl rand -base64 20 | docker secret create mysql_root_password -;
docker secret ls;
</code></pre>
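<p>For reference, <code>openssl rand -base64 20</code> encodes 20 random bytes as base64, which always yields 28 characters (the trailing newline is piped into the secret as well). A quick local check, no Docker needed:</p>

```shell
# 20 random bytes -> ceil(20/3)*4 = 28 base64 characters.
pw=$(openssl rand -base64 20)   # command substitution strips the trailing newline
echo "${#pw}"                   # prints: 28
```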
<h4 id="createnetworkorigin">Create network #origin</h4>
<pre><code>docker network create -d overlay mysql_private;
</code></pre>
<h4 id="createmysqlservice">Create mysql service</h4>
<pre><code>docker service create \
 --name mysql \
 --replicas 1 \
 --network mysql_private \
 --mount type=volume,source=mysql-data,destination=/var/lib/mysql \
 --secret source=mysql_root_password,target=mysql_root_password \
 --secret source=mysql_password,target=mysql_password \
 -e MYSQL_ROOT_PASSWORD_FILE=&quot;/run/secrets/mysql_root_password&quot; \
 -e MYSQL_PASSWORD_FILE=&quot;/run/secrets/mysql_password&quot; \
 -e MYSQL_USER=&quot;wordpress&quot; \
 -e MYSQL_DATABASE=&quot;wordpress&quot; \
 mysql:latest;

docker service ls;
docker service ps mysql;
</code></pre>
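<p>Why the <code>MYSQL_*_FILE</code> variables work: the official <code>mysql</code> image's entrypoint reads the password from the named file instead of taking it straight from the environment, so the value never shows up in <code>docker inspect</code>. A local simulation of that logic (the file path here is just a stand-in for <code>/run/secrets/mysql_root_password</code>):</p>

```shell
# Simulate what the image entrypoint does with MYSQL_ROOT_PASSWORD_FILE.
secret_file=$(mktemp)
printf 'supersecret' > "$secret_file"        # stands in for the mounted secret
MYSQL_ROOT_PASSWORD_FILE="$secret_file"
# The entrypoint resolves the *_FILE variable to the file's contents:
MYSQL_ROOT_PASSWORD="$(cat "$MYSQL_ROOT_PASSWORD_FILE")"
echo "$MYSQL_ROOT_PASSWORD"                  # prints: supersecret
rm -f "$secret_file"
```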
<h4 id="createwordpressservice">Create wordpress service</h4>
<pre><code>docker service create \
     --name wordpress \
     --replicas 1 \
     --network mysql_private \
     --publish 80:80 \
     --mount type=volume,source=wpdata,destination=/var/www/html \
     --secret source=mysql_password,target=wp_db_password,mode=0400 \
     -e WORDPRESS_DB_USER=&quot;wordpress&quot; \
     -e WORDPRESS_DB_PASSWORD_FILE=&quot;/run/secrets/wp_db_password&quot; \
     -e WORDPRESS_DB_HOST=&quot;mysql:3306&quot; \
     -e WORDPRESS_DB_NAME=&quot;wordpress&quot; \
     wordpress:latest;
</code></pre>
<p>/// mode=0400 makes the secret readable by its owner only; it is not group- or world-readable.</p>
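<p>To see what mode 0400 means in practice, here is a local demonstration on a plain file (no Docker involved; <code>stat -c</code> is the GNU coreutils form):</p>

```shell
# 0400 = read-only for the owning user; no access for group or others.
f=$(mktemp)
chmod 0400 "$f"
ls -l "$f"               # shows: -r--------
stat -c '%a' "$f"        # prints: 400
rm -f "$f"
```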
<pre><code>docker service ls;
docker service ps wordpress;
</code></pre>
<h4 id="logonthewordpresssite">Log on the wordpress site</h4>
<p>127.0.0.1:80</p>
<br>
<hr>
<h1 id="rotateasecret">Rotate a secret</h1>
<h4 id="createa2ndrandompasswordorigin">Create a 2ND random password #origin</h4>
<pre><code>openssl rand -base64 44 | docker secret create mysql_password_v2 -;
docker secret ls;
</code></pre>
<h4 id="seewhatisthepasswordwejustcreated">See what is the password we just created</h4>
<pre><code>docker service create --name=&quot;checkthis&quot; --secret=&quot;mysql_password_v2&quot; alpine sleep 20;
echo; sleep 3;
docker exec $(docker ps --filter name=checkthis -q) cat /run/secrets/mysql_password_v2;
echo; sleep 3;
docker service rm checkthis; echo;
</code></pre>
<h4 id="updatethemysqlservice">Update the MySQL service</h4>
<pre><code>docker service update \
--secret-rm mysql_password mysql;

docker service update \
--secret-add source=mysql_password,target=old_mysql_password \
--secret-add source=mysql_password_v2,target=mysql_password \
mysql;
</code></pre>
<p>/// mysql is restarting</p>
<pre><code>docker service ls;
docker service ps mysql;
</code></pre>
<p>/// Even though the MySQL service has access to both the old and new secrets now, the MySQL password for the WordPress user has not yet been changed.</p>
<h4 id="updatemysqlpasswordforthewordpressuserusingthemysqladmin">Update MySQL password for the wordpress user using the mysqladmin</h4>
<pre><code>docker exec $(docker ps --filter name=mysql -q) \
bash -c 'mysqladmin --user=wordpress --password=&quot;$(&lt; /run/secrets/old_mysql_password)&quot; password &quot;$(&lt; /run/secrets/mysql_password)&quot;';
</code></pre>
<h4 id="updatethewordpressservicetousethenewpassword">Update the wordpress service to use the new password</h4>
<pre><code>docker service update \
--secret-rm mysql_password \
--secret-add source=mysql_password_v2,target=wp_db_password,mode=0400 \
wordpress;

</code></pre>
<p>/// wordpress is restarting</p>
<pre><code>docker service ls;
docker service ps wordpress;
</code></pre>
<p>/// Verify that WordPress works by browsing to <a href="http://localhost/">http://localhost/</a> (port 80, as published above) on any swarm node again</p>
<h4 id="revokeaccesstotheoldsecretfromthemysqlserviceandremovetheoldsecretfromdocker">Revoke access to the old secret from the MySQL service and remove the old secret from Docker.</h4>
<pre><code>docker service update \
--secret-rm mysql_password \
mysql;

docker secret rm mysql_password;
</code></pre>
<h4 id="cleanupstufffromthisdemo">Clean up stuff from this demo</h4>
<pre><code>docker service rm wordpress mysql;

docker volume rm mysql-data wpdata;

docker secret rm mysql_password_v2 mysql_root_password;
</code></pre>
<p>/// That was easy!</p>
<p>See ya soon!<br>
Pascal Andy | <a href="https://twitter.com/_pascalandy">Twitter</a></p>
<hr>
<h1 id="iclassfafaspinnerfapulsefafwi"><i class="fa fa-spinner fa-pulse fa-fw"></i></h1>
<p>P.S. If you have solid skills 🤓 with Docker Swarm, Linux and the <a href="https://firepress.org/en/what-kind-of-back-end-drives-firepress/">things mentioned here</a> and you would love 💚 to help a startup launch 🔥 a solid project, I would love to get to know you 🍻. Buzz me 👋 on Twitter <a href="https://twitter.com/askpascalandy">@askpascalandy</a>. You can see the things that are done and the things we have to do <a href="https://firepress.org/en/looking-for-a-geek-that-kicks-ass-with-docker/">here</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The complete guide to attach a Docker volume with Minio on your Docker Swarm Cluster]]></title><description><![CDATA[I want to share/sync a common folder between 4 nodes.
You know, like Dropbox, but without a 3rd-party server of course.
Let's see if (Minio Erasure Code) can help.]]></description><link>https://firepress.org/en/the-complete-guide-to-attach-a-docker-volume-with-minio-on-your-docker-swarm-cluster/</link><guid isPermaLink="false">5bfc715d21198c0007a56ce2</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Tue, 09 May 2017 17:28:00 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-7.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="thegoal">The goal</h3>
<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-7.jpg" alt="The complete guide to attach a Docker volume with Minio on your Docker Swarm Cluster"><p>I want to share/sync a common folder between 4 nodes.<br>
You know, like Dropbox, but without a 3rd-party server of course.<br>
Let's see if <a href="https://docs.minio.io/docs/minio-erasure-code-quickstart-guide">(Minio Erasure Code)</a> can help.</p>
<p><a href="https://github.com/NitishT/minio/tree/bf04b7e31bc93d1c9b958ec6f857b51af4bd9e2b/docs/orchestration/docker-swarm">This doc</a> is not on Minio website yet but it really helped me.</p>
<h3 id="createthefoldertosharebetweenour4nodes">Create the folder to share between our 4 nodes:</h3>
<p>Run this on all nodes:</p>
<pre><code>rm -rf /mnt/minio;
mkdir -p /mnt/minio/dev-e;
cd /mnt/minio/dev-e; ls -AlhF;
</code></pre>
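<p>To avoid repeating this on every node by hand, the command can be driven over SSH from one machine. The node names below are assumptions, and it is shown as a dry run (echoing the commands instead of executing them):</p>

```shell
# Dry run: print the per-node command instead of executing it.
# To actually run it, replace the echo with:
#   ssh "$node" 'rm -rf /mnt/minio; mkdir -p /mnt/minio/dev-e'
for node in node1 node2 node3 node4; do
  echo "ssh $node 'rm -rf /mnt/minio; mkdir -p /mnt/minio/dev-e'"
done
```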
<p>About my path <strong>SOURCE</strong>:</p>
<ul>
<li><code>mnt</code> is for things shared</li>
<li><code>minio</code> is the driver or the applications used to share</li>
<li><code>dev-e</code> is my cluster ID. It could be <code>prod-a</code>, <code>prod-b</code>, <code>dev-b</code> ...</li>
</ul>
<h3 id="network">Network</h3>
<p>Run this on the leader node:</p>
<pre><code>docker network create --driver overlay ntw_minio
</code></pre>
<h3 id="deploying4instancesminioerasurecode">Deploying 4 instances (Minio Erasure Code)</h3>
<p>Run this on the leader node:</p>
<p>Create your own MINIO_ACCESS_KEY and MINIO_SECRET_KEY values!</p>
<ul>
<li>Ensure access key = 5 to 20 characters</li>
<li>Ensure secret key = 8 to 40 characters</li>
</ul>
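<p>One way to generate keys that respect those limits, with a quick length check (the exact lengths here are my choice, not a Minio requirement):</p>

```shell
# Access key: 20 hex characters (within the 5-20 limit).
access_key=$(openssl rand -hex 10)
# Secret key: start from ~40 base64 characters, strip shell/URL-hostile
# characters, cap at 40 (within the 8-40 limit).
secret_key=$(openssl rand -base64 30 | tr -d '/+=\n' | cut -c1-40)
echo "access=${#access_key} secret=${#secret_key}"
```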
<pre><code>
### Start service 1
CTN_NAME=minio_N01
SVR_NAME=node1
ENV_PORT=9001
\
docker service create \
--name &quot;$CTN_NAME&quot; \
--network &quot;ntw_minio&quot; \
--replicas &quot;1&quot; \
-p &quot;$ENV_PORT&quot;:9000 \
--constraint node.hostname==&quot;$SVR_NAME&quot; \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e &quot;MINIO_ACCESS_KEY=A5a0a87b725552daXd&quot; \
-e &quot;MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90&quot; \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export


### Start service 2
CTN_NAME=minio_N11
SVR_NAME=node2
ENV_PORT=9002
\
docker service create \
--name &quot;$CTN_NAME&quot; \
--network &quot;ntw_minio&quot; \
--replicas &quot;1&quot; \
-p &quot;$ENV_PORT&quot;:9000 \
--constraint node.hostname==&quot;$SVR_NAME&quot; \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e &quot;MINIO_ACCESS_KEY=A5a0a87b725552daXd&quot; \
-e &quot;MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90&quot; \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export

### Start service 3
CTN_NAME=minio_N12
SVR_NAME=node3
ENV_PORT=9003
\
docker service create \
--name &quot;$CTN_NAME&quot; \
--network &quot;ntw_minio&quot; \
--replicas &quot;1&quot; \
-p &quot;$ENV_PORT&quot;:9000 \
--constraint node.hostname==&quot;$SVR_NAME&quot; \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e &quot;MINIO_ACCESS_KEY=A5a0a87b725552daXd&quot; \
-e &quot;MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90&quot; \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export


### Start service 4
CTN_NAME=minio_N13
SVR_NAME=node4
ENV_PORT=9004
\
docker service create \
--name &quot;$CTN_NAME&quot; \
--network &quot;ntw_minio&quot; \
--replicas &quot;1&quot; \
-p &quot;$ENV_PORT&quot;:9000 \
--constraint node.hostname==&quot;$SVR_NAME&quot; \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e &quot;MINIO_ACCESS_KEY=A5a0a87b725552daXd&quot; \
-e &quot;MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90&quot; \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export

</code></pre>
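<p>The four <code>docker service create</code> calls above differ only in the service name, the pinned hostname, and the published port, so they can be generated from a small table instead of copy-pasted. Sketched here as a dry run that echoes abbreviated commands rather than executing them:</p>

```shell
# name/node/port triplets taken from the four blocks above.
for spec in 'minio_N01 node1 9001' 'minio_N11 node2 9002' \
            'minio_N12 node3 9003' 'minio_N13 node4 9004'; do
  set -- $spec   # word-split the triplet into $1 $2 $3
  echo "docker service create --name $1 --constraint node.hostname==$2 -p $3:9000 ... minio/minio server ..."
done
```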
<h3 id="dockerservicels">docker service ls</h3>
<pre><code>docker service ps minio_N01

ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE          ERROR  PORTS
bx6ayw4hw43q  minio1.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node1  Running        Running 2 minutes ago
[node1] (local) root@10.0.17.3 /mnt/minio/dev-d

docker service ps minio_N11
ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
ommy53chajmh  minio2.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node2  Running        Running 51 seconds ago
[node1] (local) root@10.0.17.3 /mnt/minio/dev-d

docker service ps minio_N12
ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
iykg3oeo56mh  minio3.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node3  Running        Running 33 seconds ago
[node1] (local) root@10.0.17.3 /mnt/minio/dev-d

docker service ps minio_N13
ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
wmf3aim5f3gr  minio4.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node4  Running        Running 21 seconds ago
[node1] (local) root@10.0.17.3 /mnt/minio/dev-d
</code></pre>
<h3 id="logsfromminio1">logs from minio1</h3>
<pre><code>ctn_NAME=minio_N01 &amp;&amp; \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) &amp;&amp; \
docker logs --follow $ctnID

ctn_NAME=minio_N11 &amp;&amp; \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) &amp;&amp; \
docker logs --follow $ctnID

ctn_NAME=minio_N12 &amp;&amp; \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) &amp;&amp; \
docker logs --follow $ctnID

ctn_NAME=minio_N13 &amp;&amp; \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) &amp;&amp; \
docker logs --follow $ctnID


Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 9s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 24s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 40s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 56s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 1m9s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 1m29s)
Disk minio4:9000:/minio/storage/export is still unreachable, with error disk not found
Initializing data volume for first time. Waiting for other servers to come online (elapsed 1m51s)

Initializing data volume for the first time.
[01/04] http://minio1:9000/export - 10 GiB online
[02/04] http://minio2:9000/export - 10 GiB online
[03/04] http://minio3:9000/export - 10 GiB online
[04/04] http://minio4:9000/export - 10 GiB online

Endpoint:  http://10.255.0.8:9000  http://10.255.0.7:9000  http://172.19.0.3:9000  http://10.0.1.3:9000  http://10.0.1.2:9000  http://127.0.0.1:9000
AccessKey: A18d29a3a0256b1e606
SecretKey: Sdf128a527d40fd6811df3f0a72136b9e9201
Region:    us-east-1
SQS ARNs:  &lt;none&gt;

Browser Access:
   http://10.255.0.8:9000  http://10.255.0.7:9000  http://172.19.0.3:9000  http://10.0.1.3:9000  http://10.0.1.2:9000  http://127.0.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://10.255.0.8:9000 A18d29a3a0256b1e606 Sdf128a527d40fd6811df3f0a72136b9e9201

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide

Drive Capacity: 18 GiB Free, 20 GiB Total
Status:         4 Online, 0 Offline. We can withstand [2] more drive failure(s).
</code></pre>
<h3 id="status1">Status 1)</h3>
<p>The services are running fine.</p>
<h3 id="createabucket">Create a bucket</h3>
<ul>
<li>Open a new tab on your browser</li>
<li>Go to: http://ip10_0_25_6-9001.play-with-docker.com/minio</li>
<li>Enter credentials</li>
<li>Create bucket 'tester'</li>
<li>Upload a picture 'animated-good-job.gif' from the browser</li>
</ul>
<p>On your 4 nodes, check if the file is there:</p>
<pre><code>$ cd /mnt/minio/dev-e/tester; ls -AlhF;
total 0
drwxr-xr-x    2 root     root          35 Feb 14 00:31 animated-good-job.gif/
[node1] (local) root@10.0.25.3 /mnt/minio/dev-d/tester
</code></pre>
<h3 id="status2">Status 2)</h3>
<p>When uploading a file from the web GUI, all nodes sync the files as expected. Good!</p>
<h3 id="22testingfilesharingbycreatingafilefromthenodes">2/2 Testing file sharing by creating a file from the nodes</h3>
<pre><code>#### from node1, Create dummy files (unit test)
FROM_NODE=node1; \
FILE_SIZE=11M; \
\
LENGTH=&quot;8&quot;; \
RAND_STRING=null; \
STEP1=$((RANDOM%975+211)); \
STEP2=$(openssl rand -base64 &quot;$STEP1&quot;); \
STEP3=$(echo &quot;$STEP2&quot; | shasum -a 512 | head -c &quot;$LENGTH&quot;); echo; \
RAND_STRING=&quot;$STEP3&quot;; STEP1=&quot;null&quot;; STEP2=&quot;null&quot;; STEP3=&quot;null&quot;; \
echo &quot;$RAND_STRING&quot;; echo; \
\
cd /mnt/minio/dev-e/tester; \
echo &quot;Create a dummy text file:&quot;; echo; \
pwd; ls -AlhF; du -sh; echo; \
WHEN=&quot;$(date +%Y-%m-%d_%H-%M-%S)&quot;; \
echo &quot;Created from $FROM_NODE - $WHEN&quot; &gt;&gt; &quot;$FROM_NODE&quot;_&quot;$RAND_STRING&quot;.txt; \
pwd; ls -AlhF; du -sh; echo; cat &quot;$FROM_NODE&quot;_&quot;$RAND_STRING&quot;.txt; echo; echo; \
\
pwd; ls -AlhF; du -sh; echo; \
WHEN=&quot;$(date +%Y-%m-%d_%H-%M-%S)&quot;; \
dd if=/dev/zero of=&quot;$FROM_NODE&quot;_&quot;$RAND_STRING&quot;.dat  bs=$FILE_SIZE  count=1; \
pwd; ls -AlhF; du -sh; echo; \
watch -d -n 1 ls -AlhF;
</code></pre>
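<p>That one-liner is dense; the same logic fits into two small functions. A sketch, not the exact script we run: <code>rand_suffix</code> swaps <code>openssl rand</code> for <code>/dev/urandom</code>, and the target directory and file size become parameters instead of the hard-coded mount path.</p>

```shell
#!/bin/sh
# rand_suffix [LENGTH]
# Emit a short random string: random bytes -> hash -> first LENGTH hex chars,
# the same idea as the RANDOM/openssl/shasum pipeline above.
rand_suffix() {
  length="${1:-8}"
  head -c 64 /dev/urandom | sha1sum | head -c "$length"
}

# make_dummy_files NODE DIR [SIZE]
# Create the .txt marker and the zero-filled .dat blob used as a unit test.
make_dummy_files() {
  node="$1"; dir="$2"; size="${3:-11M}"
  suffix="$(rand_suffix 8)"
  echo "Created from $node - $(date +%Y-%m-%d_%H-%M-%S)" >> "$dir/${node}_${suffix}.txt"
  dd if=/dev/zero of="$dir/${node}_${suffix}.dat" bs="$size" count=1 2>/dev/null
}
```

Usage, matching the walkthrough: `make_dummy_files node1 /mnt/minio/dev-d/tester 11M`.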
<p>Then ...</p>
<pre><code>#### from node2, Create dummy files (unit test)
FROM_NODE=node2; \
FILE_SIZE=12M; \
\
LENGTH=&quot;8&quot;; \
RAND_STRING=null; \
STEP1=$((RANDOM%975+211)); \
STEP2=$(openssl rand -base64 &quot;$STEP1&quot;); \
STEP3=$(echo &quot;$STEP2&quot; | shasum -a 512 | head -c &quot;$LENGTH&quot;); echo; \
RAND_STRING=&quot;$STEP3&quot;; STEP1=&quot;null&quot;; STEP2=&quot;null&quot;; STEP3=&quot;null&quot;; \
echo &quot;$RAND_STRING&quot;; echo; \
\
cd /mnt/minio/dev-d/tester; \
echo &quot;Create a dummy text file:&quot;; echo; \
pwd; ls -AlhF; du -sh; echo; \
WHEN=&quot;$(date +%Y-%m-%d_%H-%M-%S)&quot;; \
echo &quot;Created from $FROM_NODE - $WHEN&quot; &gt;&gt; &quot;$FROM_NODE&quot;_&quot;$RAND_STRING&quot;.txt; \
pwd; ls -AlhF; du -sh; echo; cat &quot;$FROM_NODE&quot;_&quot;$RAND_STRING&quot;.txt; echo; echo; \
\
pwd; ls -AlhF; du -sh; echo; \
WHEN=&quot;$(date +%Y-%m-%d_%H-%M-%S)&quot;; \
dd if=/dev/zero of=&quot;$FROM_NODE&quot;_&quot;$RAND_STRING&quot;.dat  bs=$FILE_SIZE  count=1; \
pwd; ls -AlhF; du -sh; echo; \
watch -d -n 1 ls -AlhF;
</code></pre>
<h4 id="fromnode3createdummyfilesunittest">from node3, Create dummy files (unit test)</h4>
<p>You get the pattern at this point :)</p>
<h4 id="fromnode4createdummyfilesunittest">from node4, Create dummy files (unit test)</h4>
<p>You get the pattern at this point :)</p>
<h3 id="status3">Status 3)</h3>
<p>Files are NOT SYNCED when they are created from the nodes. Is this normal?</p>
<h3 id="askingforhelponslack">Asking for help on Slack</h3>
<p><a href="https://minio.slack.com/archives/general/p1487034715005607">Original conversation is here.</a></p>
<p>Hello folks!</p>
<p>Regarding Minio Erasure Code Mode,<br>
I want to share/sync a common folder between 4 nodes using Erasure Code Mode.<br>
You know, like Dropbox (but without a 3rd-party main server, of course).</p>
<p>I spent many hours testing this setup, and this is my conclusion:</p>
<ul>
<li>When uploading a file from the web GUI, all nodes sync the files as expected. Good!</li>
<li>But files are NOT SYNCED when they are created from the nodes. Damn :-/</li>
</ul>
<p>May I ask for your help here?<br>
<a href="https://github.com/minio/minio/issues/3713#issuecomment-279573366">https://github.com/minio/minio/issues/3713#issuecomment-279573366</a></p>
<p>Cheers!</p>
<h3 id="answersonslack">Answers on Slack!</h3>
<p>y4m4b4 [8:18 PM]<br>
mounting a common DIR you can either use MinFS or S3FS</p>
<p>[8:18]<br>
which would mount the relevant bucket on the nodes..</p>
<p>pascalandy [8:18 PM]<br>
OK tell me about it :)))</p>
<p>y4m4b4 [8:18 PM]<br>
<a href="https://github.com/minio/minfs#docker-simple">https://github.com/minio/minfs#docker-simple</a><br>
minio/minfs: A network filesystem client to connect to Minio and Amazon S3 compatible cloud storage servers<br>
minfs - A network filesystem client to connect to Minio and Amazon S3 compatible cloud storage servers</p>
<p>all you need to do is this..</p>
<p>pascalandy [8:18 PM]<br>
OMG!<br>
You guys are doing this as well?!<br>
You saved the day!</p>
<h3 id="themissingpartinstallthevolumedriver">The missing part - Install the volume driver</h3>
<p><a href="https://github.com/minio/minfs">https://github.com/minio/minfs</a></p>
<pre><code>docker plugin install minio/minfs
</code></pre>
<h3 id="dockervolumecreate">docker volume create</h3>
<pre><code>docker volume create -d minio/minfs \
--name bucket-dev-e \
-o endpoint=http://ip10_0_23_3-9001.play-with-docker.com/ \
-o access-key=A5a0a87b725552daXd \
-o secret-key=369f5e7b4a41e25452c353D629a24c372b62c90 \
-o bucket=bucket-dev-e

docker volume ls
</code></pre>
<h3 id="testingthevolumewithinacontainer">Testing the volume within a container</h3>
<pre><code>docker run -d --name nginxtest1 -p 80:80 -v bucket-dev-e:/usr/share/nginx/html:ro nginx
</code></pre>
<h3 id="status4">Status 4)</h3>
<p>By using our docker volume <code>bucket-dev-e</code> we can mount the bucket into any container. Very good!</p>
<h3 id="usingsubdirectoriesfromabucket">Using sub directories from a bucket.</h3>
<p>This part is work in progress. See <a href="https://github.com/minio/minfs/issues/20">https://github.com/minio/minfs/issues/20</a></p>
<p>For all details about my setup, please check my post:<br>
<a href="http://blog.pascalandy.com/the-complete-guide-to-attach-a-docker-volume-with-minio-on-your-docker-swarm-cluster/">The complete guide to attach a Docker volume with Minio on your Docker Swarm Cluster</a></p>
<p>— — —</p>
<p>Let’s say that my Minio bucket is named <code>bucket-dev-e</code>.<br>
I mounted it here <code>/mnt/minio00000/dev-e</code> using <code>docker volume create …</code></p>
<p>Let's start one blog (This works perfectly):</p>
<pre><code>docker run --name some-ghost -v bucket-dev-e:/var/lib/ghost/content/images ghost
</code></pre>
<p>What if I need to run multiple websites:</p>
<pre><code>docker run --name some-ghost -v bucket-dev-e/ghost/site1/images:/var/lib/ghost/content/images ghost

docker run --name some-ghost -v bucket-dev-e/ghost/site2/images:/var/lib/ghost/content/images ghost
</code></pre>
<p><strong>My challenge is …</strong> the commands above do not work. By default, we cannot specify a subpath like <code>bucket-dev-e/ghost/site2/images</code> from a Docker volume.<br>
What can we do? (I DON’T KNOW THE ANSWER YET)</p>
<p>I don't want to use one Docker volume for each of the 100x (potentially 1000x) sites I’m hosting.</p>
<p>Any other ideas?</p>
<h3 id="conclusion">Conclusion</h3>
<p>By using Minio along with its minfs driver (<a href="https://github.com/minio/minfs">https://github.com/minio/minfs</a>), we get the best of both worlds:<br>
a solid object store, plus Docker volumes connected to that storage. Any container can access a bucket created in Minio.</p>
<p>Another great thing about Minio is that you don't have to pre-allocate disk space (as with GlusterFS, Infinit, Portworx, etc.). Minio uses whatever disk space you have.</p>
<p>You can also easily create another data store on hyper.sh and rock it to the world. It's been a long journey, and this will now help me move to production.</p>
<p>Cheers!<br>
Pascal Andy | <a href="https://twitter.com/askpascalandy">Twitter</a><br>
</p>
<hr>
<p>Don't be shy to buzz me 👋 on Twitter <a href="https://twitter.com/askpascalandy">@askpascalandy</a>. Cheers!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Best practices for getting code into a container]]></title><description><![CDATA[The question is: What are the best practices for getting code into a container (git clone vs. copy vs. data container)? My answer: I use wget on the GitHub repo (branch master). Hope it helps!]]></description><link>https://firepress.org/en/best-practices-for-getting-code-into-a-container/</link><guid isPermaLink="false">5bfc715d21198c0007a56ce1</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Tue, 09 May 2017 17:26:00 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-8.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-8.jpg" alt="Best practices for getting code into a container"><p>The <a href="https://forums.docker.com/t/best-practices-for-getting-code-into-a-container-git-clone-vs-copy-vs-data-container/4077/14">question is</a>: What are the best practices for getting code into a container (git clone vs. copy vs. data container)?</p>
<p>My answer: I use wget on the GitHub repo (branch master). Hope it helps!</p>
<pre><code>FROM alpine:3.5


##############################################################################
# Install App
##############################################################################


# $APP, $DIR_THEMES and $GHOST_SOURCE are assumed to be set via ENV earlier in the Dockerfile
WORKDIR $APP

# Note: the build tools stay installed here; they are removed in the Clean up section below
RUN apk update &amp;&amp; \
	apk upgrade &amp;&amp; \
	apk --no-cache add tar curl tini \
	&amp;&amp; apk --no-cache add --virtual devs gcc make python wget unzip ca-certificates \
	&amp;&amp; npm cache clean \
	&amp;&amp; rm -rf /tmp/npm*


##############################################################################
# PART ONE
# Install/copy FirePress_Klimax into casper from Github
##############################################################################
echo; echo; echo; \
echo &quot;PART ONE ...&quot;; echo; \

THEME_NAME_FROM=&quot;FirePress_Klimax&quot;; \
THEME_NAME_INTO=&quot;casper&quot;; \

GIT_URL=&quot;https://github.com/firepress-org/$THEME_NAME_FROM/archive/master.zip&quot;; \

DIR_FROM=&quot;$DIR_THEMES/$THEME_NAME_FROM&quot;; \
DIR_INTO=&quot;$DIR_THEMES/$THEME_NAME_INTO&quot;; \

cd $DIR_THEMES; \
wget --no-check-certificate -O master.zip $GIT_URL; \
echo; echo; echo &quot;List (12) $DIR_THEMES ...&quot;; echo; ls -AlhF $DIR_THEMES; du -sh; echo; \

unzip $DIR_THEMES/master.zip; \
echo; echo; echo &quot;List (13) $DIR_THEMES ...&quot;; echo; ls -AlhF $DIR_THEMES; du -sh; echo; \

rm $DIR_THEMES/master.zip; \
echo; echo; echo &quot;List (14) $DIR_THEMES ...&quot;; echo; ls -AlhF $DIR_THEMES; du -sh; echo; \

mv $THEME_NAME_FROM-master $THEME_NAME_INTO; \
echo; echo; echo &quot;List (15) $DIR_THEMES ...&quot;; echo; ls -AlhF $DIR_THEMES; du -sh; echo; \

cd $GHOST_SOURCE; \
echo; echo; echo &quot;List (16) $DIR_INTO ...&quot;; echo; ls -AlhF $DIR_INTO; du -sh; echo; \

echo; echo; echo &quot;Show $THEME_NAME_FROM version (17) ($DIR_INTO)&quot;; echo; \
cat $DIR_INTO/package.json | grep &quot;version&quot;; \


##############################################################################
# PART TWO
# Install Theme: XYZ
##############################################################################

# ... future themes
# ...
# ...


##############################################################################
# Clean up
##############################################################################
rm -rf /var/cache/apk/*; \
apk del wget unzip ca-certificates; \
echo &quot;End of /RUN&quot;; echo; echo; echo;
</code></pre>
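<p>The theme-install part of that RUN boils down to: fetch the branch archive, unpack it, and rename <code>&lt;repo&gt;-master</code> to the directory name Ghost expects. Here is a minimal sketch with fetch and unpack split into separate functions; it also assumes GitHub's <code>.tar.gz</code> archive endpoint instead of the zip used above, which removes the need for <code>unzip</code> in the image:</p>

```shell
#!/bin/sh
set -e

# fetch_theme ORG/REPO OUT_FILE
# GitHub serves branch archives as .tar.gz as well, so tar alone is enough.
fetch_theme() {
  org_repo="$1"; out="$2"
  wget -q -O "$out" "https://github.com/$org_repo/archive/master.tar.gz"
}

# unpack_theme ARCHIVE THEMES_DIR REPO NEW_NAME
# Extract the archive into THEMES_DIR and rename "<REPO>-master" to NEW_NAME.
unpack_theme() {
  archive="$1"; themes_dir="$2"; repo="$3"; new_name="$4"
  tar -xzf "$archive" -C "$themes_dir"
  mv "$themes_dir/${repo}-master" "$themes_dir/$new_name"
}

# Same names as the Dockerfile above:
#   fetch_theme firepress-org/FirePress_Klimax /tmp/theme.tar.gz
#   unpack_theme /tmp/theme.tar.gz "$DIR_THEMES" FirePress_Klimax casper
```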
<hr>
<h1 id="iclassfafaspinnerfapulsefafwi"><i class="fa fa-spinner fa-pulse fa-fw"></i></h1>
<p>P.S. If you have solid skills 🤓 with Docker Swarm, Linux and the <a href="https://firepress.org/en/what-kind-of-back-end-drives-firepress/">things mentioned here</a>, and you would love 💚 to help a startup launch 🔥 a solid project, I would love to get to know you 🍻. Buzz me 👋 on Twitter <a href="https://twitter.com/askpascalandy">@askpascalandy</a>. You can see what is done and what remains to do <a href="https://firepress.org/en/looking-for-a-geek-that-kicks-ass-with-docker/">here</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[What kind of Back-End drives FirePress]]></title><description><![CDATA[If you are curious, like I am, about how things work, I bet you will enjoy this post. I'll share the core elements that make FirePress an actual product you can use to grow your brand on the Web.]]></description><link>https://firepress.org/en/what-kind-of-back-end-drives-firepress/</link><guid isPermaLink="false">5bfc715d21198c0007a56cda</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Tue, 25 Apr 2017 20:59:00 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-9.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-9.jpg" alt="What kind of Back-End drives FirePress"><p>If you are curious, like I am, about how things work, I bet you will enjoy this post. I'll share the core elements that make FirePress an actual product you can use to grow your brand on the Web.</p><p>If you are not sure what FirePress is at this point, please check out our <a href="https://firepress.org/en/tag/about/">About section</a> first.</p><p>Overall, we can easily deploy new clusters thanks to the way we design our backend.
We even aim to completely replace our production cluster(s) with new ones every 3-4 months.</p><p>We are cloud agnostic and there is no vendor lock-in at FirePress.</p><h3 id="services-in-production-application-deployed-via-containers-"><strong>Services in production (applications deployed via containers)</strong></h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Ghost is the core application that clients use to manage their websites (Node.js on the backend)</li><li>Caddy serves landing pages</li><li>Traefik proxy</li><li>Let's Encrypt</li><li>Portainer</li><li>Resilio</li><li>rClone</li><li>ELK (Elasticsearch, Logstash, Kibana)</li><li>Prometheus</li><li>Grafana</li></ul><h3 id="devops-high-level"><strong>DevOps | High level</strong></h3><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>Digital Ocean, Linux servers on Ubuntu 16.xx</li><li>Cloud vendor (undisclosed at the moment)</li><li>Microservices architecture</li><li>Docker Swarm orchestrator (Moby project)</li><li>Well, you already understand at this point that we deploy things in containers.</li><li>Container OS: Alpine (mainly)</li><li>Bash scripts wrapping Docker commands</li><li>Python</li><li>Cloudflare</li><li>Backblaze B2 (object store)</li></ul>]]></content:encoded></item><item><title><![CDATA[Open source]]></title><description><![CDATA[Our official account on GitHub is https://github.com/firepress-org/]]></description><link>https://firepress.org/en/open-source/</link><guid isPermaLink="false">5bfc715d21198c0007a56ccf</guid><category><![CDATA[Under the hood]]></category><dc:creator><![CDATA[FirePress Team]]></dc:creator><pubDate>Mon, 11 Jul 2016 02:10:00 GMT</pubDate><media:content url="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-10.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://firepress.org/en/content/images/2020/12/firepress-rg-tag-under-the-hood-v1-10.jpg" alt="Open source"><p>Our official account on GitHub is <a 
href="https://github.com/firepress-org/">https://github.com/firepress-org/</a></p>]]></content:encoded></item></channel></rss>