The goal

I want to share/sync a common folder between 4 nodes.
You know, like Dropbox, but without a third-party server, of course.
Let's see if Minio Erasure Code can help.

This doc is not on the Minio website yet, but it really helped me.

Create the folder to share between our 4 nodes:

Run this on all nodes:

rm -rf /mnt/minio;
mkdir -p /mnt/minio/dev-e;
cd /mnt/minio/dev-e; ls -AlhF;

About my source path:

  • mnt is for shared things
  • minio is the driver or application used to share
  • dev-e is my cluster ID. It could be prod-a, prod-b, dev-b ...

Network

Run this on the leader node:

docker network create --driver overlay ntw_minio

Deploying 4 instances (Minio Erasure Code)

Run this on the leader node:

Create your own MINIO_ACCESS_KEY and MINIO_SECRET_KEY values!

  • Ensure the access key is 5 to 20 characters
  • Ensure the secret key is 8 to 40 characters
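If you need inspiration, here is one way to generate keys within those limits. This is my own sketch (not from the Minio docs) and assumes `openssl` is available:

```shell
# Hypothetical helper: generate credentials that respect Minio's length limits.
MINIO_ACCESS_KEY=$(openssl rand -hex 10)   # 20 hex characters (5 to 20 allowed)
MINIO_SECRET_KEY=$(openssl rand -hex 20)   # 40 hex characters (8 to 40 allowed)
echo "MINIO_ACCESS_KEY=$MINIO_ACCESS_KEY"
echo "MINIO_SECRET_KEY=$MINIO_SECRET_KEY"
```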

### Start service 1
CTN_NAME=minio_N01
SVR_NAME=node1
ENV_PORT=9001
\
docker service create \
--name "$CTN_NAME" \
--network "ntw_minio" \
--replicas "1" \
-p "$ENV_PORT":9000 \
--constraint node.hostname=="$SVR_NAME" \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e "MINIO_ACCESS_KEY=A5a0a87b725552daXd" \
-e "MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90" \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export


### Start service 2
CTN_NAME=minio_N11
SVR_NAME=node2
ENV_PORT=9002
\
docker service create \
--name "$CTN_NAME" \
--network "ntw_minio" \
--replicas "1" \
-p "$ENV_PORT":9000 \
--constraint node.hostname=="$SVR_NAME" \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e "MINIO_ACCESS_KEY=A5a0a87b725552daXd" \
-e "MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90" \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export

### Start service 3
CTN_NAME=minio_N12
SVR_NAME=node3
ENV_PORT=9003
\
docker service create \
--name "$CTN_NAME" \
--network "ntw_minio" \
--replicas "1" \
-p "$ENV_PORT":9000 \
--constraint node.hostname=="$SVR_NAME" \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e "MINIO_ACCESS_KEY=A5a0a87b725552daXd" \
-e "MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90" \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export


### Start service 4
CTN_NAME=minio_N13
SVR_NAME=node4
ENV_PORT=9004
\
docker service create \
--name "$CTN_NAME" \
--network "ntw_minio" \
--replicas "1" \
-p "$ENV_PORT":9000 \
--constraint node.hostname=="$SVR_NAME" \
--mount type=bind,src=/mnt/minio/dev-e,dst=/export \
-e "MINIO_ACCESS_KEY=A5a0a87b725552daXd" \
-e "MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90" \
minio/minio \
server \
http://minio_N01/export \
http://minio_N11/export \
http://minio_N12/export \
http://minio_N13/export
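The four blocks above differ only in three values, so they can also be generated from one loop. A dry-run sketch of my own (it only prints the commands; remove the leading `echo` to actually create the services):

```shell
# Dry run: print one "docker service create" command per node.
for TRIPLET in "minio_N01 node1 9001" "minio_N11 node2 9002" \
               "minio_N12 node3 9003" "minio_N13 node4 9004"; do
  set -- $TRIPLET          # $1 = service name, $2 = hostname, $3 = published port
  echo docker service create \
    --name "$1" \
    --network "ntw_minio" \
    --replicas "1" \
    -p "$3":9000 \
    --constraint node.hostname=="$2" \
    --mount type=bind,src=/mnt/minio/dev-e,dst=/export \
    -e "MINIO_ACCESS_KEY=A5a0a87b725552daXd" \
    -e "MINIO_SECRET_KEY=369f5e7b4a41e25452c353D629a24c372b62c90" \
    minio/minio server \
    http://minio_N01/export http://minio_N11/export \
    http://minio_N12/export http://minio_N13/export
done
```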

docker service ls

docker service ps minio_N01

ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE          ERROR  PORTS
bx6ayw4hw43q  minio1.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node1  Running        Running 2 minutes ago

docker service ps minio_N11
ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
ommy53chajmh  minio2.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node2  Running        Running 51 seconds ago

docker service ps minio_N12
ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
iykg3oeo56mh  minio3.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node3  Running        Running 33 seconds ago

docker service ps minio_N13
ID            NAME      IMAGE                                     NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
wmf3aim5f3gr  minio4.1  minio/minio:RELEASE.2017-01-25T03-14-52Z  node4  Running        Running 21 seconds ago

Logs from the Minio services:

ctn_NAME=minio_N01 && \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) && \
docker logs --follow $ctnID

ctn_NAME=minio_N11 && \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) && \
docker logs --follow $ctnID

ctn_NAME=minio_N12 && \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) && \
docker logs --follow $ctnID

ctn_NAME=minio_N13 && \
ctnID=$(docker ps -q --filter label=com.docker.swarm.service.name=$ctn_NAME) && \
docker logs --follow $ctnID
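The four log commands above can be collapsed into one loop. My shorthand sketch, as a dry run (it prints one command per service; remove `echo` to actually follow each container's logs):

```shell
# Dry run: print the "docker logs" command for each Minio service.
for CTN_NAME in minio_N01 minio_N11 minio_N12 minio_N13; do
  echo "docker logs --follow \$(docker ps -q --filter label=com.docker.swarm.service.name=$CTN_NAME)"
done
```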


Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 9s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 24s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 40s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 56s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 1m9s)
Initializing data volume. Waiting for minimum 3 servers to come online. (elapsed 1m29s)
Disk minio4:9000:/minio/storage/export is still unreachable, with error disk not found
Initializing data volume for first time. Waiting for other servers to come online (elapsed 1m51s)

Initializing data volume for the first time.
[01/04] http://minio1:9000/export - 10 GiB online
[02/04] http://minio2:9000/export - 10 GiB online
[03/04] http://minio3:9000/export - 10 GiB online
[04/04] http://minio4:9000/export - 10 GiB online

Endpoint:  http://10.255.0.8:9000  http://10.255.0.7:9000  http://172.19.0.3:9000  http://10.0.1.3:9000  http://10.0.1.2:9000  http://127.0.0.1:9000
AccessKey: A18d29a3a0256b1e606
SecretKey: Sdf128a527d40fd6811df3f0a72136b9e9201
Region:    us-east-1
SQS ARNs:  <none>

Browser Access:
   http://10.255.0.8:9000  http://10.255.0.7:9000  http://172.19.0.3:9000  http://10.0.1.3:9000  http://10.0.1.2:9000  http://127.0.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://10.255.0.8:9000 A18d29a3a0256b1e606 Sdf128a527d40fd6811df3f0a72136b9e9201

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide

Drive Capacity: 18 GiB Free, 20 GiB Total
Status:         4 Online, 0 Offline. We can withstand [2] more drive failure(s).
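That "[2] more drive failure(s)" line follows from Minio's default erasure layout: with N drives, half hold data and half hold parity, so the set tolerates N/2 drive failures. A quick sanity check of that arithmetic (my note, not from the log):

```shell
# Default erasure split: N/2 data drives + N/2 parity drives.
N=4                 # drives in this deployment
PARITY=$((N / 2))   # parity drives, which is also the tolerated failure count
echo "Tolerated drive failures: $PARITY"
```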

Status 1)

The services are running well.

Create a bucket

  • Open a new tab on your browser
  • Go to: http://ip10_0_25_6-9001.play-with-docker.com/minio
  • Enter your credentials
  • Create bucket 'tester'
  • Upload a picture 'animated-good-job.gif' from the browser

On your 4 nodes, check if the file is there:

$ cd /mnt/minio/dev-e/tester; ls -AlhF;
total 0
drwxr-xr-x    2 root     root          35 Feb 14 00:31 animated-good-job.gif/

Status 2)

When uploading a file from the web GUI, all nodes sync the files as expected. Good!

2/2 Testing file sharing by creating a file from the nodes

#### from node1, Create dummy files (unit test)
FROM_NODE=node1; \
FILE_SIZE=11M; \
\
LENGTH="8"; \
RAND_STRING=null; \
STEP1=$((RANDOM%975+211)); \
STEP2=$(openssl rand -base64 "$STEP1"); \
STEP3=$(echo "$STEP2" | shasum -a 512 | head -c "$LENGTH"); echo; \
RAND_STRING="$STEP3"; STEP1="null"; STEP2="null"; STEP3="null"; \
echo "$RAND_STRING"; echo; \
\
cd /mnt/minio/dev-e/tester; \
echo "Create a dummy text file:"; echo; \
pwd; ls -AlhF; du -sh; echo; \
WHEN="$(date +%Y-%m-%d_%H-%M-%S)"; \
echo "Created from $FROM_NODE - $WHEN" >> "$FROM_NODE"_"$RAND_STRING".txt; \
pwd; ls -AlhF; du -sh; echo; cat "$FROM_NODE"_"$RAND_STRING".txt; echo; echo; \
\
pwd; ls -AlhF; du -sh; echo; \
WHEN="$(date +%Y-%m-%d_%H-%M-%S)"; \
dd if=/dev/zero of="$FROM_NODE"_"$RAND_STRING".dat  bs=$FILE_SIZE  count=1; \
pwd; ls -AlhF; du -sh; echo; \
watch -d -n 1 ls -AlhF;
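For reference, here is a more compact equivalent of the unit test above. This is my sketch, not the original script: it writes to a temporary directory so it can be tried anywhere; on a real node, point DIR at /mnt/minio/dev-e/tester instead.

```shell
# Compact dummy-file generator (assumes openssl and GNU dd are available).
FROM_NODE=node1
DIR=$(mktemp -d)                      # stand-in for /mnt/minio/dev-e/tester
RAND_STRING=$(openssl rand -hex 4)    # 8-character random suffix
WHEN=$(date +%Y-%m-%d_%H-%M-%S)

# One small text file plus one 11 MiB binary file, like the test above:
echo "Created from $FROM_NODE - $WHEN" > "$DIR/${FROM_NODE}_${RAND_STRING}.txt"
dd if=/dev/zero of="$DIR/${FROM_NODE}_${RAND_STRING}.dat" bs=1M count=11 2>/dev/null
ls -AlhF "$DIR"
```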

Then ...

#### from node2, Create dummy files (unit test)
FROM_NODE=node2; \
FILE_SIZE=12M; \
\
LENGTH="8"; \
RAND_STRING=null; \
STEP1=$((RANDOM%975+211)); \
STEP2=$(openssl rand -base64 "$STEP1"); \
STEP3=$(echo "$STEP2" | shasum -a 512 | head -c "$LENGTH"); echo; \
RAND_STRING="$STEP3"; STEP1="null"; STEP2="null"; STEP3="null"; \
echo "$RAND_STRING"; echo; \
\
cd /mnt/minio/dev-e/tester; \
echo "Create a dummy text file:"; echo; \
pwd; ls -AlhF; du -sh; echo; \
WHEN="$(date +%Y-%m-%d_%H-%M-%S)"; \
echo "Created from $FROM_NODE - $WHEN" >> "$FROM_NODE"_"$RAND_STRING".txt; \
pwd; ls -AlhF; du -sh; echo; cat "$FROM_NODE"_"$RAND_STRING".txt; echo; echo; \
\
pwd; ls -AlhF; du -sh; echo; \
WHEN="$(date +%Y-%m-%d_%H-%M-%S)"; \
dd if=/dev/zero of="$FROM_NODE"_"$RAND_STRING".dat  bs=$FILE_SIZE  count=1; \
pwd; ls -AlhF; du -sh; echo; \
watch -d -n 1 ls -AlhF;

from node3, Create dummy files (unit test)

You get the pattern at this point :)

from node4, Create dummy files (unit test)

You get the pattern at this point :)

Status 3)

Files are NOT synced when they are created directly on the nodes. Is this normal?

Asking for help on Slack

Original conversation is here.

Hello folks!

Regarding Minio Erasure Code Mode,
I want to share/sync a common folder between 4 nodes using Erasure Code Mode.
You know, like Dropbox (but without a third-party main server, of course).

I took many hours to test this setup and this is my conclusion:

  • When uploading a file from the web GUI, all nodes sync the files as expected. Good!
  • But files are NOT synced when they are created directly on the nodes. Damn :-/

May I ask your help here?
https://github.com/minio/minio/issues/3713#issuecomment-279573366

Cheers!

Answers on Slack!

y4m4b4 [8:18 PM]
mounting a common DIR you can either use MinFS or S3FS

[8:18]
which would mount the relevant bucket on the nodes..

pascalandy [8:18 PM]
OK tell me about it :)))

y4m4b4 [8:18 PM]
https://github.com/minio/minfs#docker-simple
minio/minfs: A network filesystem client to connect to Minio and Amazon S3 compatible cloud storage servers
minfs - A network filesystem client to connect to Minio and Amazon S3 compatible cloud storage servers

all you need to do is this..

pascalandy [8:18 PM]
OMG!
You guys are doing this as well?!
You saved the day!

The missing part - Install the volume driver

https://github.com/minio/minfs

docker plugin install minio/minfs

docker volume create

docker volume create -d minio/minfs \
--name bucket-dev-e \
-o endpoint=http://ip10_0_23_3-9001.play-with-docker.com/ \
-o access-key=A5a0a87b725552daXd \
-o secret-key=369f5e7b4a41e25452c353D629a24c372b62c90 \
-o bucket=bucket-dev-e

docker volume ls

Testing the volume within a container

docker run -d --name nginxtest1 -p 80:80 -v bucket-dev-e:/usr/share/nginx/html:ro nginx

Status 4)

By using our docker volume bucket-dev-e we can mount the bucket into any container. Very good!

Using subdirectories from a bucket

This part is work in progress. See https://github.com/minio/minfs/issues/20

For all details about my setup, please check my post:
The complete guide to attach a Docker volume with Minio on your Docker Swarm Cluster

— — —

Let's say my Minio bucket is named bucket-dev-e.
I mounted it at /mnt/minio00000/dev-e using docker volume create …

Let's start one blog (This works perfectly):

docker run --name some-ghost -v bucket-dev-e:/var/lib/ghost/content/images ghost

What if I need to run multiple websites?

docker run --name some-ghost -v bucket-dev-e/ghost/site1/images:/var/lib/ghost/content/images ghost

docker run --name some-ghost -v bucket-dev-e/ghost/site2/images:/var/lib/ghost/content/images ghost

My challenge is that the commands above do not work. By default, we cannot specify a subpath like bucket-dev-e/ghost/site2/images from a Docker volume.
What can we do? (I DON'T KNOW THE ANSWER YET)

I don't want to use one Docker volume for each of the 100 (potentially 1000) sites I'm hosting.

Any other ideas?

Conclusion

By using Minio along with its MinFS (https://github.com/minio/minfs), we get the best of both worlds:
solid object storage, plus Docker volumes connected to that storage. Any container can access a bucket created in Minio.

Another great thing about Minio is that you don't have to pre-allocate disk space (unlike GlusterFS, Infinit, Portworx, etc.). Minio uses whatever space you have on disk.

You can also easily create another data store on hyper.sh and rock on. It's been a long journey, and this will now help me move to production.

Cheers!
Pascal Andy | Twitter



Don't be shy to buzz me 👋 on Twitter @askpascalandy. Cheers!


