

If you are considering moving your development to Docker, or you are looking for a way to use it harmoniously both for development and for delivery to production, this article is for you.
Since I started playing with Docker several years ago and tried to develop with it at all levels, I have faced many questions. The first was how to reuse and share the Docker setup of my local environment, to avoid repeating a setup that takes 1–2 days again and again.
Another question came up when I was working with several repositories: how should I handle code that is spread across many repositories? As a developer, I want the whole team around me to look at and work on the same code.
Today, at Linnovate, we have many projects of different types, and I kept running into these questions. So I decided to put a lot of energy into creating a method that would handle most of our cases, if not all of them. What I will show here is the result: the approach we now use in all of our Docker-based projects, reached after many iterations of project setup and improvement.
I will describe how to manage a microservices-based project with Docker and Git, covering the design of the Git repositories, development, and delivery requirements.
Note: we work with GitLab, but the concept can be carried over to other repository managers in a similar way.
To handle these challenges, we decided to base our solution on Git submodules, with docker-compose describing and running the local environment.
The GS3D Pattern — Git Submodules and Docker-Driven Development
In order to fetch all the repositories, we will use Git submodules. Git submodules are a bit tricky to work with, but there are ways to make them manageable, as can be seen here.
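For example, a few standard git commands that smooth out the day-to-day submodule workflow (the branch name is an assumption):
# fetch submodule content after a plain clone
git submodule update --init --recursive
# pull the latest commits of each submodule's tracked branch
git submodule update --remote --merge
# run a command in every submodule, e.g. check out a branch
git submodule foreach 'git checkout master'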
Besides fetching the code, each submodule is pinned to a specific commit that indicates its version. We will use this in the integration repository.
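A minimal sketch of how that pinning works day to day (the submodule name and branch are illustrative):
# see which commit each submodule is pinned to
git submodule status
# bump one microservice to a newer version
cd my-service
git pull origin master
cd ..
git add my-service
git commit -m "Bump my-service to the latest master"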
The integration repository is the project code repository: the final picture of the code. It references the code of each microservice at a specific version and also holds the development environment. This repo is cloned to get the code and set up the local environment, so you can develop against its files and its Docker containers.
To set up the local environment:
git clone --recursive link-to-integration-repo
cp .env.example .env
docker-compose up -d
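For context, here is a minimal docker-compose.yml sketch for such an integration repository; the service names, build paths, ports, and variables are illustrative assumptions rather than our exact files:
version: "3"
services:
  web:
    build: ./web        # built from the web submodule's Dockerfile
    ports:
      - "8080:80"
    env_file: .env      # the values you copied from .env.example
  api:
    build: ./api        # built from the api submodule's Dockerfile
    ports:
      - "4000:4000"
    env_file: .env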
All deployment-related assets, such as Kubernetes files, will be stored in another repository, since they are related neither to the code itself nor to the local environment. In this example, the remote environments will use docker-compose, and the code will be fetched by adding the integration repository as a submodule.
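A minimal sketch of what that looks like on the deployment side (the directory and repository names are illustrative):
# inside the deployment repository
git submodule add [integration-repository-url] app
git commit -m "Pin the project code to a specific version"
# on the remote environment
git clone --recursive link-to-deployment-repo
docker-compose -f app/docker-compose.yml up -d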
With CI we can push the microservices as images to a container registry and pull them in the remote environments.
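For example, a minimal .gitlab-ci.yml sketch for one microservice, assuming GitLab's built-in container registry (the CI_* variables are predefined by GitLab; the job layout is illustrative):
build-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"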
This example is a WordPress website with React, plus a Node.js GraphQL API alongside it that serves the React components.
Step 1 — Create the project and the microservices in it.
Create the project and the repositories with the right structure:
The integration repository is actually the GS3D project you created inside the GS3D group.
git submodule add [wordpress-repository-url]
git submodule add [react-repository-url]
git submodule add [graphql-repository-url]
git commit
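After these commands, the generated .gitmodules file in the integration repository would look roughly like this (assuming each submodule sits in the repository root; the URLs stay placeholders):
[submodule "wordpress"]
    path = wordpress
    url = [wordpress-repository-url]
[submodule "react"]
    path = react
    url = [react-repository-url]
[submodule "graphql"]
    path = graphql
    url = [graphql-repository-url]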
The Group/Group/Project convention
Since we have several clients, and each client could have several projects, each client needs its own zone (a GitLab group). And since we are using microservices, each project should have its own microservices under it (another GitLab group). The hierarchy we use on GitLab is the following:
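For the example project above, it might look like this (group and repository names are illustrative):
client-group/              # one group per client
  project-group/           # one group per project
    integration            # the GS3D integration repository
    wordpress              # microservice repository
    react                  # microservice repository
    graphql                # microservice repository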
The first level of the hierarchy is optional if you are a client managing your own projects.
Today we have a clear pattern for setting up our projects. The consolidation we made helps both our managers and our developers understand every project right away, and our local environment setup time dropped from 1–2 days to several minutes.
Based on that pattern, we implemented our CI on all our projects.
It also supports integrating other products we have into a project by adding their repositories as submodules (easy reuse and integration).
Would love to hear if you find it helpful.