10 minute read
Thursday, January 30, 2025
Infrastructure & Application Monitoring with Checkmk
Source: https://checkmk.com/
docker container run -dit -p 8080:5000 -p 8000:8000 --tmpfs /opt/omd/sites/cmk/tmp:uid=1000,gid=1000 -v monitoring:/omd/sites --name monitoring -v /etc/localtime:/etc/localtime:ro --restart always checkmk/check-mk-cloud:2.3.0p24
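Once the container is up, the Checkmk web UI is reachable on the mapped port 8080. The initial password for the cmkadmin user is printed to the container log on first start; a quick way to fish it out (the exact log wording may differ between Checkmk versions):
docker container logs monitoring 2>&1 | grep -i password
Then log in, typically at http://localhost:8080/cmk/ (the site in this image is named cmk), as cmkadmin.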
Shodan - Search Engine for the Internet of Everything
Shodan is the world's first search engine for Internet-connected devices. Discover how Internet intelligence can help you make better decisions.
Network Monitoring Made Easy
Within 5 minutes of using Shodan Monitor you will see what you currently have connected to the Internet within your network range and be set up with real-time notifications when something unexpected shows up.
ČRa new data center
CRA to become the number one among data centre operators; zoning approval obtained for the new DC

České Radiokomunikace (CRA) are finishing preparations for one of the most ambitious digital infrastructure projects in the Czech Republic: a new data centre. Another important step has been taken, as CRA have obtained the zoning decision. Within two years, one of the largest facilities of its kind, not only in the Czech Republic but in Europe, will be built at the Prague-Zbraslav site, with a capacity of more than 2,500 server racks and a power input of 26 megawatts.
"The main attributes of our project are innovation, sustainability, efficiency, reliability and security. Our goal is to bring to the Czech Republic large companies that so far have not been able to use data centre services here for capacity reasons, given their size or the occupancy of existing facilities," explains Miloš Mastník, CEO of České Radiokomunikace. "We now have a valid zoning decision, which means we can proceed with the final preparations," adds Miloš Mastník.
The data centre will cover 5,622 m², with a building measuring 320 × 45 metres, and will rise on revitalized land where three CRA medium-wave radio transmitters originally stood. It will offer 2,500 server rack positions with a power input of 26 MW fed over two independent routes for secure data storage and management. The spaces can be adapted to the specific needs of individual customers. Each room will also have its own office and storage space, making the centre a comprehensive solution for companies' technology needs.
The data centre will meet the strictest technological and environmental standards. It will be fully powered from renewable sources, specifically from solar panels mounted on the roof of the building. Thanks to its strategic location, an innovative cooling system with a GWP value below 10, the use of waste heat and optimized power capacity, operational efficiency will be top-tier, with a PUE (Power Usage Effectiveness) of 1.25. For example, slab floors will be used for better air distribution and hygiene standards, which improves cooling and at the same time allows per-rack loads of up to 20 kW without additional cooling capacity.
CRA plan to achieve LEED Gold certification and to comply with ASHRAE standards; the project is being developed in line with ESG principles.
The project has gained the support of the Ministry of Industry and Trade, which has signed a memorandum of understanding with CRA. The memorandum sets out a framework for cooperation between the state and CRA, within their powers and applicable regulations, with the aim of supporting digital transformation and technology research and development, and of securing the infrastructure necessary for further economic growth.
CRA already operate eight data centres in the Czech Republic, for example at Prague's Žižkov, Strahov and Cukrák sites, as well as in Brno, Ostrava, Pardubice and Zlín. Interest in renting capacity keeps growing, which is why CRA opened a new data hall at the Cukrák transmitter site this spring, bought the Lužice data centre, and are preparing the modernization and expansion of the DC Tower at Žižkov.
The Zbraslav data centre is to be completed in 2026 in cooperation with the parent company Cordiant Digital Infrastructure. CRA plan to obtain the building and other necessary permits from the various regulatory authorities in spring 2025. The construction itself will take approximately 24 months. Thanks to already existing infrastructure, including a fibre-optic connection, road access and available power, the project can be realized quickly.
NAS Performance: NFS vs. SMB vs. SSHFS | Jake’s Blog
Source: https://blog.ja-ke.tech/2019/
NAS Performance: NFS vs. SMB vs. SSHFS
This is a performance comparison of the three most useful protocols for network file shares on Linux with the latest software. I ran sequential and random benchmarks and tests with rsync. The main reason for this post is that I could not find a proper test that includes SSHFS.
NAS Setup
The hardware side of the server is based on a Dell mainboard with an Intel i3-3220, so a fairly old 2-core/4-thread CPU. It also does not support the AES-NI extensions (which would increase AES performance noticeably), so the encryption happens completely in software.
As storage, two HDDs in BTRFS RAID1 were used. That does not make a difference though, because the tests are staged to almost always hit the cache on the server, so only the protocol performance counts.
I installed Fedora 30 Server on it and updated it to the latest software versions.
Everything was tested over a local Gigabit Ethernet Network. The client is a quadcore desktop machine running Arch Linux, so this should not be a bottleneck.
SSHFS (also known as SFTP)
Relevant package/version: OpenSSH_8.0p1, OpenSSL 1.1.1c, sshfs 3.5.2
OpenSSH is probably running on all servers anyway, so this is by far the simplest setup: just install sshfs (FUSE-based) on the clients and mount the share. It is also encrypted by default with ChaCha20-Poly1305. As a second test I chose AES-128, because it is the most popular cipher; disabling encryption is not possible (without patching ssh). Then I added some mount options (suggested here) for convenience and ended up with:
sshfs -o Ciphers=aes128-ctr -o Compression=no -o ServerAliveCountMax=2 -o ServerAliveInterval=15 remoteuser@server:/mnt/share/ /media/mountpoint
NFSv4
Relevant package/version: Linux Kernel 5.2.8
The plaintext setup is also easy: specify the exports, start the server, and open the ports. I used these options on the server: (rw,async,all_squash,anonuid=1000,anongid=1000)
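For reference, a minimal /etc/exports entry using those options might look like this (the export path and client subnet are assumptions, since the post does not show the full line):
/mnt/share 192.168.1.0/24(rw,async,all_squash,anonuid=1000,anongid=1000)
After editing, exportfs -ra reloads the export table without restarting the server.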
And mounted with:
mount.nfs4 -v nas-server:/mnt/share /media/mountpoint
But getting encryption to work can be a nightmare: first, setting up Kerberos is more complicated than the other solutions, and then there is dealing with idmap on both server and client(s)… After that you can choose from different security levels; I set sec=krb5p to encrypt all traffic for this test (most secure, slowest).
SMB3
Relevant package/version: Samba 4.10.6
The setup is mostly done by installing Samba, creating the user DB, adding a share to smb.conf, and starting the smb service. Encryption is disabled by default; for the encrypted test I set smb encrypt = required on the server globally. It then uses AES-128-CCM (visible in smbstatus).
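For reference, the matching smb.conf pieces could look roughly like this (share name and path are assumptions based on the mount command below):
[global]
   # only for the encrypted test:
   smb encrypt = required

[media]
   path = /mnt/share
   read only = no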
ID mapping on the client can simply be done with mount options; the complete mount command I used:
mount -t cifs -o username=jk,password=xyz,uid=jk,gid=jk //nas-server/media /media/mountpoint
Test Methodology
The main test block was done with the flexible I/O tester (fio), written by Jens Axboe (current maintainer of the Linux block layer). It has many options, so I made a short script to run reproducible tests:
#!/bin/bash
OUT=$HOME/logs
fio --name=job-w --rw=write --size=2G --ioengine=libaio --iodepth=4 --bs=128k --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-write.log
sleep 5
fio --name=job-r --rw=read --size=2G --ioengine=libaio --iodepth=4 --bs=128K --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-read.log
sleep 5
fio --name=job-randw --rw=randwrite --size=2G --ioengine=libaio --iodepth=32 --bs=4k --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-randwrite.log
sleep 5
fio --name=job-randr --rw=randread --size=2G --ioengine=libaio --iodepth=32 --bs=4K --direct=1 --filename=bench.file --output-format=normal,terse --output=$OUT/fio-randread.log
The first two are classic sequential read/write tests, with a 128 KB block size and a queue depth of 4. The last two are small 4 KB random reads/writes, but with a queue depth of 32. The direct flag means direct I/O, to make sure that no caching happens on the client.
For the real-world tests I used rsync in archive mode (-rlptgoD) and its included measurements:
rsync --info=progress2 -a sshfs/TMU /tmp/TMU
Synthetic Performance
Sequential
Most protocols max out the network; the only one falling behind in the read test is SMB with encryption enabled. Looking at the CPU utilization reveals that it uses only one core/thread, which causes the bottleneck here.
NFS handles the compute-intensive encryption better by using multiple threads, but it consumes almost 200% CPU and gets a bit weaker in the write test.
SSHFS provides surprisingly good performance with both encryption options, almost the same as NFS or SMB in plaintext! It also puts less stress on the CPU, with up to 75% for the ssh process and 15% for sftp.
Random
On small random accesses NFS is the clear winner, very good even with encryption enabled. SMB is almost the same, but only without encryption. SSHFS is quite a bit behind.
NFS is still the fastest in plaintext, but again has a problem when combining writes with encryption. SSHFS gets more competitive here, even the fastest of the encrypted options, and lands mid-field overall.
The latency mostly resembles the inverse of IOPS/bandwidth. The only notable point is the pretty good (low) write latency with encrypted NFS, which completes most requests a bit faster than SSHFS in this case.
Real World Performance
This test consists of transferring a folder with rsync between the mounted share and a local tmpfs (RAM-backed). The folder contains the installation of a game (Trackmania United Forever) and is about 1.7 GB in size with 2929 files in total, so an average file size of 600 KB, though not evenly distributed.
All in all, no big surprises here: NFS is fastest in plaintext, SSHFS fastest with encryption. SMB is always somewhat behind NFS.
Conclusion
In trusted home networks, NFS without encryption is the best choice on Linux for maximum performance. If you want encryption, I would recommend SSHFS: it is a much simpler setup (compared to Kerberos), more CPU-efficient, and often only slightly slower than plaintext NFS. Samba/SMB is also not too far behind, but it only really makes sense in a mixed (Windows/Linux) environment.
Thanks for reading, I hope it was helpful.
Best DevOps tools
Source: https://www.
Best Containers for DevOps in 2025
A look at the top Docker containers for DevOps in 2025. Streamline your code projects and automation with these cool and robust containers.

I use a LOT of Docker containers in the home lab and in my DevOps journey to continually work with various code projects, automation, and just running applications in containers. There are myriad DevOps containers to be aware of that provide a lot of value and can help you achieve various business and technical objectives. There are several DevOps containers I use that I want to share with you. Let's look at the best Docker containers for DevOps in 2025 and see which ones I am using.
Why run Docker Containers?
There may be a question as to why you would run DevOps tools in containers instead of VMs. That is a great question. Virtual machines are still very important and provide the foundation for virtual infrastructure and container hosts. I don't think they will go away for a long time. However, containers are my favorite way to run apps and DevOps solutions.
Docker containers allow you to easily spin up new applications in seconds and not minutes or hours. You can simply pull an application container and spin it up with a one-line docker command instead of having to install a VM operating system, install all the prerequisites, and meet all the requirements of the application, which might take a couple of hours total.
Instead, spin up a Docker container host on a virtual machine and then spin up your applications in containers on top of your container host.
Best Docker Containers for DevOps beginning in 2025
Below is my list of best Docker containers for DevOps in 2025 broken out in sections. You will note a few repeats in the sections as some solutions do more than one thing.
CI/CD:
- GitLab
- Jenkins
- Gitea
- ArgoCD
Container registries
- GitLab
- Gitea
- Harbor
Secrets management
- HashiCorp Vault
- CyberArk Conjur
- OpenBao
Code Quality
- SonarQube
- Trivy
Monitoring stack
- Telegraf
- InfluxDB
- Prometheus
- Grafana
Ingress
- Nginx Proxy Manager
- Traefik
- Envoy by Lyft
CI/CD and Container Registries
GitLab
GitLab is the CI/CD solution and code management repo that I have been using to version my DevOps code in the home lab and in production environments. If you want to self-host your code repos and do extremely cool CI/CD pipelines for infrastructure as code, GitLab is a free solution that is easy to stand up in a Docker container.
You can use it to automate testing, builds, and deployments to your environments. You can also integrate third-party solutions with GitLab, which is a great way to extend what it can do.
Pros:
- It is an all in one solution for DevOps and code
- Good CI/CD pipeline features
- Has third-party tools and integrations
- Good community support
Cons:
- Can be resource-intensive
- Some features may be in the paid product
- Rumored to be in talks about a possible buyout
Docker Compose Code:
version: '3'
services:
  gitlab:
    image: 'gitlab/gitlab-ee:latest'
    restart: always
    hostname: 'gitlab.example.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.example.com'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'
    volumes:
      - './config:/etc/gitlab'
      - './logs:/var/log/gitlab'
      - './data:/var/opt/gitlab'
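With the compose file saved, bringing the stack up and reading the generated root password might look like this (on recent GitLab versions the initial password file exists only for about 24 hours after the first start):
docker compose up -d
docker compose exec gitlab grep 'Password:' /etc/gitlab/initial_root_password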
Learn more about GitLab here: The most-comprehensive AI-powered DevSecOps platform | GitLab
Jenkins
Jenkins is an OSS tool that most know. It will come up in just about any DevOps conversation around a self-hosted code repo. Many have a love/hate relationship with Jenkins. It can literally do anything you want it to, which is a plus. But the downside is, it can literally do anything. You can use it to build your projects, test code, and deploy to your infrastructure.
It also has a ton of third-party apps you can integrate with the solution and the CI/CD pipeline. Just about every other DevOps solution has an integration with Jenkins so it is supported across the board.
Pros:
- It has been around forever so great support
- Active community
- distributed builds are supported
- Everything seems to integrate with Jenkins
Cons:
- Can be complex to set up and manage
- Interface feels a little outdated
Docker Compose Code:
version: '3'
services:
  jenkins:
    image: 'jenkins/jenkins:lts'
    restart: always
    ports:
      - '8080:8080'
      - '50000:50000'
    volumes:
      - './jenkins_home:/var/jenkins_home'
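On first start Jenkins asks for an unlock key; with the volume mapping above it can be read straight from the container:
docker compose up -d
docker compose exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword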
Learn more about Jenkins here: Jenkins
Gitea
Gitea is a relative newcomer on the block. It has a modern feel to it but isn't as fully featured as other solutions like GitLab or Jenkins. It is easy to deploy and manage for Git repos. Its features include issue tracking, CI/CD, and code reviews.
Pros:
- Lightweight and easy to configure
- Has CI/CD pipelines
- Lower resource requirements compared to other solutions
Cons:
- Fewer features compared to other solutions like GitLab and Jenkins
- Smaller community
Docker Compose Code:
version: '3'
services:
gitea:
image: 'gitea/gitea:latest'
restart: always
ports:
- '3000:3000'
- '222:22'
volumes:
- './gitea:/data'
Learn more about Gitea here: Gitea Official Website
ArgoCD
ArgoCD is a more Kubernetes-centric solution for GitOps. Its purpose is to provide continuous delivery for Kubernetes. It automates application deployment by tracking changes in a Git repository, and it continuously monitors and synchronizes Kubernetes clusters, a more proactive approach that makes sure applications are always deployed in the desired state.
Pros:
- GitOps-centric
- Real-time synchronization
- Kubernetes native solutions
Cons:
- Can be complex with GitOps and Kubernetes knowledge needed
Docker Compose Code: ArgoCD is usually installed using Kubernetes manifests or with Helm charts. So, not typically Docker Compose. Here is an example of a manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-server
  namespace: argocd
---
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8080
  selector:
    app: argocd-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: argocd-server
  template:
    metadata:
      labels:
        app: argocd-server
    spec:
      serviceAccountName: argocd-server
      containers:
        - name: argocd-server
          image: argoproj/argocd:v2.0.0
          ports:
            - containerPort: 8080
          command:
            - argocd-server
          args:
            - --staticassets
            - /shared/app
            - --repo-server
            - argocd-repo-server:8081
            - --dex-server
            - argocd-dex-server:5556
          volumeMounts:
            - name: static-files
              mountPath: /shared/app
      volumes:
        - name: static-files
          emptyDir: {}
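In practice, rather than hand-writing a manifest like the one above, Argo CD is usually installed by applying the upstream manifest, roughly like this:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml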
Learn more about ArgoCD here: Argo CD – Declarative GitOps CD for Kubernetes (argo-cd.readthedocs.io).
Harbor
Harbor is a well-known container registry solution. It supports features that most want for their registries like role-based access control, image replication, multiple registries, vulnerability scans, and others.
Pros:
- Good security
- Role-based access control (RBAC)
- Image replication and vulnerability scanning
Cons:
- More complex setup
- No fewer than six containers make up the solution
- Requires additional resources
Docker Compose Code:
version: '3.5'
services:
  log:
    image: goharbor/harbor-log:v2.0.0
    restart: always
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
  registry:
    image: goharbor/registry-photon:v2.0.0
    restart: always
  core:
    image: goharbor/harbor-core:v2.0.0
    restart: always
  portal:
    image: goharbor/harbor-portal:v2.0.0
    restart: always
  jobservice:
    image: goharbor/harbor-jobservice:v2.0.0
    restart: always
  proxy:
    image: goharbor/nginx-photon:v2.0.0
    restart: always
Learn more about Harbor registry here: Harbor (goharbor.io).
Secrets Management
HashiCorp Vault
The Vault solution allows you to store secrets and pull these dynamically when you are using IaC solutions like Terraform. You can store many types of secrets, including things like API keys, passwords, and certificates. It is easy to stand up as a solution in either Docker or Kubernetes.
It provides a secure way for code builds and other things like CI/CD to grab secrets on the fly from the secrets vault.
Pros:
- Secure secrets management
- Dynamic secrets can be used
- Audit logging
Cons:
- It can get complex to build out
- Learning curve
You can see my full blog post on how to install HashiCorp Vault inside Docker here: HashiCorp Vault Docker Install Guide.
Docker Compose Code:
version: '3.8'
services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    ports:
      - "8200:8200"
    volumes:
      - ./config:/vault/config
      - ./data:/vault/file
    cap_add:
      - IPC_LOCK
    command: "vault server -config=/vault/config/vault-config.json"
vault-config.json:
{
  "storage": {
    "file": {
      "path": "/vault/file"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": true
    }
  },
  "ui": true
}
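Once the container is running, a rough first-use flow with the vault CLI could look like this (the secret path and values are made up for illustration; a non-dev server starts sealed, and a KV engine has to be enabled before storing anything):
export VAULT_ADDR='http://127.0.0.1:8200'
vault operator init                       # one-time: prints unseal keys and the initial root token
vault operator unseal                     # repeat until enough unseal keys have been entered
vault login                               # authenticate, e.g. with the root token
vault secrets enable -path=secret kv-v2   # enable a key/value secrets engine
vault kv put secret/demo api_key=12345    # store a sample secret
vault kv get secret/demo                  # read it back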
Learn more about HashiCorp Vault here: Vault by HashiCorp (vaultproject.io).
CyberArk Conjur
CyberArk Conjur provides a community edition for secrets management. It focuses on CI/CD pipelines. You can integrate various tools and platforms for credentials, API keys, and other secrets.
It has detailed audit logging and other features that can help with security.
Pros:
- Strong integration with DevOps tools
- Robust access controls
- Detailed auditing
Cons:
- Added features may require enterprise version (paid)
- Complicated setup and management for those not familiar with the solution
Docker Compose Code:
version: '3'
services:
  conjur:
    image: cyberark/conjur:latest
    restart: always
    environment:
      CONJUR_AUTHENTICATORS: authn
    ports:
      - "443:443"
    volumes:
      - ./conjur/data:/var/lib/conjur
Learn more about CyberArk Conjur here: Secrets Management | Conjur.
OpenBao
If you are looking for a free and open-source secrets management solution, then OpenBao is one to try. It is a Linux Foundation project that lets you store passwords and other secret information. Like Vault, you can use it to store things such as API keys, passwords, etc.
Pros:
- Simple solution that is lightweight
- Encryption support and RBAC
- Open-source and free
Cons:
- Limited features
- Smaller community
Docker Compose Code:
version: '3'
services:
  openbao:
    image: openbao/openbao:latest
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./openbao/data:/var/openbao
Learn more about OpenBao here: OpenBao | OpenBao.
Code Quality
SonarQube
You can use SonarQube to scan code quality and handle things like linting. It can perform automatic code reviews and detect bugs in code. You can also use it as a vulnerability scanner and to find code smells.
It supports many different programming and scripting languages. You can integrate it with CI/CD pipelines, and it gives you a report of what it finds.
Pros:
- Code quality analysis
- Multiple languages supported
- Integrates with CI/CD
Cons:
- Can be resource-intensive
- Doesn’t support some languages like PowerShell
Docker Compose Code:
version: '3'
services:
  sonarqube:
    image: sonarqube:latest
    restart: always
    ports:
      - "9000:9000"
    volumes:
      - ./sonarqube/conf:/opt/sonarqube/conf
      - ./sonarqube/data:/opt/sonarqube/data
      - ./sonarqube/logs:/opt/sonarqube/logs
      - ./sonarqube/extensions:/opt/sonarqube/extensions
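With the server listening on port 9000, a project is typically scanned with the separate sonar-scanner CLI; a sketch (project key and token are placeholders, and older scanner versions use -Dsonar.login instead of -Dsonar.token):
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.token=<token-generated-in-the-ui>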
Learn more about SonarQube here: Code Quality, Security & Static Analysis Tool with SonarQube | Sonar (sonarsource.com).
Trivy
Trivy is another solution I have used that allows you to scan for vulnerabilities (CVEs) and also for misconfigurations in your code (IaC). You can scan things like repositories, artifacts, and container images, and you can even scan Kubernetes clusters.
Take a look at the example Docker compose code below:
version: '3.8'
services:
  trivy:
    image: aquasec/trivy:latest
    container_name: trivy
    entrypoint: ["trivy"]
    volumes:
      - ./trivy-cache:/root/.cache
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TRIVY_SEVERITY=HIGH,CRITICAL
      - TRIVY_EXIT_CODE=1
      - TRIVY_IGNORE_UNFIXED=true
    command: --help # Replace this with what you want to scan, e.g. "image <image-name>"
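With the entrypoint set to trivy as above, a one-off image scan can be run through compose; nginx:latest here is just a stand-in for whatever image you want to check:
docker compose run --rm trivy image nginx:latest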
Learn more about Trivy on the official site here: Trivy.
Monitoring Stack
Telegraf
Telegraf collects and reports metrics. It is part of the very well-known "TICK" stack that many use for monitoring.
Pros:
- Many plugins to extend its features
- Lightweight
- Integrates with various systems
Cons:
- Requires configuration that is customized for different solutions
- Learning curve
Docker Compose Code:
version: '3'
services:
  telegraf:
    image: telegraf:latest
    restart: always
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf
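A starting telegraf.conf can be generated from the image itself, which is the approach documented for the official image:
docker run --rm telegraf telegraf config > telegraf.conf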
Learn more about Telegraf here: Telegraf Documentation (influxdata.com).
InfluxDB
InfluxDB is an open-source time series database. It is also part of the “TICK” stack. It is often used for housing metrics, events, and logs. There are many integrations with InfluxDB and you will find a lot of community projects using it.
Pros:
- Great for time-series data
- High performance
- Integrates with many solutions
Cons:
- Can require large resources depending on data
- Complex queries may result in a learning curve
Docker Compose Code:
version: '3'
services:
  influxdb:
    image: influxdb:latest
    restart: always
    ports:
      - "8086:8086"
    volumes:
      - ./influxdb/data:/var/lib/influxdb
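A quick liveness check against the container (this is the InfluxDB 2.x health endpoint, which the latest image provides):
curl http://localhost:8086/health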
Learn more about InfluxDB here: InfluxDB Time Series Data Platform | InfluxData.
Grafana
Grafana is the de facto tool that is used in the open-source world to visualize data gathered from other solutions. It is commonly used in solution “stacks” of things like InfluxDB, Prometheus, etc. Combined with other tools it makes a great open-source monitoring solution that can replace even enterprise solutions for data views.
Pros:
- Powerful for dashboarding and visualizing data
- Many integrations
- Intuitive interface
- Thousands of community dashboards available
Cons:
- Configuration may be complex depending on the integration
- Learning curve
Docker Compose Code:
version: '3'
services:
  grafana:
    image: grafana/grafana:latest
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - ./grafana/data:/var/lib/grafana
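Besides click-through setup in the UI, data sources can be provisioned from files. A minimal sketch of such a file, assuming you additionally mount ./grafana/provisioning to /etc/grafana/provisioning in the compose file above:
# ./grafana/provisioning/datasources/influxdb.yaml
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086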
Learn more about Grafana here: Grafana: The open observability platform | Grafana Labs.
Ingress
Nginx Proxy Manager
Nginx Proxy Manager is a great solution that I use a lot in the home lab and it provides an easy way to add SSL termination to your Docker containers. Instead of having to configure SSL inside the container you are hosting, you configure the SSL cert in Nginx Proxy Manager and then proxy the requests for your containers inside the proxy network.
Pros:
- User-friendly
- Lots of features
- Easy SSL configuration for Docker containers
Cons:
- Limited to Nginx features
- May need more advanced configuration for complex setups
Docker Compose Code:
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in the section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
Learn more about Nginx Proxy Manager here: Nginx Proxy Manager.
Traefik
Similar to Nginx Proxy Manager, Traefik provides reverse proxy features for containers. It is also a load balancer and can automatically discover services and apply routing to your containers. It can manage SSL certificates as well, for example using Let's Encrypt to provision them automatically.
It is more difficult to use than Nginx Proxy Manager, since most configuration is done in the Traefik configuration itself, which can be tedious.
Pros:
- Automatic service discovery
- Great integration with Docker and Kubernetes
- Lightweight
Cons:
- Configuration can be complicated
- Certificates can be complex to get working
- More complicated to use than Nginx Proxy Manager
Docker Compose Code:
version: '3'
services:
  traefik:
    image: traefik:v2.4
    restart: always
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml
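With the Docker provider enabled above, routing is declared per container via labels. A minimal sketch that could be added to the same compose file (the host name is a placeholder):
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=web"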
Learn more about Traefik here: Traefik Labs.
Envoy by Lyft
Envoy is a reverse proxy solution that was originally developed by Lyft and is now part of the Cloud Native Computing Foundation (CNCF). It is built with distributed communication systems in mind. It can be used as a sidecar proxy in things like service meshes, or simply as a standalone proxy solution.
Note the following example Docker compose code below:
version: '3.8'
services:
  envoy:
    image: envoyproxy/envoy:v1.26.0
    container_name: envoy
    ports:
      - "9901:9901" # Admin interface
      - "10000:10000" # Example listener port
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml:ro
    command: ["-c", "/etc/envoy/envoy.yaml"]
    networks:
      - envoy-net
    restart: unless-stopped
networks:
  envoy-net:
    driver: bridge
Below is an example of the envoy.yaml configuration file:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 10000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: service_backend
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: service_backend
      connect_timeout: 0.25s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: service_backend
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 8080
admin:
  access_log_path: /dev/stdout
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
Learn more about Envoy here: Envoy proxy.
Wrapping up
Hopefully this list of what I think are some of the best DevOps containers in 2025 will help you discover some solutions you may not have used before. All of these solutions are a great way to start learning DevOps practices and workflows, and they can take your home lab or production environment to the next level.
JetKVM - Control any computer remotely by JetKVM — Kickstarter
Wednesday, January 29, 2025
CRA acquires Cloud4com, a leading cloud computing provider
https://www.cra.cz/cra-acquires-cloud4com-a-leading-cloud-computing-provider
CRA acquires Cloud4com, a leading cloud computing provider
A significant deal on the Czech IT scene: ARICOMA Group and České Radiokomunikace (CRA), the subsidiary of Cordiant Digital Infrastructure Limited (CORD), a specialist investor in digital infrastructure, announce that CRA are acquiring Cloud4com (C4C) from ARICOMA Group, along with its data centre in Lužice (together "the Transactions"). The price of the Transactions is partly conditional on 2024's results, but is expected to exceed CZK 1 billion. The Transactions, which took legal effect upon signature, also include the conclusion of a strategic cooperation between ARICOMA Group and České Radiokomunikace.
Cloud4com is being acquired for an initial consideration of CZK [870 million] (£[30.6 million]), subject to customary adjustments and with a further amount payable up to a maximum of CZK [485 million] (£[17.1 million]), depending on Cloud4com’s EBITDA for the year ending 31 December 2024. The data centre in Lužice (DC Lužice) is being acquired for CZK [130 million] (£4.6 million), also subject to customary adjustments. Both businesses are unlevered.
C4C is one of the largest domestic cloud computing specialists in the Czech Republic, offering its customers sophisticated and secure infrastructure as a service (IaaS) solutions.
The Transactions represent a significant advance in both the scale and capability of CRA in the Czech Republic's fast-growing data services market. The Transactions also clearly demonstrate the implementation of CORD's "Buy, Build, and Grow" model.
"These transactions are an important milestone for CRA as we continue to diversify our operations into high growth areas such as data centres and cloud services. Cloud4com has achieved a leading market position in the Czech Republic and we see a clear strategic fit and synergistic value in adding Cloud4com and DC Lužice to CRA. We expect these assets to contribute strong revenue growth going forward and we anticipate capturing margin expansion due to increasing operating leverage. We look forward to working with the Cloud4com management team to further develop and grow the combined data centres and cloud businesses, and cement CRA’s leadership position in that area of the market,“ said Miloš Mastník, CEO of České Radiokomunikace.
Benn Mikula, CEO of Cordiant Capital, gave his assessment of the Transactions: “These acquisitions mark an important step in CRA’s continued growth in the Czech data centre and cloud services market. They add both capabilities and capacity to an already strong team. This market segment is increasingly important to CRA’s revenue mix.”
Steven Marshall, Chairman of Cordiant Digital Infrastructure Management, added: "We are delighted to have agreed to acquire these attractive data centre assets, which are being funded by organic cash flow at CRA. These transactions are highly complementary to CRA’s existing data centre and cloud businesses, enhancing CRA’s market leading position in its respective areas of operations and further demonstrating our active management approach through our ‘Buy, Build, and Grow’ strategy.”
"It has been great to watch C4C grow, evolve, and improve its products for nearly a decade, gaining loyal, satisfied customers as it matures. I'm very glad we took a chance on the quality team led by Tomas Knoll back then and today we are selling a company we are rightly proud of. I am very pleased that C4C's new owner, České Radiokomunikace, has such a strong position," said Michal Tománek, KKCG's Investment Director, who led the first investment in C4C by KKCG (which includes ARICOMA Group) in 2015.
"This divestment fits into ARICOMA Group's long-term strategy of further strengthening our position in consulting and implementing third-party cloud environments. We are happy to leave the actual provision of cloud technology as an investment-intensive industry to infrastructure players who focus on this area. The deal also includes a data centre in South Moravia, which we repaired in a flash after a devastating tornado struck the region in 2021, that meets the highest operational criteria. Of course, count on continued cooperation with České Radiokomunikace, as our portfolios complement each other well," added Milan Sameš, CEO of ARICOMA.
C4C's main business activity is providing infrastructure for the operation of applications and data storage as a service. The vPDC (virtual private data centre) service is offered through its own automated cloud platform, Virtix, to which most of Aricoma's cloud customers have been gradually migrated. Virtix enables the provisioning of infrastructure as a service, as well as many additional services such as backup to the cloud (Veeam Cloud Connect) or infrastructure for critical SAP S/4HANA systems.
As of August 2023, C4C had approximately 28 employees, mainly based in the Czech Republic.
About Cordiant Digital Infrastructure Limited
Cordiant Digital Infrastructure Limited primarily invests in the core infrastructure of the digital economy – data centres, fibre-optic networks and telecommunication and broadcast towers in Europe and North America. Further details about the Company can be found on its website at www.cordiantdigitaltrust.com.
In total, the Company has successfully raised £795 million in equity, along with a further €200m through a Eurobond with four European institutions; deploying the proceeds into four acquisitions: CRA, Hudson Interxchange, Emitel and Speed Fibre, which together offer stable, often index-linked income, and the opportunity for growth, in line with the Company's Buy, Build & Grow model.
About CRA
CRA is a diversified digital infrastructure company, operating mobile towers, a broadcast network, data centres, a fibre network and Internet of Things networks serving utilities. The company has a history of offering superior customer service, increasingly through integrated solutions spanning the spectrum of digital infrastructure.
About Aricoma Group
ARICOMA Group is a leading European provider of IT services. It serves over 6,000 clients in thirty countries and employs over five thousand professionals. The group comprises companies operating under two main brands – Aricoma and Qinshift. Aricoma offers services in the areas of IT infrastructure, cloud technologies, enterprise applications, cybersecurity, public sector digitization, and system integration. Qinshift brings together companies specializing in software design for the commercial sector, outsourcing, and consulting.
About KKCG
KKCG is an investment and innovation group with expertise in lotteries and gaming, energy, technology, and real estate. Founded by entrepreneur, investor, and philanthropist Karel Komárek, KKCG employs over 10,000 people in 36 countries across its portfolio companies, with more than €8 billion in assets under management. Its businesses include, amongst others, ARICOMA GROUP Holding a.s., comprehensive IT services provider and custom software development worldwide; Allwyn, a multi-national lottery operator with leading market positions in Austria, Czech Republic, Greece, Cyprus, Italy, the United Kingdom and the United States (Illinois); MND Group, an international producer and supplier of traditional and renewable energy, active in drilling and exploration, energy storage, retail, and trading; and KKCG Real Estate which creates internationally recognized, award-winning architecture in the residential, commercial, and industrial sectors with a focus on innovative and sustainable development. With operations on several continents, KKCG businesses draw on capital, networks, and insights from across the group to enable profitable, sustainable growth for the long term. KKCG is committed to supporting the communities where it operates, contributing to the societies it works within.
FreeBSD X11 config in VirtualBox
PACKAGES
pkg install virtualbox-ose-additions
pkg install drm-kmod
/etc/rc.conf
ifconfig_em0="DHCP"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
kld_list="/boot/modules/vboxvideo.ko"
zfs_enable="YES"
dbus_enable="YES"
moused_enable="YES"
devd_enable="YES"
vboxguest_enable="YES"
vboxservice_enable="YES"
tailscaled_enable="YES"
/boot/loader.conf
drm_load="YES"
vboxdrv_load="YES"
vboxguest_load="YES"
vboxvideo_load="YES"
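After a reboot, a quick sanity check that the VirtualBox guest and DRM modules actually loaded (assuming the packages above are installed):
kldstat | grep -i -E 'vbox|drm'   # loaded kernel modules
sysrc -a | grep -i vbox           # vbox-related rc.conf knobs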
Thursday, January 23, 2025
The secrets of MTU - L2 MTU vs. L3 MTU - Where does the fragmentation happen?
Source: https://www.packetstreams.net/2018/07/the-secrets-of-mtu-l2-mtu-vs-l3-mtu.html
"The
Maximum Transmission Unit (MTU) is the largest possible frame size of a
communications Protocol Data Unit (PDU) on an OSI Model Layer 2 data
network." In today's network the standard MTU for Layer 3 IP packet is
1500 bytes. Meanwhile, the standard MTU for Layer 2 Ethernet frame is
1514 bytes ( 6 bytes source MAC + 6 bytes destination MAC + 2 bytes
EtherType + 1500 bytes IP packet). For the Dot1Q trunk frame, extra 4
bytes for Dot1Q tag is added. So up to here, we understand that there
are two types of MTUs. MTU for layer 2 frames and MTU for layer 3
packets.
OK, now we understand L2 MTU vs. L3 MTU. Another interesting question comes up: when an L3 IP packet passes through a router, where is the MTU checked and where does fragmentation happen, on the inbound interface or the outbound interface? To find the answer to this question, let's do a simple lab with the following topology.
Let's send another IP packet with size 1500 from R3 to R1. This time, we can see the packet has been fragmented into two by R2.
By doing this lab, we learn that the IP MTU is checked at the outgoing interface of the router. If fragmentation is required, the packet is fragmented at the outgoing interface. So if the DF bit is set, the packet will be dropped at the outbound interface as well.
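A quick way to observe this behaviour yourself is a ping with the DF bit set. On a Linux host it looks roughly like this (the target address is a placeholder):
ping -M do -s 1472 192.0.2.1   # 1472 B payload + 8 B ICMP + 20 B IP = 1500 B, fits a 1500-byte MTU
ping -M do -s 1473 192.0.2.1   # 1501 B total; with DF set this fails with "Frag needed" instead of being fragmented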
Thursday, January 16, 2025
How to simulate HDD serial number on VMware Workstation or Fusion
Question: Is possible to emulate HDD serial number on VMware Workstation?
Answer ...
Yes, it is possible to emulate or specify a custom HDD serial number on VMware Workstation. You can do this by editing the virtual machine's configuration file (.vmx). Here's how:
Steps to Set a Custom HDD Serial Number
1. Power Off the Virtual Machine
   Ensure the VM is powered off before making changes.
2. Locate the .vmx File
   Find the .vmx file for your virtual machine. It is typically located in the directory where the VM files are stored.
3. Edit the .vmx File
   Open the .vmx file in a text editor (e.g., Notepad on Windows or Vim on Linux).
4. Add or Modify the Following Parameters
   Add a line specifying the desired serial number for the virtual hard disk:
   disk.enableUUID = "TRUE"
   This setting enables the generation of a unique virtual disk serial number for the VM. If you want to specify a custom UUID:
   scsi0:0.virtualDev = "lsilogic"
   scsi0:0.deviceType = "disk"
   scsi0:0.uuid = "12345678-90ab-cdef-1234-567890abcdef"
   Replace scsi0:0 with the appropriate device name for your virtual disk and 12345678-90ab-cdef-1234-567890abcdef with the desired UUID or serial number.
5. Save the .vmx File
   Save the file and ensure no syntax errors are introduced.
6. Start the Virtual Machine
   Power on the VM. The specified serial number or UUID will now be reflected.
Verify the Serial Number
You can verify the serial number within the guest operating system. For example:
- On Windows: Use wmic diskdrive get SerialNumber.
- On Linux: Use lsblk -o NAME,SERIAL or similar commands.
Let me know if you encounter any issues!
Tuesday, January 14, 2025
Broadcom (VMware) Useful Links for Technical Designer and/or Architect
A lot of URLs changed after Broadcom's acquisition of VMware. That's the reason I have started to document some of the links that are useful to me.
VMware Product Configuration Maximums - https://configmax.broadcom.com (aka https://configmax.vmware.com)
Network (IP) ports Needed by VMware Products and Solutions - https://ports.broadcom.com/
VMware Compatibility Guide - https://compatibilityguide.broadcom.com/ (aka https://www.vmware.com/go/hcl)
VMware Product Lifecycle - https://support.broadcom.com/group/ecx/productlifecycle (aka https://lifecycle.vmware.com/)
VMware Hands-On Lab - https://labs.hol.vmware.com/HOL/catalog
Broadcom (VMware) Education / Learning - https://www.broadcom.com/education
VMware Validated Solutions - https://vmware.github.io/validated-solutions-for-cloud-foundation/
If you are an independent consultant and have to open a support ticket related to VMware Education or Certification, you can use the form at https://broadcomcms-software.wolkenservicedesk.com/web-form
VMware Health Analyzer
- Full VHA download: https://docs.broadcom.com/docs/VHA-FULL-OVF10
- Collector VHA download: https://docs.broadcom.com/docs/VHA-COLLECTOR-OVF10
- Full VHA license Register Tool: https://pstoolhub.broadcom.com/
VMware Products Licensing
- VMware vSphere Foundation (VVF) Licensing - https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/8-0/vcenter-and-host-management-8-0/license-management-host-management/what-is-vmware-vsphere-foundation-vvf-solution-licensing-host-management.html
- vSphere Licensing - https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/8-0/vcenter-and-host-management-8-0/license-management-host-management/licensing-for-products-in-vsphere-host-management.html
- vSAN Licensing - https://techdocs.broadcom.com/es/es/vmware-cis/vsan/vsan/8-0/licensing-for-vsphere-iaas-control-plane.html
- Avi Load Balancer Licensing - https://techdocs.broadcom.com/us/en/vmware-security-load-balancing/avi-load-balancer/avi-load-balancer/22-1/vmware-avi-load-balancer-administration-guide/licensing.html
- vSphere Supervisor 8.0 Licensing - https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere-supervisor/8-0/vsphere-supervisor-concepts-and-planning/vsphere-iaas-control-plane-concepts/licensing-for-vsphere-iaas-control-plane.html
- VMware Aria Operations 8.18 Licensing - https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-operations/8-18/vmware-aria-operations-configuration-guide-8-18/about-vmware-aria-operation-licenses.html
- VMware Live Site Recovery 9.0 Licensing - https://techdocs.broadcom.com/us/en/vmware-cis/live-recovery/live-site-recovery/9-0/overview/site-recovery-manager-system-requirements/srm-licensing.html
- VMware HCX 4.11 Licensing - https://techdocs.broadcom.com/us/en/vmware-cis/hcx/vmware-hcx/4-11/vmware-hcx-user-guide-4-11/preparing-for-hcx-installations/hcx-activation-and-licensing/about-hcx-licensing.html
- VMware Edge Intelligence (VeloCloud) - https://techdocs.broadcom.com/us/en/vmware-sde/velocloud-sase/vmware-edge-intelligence/GA/edge-licensing-with-new-orchestrator-ui.html
Do you know any other helpful link? Use comments below to let me know. Thanks.
Sunday, January 5, 2025
Waydroid
A container-based approach to boot a full Android system on regular GNU/Linux systems running Wayland based desktop environments.
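A rough bring-up on a distribution where the waydroid package is already installed (must be run inside a Wayland session):
sudo waydroid init        # download and configure the Android images
waydroid session start    # start the container session
waydroid show-full-ui     # launch the full Android UI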