Understanding Project Lifecycles and Docker Deployment

This document outlines the typical stages of a project's lifecycle and explores modern approaches, with a focus on Docker deployment.

Project Lifecycles

Software projects, like most endeavors, progress through distinct phases, and the evolution of the internet has significantly reshaped these lifecycles. Project lifecycles can be viewed from two perspectives: a narrow view focused on execution, and a broad view spanning inception to termination. This guide primarily addresses the narrower, execution-oriented lifecycle, using traditional software projects as an example.

Traditional Software Project Lifecycle

A traditional software project typically involves five key stages:

  1. Research/Ideation:
    • Objective: Strategic foresight and solution exploration.
    • Involves: Stakeholders, with Product Managers playing a central role.
    • Outcome: Diverse ideas and proposals, finalized by leadership approval.
  2. Design:
    • Objective: Visualizing the solution.
    • Involves: Product teams leading, with Development, Testing, and Operations contributing.
    • Outcome: Product Requirement Documents and project milestones.
  3. Development:
    • Objective: Implementing the designed solution.
    • Involves: Development teams primarily, with Operations involvement.
    • Outcome: A functional product, ready for the staging/testing phase.
  4. Testing:
    • Objective: Ensuring functional completeness and quality.
    • Involves: Testing teams primarily, with Operations and Development support.
    • Outcome: Project functions meeting defined requirements.
  5. Operations:
    • Objective: Deployment and ongoing maintenance.
    • Involves: Operations teams primarily, with Product and Development support.
    • Outcome: Project completion, feature iteration, or retirement.

Docker Fundamentals

Installation Prerequisites

Ensure your system has internet access.

Uninstalling Older Docker Versions (CentOS/RHEL)


yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine

If the command reports that none of these packages are installed, that is expected on a clean system.

Installing Docker on Linux (Ubuntu/Debian)


# Update package list
sudo apt-get update

# Install prerequisite packages
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key (note: apt-key is deprecated on newer
# Ubuntu/Debian releases, where a keyring-based setup is preferred)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add Docker's APT repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Update package list again
sudo apt-get update

# Install Docker
sudo apt-get install -y docker-ce

# Verify installation
sudo docker --version

Installing Docker Compose on Linux

Docker Compose is used for managing multi-container Docker applications.


# Download Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# Grant execute permissions
sudo chmod +x /usr/local/bin/docker-compose

# Verify installation
docker-compose --version

Installing Docker on Linux (CentOS/RHEL using Yum)


# Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Set up the stable Docker repository (choose one)
# Official Docker Hub
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Alibaba Cloud mirror
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Update yum package index
sudo yum makecache fast

# Install a specific version (optional)
# List available versions
yum list docker-ce --showduplicates | sort -r
# Example: Install a specific version
# sudo yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io

# Install the latest version
sudo yum -y install docker-ce

# Start the Docker service
sudo systemctl start docker.service

# Enable Docker to start on boot
sudo systemctl enable docker.service

# Verify Docker status
sudo systemctl status docker.service

# Check Docker version
docker version

# View Docker information
docker info

Note: Docker currently only supports 64-bit systems.

Before installation, the firewall and SELinux are often disabled in test environments to rule out interference (this is not advisable on production systems):


sudo systemctl stop firewalld.service
sudo setenforce 0

Package Explanations:

  • yum-utils: Provides the yum-config-manager utility.
  • device-mapper: A Linux kernel framework for logical volume management. The device-mapper storage driver requires device-mapper-persistent-data and lvm2.

Docker Service Management:


# Start Docker service
sudo systemctl start docker.service

# Enable Docker on boot
sudo systemctl enable docker.service

Docker consists of a server (daemon) that manages containers and a client that communicates with the server. Typically, both run on the same machine.

Docker Image Acceleration

To speed up image downloads, configure a registry mirror. You can obtain configuration details from platforms like Aliyun's Container Registry.


sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://YOUR_MIRROR_URL.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Installing Docker Tooling via npm (Less common for server deployments)

Note that the docker and docker-compose packages on npm are Node.js wrappers for programmatic use; they require an existing Docker installation and do not install the Docker engine itself:


# Node.js wrapper around the Docker CLI/API
npm i docker -g

# Node.js wrapper around docker-compose
npm i docker-compose -g

Deploying a Flask Project as a Web Service with Docker

Containerizing a Flask application using Docker is a standard practice for portability and maintainability. Here's a comprehensive guide:

1. Project Setup

Ensure your Flask application is ready, with an entry point like app.py.
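If you are starting from scratch, a minimal app.py might look like the following sketch (the route and message are illustrative). One detail matters for containers: the server must bind to 0.0.0.0, not 127.0.0.1, or the port mapped by Docker will not be reachable from the host.

```python
# app.py -- minimal Flask entry point (illustrative; adapt routes to your
# project, and keep the port in sync with the Dockerfile's EXPOSE directive)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Flask in Docker!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server is reachable from outside the container;
    # binding to 127.0.0.1 would make the mapped port unreachable.
    app.run(host="0.0.0.0", port=8000)
```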

2. Generate Requirements File

List your project's dependencies:


pip freeze > requirements.txt

3. Prepare Project Directory

Create a dedicated directory for your Docker setup and copy your project files into it.


mkdir flask_docker_project
cd flask_docker_project
cp -r /path/to/your/flask_app/* .

4. Create a Dockerfile

In the root of your project directory, create a file named Dockerfile with the following content for a Python 3.8 application:


# Use an official Python runtime as a parent image
FROM python:3.8-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 8000 available to the world outside this container
EXPOSE 8000

# Define an environment variable (illustrative; the key=value form is preferred)
ENV NAME=World

# Run app.py when the container launches
CMD ["python", "app.py"]

Example Dockerfile for .NET Core:


FROM mcr.microsoft.com/dotnet/core/aspnet:2.1
EXPOSE 9401
WORKDIR /app
COPY . /app
ENTRYPOINT ["dotnet", "helloworld.dll"]

5. Build the Docker Image

Navigate to the directory containing your Dockerfile and run:


docker build -t your-flask-app:v1.0 .

6. Run the Docker Container

a) Single Container Execution

Start a container from your newly built image:


docker run -d -p 8000:8000 --name my-flask-app your-flask-app:v1.0

  • -d: Run in detached mode (background).
  • -p 8000:8000: Map host port 8000 to container port 8000.
  • --name my-flask-app: Assign a name to the container.

b) Using Docker Compose

For multi-container applications or more complex configurations, create a docker-compose.yml file:


version: '3.8'

services:
  app:
    image: your-flask-app:v1.0
    container_name: my-flask-app-compose
    ports:
      - "8000:8000"
    environment:
      - FLASK_ENV=production
    volumes:
      - ./data:/app/data # Example volume mount

Start the services defined in the compose file:


docker-compose up -d

Server Deployment

Transferring Docker Images

You can transfer Docker images between machines by saving them to a tar archive and then loading them on the target server.

1. Save Image to Tarball (Source Machine)

First, ensure the image is built on the source machine.


# Example: Build an image named dify-web with tag 1.2
# sudo docker build -t dify-web:1.2 .

# Save the image to a tar file
sudo docker save -o dify-web.tar dify-web:1.2

You can then copy this dify-web.tar file to your target server using tools like scp.


scp dify-web.tar user@target-server:/path/to/destination

2. Load Image on Target Server

On the target server, load the image from the tarball:


sudo docker load -i /path/to/destination/dify-web.tar

3. Verify Image Loading

Check if the image is available:


docker images | grep dify-web

4. Run the Container

Start a container from the loaded image:


sudo docker run -d -p 3000:3000 --name dify-web-container dify-web:1.2

Adjust port mappings (-p) as needed.

5. Deploying with Docker Compose

If your deployment uses Docker Compose:

  1. Copy the docker-compose.yaml file to the target server using scp.
  2. Navigate to the directory containing the docker-compose.yaml file on the target server.
  3. Start the services:

cd /path/to/destination
sudo docker-compose up -d

Alternative: Pushing to a Registry

For easier management across multiple servers, consider pushing your image to a Docker registry like Docker Hub or a private registry.


# Log in to Docker Hub
docker login

# Tag the image for the registry
docker tag your-local-image:tag your-dockerhub-username/your-repo-name:tag

# Push the image
docker push your-dockerhub-username/your-repo-name:tag

On the target server, you can then pull and run the image:


# Pull the image
docker pull your-dockerhub-username/your-repo-name:tag

# Run the container
docker run -d -p 8000:8000 your-dockerhub-username/your-repo-name:tag

Automated Deployment Script Example

A shell script can automate the build, stop, remove, and run process for a container.


#!/bin/bash
IMAGE_NAME="helloworld"
CONTAINER_NAME="helloworld_con"
CURRENT_TIME=$(date "+%Y%m%d%H%M%S")
TAG="${IMAGE_NAME}:${CURRENT_TIME}"

# Build the Docker image with a timestamp tag
echo "Building Docker image: ${TAG}"
docker build -t ${TAG} .

# Stop and remove the existing container if it's running
echo "Stopping and removing existing container: ${CONTAINER_NAME}"
docker stop ${CONTAINER_NAME} || true
docker rm ${CONTAINER_NAME} || true

# Run the new container
echo "Running new container: ${CONTAINER_NAME} with image ${TAG}"
docker run \
  --name ${CONTAINER_NAME} \
  -d \
  --restart=always \
  -p 9401:9401 \
  ${TAG}
  # To join a network or link another container, insert options such as
  # --network=test_default or --link mysql before ${TAG} above.

echo "Deployment complete."

Understanding Docker Images

What is a Docker Image?

A Docker image is a read-only template containing the application's filesystem, code, libraries, dependencies, and tools required for execution. It functions as a blueprint from which multiple containers can be instantiated. Images are composed of stacked layers, forming a virtual file system through Union File Systems (UnionFS).

Docker Image Loading Principle

Docker images are built in layers:

  • bootfs: The bottom layer, containing the bootloader and kernel, similar to a base Linux system. This layer is mounted temporarily during the OS boot process and then unmounted once the kernel is running in memory.
  • rootfs: Sits above bootfs and includes the standard Linux directories and files (e.g., /dev, /proc, /bin, /etc). This layer defines the operating system distribution (e.g., Ubuntu, CentOS).

Viewing Image and Container Status

List available images:


docker images

List running containers:


docker ps

List all containers (including stopped ones):


docker ps -a

Accessing Applications

You can access a deployed application via its mapped host IP address and port, or potentially by container name if within the same Docker network.
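A quick post-deployment health check can be scripted with only the standard library; a minimal sketch (the address in the example is an assumption, so substitute your host IP and mapped port):

```python
import urllib.request

def check_service(url, timeout=5):
    """Return True if the service at `url` answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers connection refused, timeouts, and HTTP errors
        return False

# Example (hypothetical address -- use your host IP and mapped port):
# check_service("http://127.0.0.1:8000/")
```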

Basic Docker Commands

  • Create & Run Container: docker run [options] image:tag
  • List Containers: docker ps (running), docker ps -a (all)
  • View Logs: docker logs -f [container_name_or_id]
  • Remove Container: docker rm [container_name_or_id]
  • Inspect Container: docker inspect [container_name_or_id]

Carefully plan port mappings and other configurations when creating containers.
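When deployments are scripted, the run options above can be assembled programmatically instead of hand-edited. A minimal sketch (the build_run_command helper is hypothetical):

```python
# Hypothetical helper that assembles a `docker run` argv list from options.
def build_run_command(image, name=None, ports=None, env=None, detach=True):
    """Build the argument list for `docker run` (ports maps host -> container)."""
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")
    for host_port, container_port in (ports or {}).items():
        cmd += ["-p", f"{host_port}:{container_port}"]
    for key, value in (env or {}).items():
        cmd += ["-e", f"{key}={value}"]
    if name:
        cmd += ["--name", name]
    cmd.append(image)
    return cmd

# Example: reproduces the Flask run command from the earlier section.
print(" ".join(build_run_command("your-flask-app:v1.0", name="my-flask-app",
                                 ports={8000: 8000})))
```

The returned list can be passed straight to subprocess.run(), which sidesteps shell-quoting issues.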

Running MySQL with Docker

Pulling MySQL Image

Download the MySQL 5.7 image:


docker pull mysql:5.7

Starting MySQL Container

Run a MySQL 5.7 container with persistent storage and custom configurations:


docker run -d \
  -p 3306:3306 \
  --name mysql \
  -v /mydata/mysql/log:/var/log/mysql \
  -v /mydata/mysql/data:/var/lib/mysql \
  -v /mydata/mysql/conf:/etc/mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  --network=my_custom_network \
  mysql:5.7

  • -p 3306:3306: Maps host port 3306 to container port 3306.
  • -v ...: Mounts host directories for logs, data, and configuration.
  • -e MYSQL_ROOT_PASSWORD=root: Sets the root password for initialization.
  • --network=my_custom_network: Connects the container to a specific Docker network (create it first with docker network create my_custom_network).

Accessing the MySQL Container

Enter the running MySQL container:


docker exec -it mysql /bin/bash

Log in to the MySQL client:


mysql -uroot -proot --default-character-set=utf8

Create a new user ('reader' with password '123456') and grant privileges for remote access:


GRANT ALL PRIVILEGES ON *.* TO 'reader'@'%' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;

Create a project database (e.g., fecmall):


CREATE DATABASE fecmall CHARACTER SET utf8;
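From application code, the containerized database is reached through the mapped host port. A minimal sketch using the third-party PyMySQL driver (pip install pymysql); the host, credentials, and database below mirror the commands above and are otherwise assumptions:

```python
# Connection settings mirror the docker run / GRANT / CREATE DATABASE steps above.
MYSQL_PARAMS = {
    "host": "127.0.0.1",   # the Docker host
    "port": 3306,          # host port mapped with -p 3306:3306
    "user": "reader",      # user created by the GRANT statement
    "password": "123456",
    "database": "fecmall",
    "charset": "utf8",
}

def fetch_server_version():
    """Connect through the mapped port and return the MySQL version string."""
    import pymysql  # imported lazily so the module loads without the driver
    conn = pymysql.connect(**MYSQL_PARAMS)
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT VERSION()")
            return cursor.fetchone()[0]
    finally:
        conn.close()
```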

Tags: docker dockerfile docker-compose Flask deployment

Posted on Wed, 13 May 2026 01:29:21 +0000 by sapna