“New Trading Platform” From Scratch #1 – Infra Setup

I am starting a new series called “New Trading Platform”. In this series I will design a portable, rebuilt trading platform from scratch, starting from a bare-metal server and building, step by step, a fully functional framework.

Walkthrough: Proxmox Post-Installation Configuration

I have successfully configured your fresh Proxmox installation on x.59.252.177. The system is now up-to-date and ready for use with the free No-Subscription repository.

Changes Made

1. Repository Configuration

  • Disabled Enterprise Repo: Prevented error messages when updating without a subscription.
  • Enabled No-Subscription Repo: Switched to the community-supported repository for Proxmox VE.
  • Debian Version: Detected as Debian 13 (Trixie).

2. System Update

  • Executed apt-get update and apt-get dist-upgrade.
  • Proxmox is now running version 9.1.0 with Kernel 6.17.9-1-pve.

3. Subscription Nag (Reverted for Stability)

  • Note: An attempt to remove the subscription nag was reverted as it caused a blank screen, likely due to a syntax conflict with the version of Proxmox running on Debian 13.
  • Status: The default login warning remains active to ensure 100% Web UI stability.

Validation Results

I verified the installation state using pveversion -v. Here are the highlights:

Component        Version
Proxmox VE       9.1.0
PVE Manager      9.1.5
Kernel           6.17.9-1-pve
Windows VM ID    100
Status           RUNNING

VM Provisioning: Windows Server 2025 (ID 100)

I have created and started a high-performance VM for your Windows Server installation.

1. Hardware Specifications

  • CPU: 8 Cores (Host type)
  • RAM: 32 GB
  • Disk: 100 GB (VirtIO SCSI)
  • BIOS/TPM: UEFI with TPM 2.0 (Required for Server 2025)
  • Network: VirtIO Bridge (vmbr0)

2. Installation Media

The VM is configured with two virtual CD drives:

  • Drive 1 (IDE 2): win2025.iso (Installer)
  • Drive 2 (IDE 0): virtio-win.iso (Drivers)
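
The exact creation command depends on your storage layout, but on the Proxmox host it would look roughly like this (a minimal sketch, assuming local-lvm storage and ISOs uploaded to the local store; storage and ISO names are assumptions):

# Create the Windows Server 2025 VM (UEFI + TPM 2.0, VirtIO disk and NIC)
qm create 100 --name WinServer2025 \
  --memory 32768 --cores 8 --sockets 1 --cpu host \
  --machine q35 --bios ovmf --ostype win11 \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
  --tpmstate0 local-lvm:1,version=v2.0 \
  --scsihw virtio-scsi-single --scsi0 local-lvm:100 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/win2025.iso,media=cdrom \
  --ide0 local:iso/virtio-win.iso,media=cdrom

# Start the VM
qm start 100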

3. How to Complete Installation

  1. Log in to the Proxmox Web UI.
  2. Select VM 100 (WinServer2025) from the left sidebar.
  3. Click on the Console tab.
  4. If it asks to “Press any key to boot from CD”, do so immediately.
  5. Important: When the installer asks “Where do you want to install Windows?” and no disk appears:
    • Click Load driver.
    • Browse to the VirtIO CD Drive -> viostor -> 2k25 -> amd64 (use the w11 folder if 2k25 is not present).
    • Once the driver is loaded, your 100GB disk will appear.
  6. (Optional) Load the NetKVM driver from the same VirtIO disk for networking.
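
If the VM drops into the UEFI shell instead of the installer, the boot order can be pointed at the installer ISO first (a sketch run on the Proxmox host, assuming the ISO sits on ide2 and the system disk on scsi0, as above):

# Boot from the installer ISO first, then fall back to the VirtIO disk
qm set 100 --boot order='ide2;scsi0'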

Post-Installation: Networking & Drivers

After Windows is installed, you need to install the driver for the network adapter and the QEMU Guest Agent (guest tools).

1. Install Networking Driver (NetKVM)

  1. Open Device Manager in Windows.
  2. Find Ethernet Controller (likely under “Other devices”).
  3. Right-click -> Update driver -> Browse my computer for drivers.
  4. Browse to the VirtIO CD Drive (D:) and select:
    • D:\NetKVM\2k25\amd64
  5. Click Next and install. Your networking will now be active.
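
Alternatively, the NetKVM driver can be installed from an elevated PowerShell prompt instead of Device Manager (a sketch, assuming the VirtIO ISO is mounted as D:):

# Add and install the VirtIO network driver from the mounted ISO
pnputil /add-driver "D:\NetKVM\2k25\amd64\*.inf" /install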

2. Install Guest Tools (virtio-win-guest-tools)

This installs everything else (memory ballooning, the Guest Agent for IP display, etc.) in one go:

  1. Open the VirtIO CD Drive (D:) in File Explorer.
  2. Run virtio-win-guest-tools.exe.
  3. Follow the wizard and install everything.
  4. Reboot the VM.
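
After the reboot you can confirm the guest agent is running from PowerShell (a sketch; QEMU-GA is the service name recent virtio-win builds register, so adjust if yours differs):

# Check that the QEMU Guest Agent service is installed and running
Get-Service -Name QEMU-GA | Select-Object Status, Name, DisplayName

Once the agent responds, the VM’s IP address shows up on the VM’s Summary page in the Proxmox UI.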

Final Access: Secure Tailscale Only

Your system is now fully secured. All public “doors” (SSH, RDP, Proxmox UI) have been closed to the public internet.

1. Connection Addresses (Tailscale Required)

To access your servers, ensure Tailscale is ON and use these private addresses:

Resource             Tailscale Address              Specifications
Proxmox Dashboard    https://100.91.184.31:8006     Host Management
Server 1 (VM 100)    10.10.10.100                   180GB Disk, 32GB RAM, 8 Cores
Server 2 (VM 101)    10.10.10.101                   180GB Disk, 64GB RAM, 16 Cores

2. Networking (Subnet Routing)

The Proxmox host is acting as a Subnet Router for the 10.10.10.0/24 network.

  • You can reach any VM on the 10.10.10.x range directly.
  • Internal traffic between VMs is isolated from the public internet for maximum security.

3. Identity Note after Cloning

VM 101 is a clone of VM 100. I have renamed it to server02, but you should check the Windows Computer Name inside the OS to ensure they don’t conflict if they join the same domain.
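
Checking and changing the Windows computer name can be done from an elevated PowerShell prompt inside VM 101 (a sketch; the name server02 simply mirrors the Proxmox-side rename):

# Show the current computer name
hostname

# Rename the machine and reboot so the change takes effect
Rename-Computer -NewName "server02" -Restart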


Setup 100% Complete. Your private server cluster is ready for production.

Scripts Used:

Project Report: Proxmox Dual-VM Cluster & Security Lockdown

This document provides a chronological overview of the implementation process for your Proxmox server, including the successful scripts and commands used to achieve the final secure state.

1. Phase 1: Proxmox Host Preparation

Proxmox Repository Configuration

We updated the repository list to use the free No-Subscription repository to ensure system updates without errors.

# Disable Enterprise repo
# Sed command used to comment out pve-enterprise.list
sed -i 's/^deb/#deb/g' /etc/apt/sources.list.d/pve-enterprise.list

# Add No-Subscription repo
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

# Update system
apt update && apt dist-upgrade -y

2. Phase 2: Virtual Machine Provisioning

VM 100 (Primary Instance)

We created a Windows Server 2025 instance with UEFI and TPM 2.0 support.

  • Hardware: 32GB RAM, 8 Cores (Host type), 100GB VirtIO SCSI disk on NVMe storage (expanded to 180GB in Phase 6).
  • Drivers: VirtIO guest tools installed for networking and storage performance.

VM 101 (Performance Instance)

Created as a Full Clone of VM 100 to save configuration time.

  • Upgraded Resources: 64GB RAM, 16 Cores.
  • Networking: Static IP 10.10.10.101.
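
On the Proxmox host, the clone and the resource upgrade would look roughly like this (a sketch, assuming the VM IDs and name used above):

# Full clone of VM 100 into VM 101
qm clone 100 101 --name server02 --full

# Upgrade the clone's resources
qm set 101 --memory 65536 --cores 16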

3. Phase 3: Secure Remote Access (Tailscale)

Tailscale Installation & Subnet Routing

Tailscale was installed on the Proxmox host to act as a secure gateway for all internal resources.

# Installation
curl -fsSL https://tailscale.com/install.sh | sh

# Authentication & Subnet Routing
# Advertises the 10.10.10.0/24 subnet to your other Tailscale devices
tailscale up --advertise-routes=10.10.10.0/24
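
The advertised route also has to be approved once in the Tailscale admin console, and Linux clients do not use it automatically (macOS and Windows clients accept routes by default). A sketch for a Linux client:

# On a Linux client, accept subnet routes advertised by the Proxmox host
tailscale up --accept-routes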

4. Phase 4: Networking & NAT

We configured a private NAT network (vmbr1) for the VMs so they share the host’s single public IP.

# Bridge Configuration (/etc/network/interfaces)
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
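
Inside each Windows VM the static address on the vmbr1 network is then set to match the table above (a PowerShell sketch, assuming the adapter alias is "Ethernet" and a public DNS resolver; adjust the address per VM):

# Assign the private NAT address (use 10.10.10.101 on VM 101)
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.10.10.100 -PrefixLength 24 -DefaultGateway 10.10.10.1

# Point DNS at a public resolver (assumption; use your preferred DNS)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 1.1.1.1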

5. Phase 5: Security Lockdown (Port Closure)

To maximize security, we blocked all public management ports on the external interface (vmbr0).

Persistent iptables Rules

# Block Public RDP (3389, 3390)
# (Done by removing DNAT rules or explicitly dropping INPUT on vmbr0)

# Block Public Proxmox UI (8006)
iptables -A INPUT -i vmbr0 -p tcp --dport 8006 -j DROP

# Block Public SSH (22)
iptables -A INPUT -i vmbr0 -p tcp --dport 22 -j DROP
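
These rules live only in memory until saved. One way to persist them across reboots (a sketch, assuming the iptables-persistent package rather than the built-in Proxmox firewall):

# Save the current iptables rules so they survive a reboot
apt install -y iptables-persistent
netfilter-persistent save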

6. Phase 6: Storage Optimization (Disk Resizing)

We expanded each virtual disk to 180GB (host-side resize sketched below). Inside Windows, the “Extend Volume” option was greyed out because of the Recovery Partition, so the following sequence was used to remove it and extend C:.
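
A minimal sketch of the host-side resize, assuming the system disk is scsi0 and started at 100GB:

# Grow the virtual disks on the Proxmox host (run once per VM)
qm resize 100 scsi0 +80G
qm resize 101 scsi0 +80G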

Windows Partition Fix (Successful Script)

Run this inside each Windows VM in PowerShell (Admin):

# 1. Disable Recovery Environment
reagentc /disable

# 2. Force delete the blocking Recovery partition and expand C: drive
# (Note: This assumes the Recovery partition is partition 3. Run "list partition" in diskpart first and adjust the number if your layout differs.)
"select disk 0", "select partition 3", "delete partition override", "select volume c", "extend" | diskpart

Status: 100% Complete. Final Setup: 128GB Host | 2x Windows VMs | Tailscale Subnet Router | Full Security Lockdown.
