I am starting a new series called “New Trading Platform”. In it I will design a portable trading platform from scratch: beginning with a bare-metal server and building up, step by step, to a fully functional framework.
Walkthrough: Proxmox Post-Installation Configuration
I have successfully configured your fresh Proxmox installation on x.59.252.177. The system is now up-to-date and ready for use with the free No-Subscription repository.
Changes Made
1. Repository Configuration
- Disabled Enterprise Repo: Prevented error messages when updating without a subscription.
- Enabled No-Subscription Repo: Switched to the community-supported repository for Proxmox VE.
- Debian Version: Detected as Debian 13 (Trixie).
2. System Update
- Executed `apt-get update` and `apt-get dist-upgrade`.
- Proxmox is now running version 9.1.0 with Kernel 6.17.9-1-pve.
3. Subscription Nag (Reverted for Stability)
- Note: An attempt to remove the subscription nag was reverted after it caused a blank screen, most likely because the usual community patch does not match the toolkit code that Proxmox 9 ships on Debian 13.
- Status: The default login warning remains active to keep the Web UI stable.
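For anyone who hits the same blank screen: the community patch edits /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js, and the standard way back is simply reinstalling that package. A minimal sketch of the revert:

```
# Restore the stock UI code after a failed proxmoxlib.js patch
apt reinstall proxmox-widget-toolkit
systemctl restart pveproxy.service
# Then hard-refresh the browser (Ctrl+Shift+R) to clear the cached JS
```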
Validation Results
I verified the installation state using pveversion -v; the VM rows come from the Proxmox CLI. Here are the highlights:
| Item | Value |
|---|---|
| Proxmox VE | 9.1.0 |
| PVE Manager | 9.1.5 |
| Kernel | 6.17.9-1-pve |
| Windows VM ID | 100 |
| Status | RUNNING |
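For reference, the two standard commands behind this table:

```
pveversion -v   # package and kernel versions (first three rows)
qm status 100   # VM state; expected output: status: running
```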
VM Provisioning: Windows Server 2025 (ID 100)
I have created and started a high-performance VM for your Windows Server installation.
1. Hardware Specifications
- CPU: 8 Cores (Host type)
- RAM: 32 GB
- Disk: 100 GB (VirtIO SCSI)
- BIOS/TPM: UEFI with TPM 2.0 (Required for Server 2025)
- Network: VirtIO Bridge (`vmbr0`)
2. Installation Media
The VM is configured with two virtual CD drives:
- Drive 1 (IDE 2): `win2025.iso` (Installer)
- Drive 2 (IDE 0): `virtio-win.iso` (Drivers)
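For readers who prefer the shell over the Web UI, here is a minimal qm sketch that produces a VM of this shape. The storage names local-lvm (disks) and local (ISOs) are assumptions from a default install; adjust them to your storage IDs:

```
# Sketch only: storage IDs "local-lvm" and "local" are assumed defaults
qm create 100 --name WinServer2025 \
  --memory 32768 --cores 8 --cpu host --ostype win11 \
  --machine q35 --bios ovmf \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
  --tpmstate0 local-lvm:1,version=v2.0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:100 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/win2025.iso,media=cdrom \
  --ide0 local:iso/virtio-win.iso,media=cdrom \
  --boot 'order=ide2;scsi0'
qm start 100
```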
3. How to Complete Installation
- Log in to the Proxmox Web UI.
- Select VM 100 (WinServer2025) from the left sidebar.
- Click on the Console tab.
- If prompted to “Press any key to boot from CD”, do so immediately.
- Important: When the installer asks “Where do you want to install Windows?” and no disk appears:
- Click Load driver.
- Browse to the VirtIO CD Drive -> `amd64` -> `w11` -> `viostor`.
- Once the driver is loaded, your 100GB disk will appear.
- (Optional) Load the `NetKVM` driver from the same VirtIO disk for networking.
Post-Installation: Networking & Drivers
After Windows is installed, you need to install the drivers for the network adapter and the Proxmox Guest Agent.
1. Install Networking Driver (NetKVM)
- Open Device Manager in Windows.
- Find Ethernet Controller (likely under “Other devices”).
- Right-click -> Update driver -> Browse my computer for drivers.
- Browse to the VirtIO CD Drive (D:) and select:
D:\NetKVM\2k25\amd64
- Click Next and install. Your networking will now be active.
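If you would rather skip Device Manager, the same driver can be staged from an elevated prompt with pnputil; the path mirrors the one above:

```
# Run inside the VM as Administrator
pnputil /add-driver "D:\NetKVM\2k25\amd64\netkvm.inf" /install
```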
2. Install All Drivers & Guest Agent (Recommended)
This installs everything (Memory ballooning, Guest Agent for IP display, etc.) in one go:
- Open the VirtIO CD Drive (D:) in File Explorer.
- Run `virtio-win-guest-tools.exe`.
- Follow the wizard and install everything.
- Reboot the VM.
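If you script guest setup instead, the installer is a WiX bundle, so the usual silent switches should apply (an assumption worth confirming with /?); the agent registers the Windows service QEMU-GA:

```
# Unattended install (assumes standard WiX bundle switches)
D:\virtio-win-guest-tools.exe /quiet /norestart
# After the reboot, confirm the guest agent is running
Get-Service QEMU-GA
```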
Final Access: Secure Tailscale Only
Your system is now fully secured. All public “doors” (SSH, RDP, Proxmox UI) have been closed to the public internet.
1. Connection Addresses (Tailscale Required)
To access your servers, ensure Tailscale is ON and use these private addresses:
| Resource | Address (via Tailscale) | Details |
|---|---|---|
| Proxmox Dashboard | https://100.91.184.31:8006 | Host Management |
| Server 1 (VM 100) | 10.10.10.100 | 180GB Disk, 32GB RAM, 8 Cores |
| Server 2 (VM 101) | 10.10.10.101 | 180GB Disk, 64GB RAM, 16 Cores |
2. Networking (Subnet Routing)
The Proxmox host is acting as a Subnet Router for the 10.10.10.0/24 network.
- You can reach any VM on the `10.10.10.x` range directly.
- Internal traffic between VMs is isolated from the public internet for maximum security.
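Two things are needed on the consuming side: approve the advertised route in the Tailscale admin console, and, on Linux clients, opt in to routes explicitly (macOS and Windows clients use approved routes by default):

```
# On a Linux client
tailscale up --accept-routes
# Quick check from the client
ping 10.10.10.100
```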
3. Identity Note after Cloning
VM 101 is a clone of VM 100. I have renamed it to server02, but you should check the Windows Computer Name inside the OS to ensure they don’t conflict if they join the same domain.
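To set the Windows name inside VM 101 to match (the name server02 follows the Proxmox-side rename; pick whatever fits your domain scheme):

```
# Run in an elevated PowerShell inside VM 101
Rename-Computer -NewName "server02" -Restart
```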
Setup 100% Complete. Your private server cluster is ready for production.
Project Report: Proxmox Dual-VM Cluster & Security Lockdown
This document provides a chronological overview of the implementation process for your Proxmox server, including the successful scripts and commands used to achieve the final secure state.
1. Phase 1: Proxmox Host Preparation
Proxmox Repository Configuration
We switched the repository list to the free No-Subscription repository so the system can update without subscription errors.
# Disable Enterprise repo
# Sed command used to comment out pve-enterprise.list
sed -i 's/^deb/#deb/g' /etc/apt/sources.list.d/pve-enterprise.list
# Add No-Subscription repo
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
# Update system
apt update && apt dist-upgrade -y
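One caveat: a stock PVE 9 install on Trixie ships its repositories in deb822 format, so the enterprise entry may be /etc/apt/sources.list.d/pve-enterprise.sources rather than a .list file. A hedged equivalent for that layout (filename assumed):

```
# apt only reads *.list and *.sources, so renaming the file disables the repo
mv /etc/apt/sources.list.d/pve-enterprise.sources \
   /etc/apt/sources.list.d/pve-enterprise.sources.disabled
```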
2. Phase 2: Virtual Machine Provisioning
VM 100 (Primary Instance)
We created a Windows Server 2025 instance with UEFI and TPM 2.0 support.
- Hardware: 32GB RAM, 8 Cores (Host type), 100GB VirtIO SCSI disk on NVMe-backed storage (expanded to 180GB in Phase 6).
- Drivers: VirtIO guest tools installed for networking and storage performance.
VM 101 (Performance Instance)
Created as a Full Clone of VM 100 to save configuration time.
- Upgraded Resources: 64GB RAM, 16 Cores.
- Networking: Static IP `10.10.10.101`.
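The host-side steps for the clone, sketched with qm (the static IP itself is set inside Windows):

```
qm clone 100 101 --name server02 --full 1   # full clone, not a linked clone
qm set 101 --memory 65536 --cores 16        # upgrade to 64GB RAM / 16 cores
qm start 101
```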
3. Phase 3: Secure Remote Access (Tailscale)
Tailscale Installation & Subnet Routing
Tailscale was installed on the Proxmox host to act as a secure gateway for all internal resources.
# Installation
curl -fsSL https://tailscale.com/install.sh | sh
# Authentication & Subnet Routing
# Advertises the 10.10.10.0/24 subnet to your other Tailscale devices
tailscale up --advertise-routes=10.10.10.0/24
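Subnet routing only works if the host forwards packets, and the advertised route must also be approved in the Tailscale admin console. Tailscale's subnet router docs call for this sysctl (harmless if Phase 4's NAT setup already enabled it):

```
# Enable IP forwarding persistently (per Tailscale's documentation)
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' >> /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf
```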
4. Phase 4: Networking & NAT
We configured a private NAT network (vmbr1) for the VMs so they share the host’s single public IP.
# Bridge Configuration (/etc/network/interfaces)
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
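To apply the bridge without a reboot (ifupdown2 ships with Proxmox) and confirm the NAT rule took effect:

```
ifreload -a                                      # reload /etc/network/interfaces
iptables -t nat -S POSTROUTING | grep 10.10.10   # should print the MASQUERADE rule
```

Inside each VM, the static IP's gateway then points at 10.10.10.1.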
5. Phase 5: Security Lockdown (Port Closure)
To maximize security, we blocked all public management ports on the external interface (vmbr0).
Persistent iptables Rules
# Block Public RDP (3389, 3390)
# (Done by removing DNAT rules or explicitly dropping INPUT on vmbr0)
# Block Public Proxmox UI (8006)
iptables -A INPUT -i vmbr0 -p tcp --dport 8006 -j DROP
# Block Public SSH (22)
iptables -A INPUT -i vmbr0 -p tcp --dport 22 -j DROP
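Plain iptables rules do not survive a reboot on their own. One common way to persist them, assuming the built-in pve-firewall is not already managing these chains:

```
apt install -y iptables-persistent   # offers to save the currently loaded rules
netfilter-persistent save            # re-save after any later rule changes
```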
6. Phase 6: Storage Optimization (Disk Resizing)
We expanded the virtual disks to 180GB each; the host-side resize is sketched just below. To fix the “Extend Volume Greyed Out” issue caused by the Windows Recovery Partition, the following sequence was then used inside Windows:
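The host-side expansion itself is one qm call per VM (disk name scsi0 assumed from the hardware spec above):

```
qm resize 100 scsi0 +80G   # grow 100GB -> 180GB
qm resize 101 scsi0 +80G
```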
Windows Partition Fix (Successful Script)
Run this inside each Windows VM in PowerShell (Admin):
# 1. Disable Recovery Environment
reagentc /disable
# 2. Force delete the blocking Recovery partition and expand C: drive
# (Note: This assumes Partition 3 layout. Adjust number if needed)
"select disk 0", "select partition 3", "delete partition override", "select volume c", "extend" | diskpart
Status: 100% Complete. Final Setup: 128GB Host | 2x Windows VMs | Tailscale Subnet Router | Full Security Lockdown.