Category Archives: NAT in ESXi
I am setting up a demo system on a leased dedicated server. The system consists of:
- An ESXi host with 1 NIC assigned (so 1 static IP)
- A 64-bit Ubuntu guest as the server
I installed and configured the system, and the current network topology has:
- The physical adapter
- A management network, where I see the public IP assigned to me by the dedicated server provider
- A Virtual Machine Port Group, where my guest is running
- Finally, a vSwitch between the physical adapter and the networks mentioned above
I can access the ESXi host, and from the vSphere Client I can access my Ubuntu guest as well. The guest has access to the web (verified by pinging).
My question: what kind of basic setup would allow external users to access services running on the Ubuntu guest?
Before asking this question I browsed a bit and scanned through the VMware documentation. I have seen:
- Port forwarding via the router; however, I do not have control over the router.
- Using pfSense; this looks like a solution, but it is more complicated than I expected.
Are there any simpler ways to accomplish my goal?
Note: I am a software developer with some familiarity with computer networks, virtualization, and Linux, so I would really appreciate simple solutions (if possible) and explanations/directions on the topic.
It is doable, but the bad news is that vSwitches to this day still don't support NAT, so some kind of setup with a routing front end, such as a pfSense VM, is needed.
That said, pfSense is not the only choice: you can use a plain Linux VM (e.g., Ubuntu Server) with iptables to do the job.
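As a rough sketch of the iptables approach: assuming the Linux VM has one NIC on the public vSwitch and one on the private vSwitch (the interface names, subnet, and guest IP below are all placeholders), NAT plus a port forward to a guest webserver could look like:

```shell
#!/bin/sh
# Sketch only: interface names, subnet, and guest IP are assumptions.
WAN=eth0                 # NIC on the public vSwitch
LAN=eth1                 # NIC on the private vSwitch
GUEST=192.168.100.10     # private IP of the Ubuntu guest

# Let the VM route packets between its interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade outbound traffic from the private network
iptables -t nat -A POSTROUTING -o "$WAN" -s 192.168.100.0/24 -j MASQUERADE

# Forward inbound HTTP on the public IP to the guest
iptables -t nat -A PREROUTING -i "$WAN" -p tcp --dport 80 \
         -j DNAT --to-destination "$GUEST":80
iptables -A FORWARD -i "$WAN" -o "$LAN" -p tcp -d "$GUEST" --dport 80 -j ACCEPT
```

Repeat the DNAT/FORWARD pair for each service you want to expose; remember to persist the rules across reboots (e.g., with iptables-persistent on Ubuntu).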
Additional IP – Simplest Way
Another option is to get an additional IP and set the VM to use bridged networking, which is the easiest approach, but it may (very likely) come with additional cost.
I am fairly new to ESXi but have decided to dive in, and I have found that things are not as easy as I had expected (no doubt primarily due to my current lack of knowledge on the matter).
What I have:
- A dedicated server with 1 NIC running ESXi
- A single (public) IP address for the host
- A set of (public) IP addresses intended for any use I see fit. To keep things simple, let’s imagine a single webserver for now.
What I want to achieve:
- Secure ESXi management; I really feel that a publicly accessible management host is wrong.
- I don’t have any physical routers at my disposal so I cannot hide the host behind a physical VPN.
- Public access to some of my guest systems
- Additional guests need to sit on a private network.
- Public and private guests should optionally be able to communicate via the private network.
Currently, I’m a bit lost on how to tackle this. I could probably get something running, but I don’t want to start from the wrong basis or make choices that turn out to be insecure.
Any help is appreciated.
UPDATE: what I have achieved so far (and network screenshot):
- ESXi is up and running, still on the public interface
- I have configured a pfSense guest
- I have configured a DSL desktop to reach the pfSense guest through the private network.
I still feel that hiding ESXi behind a virtual VPN is quite risky, since I do not have console access. If I am overlooking something, or if any alternatives are possible, I’d really like to know.
- Create (at least) two vSwitches, one “public”, connected to one of the server NICs and one “private”, which is not attached to any physical NIC.
- Pick an RFC 1918 subnet to use on the private vSwitch.
- Install pfSense in a VM, assigning its WAN interface to the public vSwitch and its LAN interface to the private vSwitch. Additionally, assign the VMware VMkernel management port to the private vSwitch.
- Set up a VPN in pfSense along with appropriate routing to get to the private network. OpenVPN is quite easy to set up, but IPsec would be fine as well.
- For any server VMs you have, assign their interface to the private network.
- Create Virtual IPs in pfSense for the rest of your public IP addresses, then set up port forwards for any services you need people to be able to access from outside the host.
At this point, the pfSense VM will be the only way traffic can get from the outside to the rest of your servers and management interfaces. As such, you can specify very specific rules about which traffic is allowed and which is blocked. You will be able to use the vSphere Client after connecting to the VPN you configured in step 4.
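For reference, the vSwitch layout from step 1 can also be created from the ESXi shell with esxcli (the vSwitch, uplink, and port-group names below are just examples):

```shell
# Create the two vSwitches (names are examples)
esxcli network vswitch standard add --vswitch-name=vSwitchPublic
esxcli network vswitch standard add --vswitch-name=vSwitchPrivate

# Attach a physical NIC to the public vSwitch only;
# the private vSwitch deliberately gets no uplink
esxcli network vswitch standard uplink add --vswitch-name=vSwitchPublic --uplink-name=vmnic0

# Port groups for the pfSense WAN and LAN interfaces
esxcli network vswitch standard portgroup add --portgroup-name=WAN --vswitch-name=vSwitchPublic
esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitchPrivate
```

Because vSwitchPrivate has no physical uplink, nothing on it can reach the outside world except through the pfSense VM.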
In our last article, we went through the workflow to add each host to our Distributed Virtual Switch, as well as adding the requisite VMkernel ports for each host. In this article, we’ll build upon that as we configure software iSCSI on our hosts.
Add the Software iSCSI Adapter
1. In the vSphere Web Client Home page, click on the Host and Clusters icon.
2. Click the Host you want to configure (1), Manage (2), then Storage (3).
3. Click the green + symbol (1), then Software iSCSI Adapter (2).
4. Click OK on the Add Software iSCSI Adapter popup.
5. You should now see the software iSCSI adapter, named vmhba33 or something similar.
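If you prefer the command line, the same thing can be done from the ESXi shell (the resulting adapter name will vary by host):

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Verify it is enabled and find the new adapter (e.g. vmhba33)
esxcli iscsi software get
esxcli iscsi adapter list
```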
Configure Network Adapters
Since we already added and configured the VMkernel adapters in an earlier article, there’s only one thing we need to do here: set each iSCSI distributed port group to use only a single uplink. Since the procedure is the same for both distributed port groups, I’ll only show it once.
1. In the vSphere Web Client Home page, click on the Networking icon.
2. Right-click on iSCSI1 (1), then click Edit Settings (2).
3. Click Teaming and failover, then move all uplinks but one into Unused uplinks. In this case, iSCSI1 will have Uplink 3 and iSCSI2 will have Uplink 4. Click OK.
Add the iSCSI Software Adapter Network Port Binding
1. Back in Hosts and Clusters > Host > Manage > Storage, click on vmhba33, Network Port Binding, then the green + to add binding.
2. Choose the first VMkernel adapter in the list, then click OK. Repeat these steps for the second VMkernel adapter.
3. Your Software iSCSI adapter should now show two unused VMkernel adapters.
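The same port binding can be sketched with esxcli, assuming vmhba33 is the software adapter and vmk1/vmk2 are the iSCSI VMkernel ports created in the earlier article (adjust the names for your host):

```shell
# Bind each iSCSI VMkernel port to the software adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Confirm both bindings are in place
esxcli iscsi networkportal list --adapter=vmhba33
```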
Add New iSCSI Targets
1. On the iSCSI Software Adapter, click Targets, then Add.
2. Fill in the IP address or FQDN of your iSCSI target. Click OK. Repeat for each additional target.
3. Click Rescan to pick up the new block devices.
4. Click OK to scan for new devices and VMFS volumes.
5. We now see the expected LUNs, across both paths.
6. Now, rinse and repeat for the other hosts in your cluster.
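The target and rescan steps above map onto esxcli as well; the target address here is a placeholder for your storage array’s portal:

```shell
# Add a dynamic (SendTargets) discovery address -- IP:port is an example
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260

# Rescan the adapter to pick up the new devices and VMFS volumes
esxcli storage core adapter rescan --adapter=vmhba33

# List the discovered target portals
esxcli iscsi adapter target portal list --adapter=vmhba33
```

Scripting this makes the “rinse and repeat for the other hosts” step considerably less tedious.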
And that’s it for this article. Check back next time for when we actually do something with these LUNs.