Tuesday, June 24, 2014

Embedded "Security" with IPMI and UPnP

First, let me say that I've been outspoken about UPnP on gateway devices since UPnP was first released - it is simply a Bad Idea™.

Cari.net has recently published details of an IPMI vulnerability involving UPnP on server motherboards here.  Basically, it details how the BMC authentication details of almost 32,000 servers are easily available online, in plain text - from the servers themselves.  Add to this the older Linux kernel versions some BMCs were running (any old version of any operating system will contain unpatched vulnerabilities that can be exploited for nefarious purposes) and you have a great recipe for easy and effective hacking of servers.

Not good.  Not good at all.

So, again, can I ask that people administering systems actually do their jobs properly: keep up to date with patches and updates and - particularly - disable vulnerable services on gateway devices and implement decent firewall rules that limit access to the systems that are supposed to be protected behind those firewalls?
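For what it's worth, on a Linux-based gateway the sort of rules I mean might look something like the sketch below. This is only a rough example - the interface name and the management subnet are assumptions you'd need to adjust for your own network - but it shows the idea: stop UPnP/SSDP discovery and IPMI traffic from the Internet ever reaching the kit behind the firewall.

    # Assumed WAN interface name - adjust to suit your gateway
    WAN=eth0

    # Drop UPnP/SSDP discovery traffic (UDP 1900) arriving from the Internet
    iptables -A INPUT   -i "$WAN" -p udp --dport 1900 -j DROP
    iptables -A FORWARD -i "$WAN" -p udp --dport 1900 -j DROP

    # Block IPMI/RMCP (UDP 623) coming in from the Internet entirely, and drop any
    # IPMI traffic that doesn't come from a dedicated management subnet
    # (10.0.99.0/24 is purely an example)
    iptables -A FORWARD -i "$WAN" -p udp --dport 623 -j DROP
    iptables -A FORWARD ! -s 10.0.99.0/24 -p udp --dport 623 -j DROP

Ideally, of course, the BMCs shouldn't be reachable from the Internet at all - rules like these just stop the obvious stuff at the gateway.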


Regards,

The Outspoken Wookie

Wednesday, June 18, 2014

CPU Cores, NUMA Nodes and Performance Issues

I have a client who has been suffering from performance issues on a Remote Desktop Server guest that's running on a Hyper-V server.  I suppose some details may help here:

Original Configuration
Dell PowerEdge T410 Server
2 * Intel Xeon E5649 CPUs @ 2.53GHz
4 * 8GB 1333MHz DDR3 modules (32GB total)
Windows Server 2008 R2 Standard SP1 as the Hyper-V Host OS
 - Windows SBS 2011 (Server 2008 R2 SP1) as a Hyper-V Guest
 - Windows Server 2008 R2 SP1 (Remote Desktop and LOB Server) as a Hyper-V Guest

Current Configuration
Dell PowerEdge T410 Server
2 * Intel Xeon E5649 CPUs @ 2.53GHz
8 * 8GB 1333MHz DDR3 modules (64GB total)
Windows Server 2012 R2 Standard as the Hyper-V Host OS
 - Original Windows SBS 2011 (Server 2008 R2 SP1) as a Hyper-V Guest
 - Original Windows Server 2008 R2 SP1 as a Hyper-V Guest
 - New Windows SBS 2011 (Server 2008 R2 SP1) as a Hyper-V Guest (will replace original instance)
 - New Windows Server 2008 R2 SP1 (RDS) as a Hyper-V Guest (will replace original instance(1))
 - New Windows Server 2008 R2 SP1 (LOB) as a Hyper-V Guest
 - New Windows Server 2012 R2 (LOB) as a Hyper-V Guest

Now, we took this particular client over recently and they have been suffering various performance-related issues as well as LOB-related issues since the new system was installed (Aug-Sep, 2012). We'll just speak about the performance-related issues here...

This system has always been under-performing, sluggish and unstable. None of those are good things. We found a few causes for some of the issues, but realistically we felt the best result would be achieved by upgrading the RAM in the server, rebuilding all of the servers (the software, not the hardware) and adding a few more for application isolation purposes - we're not fans of what was originally done here (running LOB applications on an SBS 2011 box), nor of what was then tried as a fix (running LOB applications on a Remote Desktop Server). As Server 2012 R2 is the current Windows Server release, that's what we decided to run with - partly because its Hyper-V implementation gives us a lot more options, such as live exports and much improved Hyper-V replication.

The one major issue remaining after the RAM and Host OS upgrade was the sluggish performance of the original 2008 R2 RDS guest. It was maxing out its CPU (inside the guest) whilst barely using any host CPU resources (around 16%). And yes, for those wondering, the latest Hyper-V Integration Components are installed.

So, under the original Hyper-V Host (ie, 2008 R2), there were 4 Virtual Processors assigned to each guest, which is the maximum number of Virtual Processors that any guest can have under 2008 R2 Hyper-V.

Now, this is a 2 * 6-core host (i.e. 12 real cores, or 24 Logical Processors with HyperThreading), so we assigned 8 virtual processors to the original RDS guest and 4 to the original SBS guest and moved on to other things, such as building the new servers. Apparently, that's not all we needed to do - the SBS box was running fine using 16% of the host CPU resources, however the RDS box was still CPU-starved.

After a fair bit of investigation, fiddling, Googling, asking questions of people such as Kyle Rosenthal from WindowsPCGuy, and general head scratching, hair pulling and frustration (all round, from both the client and ourselves), I found the issue earlier this afternoon.

But first, things that COULD well have been the issue, but weren't:

1. I thought I bumped the CPU count up but hadn't
2. The host was actually flat-lining its CPU
3. I needed to install the latest Hyper-V Integration components
4. I needed to reinstall the latest Hyper-V Integration components over the top of the existing (latest) components (yet to find a way to actually achieve this)
5. I needed to uninstall and reinstall the latest components (again, yet to find a way to uninstall them)
6. I needed to drop the number of assigned logical processors from 8 back to 4, reboot, then bump from 4 to 8 and reboot again
7. I needed to drop back to a single logical processor, reboot, then up to 8 and reboot again

And now, what I found to be the actual issue: "msconfig" seems to have been run in that 2008 R2 RDS virtual guest at some point and, under Boot/Advanced Options, the number of processors had been limited to 4. I first thought about something like this after seeing that Device Manager showed all 8 virtual processors, but Task Manager/Perfmon only showed 4. So I had a look in "msconfig" and, lo and behold, there was a limit of 4 CPUs set. I unchecked this option, rebooted and, amazingly (well, OK, not really), all 8 CPUs were showing.

So, for good measure, I increased this to 12 virtual processors, rebooted again, and all 12 were showing in Task Manager. WOOHOO!!!

Once you go past 12 virtual processors (in this dual 6-core server), NUMA comes into play. NUMA (Non-Uniform Memory Access) is a way of allowing a processor (in hardware) or a virtual machine (in software) to access local memory faster than remote memory - in hardware, "remote" memory is memory connected directly to the bus of a different physical CPU. NUMA comes into play not only when you assign a large number of virtual processors to a guest, but also when you assign a guest more RAM than is physically attached to one CPU - and you take an approximately 20% performance hit when crossing a NUMA boundary. In this server, each of the two sockets forms a NUMA node with 12 logical processors and (with the DIMMs split evenly between the sockets) 32GB of local RAM, so a guest given more than 12 virtual processors or more than 32GB of RAM has to span NUMA nodes. Because of this performance hit, you can actually make things slower by assigning too many logical processors and/or too much RAM to a virtual guest.

Microsoft has some information on NUMA that states (basically) that the maximum memory in a NUMA node is the amount of physical RAM divided by the number of logical processors. That information seems to be rather outdated for current multi-core CPUs. If you're looking for more up-to-date information on NUMA node boundaries, I strongly suggest having a read of the article on Aidan Finn's blog at http://www.aidanfinn.com/?p=11945, which in turn refers to this blog post:
http://www.benjaminathawes.com/2011/11/09/determining-numa-node-boundaries-for-modern-cpus/

Regards,

The Outspoken Wookie

Firewall with Hyper-V Synthetic NIC Support

I've been looking for a while to find an Open Source firewall for use in a Hyper-V environment. There are a number out there that will install and work, however trying to find one with support for the Hyper-V Synthetic NICs, instead of just the slower Legacy NICs, has been... well... tedious.

I've looked at IPFire before - mainly as it was originally a fork of the IPCop project, which is itself a fork of the SmoothWall project whose development team I was on for some time. I was re-introduced to IPFire recently on a site visit, trying to track down the causes of some ADSL speed issues, and then again last week at a friend's place, where he was running it in his home office.

So, on a whim, I thought I'd give it another go and see if it supported the Hyper-V Synthetic NICs on Hyper-V 2012 R2 and, well, kinda. During the installation, both the Legacy NICs I had added to the Hyper-V guest were detected, as was the default Synthetic NIC I hadn't removed. So I stopped the install, removed the Legacy NICs, added another Synthetic NIC and restarted the install. During installation, I chose the appropriate NIC for Green and Red, and off we went.

The first boot was great - the IPFire VM came up NICs blazing. :) I fiddled with the web interface a bit then rebooted. That's when it started to look like things weren't quite as I'd hoped - during the boot, I received an error message stating that "Interface green0 doesn't exist", however red0 worked fine. I re-ran "setup" and re-assigned the NICs and rebooted and this time "Interface red0 doesn't exist" was reported. Hhmmm...

So it seemed that Hyper-V Synthetic NICs were kinda supported. I had a look to see if the modules were being loaded properly and noticed a distinct lack of Hyper-V modules in /etc/sysconfig/modules. After a little Googling, I found the following information in the IPFire Install Guide:

Hyper-V
IPFire includes the modules required to work properly in a Hyper-V environment, but those modules are not enabled by default. To enable those modules, add the following four lines to the file /etc/sysconfig/modules and reboot:
hv_blkvsc
hv_netvsc
hv_storvsc
hv_vmbus
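In practice, that just means appending the module names to that file from a shell on the IPFire box - something like the following quick sketch (the file path is the one given in the Install Guide above):

    # Append the Hyper-V modules so they are loaded at boot
    for mod in hv_blkvsc hv_netvsc hv_storvsc hv_vmbus; do
        echo "$mod" >> /etc/sysconfig/modules
    done

    # After the reboot, confirm they actually loaded
    lsmod | grep ^hv_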

So, after adding these modules and rebooting a few times to test, all seems fine and IPFire is running with Hyper-V Synthetic NICs. :)

Now for the speed testing results. I ran the tests using 35.0GB of data, consisting of some .iso files of around 4GB each plus 7.8GB of smaller files of varying sizes (i.e. extracted Windows Server 2012 R2 Standard and Windows Server 2012 R2 Essentials ISOs), with the following results.

Test 1 - Legacy NICs, across a 1GbE L3 switch from a physical server's USB-attached drive to this 2012 R2 Hyper-V Guest on a RAID-5 SSD Array on a 2012 R2 Standard server running Hyper-V

Test 2 - Synthetic NICs, across a 1GbE L3 switch from a physical server's USB-attached drive to this 2012 R2 Hyper-V Guest on a RAID-5 SSD Array on a 2012 R2 Standard server running Hyper-V

Test 3 - Synthetic NICs, across a 1GbE L3 switch from a physical server's SAS RAID-5 HDD Array to this 2012 R2 Hyper-V Guest on a RAID-5 SSD Array on a 2012 R2 Standard server running Hyper-V

Results:

Scenario   NIC Type    Source            Speed            Comments
Test 1     Legacy      Remote USB2       0.113GB/minute   Yes, Legacy NICs are as slow as this!
Test 2     Synthetic   Remote USB2       1.62GB/minute    This is about the speed expected from USB2
Test 3     Synthetic   Remote SAS Array  4.32GB/minute    This is much more bearable!
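For anyone who thinks in Mbit/s rather than GB/minute, a rough conversion (assuming 1GB = 1024MB) can be knocked up with a quick bit of awk:

    # Convert GB/minute to approximate Mbit/s: GB/min * 1024 MB * 8 bits / 60 seconds
    awk 'BEGIN { for (i = 1; i < ARGC; i++)
        printf "%s GB/min is roughly %.0f Mbit/s\n", ARGV[i], ARGV[i] * 1024 * 8 / 60 }' 0.113 1.62 4.32

That works out to roughly 15, 220 and 590 Mbit/s respectively - painfully slow over the emulated Legacy NIC, about what USB2 can actually deliver in Test 2, and a decent chunk of the 1GbE link in Test 3.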

So, it shows that the implementation of the Hyper-V Synthetic NIC drivers in IPFire definitely lives up to expectations and provides much better performance than the old Legacy NICs could ever dream of.


Regards,

The Outspoken Wookie

Internet and WAN Connectivity - WTF are all these terms?

As we are all aware, the ICT (Information and Communication Technology) industry seems to exist solely to create new TLAs (Three Letter Acronyms) and ETLAs (Extended Three Letter Acronyms), and one of the places where these TLAs and ETLAs are most used and abused is networking in general, and Internet and WAN (Wide Area Network) connectivity in particular. So I'll make an attempt here to clarify things a bit...

There are many ways to break Internet and WAN connectivity down, however there are two main types of Internet connection available today – asymmetric and symmetric. ADSL (Asymmetric Digital Subscriber Line), ADSL2 and ADSL2+ are all asymmetric connections – they have a faster download (inbound) speed than upload (outbound) speed. ADSL speeds are generally available up to 8192Kbps down and 384Kbps up, and ADSL2+ speeds are generally available up to 24Mbps down and 1.2Mbps up. There's also an ADSL2+ Annex M standard that is available in maximum speeds of around 20Mbps down and 3.2Mbps up. All of these speeds are "best case" and will basically only be achieved if you are located in a building next to the telephone exchange, with speeds dropping off as the distance from the exchange increases.

A business grade ADSL2+ service with 200GB or so of data is likely to cost in the vicinity of $100-$150 per month.

ADSL and ADSL2+ connections can also be configured to sacrifice a lot more of their download speed for increased upload speed, hence the availability of ADSL connections at 0.5Mbps/0.5Mbps and ADSL2+ connections at 2Mbps/2Mbps. As you can see, these speeds are now symmetric, yet still delivered over a form of ADSL connection - way to help keep things clear...

Symmetric connections are delivered in two main formats – over copper or over fiber. Fiber connections can go faster (up to 1Gbps and higher), but the installation costs can be in the region of $5000. Symmetric connections over copper are generally available at up to 40Mbps and are often referred to as "Ethernet in the First Mile" (EFM), "Ethernet Over Copper" (EOC) or "Mid-Band Ethernet" (MBE) - these terms are, to all intents and purposes, interchangeable.  Installation costs on an EOC connection are in the region of $1200.

A 10Mbps EOC connection with 200GB or so of data is likely to cost in the vicinity of $250-$500 per month, and on Fiber this will likely cost around $500-$750 per month depending on the service provider and the distance from the exchange.

Historically, asymmetric connections were more than acceptable for most individuals and businesses, as most of the time people were *downloading* things like files and web pages from the Internet and rarely *uploading* much at all. As time has progressed, though, this has become the exception rather than the norm for many businesses, which now use online storage for documents, photos and so on, online email and groupware servers, and connections between multiple offices. This is where symmetric connections have become more popular for businesses.

The National Broadband Network (NBN) here in Australia is a bit of an odd one out. It is an asymmetric connection, but delivered at speeds of up to 100Mbps down and 40Mbps up – so it delivers very decent outbound speeds, with even faster inbound speeds. Of course, this is only for those lucky enough to have had it rolled out before the Federal Liberal/National Party decided that high speed Internet was too scary - people might get easy access to educational material - and destroyed Australia's chance at decent Internet speeds.

A 100/40Mbps NBN connection with 200GB or so of data is likely to cost in the vicinity of $120-$170 per month, depending on the service provider and extras included in the plan.

Now, just because you have an ADSL2+ or EOC connection doesn't necessarily mean it is connected directly to the Internet. Many larger, geographically diverse businesses will have an ADSL2+, Fiber or EOC tail connected to each of their locations, all brought back into their ISP's network and, from there, connected to the Internet. This is the "WAN" part of the "Internet and WAN" in the article title. These sorts of connections are often referred to as an MPLS (Multi-Protocol Label Switching) Network, a VPLS (Virtual Private LAN Service) Network or simply a Private Network.

Another way to interconnect multiple locations is across the Internet using a VPN (Virtual Private Network). This is where each site is connected directly to the Internet, and across those Internet connections runs a secure pipe (the VPN) that joins the sites together. Types of VPN connection include EoIP (Ethernet over Internet Protocol), IPsec (Internet Protocol Security), L2TP (Layer 2 Tunneling Protocol), PPTP (Point-to-Point Tunneling Protocol) and SSTP (Secure Socket Tunneling Protocol).

Another term you may have heard is "Contention Ratio". What this means, basically, is the number of customers of a particular ISP (Internet Service Provider) who are sharing the bandwidth that you are paying for. So, if you see a 1:1 contention ratio, this means that the speed of the connection you are paying for is reserved for you into the ISP's core network. A contention ratio of 4:1 means that you're sharing that bandwidth into the ISP's core network with 3 other customers. Residential contention ratios are significantly higher than those for business customers, which is one of the reasons that residential connections are priced lower than business grade connections.

Finally, you may have heard the terms "Layer 2 Connection" and "Layer 3 Connection", which are a little more complex to explain than the previous terms. The simple way to look at it is that on an L3 Connection, all traffic from the ISP's network is sent to you at a single QoS (Quality of Service) level - you can't ask for inbound VoIP (Voice over Internet Protocol) to have higher priority than general web browsing, email, file downloads or anything else. An L2 Connection allows inbound traffic to be prioritized from the ISP to your network, so a large inbound email won't stomp on a VoIP call from a potential client. This may help explain why L2 Connections are often a little more expensive than L3 Connections.

So, back to the big question - what do I want and when do I want it?

If your Internet requirements don't involve a lot of VoIP, video conferencing, Private Network or VPN traffic, or much general outbound traffic, an ADSL2+ service will likely suit quite nicely. However, if you do utilise VoIP, VPNs to other sites or remote workers, or video conferencing, if you rely on cloud-based services such as hosted email or hosted file services, or if you need a better SLA (Service Level Agreement) than "we'll try to get it running again... sometime", some form of EoC or Fiber connectivity (Internet or WAN) would likely be a better option.


Regards,

The Outspoken Wookie

Tuesday, June 17, 2014

pfSense in Hyper-V 2012 R2

Since May 2012, Microsoft has supported FreeBSD running as a guest on Hyper-V (see this article for more info).  That's nice as pfSense runs on a FreeBSD base, and if all was well in the world, the recently released pfSense 2.1 would have supported these new drivers.  If.

Unfortunately, pfSense 2.1 doesn't include the required drivers, so we're still stuck with Legacy NICs.  :(  Oh, well...

So, if you want to configure a pfSense Hyper-V 2012 R2 guest, you'll have to stick with the 100Mbps limitation of the Legacy NICs, plus a little time synchronization funkiness caused by the Hyper-V host CPUs entering low power states and pfSense not handling this all that well, which results in a number of "calcru: runtime went backwards" error messages.  :(
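If you want to confirm what the guest is actually using, a quick look from the pfSense console shell (the Shell option on the console menu) will show it - the emulated Legacy NICs appear as de0/de1, whereas Hyper-V Synthetic NICs would show up as hn0/hn1 if the driver were present. A rough check (assuming the stock de(4)/hn(4) driver names):

    # List the network interfaces - de(4) devices are the emulated Legacy NICs,
    # hn(4) devices would be Hyper-V Synthetic NICs
    ifconfig | grep -E '^(de|hn)[0-9]'

    # See whether any Hyper-V driver modules are loaded (none are on pfSense 2.1)
    kldstat | grep -i hv || echo "no Hyper-V modules loaded"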

So, at this point in time pfSense 2.1 works adequately for a testing environment under Hyper-V, but I wouldn't recommend using it for a production environment.


  1. The latest pfSense is available from: http://mirror.optus.net/pub/pfSense/downloads/ - choose the LiveCD-x.y-RELEASE-amd64.iso.gz or LiveCD-x.y-RELEASE-i386.iso.gz file, check its checksum after downloading, and extract the ISO image
  2. Create a Gen 1 Hyper-V Guest with one CPU, 512MB RAM, 2 * Legacy NICs (and no Synthetic/native ones) and disable the Time Synchronization option.  Make a 5GB or so fixed VHDX file and assign the ISO as the DVD.  Boot away
  3. After the LiveCD boots and the two NICs (de0 and de1) have been assigned, you have the option to install to the HDD - take this option, and remove the ISO after the install completes and before the reboot happens
  4. Ensure the IPs of the two interfaces are configured appropriately.  I configured de0 to connect to the physical interface and de1 to connect to a Private Network for the guests inside the pfSense firewall.  Check that you can ping 8.8.8.8 from the console.
  5. Configure a guest on the Private Network, check it can ping 8.8.8.8 and www.google.com
  6. Hit the pfSense web page from inside the network and configure any options you need.
  7. On the pfSense console, you may need to type the following to ensure the NICs are restarted properly.  This used to be a significant issue with earlier pfSense releases, however it seems to have been fixed in 2.1 - YMMV:
    echo "ifconfig de0 down" >> /etc/rc.local
    echo "ifconfig de0 up" >> /etc/rc.local
    echo "ifconfig de1 down" >> /etc/rc.local
    echo "ifconfig de1 up" >> /etc/rc.local
  8. To try and help a little with the time sync issues, you will likely also need to add the timecounter setting to /etc/sysctl.conf (note the file expects just the variable=value pair, without a leading "sysctl"); there's a note below this list on checking that it took effect:
    echo "kern.timecounter.hardware=TSC" >> /etc/sysctl.conf
  9. That's pretty much it.  You'll have a somewhat functional pfSense Hyper-V guest.  It would be nice if the pfSense team had incorporated the Hyper-V drivers - let's hope they actually do this for pfSense 2.2.
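As a follow-up to step 8: you can confirm which timecounter the guest is actually using (and what else the kernel offers) with the following - handy for checking that the tweak survived a reboot:

    # Show the timecounter currently in use and the available alternatives
    sysctl kern.timecounter.hardware
    sysctl kern.timecounter.choice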


Regards,

The Outspoken Wookie